# Rapidity gap distribution of diffractive small-\(x_{I\!\!P}\) events at HERA and at the EIC

Tuomas Lappi, Anh Dung Le, Heikki Mäntysaari

arXiv:2307.16486 | 2023-07-31 | http://arxiv.org/abs/2307.16486v1
###### Abstract
We use the Kovchegov-Levin equation to resum contributions of large invariant mass diffractive final states to diffractive structure functions in the dipole picture of deep inelastic scattering. For protons we use a (modified) McLerran-Venugopalan model as the initial condition for the evolution, with free parameters obtained from fits to the HERA inclusive data. We obtain an adequate agreement to the HERA diffractive data in the moderately high-mass regimes when the proton density profile is fitted to the diffractive structure function data in the low-mass region. The HERA data is found to prefer a proton shape that is steeper than a Gaussian. The initial conditions are generalized to the nuclear case using the optical Glauber model. Strong nuclear modification effects are predicted in diffractive scattering off a nuclear target in kinematics accessible at the future Electron-Ion collider. In particular, the Kovchegov-Levin evolution has a strong effect on the \(Q^{2}\)-dependence of the diffractive cross section.
## I Introduction
Diffractive processes in deep inelastic electron-hadron scattering (DIS) with no net color charge transfer are a powerful probe of the high-energy structure of protons and nuclei. The color singlet exchange requires, at lowest order in perturbative QCD, two gluons to be exchanged, rendering diffractive cross sections more sensitive to the gluonic content of the target than inclusive ones. Consequently, high-energy diffraction can provide clear indications of gluon saturation effects, which are expected to occur in the regime of small longitudinal momentum fraction \(x\) due to non-linear QCD dynamics.
The Color Glass Condensate (CGC) effective theory provides a convenient framework to describe scattering processes at high energy [1]. Instead of (inclusive or diffractive) parton distribution functions, the target structure is described in terms of Wilson lines that encode the eikonal propagation of projectile partons in the target color field. In DIS, this CGC approach is frequently complemented with the dipole picture [2; 3; 4] in a frame where the virtual photon mediating the interaction has a large longitudinal momentum, so that its \(|q\bar{q}\rangle\) Fock state (and possibly \(|q\bar{q}g\rangle,|q\bar{q}gg\rangle\dots\)) has a long lifetime compared to the typical timescale of the interaction. The dipole picture is particularly well suited to the study of gluon saturation. A particular advantage of this CGC + dipole picture is that it provides a common theoretical framework to incorporate the description of both inclusive and diffractive scattering processes in terms of the same degrees of freedom.
The CGC + dipole formalism has been widely employed in studying the diffractive dissociation of the photon off both protons and nuclei [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22]. One of the main advantages of this framework is that saturation effects appear naturally, consistently in both diffractive and inclusive cross sections. In practice, starting from the work in [23; 24], two quantum Fock state components, \(|q\bar{q}\rangle\) at leading order and \(|q\bar{q}g\rangle\) (part of the next-to-leading order contribution) in approximate kinematics, have been considered in order to compare to the available HERA diffractive data [5; 6; 7] as well as to make some predictions for future experiments [7; 8; 9; 10].
Recently, there has been rapid progress towards next-to-leading order (NLO) accuracy. Developments that are necessary to achieve the NLO level in theoretical calculations include the tree-level diffractive \(q\bar{q}g\) production in exact kinematics [11] and loop corrections to the virtual photon wave functions describing the \(\gamma^{*}\to q\bar{q}\) splitting [25; 26]. In parallel, there have also been attempts to resum soft gluon contributions in the regime of high-mass diffraction [12; 13; 14; 15; 16; 17; 18; 19; 20]. Such improvements in precision are particularly important for phenomenological studies related to future DIS facilities such as the Electron-Ion Collider (EIC) [27] and the LHeC/FCC-he [28]. These future facilities are expected to provide very precise data for diffractive observables over a wide kinematical domain. In particular, the first measurements of nuclear diffractive structure functions will be performed at the EIC in the 2030s. These measurements with nuclear targets are of particular interest as they are highly sensitive to gluon saturation effects [29; 30], which are strongly enhanced by either going to smaller \(x\) or heavier nuclei.
In this work we focus on diffractive DIS in the region where the mass of the diffractively produced system is large, which requires the resummation of soft-gluon contributions by means of the Kovchegov-Levin equation [12; 13; 15; 31]. This perturbative evolution equation requires non-perturbative input sensitive to the proton structure at moderately small \(x\), which can be constrained by HERA inclusive structure function data (see also Ref. [32] for a complementary approach starting from the proton large-\(x\) structure). The predictions for high-mass diffraction in electron-proton DIS at HERA and electron-nucleus DIS at the EIC are genuine predictions, once the initial condition for the Balitsky-Kovchegov (BK) evolution [33; 34] of the dipole amplitude has been fit to inclusive cross section data. The only additional free parameter in the calculation is the spatial
density profile of the proton, whose functional form is not probed in inclusive structure function measurements. Here we constrain this impact parameter profile with the HERA diffractive structure function data in the low-mass regime.
The paper is organized as follows. In the next section, we review the dipole picture of (diffractive) deep-inelastic scattering and the evolution equations in the CGC approach for both inclusive and diffractive processes. Both low-mass and high-mass approaches for diffraction are discussed for a more complete treatment. The application of our setup to the HERA diffractive data is then presented in Section III. In Section IV we make predictions for nuclear diffraction in kinematics accessible at the future EIC. We finally draw some concluding remarks in Section V.
## II Diffractive deep inelastic scattering in the dipole picture
### Dipole picture and diffractive observables
Within the single-photon approximation, the deep inelastic interaction between the electron and a hadron is mediated by a photon of virtuality \(Q^{2}\). At high center-of-mass energy \(W\) of the photon-hadron sub-process, it is convenient to go to a reference frame where the photon has a large longitudinal momentum. In this frame, its coherence length in the longitudinal direction is larger than the size of the hadronic target. Hence, if the photon branches into a quark-antiquark dipole, this quantum fluctuation will occur long before traversing the target and, to a good approximation, the transverse size of the resulting dipole will remain unchanged during the interaction (see Fig. 1). Consequently, the dipole-proton scattering amplitude becomes a good degree of freedom to describe (both inclusive and diffractive) scattering processes at high energy.
In the diffractive dissociation process of interest, the diffractively produced system of invariant mass \(M_{X}\) in the final state results from the fragmentation of the dipole (possibly dressed by other partons from higher-order quantum fluctuations), while the target hadron remains intact. We only consider coherent diffraction in this work. An experimental signature of such a diffractive scattering is a rapidity gap \(Y_{\rm gap}\leq Y\), with \(Y\) being the total relative rapidity, between the diffractively produced system and the outgoing hadron, as illustrated in Fig. 1. From the theoretical point of view, this rapidity gap is due to the exchange of a color-singlet C-even pomeron in the \(t\) channel. When the momentum transfer is integrated out, the diffractive scattering process can be completely characterized by the three invariants \(Q^{2}\), \(W\) and \(M_{X}^{2}\). Alternatively, one can use instead the variables \(x_{\mathbf{P}}\), \(\beta\) and \(Q^{2}\), where
\[\beta=\frac{Q^{2}}{Q^{2}+M_{X}^{2}} \tag{1}\]
and
\[x_{\mathbf{P}}=\frac{Q^{2}+M_{X}^{2}}{Q^{2}+W^{2}}. \tag{2}\]
In the pomeron exchange picture, \(x_{\mathbf{P}}\) can be interpreted as the fraction of the target longitudinal momentum carried by the pomeron (in the infinite momentum frame) and \(\beta\) is the momentum fraction of the pomeron carried by the struck parton. Note that these are related to the Bjorken variable as \(x=x_{\mathbf{P}}\beta\). By definition, the rapidity variables are linked to these momentum fractions as \(Y=\ln(1/x)\) and \(Y_{\rm gap}=\ln(1/x_{\mathbf{P}})\).
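To make the kinematics above concrete, the mapping from the invariants \((Q^{2},W^{2},M_{X}^{2})\) to \((\beta,x_{\mathbf{P}},x,Y,Y_{\rm gap})\) follows directly from Eqs. (1) and (2). The following Python sketch is purely illustrative; the helper name and the sample numbers are ours, not part of the analysis code used in this work.

```python
import numpy as np

def diffractive_kinematics(Q2, W2, MX2):
    """Map (Q^2, W^2, M_X^2) [GeV^2] to (beta, x_P, x, Y, Y_gap), cf. Eqs. (1)-(2)."""
    beta = Q2 / (Q2 + MX2)            # Eq. (1)
    x_pom = (Q2 + MX2) / (Q2 + W2)    # Eq. (2)
    x_bj = x_pom * beta               # Bjorken x = x_P * beta
    Y = np.log(1.0 / x_bj)            # total relative rapidity
    Y_gap = np.log(1.0 / x_pom)       # rapidity gap
    return beta, x_pom, x_bj, Y, Y_gap

# Example: W = 220 GeV, Q^2 = 4 GeV^2, M_X = 11 GeV (of the order used in Figs. 7 and 8)
print(diffractive_kinematics(Q2=4.0, W2=220.0**2, MX2=11.0**2))
```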
Before going to more details of the formulation, let us define the diffractive observables of interest for the current analysis. The experimentally determined diffractive structure functions \(F_{2,L}^{D(3)}\) are related to the diffractive virtual photon-hadron cross sections as
\[x_{\mathbf{P}}F_{L}^{D(3)}=\frac{Q^{2}}{4\pi^{2}\alpha_{em}}\frac{{\rm d}\sigma_{D \;(L)}^{\gamma^{*}h}}{{\rm d}\ln(1/\beta)}, \tag{3}\]
and
\[x_{\mathbf{P}}F_{2}^{D(3)}=\frac{Q^{2}}{4\pi^{2}\alpha_{em}}\left(\frac{{\rm d} \sigma_{D\;(T)}^{\gamma^{*}h}}{{\rm d}\ln(1/\beta)}+\frac{{\rm d}\sigma_{D\;(L )}^{\gamma^{*}h}}{{\rm d}\ln(1/\beta)}\right), \tag{4}\]
where \(T\) and \(L\) refer to the polarization state of the virtual photon. The most precise diffractive cross section measurements from HERA [35] are reported as a reduced diffractive cross section defined as
\[\sigma_{\rm red}^{D(3)}=F_{2}^{D(3)}-\frac{y^{2}}{1+(1-y)^{2}}F_{L}^{D(3)}, \tag{5}\]
where \(y=Q^{2}/(xs)\) is the inelasticity, and \(\sqrt{s}\) is the center-of-mass energy of the electron-proton scattering. The superscript "(3)" in the above formulae indicates that the relevant observables depend on three invariants, as mentioned above: in this work we only consider the case where the cross section is integrated over the squared momentum transfer \(t\). The diffractive cross section can also be expressed in terms of the mass of the diffractive system as
\[\frac{{\rm d}\sigma_{D}^{\gamma^{*}h}}{{\rm d}M_{X}}=\frac{2M_{X}}{Q^{2}+M_{X }^{2}}\left(\frac{{\rm d}\sigma_{D\;(T)}^{\gamma^{*}h}}{{\rm d}\ln(1/\beta)}+ \frac{{\rm d}\sigma_{D\;(L)}^{\gamma^{*}h}}{{\rm d}\ln(1/\beta)}\right). \tag{6}\]
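As a minimal illustration of how the measured combination in Eq. (5) and the mass spectrum in Eq. (6) are assembled from the transverse and longitudinal pieces, one may write (helper names are hypothetical):

```python
def reduced_cross_section(F2D3, FLD3, Q2, x_bj, s):
    """sigma_red^D(3) of Eq. (5); y = Q^2/(x s) is the inelasticity."""
    y = Q2 / (x_bj * s)
    return F2D3 - y**2 / (1.0 + (1.0 - y)**2) * FLD3

def dsigma_dMX(dsig_T, dsig_L, Q2, MX):
    """d sigma_D / d M_X of Eq. (6), given d sigma_{D(T,L)} / d ln(1/beta)."""
    return 2.0 * MX / (Q2 + MX**2) * (dsig_T + dsig_L)
```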
The current investigation employs two different approaches to calculate diffractive cross sections. In the large \(\beta\) (small \(M_{X}^{2}\)) regime we use explicit results computed considering the \(q\bar{q}\) and \(q\bar{q}g\) components of the virtual photon (\(q\bar{q}g\) only in the high-\(Q^{2}\) limit), which have been extensively used in the literature, see e.g. Ref. [7]. We use these results as a baseline to fix the one remaining free parameter related to the proton spatial density profile as discussed in detail below. Then with no free parameters we calculate diffractive structure functions at
small \(\beta\) (large \(M_{X}^{2}\)) by solving the Kovchegov-Levin evolution equation which resums contributions from dipole states dressed by soft gluons. These two approaches are reviewed below.
### High-energy evolution and inclusive scattering
In the framework of the dipole picture, the strong interaction dynamics is encoded in the forward dipole-target elastic scattering amplitude \(N(\mathbf{r},Y;\mathbf{b})\), where \(\mathbf{r}\) is the transverse size of the dipole and \(\mathbf{b}\) is the dipole-target impact parameter. At a large number of colors \(N_{\mathrm{c}}\), the energy (or rapidity \(Y\)) dependence of the dipole amplitude is given by the Balitsky-Kovchegov (BK) equation [33; 34]
\[\partial_{Y}N(\mathbf{r},Y;\mathbf{b})=\int\mathrm{d}^{2}\mathbf{ r}_{1}\,\mathcal{K}(\mathbf{r},\mathbf{r}_{1},\mathbf{r}_{2})\left[N(\mathbf{r}_{1},Y;\mathbf{b}_{1})\right.\\ \left.+N(\mathbf{r}_{2},Y;\mathbf{b}_{2})-N(\mathbf{r},Y;\mathbf{ b})-N(\mathbf{r}_{1},Y;\mathbf{b}_{1})N(\mathbf{r}_{2},Y;\mathbf{b}_{2})\right], \tag{7}\]
where \(\mathbf{r}_{2}=\mathbf{r}-\mathbf{r}_{1}\), \(\mathbf{b}_{1}=\mathbf{b}-(\mathbf{r}_{2}/2)\) and \(\mathbf{b}_{2}=\mathbf{b}+(\mathbf{r}_{1}/2)\). The kernel \(\mathcal{K}(\mathbf{r},\mathbf{r}_{1},\mathbf{r}_{2})\) is related to the probability amplitude, at large \(N_{\mathrm{c}}\), for emitting a soft gluon at a point in the transverse plane characterized by two vectors \(\mathbf{r}_{1}\) and \(\mathbf{r}_{2}\) satisfying the triangular relation \(\mathbf{r}=\mathbf{r}_{1}+\mathbf{r}_{2}\) from the initial dipole. In this work we use the leading order BK equation (7), but include the running coupling corrections to the evolution by adopting the Balitsky running-coupling prescription [36], which reads
\[\mathcal{K}^{rc}(\mathbf{r},\mathbf{r}_{1},\mathbf{r}_{2})=\frac{ N_{\mathrm{c}}\alpha_{s}(r^{2})}{2\pi^{2}}\left[\frac{r^{2}}{r_{1}^{2}r_{2}^{2}}+ \frac{1}{r_{1}^{2}}\left(\frac{\alpha_{s}(r_{1}^{2})}{\alpha_{s}(r_{2}^{2})}-1 \right)\right.\\ \left.+\frac{1}{r_{2}^{2}}\left(\frac{\alpha_{s}(r_{2}^{2})}{ \alpha_{s}(r_{1}^{2})}-1\right)\right]. \tag{8}\]
The strong coupling constant in coordinate space is taken as
\[\alpha_{s}(r^{2})=\frac{12\pi}{(33-2N_{f})\ln\frac{4C^{2}}{r^{2}\Lambda_{ \mathrm{QCD}}^{2}}}. \tag{9}\]
To avoid the Landau pole, the running coupling is frozen at the value \(\alpha_{s}^{\mathrm{fr}}=0.7\) for \(r^{2}>r_{\mathrm{fr}}^{2}\), where \(r_{\mathrm{fr}}^{2}\) solves \(\alpha_{s}(r_{\mathrm{fr}}^{2})=\alpha_{s}^{\mathrm{fr}}\). The constant \(C^{2}\) in the above formula accounts for the uncertainty when transforming from momentum space to coordinate space. From theoretical considerations [36; 37] it should have the value \(e^{-2\gamma_{E}}\). In practice, however, the running coupling scale in coordinate space is taken as a free parameter that can absorb some dominant higher order effects that would slow down the evolution. The non-perturbative initial condition for the BK equation and the value of \(C^{2}\) are obtained from a fit to proton inclusive structure function data e.g. in Refs. [38; 39] (see also recent fits at next-to-leading order accuracy [40; 41] that however can not be used in the leading order calculation presented here). In this work we use the fits reported in Ref. [38], and consequently adopt the same setup and work with only light quarks (\(N_{f}=3,m_{f}=140\,\mathrm{MeV}\)). The considered fits initialize the BK evolution at rapidity \(Y_{\mathrm{min}}\equiv\ln(1/x_{\mathrm{init}})\) with \(x_{\mathrm{init}}=0.01\). The initial condition for the BK equation at this starting point is discussed in more detail in Sections III and IV for the scattering off protons and nuclei, respectively.
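As a concrete illustration of the running-coupling prescription, Eqs. (8) and (9) can be coded in a few lines. This is a sketch only: the value of \(\Lambda_{\rm QCD}\) is an assumption made here for illustration, while the default \(C^{2}\) corresponds to the MV\({}^{e}\) fit quoted in Table 1.

```python
import numpy as np

NC, NF = 3.0, 3.0
LAMBDA_QCD2 = 0.241**2   # GeV^2, assumed value for this illustration
ALPHA_FR = 0.7           # frozen coupling at large dipole sizes

def alpha_s_r2(r2, C2=7.2):
    """Coordinate-space coupling of Eq. (9), frozen at alpha_s^fr = 0.7."""
    arg = 4.0 * C2 / (r2 * LAMBDA_QCD2)
    if arg <= 1.0:                      # beyond the Landau pole: frozen
        return ALPHA_FR
    a = 12.0 * np.pi / ((33.0 - 2.0 * NF) * np.log(arg))
    return min(a, ALPHA_FR)             # freeze once alpha_s reaches 0.7

def balitsky_kernel(r2, r12, r22, C2=7.2):
    """Running-coupling BK kernel of Eq. (8); arguments are squared transverse sizes."""
    a, a1, a2 = alpha_s_r2(r2, C2), alpha_s_r2(r12, C2), alpha_s_r2(r22, C2)
    return NC * a / (2.0 * np.pi**2) * (
        r2 / (r12 * r22)
        + (a1 / a2 - 1.0) / r12
        + (a2 / a1 - 1.0) / r22
    )
```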
Given the forward dipole elastic amplitude \(N\), the total (inclusive) dipole-target cross section \(\sigma_{\mathrm{tot}}^{q\bar{q}h}\) can be computed straightforwardly using the optical theorem. Integrating out the \(\mathbf{b}\)-dependence one obtains
\[\sigma_{\mathrm{tot}}^{q\bar{q}h}(\mathbf{r},Y)=\int\mathrm{d}^{2}\mathbf{b} \;\;2N(\mathbf{r},Y,\mathbf{b}). \tag{10}\]
Convoluting with the photon impact factor, we eventually obtain the total (inclusive) photon-target cross section
\[\sigma_{\mathrm{tot}}^{\gamma^{*}h}(Q^{2},Y)=\sum_{f}\int\mathrm{ d}^{2}\mathbf{r}\int_{0}^{1}\mathrm{d}z\big{|}\psi^{\gamma^{*}\to f\bar{f}}(\mathbf{r},z,Q^{2}) \big{|}_{T+L}^{2}\\ \times\sigma_{\mathrm{tot}}^{q\bar{q}h}(\mathbf{r},Y), \tag{11}\]
where the photon wave functions \(\psi_{T,L}^{\gamma^{*}\to f\bar{f}}\) can be computed from QED using light cone perturbation theory [31]. Only light quark flavors are included in this work.
Figure 1: Diffractive dissociation in the dipole picture. The rapidity gap \(Y_{\mathrm{gap}}\) in the final state is due to the pomeron exchange (represented by the double wavy line) taking a momentum fraction \(x_{\mathbf{P}}\) of the hadron. Relevant kinematic variables described in the text are also shown.
### Diffraction at large and medium \(\beta\)
Now we turn to the calculation of diffractive observables. For medium to large values of \(\beta\), it is enough to consider only the two lowest order (in \(\alpha_{\rm s}\)) partonic states of the virtual photon, \(|q\bar{q}\rangle\) and \(|q\bar{q}g\rangle\). We quote here the well-known results for these contributions studied e.g. in Ref. [7]. The \(q\bar{q}\) contribution dominates at large \(\beta\gtrsim 0.5\), and the diffractive structure functions for transversely and longitudinally polarized virtual photons read
\[x_{\mathbf{P}}F_{q\bar{q},T}^{D(3)}=\frac{N_{c}Q^{4}}{16\pi^{3}\beta }\sum_{f}e_{f}^{2}\int\limits_{z_{0}}^{1/2}{\rm d}z\,z(1-z)\\ \left[\epsilon^{2}\left(z^{2}+(1-z)^{2}\right)\Phi_{1}+m_{f}^{2} \Phi_{0}\right], \tag{12}\]
and
\[x_{\mathbf{P}}F_{q\bar{q},L}^{D(3)} = \frac{N_{c}Q^{6}}{4\pi^{3}\beta}\sum_{f}e_{f}^{2}\int\limits_{z_{0 }}^{1/2}{\rm d}z\,z^{3}(1\;-\;z)^{3}\Phi_{0}. \tag{13}\]
Here we have used the following auxiliary function
\[\Phi_{n}=\int{\rm d}^{2}{\bf b}\left[2\int\limits_{0}^{\infty}{\rm d}r\,rK_{n} (\epsilon r)J_{n}(\kappa r)N({\bf r},Y_{\rm gap};{\bf b})\right]^{2}, \tag{14}\]
with \(\epsilon^{2}=z(1-z)Q^{2}+m_{f}^{2}\), \(\kappa^{2}=z(1-z)M_{X}^{2}-m_{f}^{2}\) and \(z_{0}=\left(1-\sqrt{1-4m_{f}^{2}/M_{X}^{2}}\right)/2\).
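For a numerical evaluation of the auxiliary function in Eq. (14), the radial integral can be done with standard Bessel-function routines. In the sketch below the impact-parameter integral is replaced by an overall constant (called `norm_b`, a placeholder), and a simple saturation-model form is used as a stand-in for the BK-evolved dipole amplitude; both are assumptions made only for illustration.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, kv

def N_dipole(r, Qs2=1.0):
    """Stand-in saturation-model amplitude; the paper uses BK-evolved amplitudes."""
    return 1.0 - np.exp(-r**2 * Qs2 / 4.0)

def phi_n(n, z, Q2, MX2, mf=0.14, norm_b=1.0, Qs2=1.0):
    """Phi_n of Eq. (14) with a factorized b-profile (b-integral -> norm_b)."""
    eps = np.sqrt(z * (1.0 - z) * Q2 + mf**2)
    kappa2 = z * (1.0 - z) * MX2 - mf**2
    if kappa2 <= 0.0:
        return 0.0
    kappa = np.sqrt(kappa2)
    integrand = lambda r: r * kv(n, eps * r) * jv(n, kappa * r) * N_dipole(r, Qs2)
    radial, _ = quad(integrand, 0.0, 50.0, limit=400)   # r in GeV^-1
    return norm_b * (2.0 * radial)**2

print(phi_n(n=1, z=0.3, Q2=10.0, MX2=50.0))
```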
Toward smaller \(\beta\lesssim 0.5\) the contribution from one-gluon emission becomes important. The diffractive \(q\bar{q}g\) production is known in exact kinematics [11], but in phenomenological applications so far only the so-called Wüsthoff result [42], obtained in the large-\(Q^{2}\) limit, has been used. In that limit the transverse polarization dominates, owing to the \(\ln Q^{2}\) enhancement compared to the longitudinal one. Furthermore the \(q\bar{q}g\) system can be treated as an effective gluon dipole. The resulting contribution to the diffractive structure function reads
\[x_{\mathbf{P}}F_{q\bar{q}g,T}^{D(3)}=\frac{\alpha_{s}(Q^{2})\beta}{8\pi^{4}}\sum_{f}e_{f}^{2}\int{\rm d}^{2}{\bf b}\int\limits_{0}^{Q^{2}}{\rm d}k^{2}\int\limits_{\beta}^{1}{\rm d}z\left\{k^{4}\ln\frac{Q^{2}}{k^{2}}\left[\left(1-\frac{\beta}{z}\right)^{2}+\left(\frac{\beta}{z}\right)^{2}\right]\right.\\ \left.\times\left[2\int\limits_{0}^{\infty}{\rm d}r\,rK_{2}(\sqrt{z}kr)J_{2}(\sqrt{1-z}kr)\tilde{N}({\bf r},Y_{\rm gap};{\bf b})\right]^{2}\right\}, \tag{15}\]
with \(\tilde{N}=2N-N^{2}\) representing the dipole-target amplitude in the adjoint representation. Here we choose to evaluate the strong coupling constant \(\alpha_{\rm s}\) at the scale \(Q^{2}\).
Note that in Eqs. (12) to (15) the dipole-target amplitudes are evaluated at the rapidity \(Y_{\rm gap}\), since this low-mass diffraction can be treated as a quasi-elastic scattering process with \(Y\approx Y_{\rm gap}\) and \(F_{q\bar{q}}^{D(3)}\sim N^{2}\) ( or \(F_{q\bar{q}g}^{D(3)}\sim\tilde{N}^{2}\)). Recall that since we start the BK evolution at \(Y_{\rm min}\equiv\ln(1/x_{\rm init})\), then \(Y_{\rm gap}\geq Y_{\rm min}\) or \(x_{\mathbf{P}}\leq x_{\rm init}\). We will refer to these low-mass contributions as the GBW result1 hereafter.
Footnote 1: In their pioneering works [23; 24; 42], Golec-Biernat and Wüsthoff (GBW) used their saturation model for the dipole-target interaction instead of the BK-evolved dipole amplitudes used in the current study.
### Diffraction at small-\(\beta\) and the Kovchegov-Levin evolution equation
At small \(\beta\), higher-order gluonic states are essential, and it is necessary to resum soft gluon emissions to all orders. At large \(N_{c}\), this resummation can be done by using the Kovchegov-Levin (KL) evolution equation. Denoting the diffractive dipole-target cross section at fixed impact parameter \({\bf b}\) and with a _minimal_ rapidity gap \(Y_{0}\) by \(N_{D}({\bf r},Y,Y_{0};{\bf b})\), the KL equation reads2[12; 13; 31]
Footnote 2: The KL equation is known at NLO, see Ref. [15], which has the same form as the NLO BK equation [43]. Here we restrict ourselves to only the running-coupling correction consistently with our leading-log setup.
\[\partial_{Y}N_{D}({\bf r},Y,Y_{0};{\bf b})=\int{\rm d}^{2}{\bf r}_{1}\,{\cal K}({\bf r},{\bf r}_{1},{\bf r}_{2})\left[N_{D}({\bf r}_{1},Y,Y_{0};{\bf b}_{1})\right.\\ \left.+N_{D}({\bf r}_{2},Y,Y_{0};{\bf b}_{2})-N_{D}({\bf r},Y,Y_{0};{\bf b})\right.\\ \left.+N_{D}({\bf r}_{1},Y,Y_{0};{\bf b}_{1})N_{D}({\bf r}_{2},Y,Y_{0};{\bf b}_{2})\right.\\ \left.+2N({\bf r}_{1},Y;{\bf b}_{1})N({\bf r}_{2},Y;{\bf b}_{2})\right.\\ \left.-2N_{D}({\bf r}_{1},Y,Y_{0};{\bf b}_{1})N({\bf r}_{2},Y;{\bf b}_{2})\right.\\ \left.-2N({\bf r}_{1},Y;{\bf b}_{1})N_{D}({\bf r}_{2},Y,Y_{0};{\bf b}_{2})\right]. \tag{16}\]
The initial condition for the KL equation is given by
\[N_{D}({\bf r},Y=Y_{0},Y_{0};{\bf b})=N^{2}({\bf r},Y_{0};{\bf b}). \tag{17}\]
Here \(N({\bf r},Y_{0};{\bf b})\) is obtained as a solution to the BK equation. The integral kernel in Eq. (16) is the one used in the BK equation (7) for \(N({\bf r},Y;{\bf b})\). The KL equation (16) for \(N_{D}({\bf r},Y,Y_{0};{\bf b})\) can be transformed into the BK equation (7) for the quantity \(N_{I}({\bf r},Y,Y_{0};{\bf b})\equiv 2N({\bf r},Y;{\bf b})-N_{D}({\bf r},Y,Y_{0};{\bf b })\), which is the method we use to solve it numerically together with the BK evolution for \(N({\bf r},Y;{\bf b})\).
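The transformation can be checked explicitly (a short verification of the statement above, with rapidity and impact-parameter arguments suppressed): inserting \(N_{I}=2N-N_{D}\) into the nonlinear term of the BK equation gives
\[N_{I}({\bf r}_{1})N_{I}({\bf r}_{2})=4N({\bf r}_{1})N({\bf r}_{2})-2N({\bf r}_{1})N_{D}({\bf r}_{2})-2N_{D}({\bf r}_{1})N({\bf r}_{2})+N_{D}({\bf r}_{1})N_{D}({\bf r}_{2}),\]
so that subtracting the BK equation for \(N_{I}\) from twice the BK equation for \(N\), and using \(2N({\bf r}_{i})-N_{I}({\bf r}_{i})=N_{D}({\bf r}_{i})\) for the linear terms, reproduces Eq. (16) term by term.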
The diffractive cross section for the virtual photon-target scattering can be expressed in terms of the diffractive dipole-target cross section, similarly to the inclusive case, as
\[\frac{\mathrm{d}\sigma_{D}^{\gamma^{*}h}}{\mathrm{d}\ln(1/\beta)}(\beta,x_{\mathbf{P}},Q^{2})=\sum_{f}\int\mathrm{d}^{2}\mathbf{r}\int\limits_{0}^{1}\mathrm{d}z\left|\psi_{T,L}^{\gamma^{*}\to f\bar{f}}\right|^{2}\\ \times\frac{\mathrm{d}\sigma_{D}^{q\bar{q}h}}{\mathrm{d}\ln(1/\beta)}(\beta,x_{\mathbf{P}},\mathbf{r}). \tag{18}\]
The diffractive dipole-target cross section with a specific value of the gap is obtained as a derivative of \(N_{D}\), which was defined as an integral over rapidity gap sizes greater than \(Y_{0}\):
\[\frac{\mathrm{d}\sigma_{D}^{q\bar{q}h}}{\mathrm{d}\ln(1/\beta)}=\int\mathrm{d}^{2}\mathbf{b}\ \left(-\frac{\mathrm{d}N_{D}(\mathbf{r},Y,Y_{0};\mathbf{b})}{\mathrm{d}Y_{0}}\right)\biggr{|}_{Y_{0}=Y_{\mathrm{gap}}}. \tag{19}\]
The minus sign in the above formula is from the definition of \(Y_{0}\) as the lower limit of possible gap sizes. Recall that the size of the rapidity gap at fixed Bjorken-\(x\) is related to the mass of the diffractively produced system, see Fig. 1 and the definitions of the kinematic variables in Eqs. (1) and (2).
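Numerically, Eq. (19) amounts to tabulating \(N_{D}\) on a grid of minimal-gap values \(Y_{0}\) and differentiating; a minimal sketch (with hypothetical variable names) is

```python
import numpy as np

def fixed_gap_amplitude(ND_table, Y0_grid, Y_gap):
    """-dN_D/dY_0 at Y_0 = Y_gap, cf. Eq. (19).
    ND_table: N_D values on Y0_grid, with the other arguments held fixed."""
    dND_dY0 = np.gradient(np.asarray(ND_table), np.asarray(Y0_grid))  # non-uniform grids allowed
    i = int(np.argmin(np.abs(np.asarray(Y0_grid) - Y_gap)))
    return -dND_dY0[i]
```

The remaining impact-parameter integral and the convolution with the photon wave functions, Eq. (18), are then carried out on top of this quantity.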
The KL formulation provides an elegant way to analyse diffractive dissociation in the electron-hadron scattering at high-energy in the high-mass regime. We will hereafter treat the two cases (proton and nucleus) separately. We first apply the framework to proton targets. We then generalize the dipole-proton amplitude to the dipole-nucleus case, following Ref. [38], in Sec. IV.
## III Scattering off proton: comparison to HERA data
In deep inelastic scattering off a proton, we assume that the impact parameter dependence completely factorizes from both \(N\) and \(N_{D}\), and only the \(b\)-independent parts are evolved by the BK and KL equations. A similar factorization is assumed in Refs. [38; 39] where the initial condition for the BK evolution of the dipole-proton amplitude is fitted to inclusive structure function data. Now the dipole amplitude can be written as \(N(\mathbf{r},Y;\mathbf{b})=T_{p}(\mathbf{b})\mathcal{N}(r,Y)\), where \(T_{p}(\mathbf{b})\) is a certain transverse density profile and \(\mathcal{N}(r,Y)\) satisfies the \(\mathbf{b}\)-independent BK equation. After integrating over all impact parameters we obtain
\[\int\mathrm{d}^{2}\mathbf{b}\,T_{p}(\mathbf{b})=\sigma_{0}/2. \tag{20}\]
Here the effective transverse size of the proton is denoted by convention as \(\sigma_{0}/2\) (to compensate the factor 2 originating from the optical theorem in Eq. (10)), and is constrained by the HERA structure function data together with the initial condition for the BK equation.
Similarly the impact parameter dependence of the diffractive cross section is assumed to factorize as
\[\int\mathrm{d}^{2}\mathbf{b}\,N_{D}(\mathbf{r},Y,Y_{0};\mathbf{b})=\sigma_{0}^ {D}\mathcal{N}_{D}(r,Y,Y_{0}), \tag{21}\]
where \(\mathcal{N}_{D}(r,Y,Y_{0})\) is independent of the impact parameter and obeys the KL equation, and \(\sigma_{0}^{D}\) is a constant. The normalization factor \(\sigma_{0}^{D}\) can be deduced by noticing that at the initial condition of the KL evolution we have \(N_{D}=N^{2}\), see Eq. (17). This gives
\[\sigma_{0}^{D}=\int\mathrm{d}^{2}\mathbf{b}\,T_{p}^{2}(\mathbf{b}), \tag{22}\]
and implies that, for a given \(\sigma_{0}\), \(\sigma_{0}^{D}\) depends strongly on the shape of \(T_{p}(\mathbf{b})\). Consequently the relative normalization of diffractive and inclusive cross sections depends on the assumed shape of the proton.
The proton density profile can in principle be extracted from elastic scattering measurements. The spatial distribution of the small-\(x\) gluon field is most directly probed in exclusive vector meson (e.g. \(\mathrm{J}/\psi\)) production measurements at HERA [44; 45]. This data is compatible with a Gaussian density profile \(e^{-b^{2}/(2B)}\) with \(B\approx 4\,\mathrm{GeV}^{-2}\), although a direct comparison is only possible with the factorized \(b\)-profile and becomes more involved if this approximation is relaxed [46]. However, due to the limited squared momentum transfer \(|t|\) region covered by these measurements, also other density profiles are possible, see e.g. Refs. [47; 48; 49; 50; 51].
We parametrize the proton density profile using the regularized incomplete gamma function profile following Ref. [52], with the parameter \(\omega\) controlling the steepness of the proton profile:
\[T_{p}(\mathbf{b})=\frac{\Gamma\left(\frac{1}{\omega},\frac{b^{2}}{R_{p}^{2} \omega}\right)}{\Gamma\left(\frac{1}{\omega}\right)}. \tag{23}\]
Here \(\pi R_{p}^{2}=\sigma_{0}/2\) and \(\omega\geq 0\). In the limit \(\omega\to 0\), \(T_{p}(b)|_{\omega\to 0}=\Theta(R_{p}-b)\) (hard sphere), while at \(\omega=1\) the profile becomes Gaussian, \(T_{p}(b)|_{\omega=1}=\exp\left(-b^{2}/R_{p}^{2}\right)\). The Gaussian form corresponds to the one usually employed in the literature, e.g. in the popular IPsat parametrization for the dipole-target scattering [53]. The normalization factor for the diffractive cross sections \(\sigma_{0}^{D}\) defined in Eq. (22) will vary around the corresponding value obtained for a Gaussian profile, \(\sigma_{0}^{D}(\omega=1)=\sigma_{0}/4\), depending on how steep the profile is compared to the Gaussian shape.
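These normalization statements are easy to verify numerically with the regularized upper incomplete gamma function available in SciPy. The sketch below is illustrative only; the millibarn conversion and the chosen \(\omega\) values are ours.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaincc

def T_p(b, omega, Rp2):
    """Proton profile of Eq. (23); gammaincc is the regularized Gamma(a,x)/Gamma(a)."""
    if omega <= 0.0:
        return 1.0 if b * b < Rp2 else 0.0      # hard-sphere limit
    return gammaincc(1.0 / omega, b * b / (Rp2 * omega))

def sigma0_D(omega, sigma0_half):
    """sigma_0^D of Eq. (22) = int d^2b T_p(b)^2, with pi R_p^2 = sigma_0/2."""
    Rp2 = sigma0_half / np.pi
    integrand = lambda b: 2.0 * np.pi * b * T_p(b, omega, Rp2) ** 2
    return quad(integrand, 0.0, 20.0 * np.sqrt(Rp2))[0]

sigma0_half = 16.36 * 2.57     # MV^e fit value, 16.36 mb in GeV^-2 (1 mb ~ 2.57 GeV^-2)
print(sigma0_D(1.00, sigma0_half) / sigma0_half)   # Gaussian: 0.5, i.e. sigma_0^D = sigma_0/4
print(sigma0_D(2.32, sigma0_half) / sigma0_half)   # steeper profile: smaller ratio
```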
As mentioned above, the BK evolution starts with an initial amplitude at initial evolution rapidity \(Y=Y_{\mathrm{min}}\) corresponding to \(x=x_{\mathrm{init}}=0.01\), at which we shall employ the following parametrization [38] based on the McLerran-Venugopalan (MV) model [54]:
\[\mathcal{N}(r)=1-\exp\left[-\frac{(r^{2}Q_{s0}^{2})^{\gamma}}{4}\ln\left(e \cdot e_{c}+\frac{1}{r\Lambda_{\mathrm{QCD}}}\right)\right]. \tag{24}\]
Here \(Q_{s0}^{2}\) controls the initial proton saturation scale, \(\gamma\) is the initial anomalous dimension, and \(e_{c}\) modifies the behavior at large \(r\). Their values used in this analysis are taken from the fits to the HERA inclusive structure function data [55] reported in Ref. [38] (see also the earlier
similar study in Ref. [39]) and are summarized in Table 1. In addition, the constant \(C^{2}\) controlling the scale of the coordinate-space running coupling in Eq. (9) and the effective proton area \(\sigma_{0}/2\) are also obtained from the corresponding fits. In this work we use all three of these fits in order to determine the sensitivity to the uncertainties in the dipole-proton scattering amplitude.
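For reference, the initial amplitude of Eq. (24) with the parameter values of Table 1 can be evaluated directly; the \(\Lambda_{\rm QCD}\) value below is an assumption made only for this illustration.

```python
import numpy as np

LAMBDA_QCD = 0.241   # GeV, assumed value for this illustration

# (Q_s0^2 [GeV^2], gamma, e_c) from Table 1
FITS = {"MV": (0.104, 1.0, 1.0), "MVe": (0.060, 1.0, 18.9), "MVgamma": (0.159, 1.129, 1.0)}

def N_init(r, fit="MVe"):
    """Dipole-proton amplitude of Eq. (24) at x_init = 0.01; r in GeV^-1."""
    Qs02, gamma, ec = FITS[fit]
    return 1.0 - np.exp(-((r**2 * Qs02) ** gamma) / 4.0
                        * np.log(np.e * ec + 1.0 / (r * LAMBDA_QCD)))

for name in FITS:
    print(name, N_init(r=2.0, fit=name))   # r = 2 GeV^-1, about 0.4 fm
```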
For the current analysis, we consider the ZEUS FPC [56; 57] and the H1 + ZEUS combined datasets [35] for the diffractive structure functions and reduced cross sections. The combined data corresponds to coherent diffraction, as does our calculation. We use it to determine the optimal value for the proton shape parameter \(\omega\) denoted by \(\omega_{\rm opt}\). The ZEUS FPC data on the other hand contains a contribution from events where the proton dissociates to a system with relatively small invariant mass. When comparing to the ZEUS FPC data we scale the data down by a factor of 1.88 following a heuristic procedure to be specified later in order to obtain an estimate for the coherent contribution.
The optimal proton shape parameter \(\omega_{\rm opt}\) is determined as follows. We use the GBW result, Eqs. (12) and (13), to calculate the diffractive cross section at high \(\beta\) where the considered \(q\bar{q}\) component dominates [7]. The optimal \(\omega_{\rm opt}\) is then obtained by minimizing \(\chi^{2}\) to the high-\(\beta\) combined HERA data. We do not include the \(q\bar{q}g\) component here, as it gives a negligible contribution at high \(\beta\), and there is also an ambiguity in the scale of the running coupling. By fitting to the reduced diffractive cross section data at \(\beta>0.5\) (24 data points with \(\beta=0.562\) and \(\beta=0.816\), note that we only include the points with \(x_{\mathbf{P}}\leq 0.01\)), we obtain \(\omega_{\rm opt}\simeq 1.24\) (\(\chi^{2}_{\rm red}\approx 1.87\)) for the MV, \(\omega_{\rm opt}\simeq 2.32\) (\(\chi^{2}_{\rm red}\approx 1.08\)) for the MV\({}^{e}\), and \(\omega_{\rm opt}\simeq 2.31\) (\(\chi^{2}_{\rm red}\approx 1.09\)) for the MV\({}^{\gamma}\) parametrizations for the dipole-proton amplitude. Here \(\chi^{2}_{\rm red}\) is \(\chi^{2}\) per degree of freedom. The good agreement obtained with the \(\beta=0.562\) data is shown in Fig. 2. The modified MV model parametrizations MV\({}^{e}\) and MV\({}^{\gamma}\) result in almost identical cross sections and values for the proton shape parameter, \(\omega\approx 2.3\), corresponding to a much steeper density profile than the one with \(\omega\approx 1.2\) obtained using the MV model fit.
The density profiles corresponding to the optimal values of the \(\omega\) parameter compared to the Gaussian and step function profiles are shown in Fig. 3.
\begin{table}
\begin{tabular}{l|c c c c|c|c} \hline \hline Parametrization & \(Q^{2}_{s0}({\rm GeV}^{2})\) & \(\gamma\) & \(e_{c}\) & \(\sigma_{0}/2\) (mb) & \(C^{2}\) & \(\omega_{\rm opt}\) \\ \hline \hline MV & 0.104 & 1 & 1 & 18.81 & 14.5 & 1.24 \\ MV\({}^{e}\) & 0.060 & 1 & 18.9 & 16.36 & 7.2 & 2.32 \\ MV\({}^{\gamma}\) & 0.159 & 1.129 & 1 & 16.35 & 7.05 & 2.31 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Parameters for the dipole-proton scattering amplitude (24) at the initial condition for the BK evolution used in the calculation (from Refs. [38; 39]). The determined optimal values for the parameter \(\omega\) in Eq. (23) controlling the shape of the proton density profile are also shown.
Figure 3: The proton impact parameter profiles given in Eq. (23) for the determined optimal values of \(\omega\) and \(\sigma_{0}\). Their corresponding squared Fourier transforms (FT) are plotted in the second row.
Figure 2: The reduced diffractive cross sections taking into account only the \(q\bar{q}\) contribution with the \(\omega\) dependent normalization factor fitted to the HERA combined data [35] at \(\beta>0.5\) and at different \(Q^{2}\) bins. Only the results at \(\beta=0.562\) are shown in this plot. The optimal values for \(\omega\) obtained with different dipole-proton amplitudes are shown in the legend.
In coordinate space the profile obtained with the MV model parametrization (\(\omega=1.24\)) is very close to a Gaussian one, and with \(\omega=2.32\) corresponding to the MV\({}^{e}\) and MV\({}^{\gamma}\) fits for the dipole amplitude we obtain a density profile that is much more steeply falling than a Gaussian close to the center of the proton, but which has a longer large-\(b\) tail. The corresponding two-dimensional Fourier transforms are also shown in Fig. 3 as a function of \(t=-\mathbf{\Delta}_{\perp}^{2}\), where \(\mathbf{\Delta}_{\perp}\) is the Fourier conjugate to the impact parameter. Note that the exclusive vector meson production cross section discussed above is approximately proportional to the squared Fourier transform. In Fourier space the \(\omega=1.24\) and the Gaussian profiles only deviate significantly in the \(|t|\gtrsim 0.5\,\mathrm{GeV}^{2}\) region where there is only limited data available, while for \(\omega=2.32\) the \(|t|\)-spectrum is somewhat steeper. We also note that with \(\omega>1\) we do not obtain any diffractive dips, and recall that no such minima are visible in the HERA data up to \(|t|\sim 1\,\mathrm{GeV}^{2}\). For a detailed discussion about the diffractive minima and their potential relation to saturation effects, see also Ref. [58].
Next we use the determined proton density profiles and compute predictions for the diffractive reduced cross section in a wide kinematical domain covered by the combined HERA data [35], now using the result obtained by solving the Kovchegov-Levin equation as discussed in Section II.4. The reduced cross section as a function of \(Q^{2}\) in different bins of \(x_{\mathbf{P}}\) and \(\beta\) is shown in Fig. 4. The KL solutions exhibit a visible rise in \(Q^{2}\), for all values of \(\beta\) and \(x_{\mathbf{P}}\), up to a large \(Q^{2}\) where \(y\gtrsim 0.5\) and the second term in Eq. (5) becomes dominant. At \(\beta>0.1\), the data however depend weakly on \(Q^{2}\), which agrees with the known leading-twist behavior of the quark-antiquark contribution. The KL solutions cannot appropriately describe the data in this region. At smaller \(\beta\), where the effect of (soft) gluon emissions becomes important,
Figure 4: Diffractive reduced cross section as a function of \(Q^{2}\) at different values of \(\beta\) and \(x_{\mathbf{P}}\). The HERA combined dataset is taken from Ref. [35]. The bands represent the results from the KL solutions with the corresponding initial conditions for \(\omega\) varying in the region \(0.8\leq\omega<3\). The lines represent the numerical results for the optimal values of \(\omega\) as explained in the text.
a better description of the combined HERA data is obtained using the KL perturbative evolution equation, although the cross section especially at higher \(x_{I\!\!P}\) is typically slightly overestimated.
The dependence on the proton shape parameter is also illustrated in Fig. 4 (and the figures following) by varying the \(\omega\) parameter around the optimal value. Similarly to the large-\(\beta\) case, the normalization of the diffractive cross section is typically well described with \(\omega>1\), and as such also the small-\(\beta\) data prefers a density profile which is steeper than Gaussian, corresponding to a smaller overall normalization for the diffractive cross section.
The reduced diffractive cross section as a function of \(x_{I\!\!P}\) is shown in Fig. 5. Again a good agreement with the data is obtained at (moderately) small \(\beta\), although the normalization at high \(Q^{2}\) is typically overestimated as already seen above in Fig. 4. The maximum in the reduced cross section observed at small \(x_{I\!\!P}\) is again due to the longitudinal cross section \(F_{L}^{D(3)}\) becoming important when \(y\gtrsim 0.5\). The \(x_{I\!\!P}\) dependence becomes milder toward smaller \(\beta\) and smaller \(Q^{2}\). The mild \(x_{I\!\!P}\) dependence seen especially at small virtualities is compatible with the predictions from the BK and KL equations.
To directly probe the \(\ln 1/\beta\) evolution described by the KL equation we also calculate the diffractive cross section as a function of the mass of the diffractively produced system \(M_{X}\) or \(\beta\) (recall that \(M_{X}^{2}/Q^{2}\sim 1/\beta\)). The results as a function of \(\beta\) compared with the combined HERA data are shown in Fig. 6, and as a function of \(M_{X}\) compared with the ZEUS FPC dataset [56; 57] in Fig. 7. As mentioned before, the ZEUS FPC data includes some contribution from incoherent events where the proton dissociates into a low mass state (\(\gamma^{*}+p\to X+N\), \(M_{N}<2.3\) GeV). In order to approximately remove this dissociative contribution not included in our calculation we scale down the data by a constant factor of 1.88. This factor is obtained as follows. First, the original ZEUS FPC data with \(\beta>0.5\) (154 points) are fitted using the GBW result with only the \(q\bar{q}\) contribution to obtain the optimal value for \(\omega\) for each initial condition. We then compute the ratio between \(\sigma_{0}^{D}\) at the obtained \(\omega\) and the one at \(\omega_{\rm opt}\) obtained from the fit to the HERA combined data above.
Figure 5: Diffractive reduced cross section as a function of \(x_{I\!\!P}\) at different values of \(\beta\) and \(Q^{2}\). The HERA combined dataset is taken from Ref. [35]. The notations are the same as in Fig. 4.
The three different fits for the initial conditions of the BK evolution result in very similar ratios, and the average value \(1.88\) is then chosen to be the scaling factor3
Footnote 3: We note that a slightly smaller value has been used in previous analyses e.g. in Ref. [7].
Again we find a good description of the available data, although the cross section is typically overestimated at high \(Q^{2}\). More importantly the \(\beta\) and \(M_{X}\) dependencies predicted by the KL equation are compatible with the HERA data, when we focus on the moderately high-mass regime (\(\beta\lesssim 0.1\)).
The mass spectra at fixed \(W\) and \(Q^{2}\) from the numerical calculation shown in Fig. 7 exhibit a similar trend as the data, which decreases toward the high-mass (small \(\beta\)) regime at a fixed Bjorken \(x\). Given the very mild dependence of the diffractive structure function on \(\beta\) as shown above, this behavior is predominantly due to the \(M_{X}\)-dependent prefactor in Eq. (6). Up to the chosen scaling factor, the KL evolution describes the mass dependence well in the high-mass domain. The diffractive cross section is underestimated in the low-mass domain, but we again emphasize that the KL evolution is expected to be an accurate description of the QCD dynamics only in the high-\(M_{X}\) region. However, a qualitative description of the data is also obtained when the KL results are extrapolated to the low-\(M_{X}\) region.
To complete our comparisons with the available HERA data, let us finally compare the \(Q^{2}\) and \(W\) dependencies
Figure 8: The dependence of the diffractive cross section on \(Q^{2}\) at \(W=220\)GeV and \(M_{X}=11\)GeV compared to the ZEUS FPC data [56; 57] (scaled down by a factor of \(1.88\)). The inset shows the diffractive cross section scaled by the photon virtuality, \(Q^{2}\mathrm{d}\sigma^{\gamma^{*}p}/\,\mathrm{d}M_{X}\).
Figure 6: Diffractive reduced cross section as a function of \(\beta\) at different values of \(x_{\not{P}}\) and \(Q^{2}\). The HERA combined dataset is taken from Ref. [35]. The notations are the same as in Fig. 4.
Figure 7: Mass spectrum at \(W=220\)GeV and \(Q^{2}=4\)GeV\({}^{2}\) compared to the ZEUS FPC data [56] (scaled down by a factor of \(1.88\)). The bands are the results from the solutions to the KL equations with the corresponding initial conditions, and with the b-profile parameter \(\omega\) varying in the range \(0.5\leq\omega\leq 2.0\). The lines represent the numerical results for the optimal values of \(\omega\) as explained in the text.
obtained from the solutions to the KL and BK equations to the ZEUS FPC data. The virtuality dependence at relatively high \(M_{X}\) is shown in Fig. 8, and the center-of-mass energy \(W\) dependence is shown in Fig. 9. As in the comparison to the combined HERA data, the \(Q^{2}\) and \(W\) dependencies in the ZEUS data are described fairly well especially when \(Q^{2}\) is not very large (i.e., \(\beta\) is small). While the cross section changes mildly with \(W\) in general, there is a significant decrease with increasing \(Q^{2}\). Such a decrease, together with the modest variation of the scaled diffractive cross section \(Q^{2}\,{\rm d}\sigma_{D}^{\gamma^{*}p}/{\rm d}M_{X}\) (shown in the inset of Fig. 8) for \(Q^{2}<M_{X}^{2}\), is an indication of a leading twist-like behavior.
Before ending this section, let us compare the KL calculation to the GBW results including both the \(q\bar{q}\) and \(q\bar{q}g\) contributions. We emphasize that these results are strictly speaking valid in different kinematical limits: the GBW result including the \(q\bar{q}g\) contribution given by Eq. (15) is valid at high-\(Q^{2}\) and the KL evolution dominates at low-\(\beta\). The calculations are performed in the kinematics with \(\sqrt{s}=1.3\) TeV, which could be accessible in the future experiments such as the LHeC/FCC-eh, in order to have a wider phase space available. The comparison is shown in Fig. 10. The diffractive structure function scaled by \(x_{\mathbf{P}}\) rises toward small \(x_{\mathbf{P}}\), small \(\beta\) and large \(Q^{2}\) in both approaches. As for the diffractive reduced cross sections, there is however a peak in the region with \(y\gtrsim 0.5\) for the KL solutions, which does not manifest in the GBW result. This is attributed to the fact that the longitudinal contribution from gluon-dressed states is not included in the latter.
The \(\beta\) dependence from the GBW and the KL approaches is similar in the moderately small \(\beta\) region. The large-\(\beta\) structure in the GBW results originates from the different components (\(q\bar{q}\) from longitudinal or transverse photon, or \(q\bar{q}g\)) dominating at different \(\beta\) values [7]. At very small \(\beta\lesssim 10^{-2}\) the higher Fock states resummed in the KL evolution become important and result in faster increase of the cross section with decreasing \(\beta\) compared to the GBW approach.
The more obvious differences between the two results can be seen in the \(x_{\mathbf{P}}\) and \(Q^{2}\) spectra. To understand these discrepancies, let us return to the formalism of the two approaches. The KL evolution is basically a BK evolution with a small delay at \(Y_{0}\). This delay will not change the dominant shape of the BK front in the dilute regime, meaning that the solutions to the KL equation in this regime scale as \(\mathcal{N}_{D}({\bf r},Y,Y_{0})\sim\left[{\bf r}^{2}Q_{s,D}^{2}(Y,Y_{0})\right]^{\gamma_{c}}\) as for the BK, where \(\gamma_{c}\approx 0.85\) is the anomalous dimension generated by the running-coupling BK evolution [59]. Here \(Q_{s,D}^{2}\) refers to the saturation scale extracted from the diffractive cross section obtained as a solution to the KL equation. Note that the main effect of the delay is to modify the saturation scale, and the resulting saturation scale depends only very mildly on \(Y_{0}\), as shown numerically in Ref. [16]. Convoluting with the squared photon wave functions (see Ref. [21] for the detailed treatment of the \({\bf r}\)-integration) and considering \(Q^{2}>Q_{s,D}^{2}\) (which is relevant to our analyses), the diffractive structure function behaves as
\[\left[F_{2}^{D(3)}\right]_{\rm KL}\sim Q^{2}\left(\frac{Q_{s,D}^{2}}{Q^{2}} \right)^{\gamma_{c}}, \tag{25}\]
with the extra \(Q^{2}\) from Eq. (4). In this case, the dominant contribution to the \({\bf r}\)-integration comes from the dipole sizes \(r\sim 1/Q\). Again, Eq. (25) can explain the \(Q^{2}\) behavior of \({\rm d}\sigma_{D}^{\gamma^{*}p}/{\rm d}M_{X}\) (without the extra \(Q^{2}\)) shown in Fig. 8.
Now we turn to the GBW result. Taking the \(q\bar{q}\) contribution, the diffractive cross section scales as the dipole-proton amplitude squared \(\mathcal{N}^{2}({\bf r},Y)\sim\left[{\bf r}^{2}Q_{s}^{2}(x_{\mathbf{P}})\right]^{2\gamma_{c}}\), with \(Q_{s}\) now being the normal saturation momentum from the BK evolution evaluated at \(x_{\mathbf{P}}\).
Figure 9: \(W\) dependence of the diffractive cross section at different diffractive masses \(M_{X}\) and photon virtualities \(Q^{2}\) compared to the ZEUS FPC data [56] (scaled down by a factor of 1.88). For simplicity, we show only the results obtained with the optimal values of \(\omega\).
The \({\bf r}\)-integration leads to
\[\left[F^{D(3)}_{q\bar{q}}\right]_{\rm GBW}\sim Q^{2}\left(\frac{Q_{s}^{2}}{Q^{2}} \right)=Q_{s}^{2}. \tag{26}\]
Meanwhile, the contribution of the \(q\bar{q}g\) component is given by [5; 21]
\[\left[F^{D(3)}_{q\bar{q}g}\right]_{\rm GBW}\sim Q^{2}\left(\frac{Q_{s}^{2}}{Q^{ 2}}\right)\ln\frac{Q^{2}}{Q_{s}^{2}}=Q_{s}^{2}\ln\frac{Q^{2}}{Q_{s}^{2}}. \tag{27}\]
Unlike the KL case, the \({\bf r}\)-integration leading to Eqs. (26) and (27) is dominated by \(r\sim 1/Q_{s}\).
Some remarks are in order concerning Eqs. (25) to (27). First, the diffractive structure function from the KL evolution has a power-law behavior in \(Q^{2}\), which grows faster than the logarithmic shape of the same observable calculated from the GBW approach. Furthermore, the KL evolution results in a milder dependence on \(x_{I\!\!P}\) of the diffractive structure function compared to the GBW calculation. Such behaviors can indeed be observed in the numerical comparison shown in Fig. 10. Finally, it is interesting to note that further additions of gluons to the dipole wave function make the \(Q^{2}\)-dependence steeper, which manifests itself in the transition between the two approaches when varying \(\beta\).
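The parametric \(Q^{2}\) behaviors in Eqs. (25) to (27) can be tabulated directly to see the power-law versus logarithmic growth; the sketch below uses arbitrary overall normalizations and a fixed, purely illustrative saturation scale.

```python
import numpy as np

GAMMA_C = 0.85   # anomalous dimension of the running-coupling BK evolution [59]

def f2d_kl(Q2, QsD2):
    """Eq. (25): F_2^D ~ Q^2 (Q_{s,D}^2/Q^2)^gamma_c (KL evolution)."""
    return Q2 * (QsD2 / Q2) ** GAMMA_C

def f2d_gbw_qq(Q2, Qs2):
    """Eq. (26): the qqbar contribution scales as Q_s^2."""
    return Qs2

def f2d_gbw_qqg(Q2, Qs2):
    """Eq. (27): the qqbar-g contribution scales as Q_s^2 ln(Q^2/Q_s^2)."""
    return Qs2 * np.log(Q2 / Qs2)

for Q2 in (10.0, 100.0, 1000.0):   # GeV^2, with Q_s^2 = 1 GeV^2 for illustration
    print(Q2, f2d_kl(Q2, 1.0), f2d_gbw_qq(Q2, 1.0), f2d_gbw_qqg(Q2, 1.0))
```

The KL scaling grows as \(Q^{2(1-\gamma_{c})}\), i.e. faster than the logarithm of the GBW \(q\bar{q}g\) term, consistent with the discussion above.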
To conclude this comparison, we note that the resummation of soft gluons included in the KL evolution has a significant effect on the \(\beta\) dependence of the cross section only in the very small \(\beta\lesssim 10^{-2}\) region which is only accessible in very high-energy nuclear DIS experiments such as the LHeC/FCC-he. On the other hand, the KL evolution also has a significant effect on the \(x_{I\!\!P}\) and \(Q^{2}\) systematics already in the EIC energy range, and as such the future EIC measurements will be able to (at least indirectly) probe the KL evolution dynamics.
## IV Electron-nucleus scattering: predictions for the future EIC
Now let us move from a proton to a nuclear target. Unlike in the proton case, we do not assume that the impact parameter dependence factorizes from the dipole-nucleus scattering amplitude. However, instead of investigating the fully impact-parameter-dependent BK and KL evolution equations, we follow Ref. [38] and solve these equations at each impact parameter \(b=|{\bf b}|\) independently. This approximation both simplifies the numerical calculation and also automatically avoids the problem of unphysical Coulomb tails which need to be regularized if finite-size effects are included in the evolution [60; 61; 62].
The initial condition for the BK evolution of the dipole-nucleus amplitude at fixed impact parameter is obtained by generalizing the dipole-proton scattering amplitude using the optical Glauber model following Ref. [38] to obtain
\[N_{A}(r,b)=1-\exp\left[-AT_{A}(b)\frac{\sigma_{0}}{2}\frac{(r^{ 2}Q_{s0}^{2})^{\gamma}}{4}\right.\\ \left.\times\ln\left(e\cdot e_{c}+\frac{1}{r\Lambda_{\rm QCD}} \right)\right]. \tag{28}\]
Here the subscript "A" is used to distinguish with the same quantities in the proton case. The nuclear thickness function \(T_{A}(b)\) is obtained from the Wood-Saxon (WS) distribution
\[\rho_{A}({\bf b},z)=\frac{\rho_{0}}{1+\exp\left[\frac{\sqrt{{\bf b}^{2}+z^{2}} -R_{A}}{d}\right]} \tag{29}\]
by integrating over the longitudinal coordinate \(z\). The nuclear geometry is controlled by the parameters \(d=0.54\) fm and \(R_{A}=(1.12A^{1/3}-0.86A^{-1/3})\) fm, and \(\rho_{0}\) is obtained from the normalization condition \(\int{\rm d}^{2}{\bf b}\,T_{A}(|{\bf b}|)=1\). As discussed in Ref. [38], this approach results in nuclear effects vanishing for small dipoles at the initial condition of the BK evolution. The other parameters in Eq. (28) are from the fits to the inclusive HERA data discussed in Section III. We will hereafter denote these by Glauber-MV, Glauber-MV\({}^{e}\) and Glauber-MV\({}^{\gamma}\) initial conditions originating from the MV, MV\({}^{e}\) and MV\({}^{\gamma}\) proton fits, respectively.
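A minimal numerical construction of the Woods-Saxon thickness function entering Eq. (28) could read as follows (all lengths in fm; the mass number is just an example, and the constant \(\rho_{0}\) is fixed implicitly by normalizing \(\int{\rm d}^{2}{\bf b}\,T_{A}(|{\bf b}|)=1\)).

```python
import numpy as np
from scipy.integrate import quad

D_WS = 0.54                                                     # fm, surface thickness
def R_A(A): return 1.12 * A ** (1 / 3) - 0.86 * A ** (-1 / 3)   # fm

def thickness(b, A):
    """Unnormalized thickness: Eq. (29) integrated over z (with rho_0 = 1)."""
    RA = R_A(A)
    f = lambda z: 1.0 / (1.0 + np.exp((np.sqrt(b * b + z * z) - RA) / D_WS))
    return quad(f, -10.0 * RA, 10.0 * RA)[0]

def T_A(b, A):
    """Normalized thickness, int d^2b T_A(b) = 1; units of fm^-2."""
    RA = R_A(A)
    norm = quad(lambda bb: 2.0 * np.pi * bb * thickness(bb, A), 0.0, 5.0 * RA)[0]
    return thickness(b, A) / norm

print(T_A(0.0, A=197))   # central thickness for a gold-like nucleus, as an example
```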
Following Ref. [38] we note that the nuclear saturation scales fall below the proton saturation scales at \(b\gtrsim 6.45\) fm (Glauber-MV) and \(b\gtrsim 6.3\) fm (Glauber-MV\({}^{e}\) and Glauber-MV\({}^{\gamma}\)). The BK evolution would result in a gluon density increasing rapidly in this low density region, which would lead to unphysically rapid growth of the nuclear size. Consequently in this dilute regime (\(b>b_{\rm cut}\)) we do not use the solutions to the evolution equations for the nuclear target, but assume that the nuclear scattering is an incoherent sum of the scatterings off nucleons which is also known as the impulse approximation (IA). This gives
\[N_{A}(r,Y;b>b_{\rm cut})=AT_{A}(b)\frac{\sigma_{0}}{2}\mathcal{N}(r,Y). \tag{30}\]
The scaling of the diffractive dipole-nucleus cross section \(N_{D,A}\) (see Eq. (19)) in this regime can be deduced from the initial condition of the KL equation, Eq. (17), and reads
\[N_{D,A}(r,Y,Y_{0};b>b_{\rm cut})=A^{2}T_{A}^{2}(b)\frac{\sigma_{0}^{2} \mathcal{N}_{D}(r,Y,Y_{0})}{4}. \tag{31}\]
The nuclear effects can be quantified by comparing the nuclear cross sections to the ones obtained in the impulse approximation. The impulse approximation corresponds to including the effect of the nuclear geometry (form factor) that controls the \(t\) distribution in diffractive scattering, but no other nuclear effects. Thus any deviation from the impulse approximation result in our calculation can, in the dipole picture, be attributed to enhanced saturation effects in nuclei.
In the impulse approximation the diffractive \(\gamma\)A cross section can be expressed in terms of the diffractive proton cross section at \(t=0\) and the nuclear form factor as
\[\sigma_{D,IA}^{\gamma^{*}A}=\frac{\mathrm{d}\sigma_{D}^{\gamma^{*}p}(|t|=0)}{ \mathrm{d}|t|}\ \Phi_{A}. \tag{32}\]
The nuclear form factor integrated over the squared
Figure 10: Comparison of the KL and the GBW results for diffractive structure function (first row) and the diffractive reduced cross section (second row) at \(\sqrt{s}=1.3\,\mathrm{TeV}\) using the MV\({}^{e}\) initial condition. Three columns show the dependences on \(\beta\), \(x_{I\!\!P}\) and \(Q^{2}\), respectively.
Figure 11: Nuclear modification ratio \(F_{2,A}^{D(3)}/F_{2,IA}^{D(3)}\) as a function of \(\beta\), when either \(x\) (first row) or \(x_{I\!\!P}\) (second row) is kept fixed and at different \(Q^{2}\). Only the results with \(\beta<0.5\) are shown.
momentum transfer \(-t=\mathbf{\Delta}_{\perp}^{2}\) reads
\[\begin{split}\Phi_{A}&=A^{2}\int_{0}^{\infty}\mathrm{d} \lvert t\rvert\left|\int\mathrm{d}^{2}\mathbf{b}\,e^{-i\mathbf{b}\cdot\mathbf{ \Delta}}T_{A}(\mathbf{b})\right|^{2}\\ &=4\pi A^{2}\int\mathrm{d}^{2}\mathbf{b}\,T_{A}^{2}(\mathbf{b}). \end{split} \tag{33}\]
We note that the impulse approximation in practice corresponds to using \(b_{\mathrm{cut}}=0\) in Eq. (31), i.e. always using a scaled dipole-proton scattering amplitude when calculating diffractive dipole-nucleus interaction. In terms of the diffractive dipole-proton scattering amplitude the diffractive dipole-nucleus cross section in the impulse approximation reads
\[\sigma_{D,IA}^{q\bar{q}A}=\frac{\sigma_{0}^{2}\mathcal{N}_{D}(r,Y,Y_{0})}{4}A^ {2}\int\mathrm{d}^{2}\mathbf{b}\,T_{A}^{2}(b). \tag{34}\]
This can be used in Eq. (18) to calculate impulse approximation results for the \(\gamma^{*}A\) scattering. Note that the impulse approximation only involves the \(t\)-differential proton cross section. As a consequence it can be written in terms of \(\sigma_{0}\), not involving the proton shape parameter \(\omega\).
The diffractive structure function as a function of \(\beta\) normalized by the impulse approximation result is shown in Fig. 11 both at fixed Bjorken-\(x\) and at fixed \(x_{I\!\!P}\). We will refer to this ratio as the nuclear suppression factor, and with the KL evolution we obtain very strong suppression \(\sim 0.15\ldots 0.21\) in our chosen kinematics which are accessible at the EIC. The ratios obtained using the MV\({}^{e}\) and MV\({}^{\gamma}\) parametrizations are in practice identical, and a slightly larger suppression is predicted using the MV fit. This can be compared to predictions for the (much weaker) nuclear suppression in inclusive hadron production in proton-nucleus collision at the LHC shown in Ref. [38], where identical suppression factors are obtained with MV\({}^{e}\) and MV\({}^{\gamma}\) fits, with slightly weaker suppression obtained with the MV parametrization.
The suppression obtained for the diffractive structure functions in the KL approach is much stronger than what is obtained from the GBW setup, which gives \(\sim 0.34\ldots 0.48\) at the same kinematics as shown in Fig. 12. This strong suppression in the KL approach can again be explained by noticing that the KL evolution modifies the anomalous dimension of the diffractive scattering cross section: the scaling changes as \(\mathcal{N}_{D,A}\sim[r^{2}Q_{s,A}^{2}(Y_{0},b)]^{2\gamma_{c}}\to\left[r^{2}Q_ {s,D(A)}^{2}(Y,Y_{0},b)\right]^{\gamma_{c}}\). Convoluting with the squared photon wave functions (see the previous section), the nuclear suppression factor at the cross-section level from the KL approach eventually scales as
\[\left(\frac{\sigma_{D}^{\gamma^{*}A}}{\sigma_{IA,D}^{\gamma^{*}A}}\right)_{\mathrm{KL}}\sim\frac{\int\mathrm{d}^{2}\mathbf{b}\left(\frac{Q_{s,D(A)}^{2}(b)}{Q^{2}}\right)^{\gamma_{c}}}{\sigma_{0}^{2}A^{4/3}\left(\frac{Q_{s,D(p)}^{2}}{Q^{2}}\right)^{\gamma_{c}}}\sim A^{-\frac{1}{3}-\delta(\gamma_{c})}\sigma_{0}^{\gamma_{c}-2}, \tag{35}\]
where \(\delta(\gamma_{c})\approx 0.11\) for \(\gamma_{c}\approx 0.85\), using \(Q_{s,A}^{2}\sim\sigma_{0}AT_{A}(b)\). A similar evaluation applied for the \(q\bar{q}\) contribution leads to
\[\left(\frac{\sigma_{D}^{\gamma^{*}A}}{\sigma_{IA,D}^{\gamma^{*}A}}\right)_{ \mathrm{GBW}-q\bar{q}}\sim\frac{\int\mathrm{d}^{2}\mathbf{b}\left(\frac{Q_{s, A}^{2}(b)}{Q^{2}}\right)}{\sigma_{0}^{2}A^{4/3}\left(\frac{Q_{s,p}^{2}}{Q^{2}} \right)}\sim A^{-\frac{1}{3}}\sigma_{0}^{-1}. \tag{36}\]
We can obviously see that the latter is less suppressed than the former. Furthermore, it is interesting to recall that, while the large dipoles close to the inverse saturation scales dominate the \(\mathbf{r}\)-integration in the GBW approach, the dominant contribution in the KL approach comes from the smaller dipoles \(r\sim 1/Q\). Resummation, which is important at low-\(\beta\), leads to a stronger nuclear suppression, while the effect of the non-linear saturation region is diminished! As a side note: this effect depends on the fact that we are starting the evolution for both protons and nuclei at the same rapidity where the nuclear saturation scale is larger than the proton one. If one were to start at the same value of \(Q_{\mathrm{s}}\), i.e. at a higher rapidity for protons than nuclei, the effect would be different.
The suppression factor calculated from the KL approach is almost independent of \(\beta\) at fixed \(x\), and decreases very slowly with decreasing \(\beta\) at fixed \(x_{I\!\!P}\). The weak \(\beta\)-dependence can be understood by noticing that, in the KL evolution, both \(Q_{s,D(A)}^{2}\) and \(Q_{s,D(p)}^{2}\) have the same dependence on \(Y_{0}\) and on \(Y\), and the former dependence is very mild as mentioned in the previous section. Hence, the nuclear suppression ratio would be almost flat in \(\beta\), see Eq. (35). The weak residual \(\beta\) variation, particularly when \(x_{I\!\!P}\) is kept fixed, is due to subleading behavior that arises when other possible factors are included in addition to the leading scaling factor \((r^{2}Q_{s,D}^{2})^{\gamma_{c}}\) in the solutions to the KL equation. When keeping \(x_{I\!\!P}\) (or equivalently \(Y_{\text{gap}}\)) fixed, a similar weak \(\beta\)-dependence should be observed for the \(q\bar{q}\) and \(q\bar{q}g\) components of the GBW result (Eqs. (12), (13) and (15)) separately. However, the sum of the \(q\bar{q}\) and \(q\bar{q}g\) contributions has a stronger \(\beta\)-dependence, since the nuclear modification of these two components is different, and their relative weight in the cross section has a significant dependence on \(\beta\).
The virtuality dependence of the nuclear suppression factor computed from the KL setup is shown in Fig. 13. As expected, a somewhat stronger suppression is obtained towards lower \(Q^{2}\), but even at large \(Q^{2}\sim 10^{3}\,\text{GeV}^{2}\) a significant suppression factor \(\sim 0.25\) is obtained. This rather weak \(Q^{2}\)-dependence of the suppression can again be understood by considering how the KL evolution changes the anomalous dimension of the diffractive scattering cross section, as discussed above.
The nuclear-to-proton diffractive structure function ratio \(F_{2,A}^{D(3)}/(AF_{2}^{D(3)})\) is shown in Fig. 14. This ratio again depends weakly on \(\beta\), similarly to the case where the impulse approximation is used as a reference. Note that as the (\(t\)-integrated) diffractive cross section scales as \(\sim A^{4/3}\), this ratio is not normalized such that nuclear effects would vanish in the dilute region. The advantage of this structure function ratio is that it depends only on experimentally measurable quantities and there is no need to model the nuclear form factor. It is also directly related to the nuclear modification of the diffractive-to-total cross section ratio, which we will discuss shortly. The normalization factor \(A\) (which differs from the parametric \(A^{4/3}\) dependence of the nuclear cross section) allows direct comparisons to earlier works [29; 7]. Unlike the ratio to the impulse approximation, this ratio also depends on the shape of the proton, as the normalization of the proton cross section depends on \(\omega\). This dependence on the proton shape is illustrated in Fig. 14 by showing the results using both the optimal shapes and the Gaussian shape with \(\omega=1\). The slow increase of this ratio towards larger \(\beta\) is qualitatively in agreement with the prediction using the \(q\bar{q}g\) component (with or without \(q\bar{q}\)) presented in Ref. [7] in the region of \(\beta\lesssim 0.1\).
The large-\(\beta\) region of \(\beta>0.1\) shows more significant differences between the different approaches. In Ref. [7], the diffractive structure functions were calculated using the GBW formalism. The IPsat and bCGC models were employed for the \(\mathbf{b}\)-dependent proton scattering cross section, and the nuclear cross section was obtained directly from the proton case using the Glauber model. For comparison, the result using the GBW approach, but with the BK-evolved dipole amplitudes used in this work, is shown in Fig. 15. One can see that it produces a rather different prediction from Ref. [7]. In particular, we predict a much larger cross section ratio in the large-\(\beta\) region, and additionally in this regime the two calculations have slightly different \(\beta\) dependences. These differences are understandable, since the two calculations use different setups for both the scattering off protons and off nuclei. Furthermore, the Gaussian profile was used in the cited reference for the proton impact parameter dependence, while in the current calculation we use the significantly steeper shapes constrained by the HERA data. Note also that our results are closer to the prediction using the bCGC setup than the IPsat one, as the former uses a parameterization for the dipole cross section based on the solutions to the BK evolution.
With these dipole amplitude-related differences between results in the GBW formulation in mind, let us then return to the differences between the KL and GBW formalisms. Comparing the KL result in Fig. 14 (the top right panel) to the GBW formalism results in Fig. 15 and in Ref. [7], there is a clear difference in the \(\beta\)-dependence in the region of \(\beta>0.1\). For the same dipole amplitude (compare the top right panel in Fig. 14 to Fig. 15), the GBW result predicts a larger nuclear enhancement than our present KL approach. Independently of the dipole amplitude, the \(\beta\)-dependence of the nuclear enhancement is stronger in the GBW approach than in the KL result. We emphasize again, however, that the KL approach is not fully reliable in the \(\beta\gtrsim 0.1\) region. In the large-\(\beta\) regime, the \(q\bar{q}\) component dominates, with \(F_{2}^{D(3)}\sim N^{2}(x_{I\!\!P})\), and the GBW result treats the kinematics of the small-\(M_{X}\)\(q\bar{q}\) state more accurately than the KL approach.
Finally we study the diffractive-to-total cross section ratio, as the non-linear nuclear effects are expected to enhance the diffractive cross section relative to the inclusive one [63]. This ratio as a function of \(M_{X}^{2}\), and the double ratio
\[\frac{\text{eA}}{\text{ep}}\equiv\left[\frac{1}{\sigma_{\text{tot}}^{\gamma^{*} A}}\frac{\text{d}\sigma_{D}^{\gamma^{*}A}}{\text{d}M_{X}^{2}}\right]/\left[\frac{1}{ \sigma_{\text{tot}}^{\gamma^{*}p}}\frac{\text{d}\sigma_{D}^{\gamma^{*}p}}{ \text{d}M_{X}^{2}}\right] \tag{37}\]
are shown in Fig. 16. This ratio can also be seen as the nuclear-to-proton diffractive structure function ratio \(F_{2,A}^{D(3)}/(AF_{2}^{D(3)})\) divided by the nuclear-to-proton inclusive structure function ratio. A generic feature of gluon saturation is that the fraction of diffractive events in the total cross section should increase when going from protons to nuclei, i.e. the double ratio should be larger than unity. This can be contrasted with the prediction of leading twist shadowing, which would predict a double ratio significantly below one [30]. Thus, this observable is one of the clearest experimental signals for saturation at the EIC.
The result in Fig. 16 confirms that the double ratio is significantly larger than unity. Again, the predictions obtained using the MV\({}^{e}\) and MV\({}^{\gamma}\) fits are practically identical, and a clear nuclear enhancement of \(50\%\ldots 100\%\) is predicted depending on the applied fit. This enhancement is stronger than the GBW prediction shown in Ref. [30], which can be explained by noting that the double ratio again depends on the proton shape parameter \(\omega\), and in this analysis we indeed have a proton profile steeper than the Gaussian shape. The almost-flat behavior of the mass spectrum of the double ratio again resembles the \(\beta\) spectrum of the above-mentioned nuclear modification ratios for the diffractive structure function.
## V Conclusions
We have presented the first calculation of diffractive cross sections in the HERA kinematics describing the mass dependence by solving the perturbative Kovchegov-Levin (KL) evolution equation\({}^{4}\). Predictions for the future EIC measurements with nuclear targets are also presented. The non-perturbative initial condition for the small-\(x\) and high-\(\beta\) evolutions is constrained by the HERA structure function data, and the only remaining free parameter describing the shape of the proton (and controlling the overall normalization) is determined from the large-\(\beta\) diffractive cross section data.
Footnote 4: Note that in Ref. [19], the authors could describe rather well the HERA combined data using their analytical solution to the leading-order KL equation in the double-log region.
Given this input, we find a good description of the precise HERA diffractive structure function and reduced cross section data. The HERA data is found to prefer proton density profiles that are steeper than the commonly-used Gaussian profile. Although in the HERA energy range it is not possible to reach very low \(\beta\) (high \(M_{X}^{2}\)) kinematics where the KL evolution dynamics dominates, we find that already a small amount of KL evolution in the HERA kinematics has a significant effect on
Figure 14: Diffractive structure function ratio \(F_{2,A}^{D(3)}/AF_{p,2}^{D(3)}\) as a function of \(\beta\) when \(x_{I\!\!P}\) is kept fixed. Only the results with \(\beta<0.5\) are shown. The results in the first row use the optimal values \(\omega_{\rm opt}\) for the steepness of the proton impact parameter profile (see Section III), while the second row uses the Gaussian value \(\omega_{\rm gauss}=1\). |
2309.06339 | Unraveling biochemical spatial patterns: machine learning approaches to
the inverse problem of Turing patterns | The diffusion-driven Turing instability is a potential mechanism for spatial
pattern formation in numerous biological and chemical systems. However,
engineering these patterns and demonstrating that they are produced by this
mechanism is challenging. To address this, we aim to solve the inverse problem
in artificial and experimental Turing patterns. This task is challenging since
high levels of noise corrupt the patterns and slight changes in initial
conditions can lead to different patterns. We used both least squares to
explore the problem and physics-informed neural networks to build a
noise-robust method. We elucidate the functionality of our network in scenarios
mimicking biological noise levels and showcase its application through a
prototype involving an experimentally obtained chemical pattern. The findings
reveal the significant promise of machine learning in steering the creation of
synthetic patterns in bioengineering, thereby advancing our grasp of
morphological intricacies within biological systems while acknowledging
existing limitations. | Antonio Matas-Gil, Robert G. Endres | 2023-09-12T15:58:01Z | http://arxiv.org/abs/2309.06339v1 | Unraveling biochemical spatial patterns: machine learning approaches to the inverse problem of Turing patterns
###### Abstract
The diffusion-driven Turing instability is a potential mechanism for spatial pattern formation in numerous biological and chemical systems. However, engineering these patterns and demonstrating that they are produced by this mechanism is challenging. To address this, we aim to solve the inverse problem in artificial and experimental Turing patterns. This task is challenging since high levels of noise corrupt the patterns and slight changes in initial conditions can lead to different patterns. We used both least squares to explore the problem and physics-informed neural networks to build a noise-robust method. We elucidate the functionality of our network in scenarios mimicking biological noise levels and showcase its application through a prototype involving an experimentally obtained chemical pattern. The findings reveal the significant promise of machine learning in steering the creation of synthetic patterns in bioengineering, thereby advancing our grasp of morphological intricacies within biological systems while acknowledging existing limitations.
## Introduction
Spatial patterns are prevalent in biological systems, including gene expression in microbial communities, developing embryos, as well as skin and fur patterns in adult animals. A leading mechanism for pattern formation is the diffusion-driven instability in reaction-diffusion models, as proposed by Alan Turing [1] and extended by others [2, 3, 4, 5, 6]. These models typically describe interacting and diffusing activator and inhibitor chemical species through sets of coupled partial-differential equations (PDEs), although simulation methods are also available. While Turing patterns were originally observed in chemical systems [7, 8] it has been challenging to conclusively demonstrate this mechanism in developmental systems such as digit formation [9], zebra-fish skin pigment patterning [10] and hair spacing in mice [11] (for recent reviews see [12, 13]). Additionally, building Turing patterns from bottom-up with synthetic circuits has proven difficult [14, 15]. Identified issues are that the Turing mechanism is exceedingly simple compared to biological regulatory pathways, highly sensitive to changes in model parameters [16], and experimental control over parameter tuning is limited. Even if patterns can be produced experimentally, the challenge remains of linking them to the corresponding parameters of a candidate model to support it as an actual Turing pattern rather than an experimental artifact. Further complications arise from the relatively limited amount of data available in developmental systems, which is often corrupted by measurement and imaging noise, as well as the strong pattern variability observed in microbial systems. Addressing these issues would greatly benefit from the ability to robustly estimate parameters from given patterns and candidate models.
To address the issues that arise in an experimental setting, the inverse problem can be initially approached using artificial data from numerical simulations instead of actual experimental data. This involves generating random initial conditions and evolving the reaction-diffusion model in space and time, allowing for a systematic study of the effects of data size and noise. However, solving the inverse problem remains a challenging task even with this simplification, as the same model parameters can produce different patterns, resulting in a many-to-one inversion problem [17]. This occurs because patterns are highly sensitive to initial conditions, and even slight alterations can cause noticeable changes in the final pattern obtained. These changes do not affect the type of pattern obtained, such as spots or stripes, but do alter the location and shape of the pattern elements. As a result, previous research has ruled out direct minimization of the mean-squared displacement between data and model output for parameter
fitting, considering it an ill-posed problem [18]. Instead, focus has been on more advanced approaches, including Bayesian inference and other statistical tools [18, 19] and different machine-learning techniques such as support vector machines, Kernel-based methods, and neural networks [20], as well as optimal control theory [21]. However, despite some success, these approaches suffer from requiring thousands of training images [19], sensitivity to noise in the patterns [20], ad hoc cost or loss functions for quantifying the quality of fit [18], or the requirement to fix some parameters and have knowledge of initial conditions and the pattern evolution [21].
As the inverse Turing problem remains largely unresolved, there is a need to develop new robust approaches, particularly when dealing with small and noisy data. Although conceptually straightforward, the least squares (LS) approach has not been extensively explored in this context. This method requires the definition of a parameter-dependent loss function, which can be the residuals of the model equations [22]. In contrast, deep learning methods, such as neural networks, are usually more reliable when it comes to noise, even though they are computationally expensive to train. A key property that makes neural networks a promising tool for approaching this problem is the universal approximation theorem, which states that given enough parameters, neural networks are capable of approximating any continuous function, including spatial patterns [23, 24]. Additionally, physics-informed neural networks (PINNs) incorporate physical constraints, such as the model equations that should hold for the data, thereby helping to regularize the training [25] and making PINNs more intuitive than black-box neural networks. An added benefit of PINNs is that they do not require large amounts of data; in one case, a single simulation output was sufficient to solve the inverse problem [26]. Model parameters are regarded as parameters of the neural network, resulting in the computational cost of the whole method being of the same order as the standalone function approximation. This makes PINNs superb candidates for solving the inverse problem in Turing patterns.
Here, we explore basic LS applied to the PDE residuals and advanced PINNs to address the inverse problem: given Turing patterns and candidate models, we aim to recover the model parameters. Using our first approach, we find that LS is computationally inexpensive, but that it requires mostly exact (albeit little) data and hence does not allow significant noise or using a similar pattern produced from a different model. Applying this methodology to small regions of patterns still allows us to recover the parameters, and we identify the minimum number of necessary pixels. Our second approach uses physics-informed neural networks with a radial basis function (RBF) architecture to approximate the patterns, referred to as RBF physics-informed neural networks (RBF-PINNs). This method remedies many of the issues from the first method at a computational cost two orders of magnitude higher, allowing us to obtain accurate results up to a 10-20% relative noise given a single snapshot of a Turing pattern, even in an experimental setting with chemical Turing patterns. Our least-squares results offer a new perspective on the Turing robustness problem: a pattern can act like a barcode for a specific parameter combination, given the key of the correct model. Our RBF-PINNs are a promising method to solve the inverse problem, and to guide future experiments in bioengineering in the development of synthetic tissues.
## Results
### Models
Both of our approaches, LS and RBF-PINNs, utilize discrete Turing patterns as inputs to infer the parameters of a candidate model. In this work, the candidate model refers to the true model that was numerically solved to produce the images using the finite-differences algorithm (see Methods). We explored several models, all following the idea of activator-inhibitor dynamics shown in Fig. 1A, but before delving into specific ones, we present the general two-component reaction-diffusion model for concentrations \(u\) and \(v\), which depend on space and time, as follows:
\[u_{t}=D_{u}\Delta u+f(u,v)\qquad v_{t}=D_{v}\Delta v+g(u,v) \tag{1}\]
where \(u_{t}\) and \(v_{t}\) denote partial differentiation with respect to time and the functions \(f(u,v)\) and \(g(u,v)\) are non-linear reaction kinetics. Depending on the specific \(f\) and \(g\) functions, different models can be identified. We will focus on three of these models. First, the Schnakenberg model [2], which has a total of 6 parameters with 4 of them inside the functions \(f\) and \(g\) and the other two given by the diffusion coefficients:
\[f(u,v)=c_{1}-c_{2}u+c_{3}u^{2}v,\qquad g(u,v)=c_{4}-c_{3}u^{2}v \tag{2}\]
Second, we have the FitzHugh-Nagumo model [5, 6], which has 5 parameters:
\[f(u,v)=c_{1}(v-c_{2}u),\qquad g(u,v)=-u+c_{3}v-v^{3} \tag{3}\]
Third, the Brusselator model [4], with a total of 4 parameters:
\[f(u,v)=c_{1}-(c_{2}+1)u+u^{2}v,\qquad g(u,v)=c_{2}u-u^{2}v \tag{4}\]
We numerically solve these PDE models on a square domain with zero-flux boundary conditions, resulting in patterns similar to Fig. 1B. There are different types of patterns, e.g. dots (Fig. 1B, top left), and labyrinths (Fig. 1B, top right). The type is mostly dependent on the model that we choose: FitzHugh-Nagumo produces labyrinths while Schnakenberg produces dots; but some models, like the Brusselator, can produce several types of patterns depending on the parameters. Parameters for the Brusselator and the FitzHugh-Nagumo models can be found in [18], while those for the Schnakenberg model are in [27]. Different initial conditions will produce different patterns of the same type, but fixing these will produce the same one. Hence, we can conceptually think of the parameters of the model and the initial conditions as the 'variables' that produce a given pattern. Since we are mostly interested in the parameters, we used the same initial conditions for all patterns. This eases comparison between the patterns produced by different methods. As a result, there is a more direct relationship between the parameters and the patterns. The inverse problem consists of inverting this relationship by recovering the parameters that generate a given pattern, as outlined in Fig. 1C.
### Non-dimensionalization
Depending on the model, the inverse problem described above may be ill-posed because it may have many solutions. For example, take the Schnakenberg model, Eq. 1 with reaction kinetics given by Eq. 2, and assume \(u\) and \(v\) are steady-state patterns in time satisfying the PDE (Eq. 1) for a given set of parameters. As a result, both the left-hand side (LHS) and right-hand side (RHS) of the PDE are zero. If we multiply all parameters by a constant \(k\) and substitute them back in the equation, the resulting set of parameters is also a solution of the inverse problem; since we can take \(k\) as a common factor, we again obtain \(RHS=0\). Furthermore, if we let \(k=0\) we arrive at a trivial solution where all parameters are zero. To avoid this, one remedy is to fix some parameters so that there is only a single solution. However, we found it preferable to non-dimensionalize the equations, since this does not require us to make any assumptions on the parameters and, as an added benefit, reduces the number of parameters of the model.
We can write the previous models in non-dimensional form as follows. For the Schnakenberg model we obtain:
\[u_{t}=\Delta u+c_{1}-u+c_{2}u^{2}v\qquad v_{t}=d\Delta v+c_{3}-c_{2}u^{2}v, \tag{5}\]
The FitzHugh-Nagumo model can be rewritten as:
\[u_{t}=\Delta u+v-c_{1}u\qquad v_{t}=d\Delta v+c_{2}v-c_{3}u-v^{3}, \tag{6}\]
And the non-dimensional form of the Brusselator model is:
\[u_{t}=d\Delta u+1-u+c_{1}u^{2}v\qquad v_{t}=\Delta v+c_{2}u-c_{1}u^{2}v \tag{7}\]
where in all equations \(d\) is the ratio of the diffusion coefficients and the rest of the parameters are non-dimensional. Note that even though we have not changed the notation of the parameters, they are different from the ones in Eqs. 2, 3 and 4. For the definition of the new variables and parameters for the three models see Supplementary Information section 2.
### Least squares
Our first approach to solve the inverse problem is based on fitting the parameters to the PDE equations, by fixing the concentrations \(u\) and \(v\) as given by the Turing patterns we aim to reproduce. If we assume the pattern to be a steady state of the PDE in e.g. Eq. 5, then \(u_{t}=v_{t}=0\). We can now consider the RHS of Eq. 5. Because we assume we have access to the patterns \(u,v\) satisfying the PDE, we can fix them and treat the RHS as a function dependent only on the parameters, which will be zero for the combination of parameters that we want to find. Since our patterns are discrete in space (we can think of them as images with pixels), \(\mathbf{u}\) and \(\mathbf{v}\) are matrices, so we can write their elements as \(u_{ij}\) and \(v_{ij}\) for \(i,j=1,2,...,N\), where \(N\) is the number of rows and columns in the pattern. This makes it possible to write and solve the problem using LS, as we can now formulate the problem as \(X\beta=Y\), where \(X\) is the design matrix, \(\beta\) is the vector of parameters and \(Y\) collects all terms that are not parameter dependent. (For an example, see the Methods section.) This LS minimization is equivalent to minimizing the square of the Frobenius norm (element-wise \(L_{2}\) norm for matrices) for both equations, which we can write as:
\[L(\mathbf{\beta})=||\beta_{1}\Delta\mathbf{u}+f(\mathbf{\beta})||_{F}^{2}+||\Delta \mathbf{v}+g(\mathbf{\beta})||_{F}^{2} \tag{8}\]
The LS method has previously been considered in the literature [23], but to our knowledge it has not been thoroughly investigated. The Laplacian is approximated using a second-order finite difference method, and the no-flux boundary conditions are taken to be the same as used to simulate the pattern. Since we assume the Turing pattern is at steady state, so that \(u_{t}=0\), we refer to this method as a steady-state approximation. The LS method can also be extended to dynamic patterns, e.g. when having two samples of the time evolution of the system, say \(u_{1}\) and \(u_{2}\) at \(t_{1}\) and \(t_{2}\), respectively. In this case, we can no longer say that \(u_{t}=0\), but instead approximate this partial derivative using finite differences so that we obtain \(u_{t}\approx\frac{u_{2}-u_{1}}{t_{2}-t_{1}}\) (see Discussion).
Figure 1: **Turing patterns and methods for the inverse problem.****(A)** Network of canonical Turing pattern with a short-range activator and a long-range inhibitor. **(B)** Turing patterns from different models. The top-left pattern showing spots is produced with the Schnakenberg model, the top-right pattern showing labyrinths with FitzHugh-Nagumo and the bottom row, also showing spots and labyrinths, with the Brusselator and two different parameter sets. **(C)** Methodology of the paper. Starting from a Turing pattern, we build a PDE loss which is minimized with respect to the parameters \(\beta\). This is done using two different methods, LS and RBF-PINNs.
To apply the LS method, we first select a model and a set of parameters, \(\beta_{orig}\), that can produce a Turing pattern. Then, using the model and \(\beta_{orig}\) we numerically solve the PDEs to produce a pattern similar to the ones shown in Fig. 1B. Once we have the patterns, we minimize \(L(\mathbf{\beta})\). This method produces very accurate results without noise in the pattern, with an average relative error in parameters of the order of \(O(10^{-15})\) or lower, which can be considered a numerical artifact given its closeness to machine precision. When we add noise to the pattern before applying LS, the 'true' minimum shifts from \(\beta_{orig}\), producing an error which is no longer a numerical artifact (Fig. 2A). The way we incorporate noise is by adding a matrix of normally distributed random variables with zero mean and varying standard deviation \(\sigma\) to the concentration matrices \(\mathbf{u}\) and \(\mathbf{v}\). Since different \(\mathbf{u}\) and \(\mathbf{v}\) will have different ranges, the standard deviation of the noise must be relative to the range (maximum minus minimum value in the pattern) of each concentration matrix. To achieve this we employ what we call 'relative noise'. If we let \(R_{p}(\mathbf{u})\) be the range of the concentration matrix \(\mathbf{u}\), so that
\[R_{p}(\mathbf{u})=\max_{1\leq i,j\leq N}\{u_{ij}\}-\min_{1\leq i,j\leq N}\{u_ {ij}\}\]
we can define an \(s\%\) relative noise to correspond to a standard deviation \(\sigma=\frac{s}{100}R_{p}\). Once we add noise to the pattern, we can use these 'noisy patterns' as input for our methods, and by changing the level of noise we can systematically investigate the difference in performance and accuracy. This will yield a
Figure 2: **Least squares for parameter inference.****(A)** In LS, noise changes the loss function that we are minimizing and the optimal value shifts, leading to some error in the inferred parameters. **(B)** Resulting patterns obtained from LS with different noise levels. After corrupting the original pattern with relative noise of different levels, parameters are obtained using LS and the model is solved with the inferred parameters to obtain the new patterns shown. The upper row corresponds to the FitzHugh-Nagumo model, the lower row to the Schnakenberg model. Enlarged patterns on the left are the original ones. It can be observed that at \(1\%\) noise, the predicted patterns are noticeably different from the original ones across the models, and that we can obtain several patterns at the same level of noise. **(C)** Radially averaged power spectra (RAPS) obtained for different patterns recovered at different noise levels (shown in (B)) from the Schnakenberg model. Red corresponds to \(1\%\) noise, green to \(0.6\%\), orange to \(0.3\%\) and blue is the original pattern. Inset shows the same plot but the \(y\) axis in log-scale. **(D)** Scatter plot of RAPS differences for different relative noise levels with noticeable sudden increases due to discrete changes in the predicted Turing patterns. **(E)** Average relative error of inferred parameters for different relative noise levels in the Schnakenberg and FitzHugh-Nagumo models, with a sketch of the parameter space explaining the difference in spread. Green point represents original set of parameters values and orange points are the different optimal sets resulting from the minimization. The bigger standard deviation in relative error occurs when the optimal sets are close to the original so the scale of the error changes drastically (can be very close or far), while the smaller standard deviation occurs when the optimal points are further away and the error is always on a similar scale.
different parameter set for each noisy pattern, which can be used to solve the system and obtain a new pattern. The new patterns obtained and the respective level of noise from which their parameters were inferred are shown in Fig. 2B. Before delving into our results, we remark that the figures only show the obtained results for the Schnakenberg and FitzHugh-Nagumo models. All the corresponding figures for the Brusselator model can be found in Supplementary Information section 3 and will be referenced when necessary.
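As a concrete illustration, the relative-noise corruption defined above can be sketched in a few lines of Python/NumPy (our own illustrative code; variable names are not from the original implementation):

```python
import numpy as np

def add_relative_noise(pattern, s, rng=np.random.default_rng()):
    """Add zero-mean Gaussian noise whose standard deviation is s% of the
    pattern range R_p = max(pattern) - min(pattern)."""
    R_p = pattern.max() - pattern.min()
    sigma = (s / 100.0) * R_p
    return pattern + rng.normal(0.0, sigma, size=pattern.shape)

# e.g. corrupt both concentration fields with 1% relative noise:
# u_noisy = add_relative_noise(u, 1.0); v_noisy = add_relative_noise(v, 1.0)
```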
We considered several measures to assess the accuracy of this method. First, we looked at the average relative error in the parameters, which gives us a measure of how close the newly obtained parameters are to the original ones. A drawback of this measure is that, depending on the model, very distinct parameters can give us the same pattern, or conversely similar parameter values can yield very different patterns. Hence, a better measure for parameter accuracy would be one that quantifies how similar the pattern produced with an inferred parameter vector is to the original pattern. We found that measures like the mean squared error (MSE) are not very useful, since we do not expect the patterns to be exactly the same. This is because our main objective is to be able to recover the parameters of the model, but the final pattern is not determined solely by the parameters, but also by the initial conditions; the parameters and model determine the type of Turing pattern and its wavelength, but the position and shape are determined by the initial conditions. Hence, we do not put emphasis on obtaining the exact positions and shapes. In order to focus on the type of pattern and wavelength, we instead compare the patterns in the frequency domain. Specifically, we use the Fourier power spectrum and take the radial average to obtain a one-dimensional profile which should have a main dominant frequency for each of our patterns. This is called radially averaged power spectrum (RAPS). Example RAPS curves for different profiles are shown in Fig. 2C. Then, we can compute this RAPS for each pattern and use the MSE between these two profiles as a similarity measure. We will refer to this measure as RAPS difference or RAPSD for short.
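A minimal sketch of how RAPS and RAPSD can be computed (our own Python/NumPy code; the radial binning is an illustrative choice) is:

```python
import numpy as np

def raps(pattern):
    """Radially averaged power spectrum of a square 2D pattern."""
    N = pattern.shape[0]
    power = np.abs(np.fft.fftshift(np.fft.fft2(pattern)))**2
    ky, kx = np.indices((N, N)) - N // 2
    k = np.rint(np.sqrt(kx**2 + ky**2)).astype(int)
    radial_mean = np.bincount(k.ravel(), weights=power.ravel()) / np.bincount(k.ravel())
    return radial_mean[: N // 2]          # 1D profile up to the Nyquist frequency

def rapsd(pattern_a, pattern_b):
    """RAPS difference: mean squared error between the two radial profiles."""
    return np.mean((raps(pattern_a) - raps(pattern_b))**2)
```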
Using these two measures, we can analyze how LS performs with different levels of noise. As can be seen in Figs. 2B and 2E for both the FitzHugh-Nagumo and Schnakenberg models and in Figs. S1A and S2A, at relative noise levels below around 0.5% we obtain very similar patterns, even when the corresponding relative error in parameters is around 0.1%. For larger levels of noise, both examples in Fig. 2B begin to fail, resulting in parameters that do not produce patterns. In Fig. 2E we can see that the standard error in parameters (the shaded region encompassing the solid lines) behaves very differently for the two
Figure 3: **Effect of the number of pixels in the least squares method.****(A)** Pattern from the FitzHugh-Nagumo model with three different cropped regions given by small squares of different size. It can be seen that a region of \(3\times 3\) pixels is sufficient to recover accurate enough parameters such that the predicted and original patterns (right and left respectively) are indistinguishable. **(B)** Schematic of choosing \(N\) randomly selected points (black) on the Turing pattern. **(C)** Effect of increasing the number of randomly selected pixels on the average relative error in the inferred parameters with (orange) and without (blue) added noise to the original pattern for the FitzHugh-Nagumo model. We used \(N\) in the range 5 to 2000 (\(50\times 50=2500\) being the maximum possible) and sampled 10 different sets of pixels for each \(N\), and we measured the relative error for each of the inferred parameters. There is almost no effect without noise, but with noise there is a steady reduction in the relative error. Also shown is the slope of the line of best fit to the data (orange).
models, and its width seems to reduce at different values of relative error, so it is not caused by the scale of the error. We found that the point where this error is reduced is when the LS results are no longer centered around the original set of parameters. To visualize this, it is helpful to note that the LS estimator is dependent on noise. Hence, different realizations of the corrupted pattern at the same level of noise will produce different LS estimators, which will produce different patterns, as can be seen with 1% noise in Fig. 2B. We can either obtain a pattern or a constant solution, showing how the obtained parameters are at the boundary of the Turing region. These different estimators can be thought of as a set of points in parameter space. Subsequently, the reduction in standard error occurs as the set of points moves further away from the original set of parameters, as described in the sketch in Fig. 2E. Note that, since we are using LS, it could be argued that instead of computing the relative error in \(\hat{\beta}\) numerically, we could use the statistical properties of the LS estimator and obtain an expression for its variance. Indeed, the relative error we are computing is proportional to the standard deviation of \(\hat{\beta}\). Hence, if we had an expression for the variance, then the errors would be easily computed. However, this is not an easy task since the usual LS assumptions do not hold in our case (noise is not additive and does not have a constant variance, and the design matrix is noisy). Applying the statistical properties of the LS estimator results in errors that do not match our data, so we decided to numerically estimate the relative error.
In Figs. 2D and S2B we notice discrete jumps when comparing RAPS profiles, which we can match to behaviors in the patterns. By comparing with different noise levels in Fig. 2B, we found that the last jump separates inferred parameters that produce a pattern and inferred parameters that do not. The other jumps are less clear and have to do with the patterns changing wavelength and scale. For example, for the Schnakenberg model we can notice a slow linear trend, which shows that the range of the patterns is slowly changing. This is followed by a jump around 0.4%, which is the point where the spots have grown too much and the pattern changes to fewer dots (see difference in spot number between 0.3% and 0.6% noise). For the FitzHugh-Nagumo model, we notice more jumps and it is less clear what these represent. We saw that the general behavior is conserved for different patterns with the same parameters, but the positions of the jumps seem to depend on the initial conditions. Note that the main advantage of this measure is that it allows us to see when changes in the pattern occur. Nevertheless, this measure is computationally more expensive than computing the relative error in parameters, since we need to numerically solve for the pattern. Hence, we still use the relative error in parameters.
Our results show that the LS estimators are not robust to the levels of noise we aim for, but in the absence of noise the method works well with little data. As can be seen in Fig. 3A, even if we crop the original pattern and use a \(4\times 4\) pixel region, we still obtain the same patterns. A natural question is to find the minimum number of pixels. Given the simplicity of the method, to answer this question we need only consider the formulation of LS. A full explanation is provided in the Methods section but we give a brief explanation here: given the design matrix \(X\), which is determined by the model, we need \(X^{T}X\) to be invertible. This condition will be satisfied when \(X\) has enough rows (a pixel corresponds to two rows) so that it is a full rank matrix, making the minimum number of pixels model dependent. For the Schnakenberg and Brusselator models, 2 pixels are enough, whereas 3 pixels were required for the FitzHugh-Nagumo model.
Next, we investigated how much the addition of new pixels changes the relative error in the parameters. To do this, we sampled \(N\) pixels from the pattern using a two dimensional uniform distribution (Fig. 3B), which we used to infer a new set of parameters. In Fig. 3C we show a scatter plot of the mean relative errors for each \(N\). We observed that the effect of the number of pixels on the relative error is different depending on whether the original pattern was corrupted with noise or not. Without noise (blue) we hardly see any improvement, but with noise (orange) there is a more drastic improvement in the parameters, which is expected as the noise has less effect the larger the sample. As the number of randomly selected pixels is increased, the relative error approaches that of no added noise with a power law, as can be seen in the log-log plot in the upper right corner of Fig. 3C. The exponent of this power law is close to \(-\frac{1}{2}\), which agrees with previous results on the LS convergence with \(N\) [28]. For an intuitive mathematical explanation of this convergence see Supplementary Information Section 4. We remark that the fluctuations in relative error without noise (or for large \(N\) with noise) are numerical instabilities, since they are also observed if we use all the data and merely change the order of the input. Similarly to this, we also investigated this effect in the RAPSD for the Schnakenberg model in Fig. S3. We saw that the observed behavior was dependent on the initial level of noise chosen: for a noise around 0.03%, we observe a similar power law as with the relative error, with a slope again close to \(-\frac{1}{2}\). For a noise of 0.25%, we observed the same jumps as can be seen in Fig. 2D, with a convergence towards the same value shown in the figure for this level of noise. (For more discussion see Supplementary Information Section 5.)
We also attempted to apply LS to a design matrix defined by a model not used to generate the pattern. For example, we produce a pattern with the Brusselator model but we define the LS minimization using the Schnakenberg model. This is what we called 'mixing' the models, and the goal is to check if the models are flexible enough to produce the same pattern from one another. We found that this did not work even between models capable of producing the same type of pattern, and instead produced parameters that gave no Turing patterns at all (see the Discussion section). This result reinforces the idea that a model is like a key, and when we have the key, the pattern is simply a barcode which is easy to read and gives us the model parameters. In summary, the LS method works well without noise in the data, as it returns the exact parameters (with some numerical error), even when we have very little data (2-3 pixels depending on the model).
### Physics-informed neural networks
Physics-informed neural networks (PINNs) are a neural network architecture in which the network serves as a function approximator with two loss functions: the first compares the output of the network to the numerical simulation of the PDE (the pattern in our case) and makes sure the network outputs the correct values; the second tries to make the output of the network a solution of the PDE (the physical law) by both optimizing the network and optimizing the parameters of the PDE (the estimation) [25]. Using these two losses, the network approximates the solution as well as learns the optimal set of parameters for which the Turing pattern is a steady-state solution. This approach is computationally very efficient, since it only adds a few PDE parameters to a network which already trains thousands. Hence, the extra computational cost is minimal, leading to excellent scaling with the number of parameters in the PDE. A peculiarity of this method is that if we have no noise in our input patterns, we can let the approximation overfit the data. This is because the approximation will become better and this will improve the parameter optimization as well.
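As an illustration of how the two losses can be combined in practice, the following PyTorch-style sketch (our own code, not the authors' implementation) uses the steady-state residuals of the non-dimensional Schnakenberg model (Eq. 5); `net` stands for any network mapping coordinates \((x,y)\) to the two concentrations, such as the RBF network described below, and the initial parameter values are arbitrary:

```python
import torch

# PDE parameters of Eq. (5), trainable alongside the network weights
c1 = torch.nn.Parameter(torch.tensor(0.1))
c2 = torch.nn.Parameter(torch.tensor(1.0))
c3 = torch.nn.Parameter(torch.tensor(0.9))
d  = torch.nn.Parameter(torch.tensor(10.0))

def pinn_loss(net, xy, u_data, v_data):
    """Data loss (match the noisy pattern) + PDE loss (steady-state residuals)."""
    xy = xy.clone().requires_grad_(True)
    u, v = net(xy).unbind(dim=-1)

    def laplacian(f):
        grad = torch.autograd.grad(f.sum(), xy, create_graph=True)[0]
        lap = 0.0
        for i in range(2):          # second derivatives in x and y
            lap = lap + torch.autograd.grad(grad[:, i].sum(), xy,
                                            create_graph=True)[0][:, i]
        return lap

    data_loss = ((u - u_data)**2 + (v - v_data)**2).mean()
    res_u = laplacian(u) + c1 - u + c2 * u**2 * v
    res_v = d * laplacian(v) + c3 - c2 * u**2 * v
    pde_loss = (res_u**2 + res_v**2).mean()
    return data_loss + pde_loss

# During training, [c1, c2, c3, d] are passed to the optimizer together with
# net.parameters(), so the PDE parameters are learned jointly with the fit.
```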
As function approximation we used a radial basis function neural network (RBFNN), since it is
Figure 4: **Physics-informed neural networks for parameter inference.****(A)** Architecture of RBF-PINNs, where the input is space coordinates \((x,y)\) and the output is the pattern. Input is shown in green, variables which are trained in blue and input to the losses in yellow. Red arrows denote usage in the loss and yellow arrows backpropagation. From the network the partial derivatives can be efficiently computed using automatic differentiation and used in the PDE loss, where the PDE parameters are also network parameters. **(B)** Illustration of the three parameters of the Gaussian kernel and their interpretation. **(C)** Results from RBF-PINNs with different levels of added noise. After adding noise to the Turing pattern, the network is used to obtain a parameter set, which is subsequently used to predict the pattern. Patterns to the left are the original ones. **(D)** Relative error in the parameters and the RAPS difference for the parameters and patterns used for (C).
especially suitable for data with regularities such as evenly spaced peaks and valleys like our Turing patterns. This network only has one hidden layer (aside from input and output) defining the kernel. We can think of each kernel as adding a (unnormalized) distribution (Gaussian in our case) at a given location with a given variance and weight. These three are the only types of parameters of this network. Hence, the number of nodes is the number of kernels that we have, and during training these will change their location and variance, shrinking or growing to approximate the pattern. A representation of the three parameters and their effects is shown in Fig. 4B.
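A minimal sketch of such a Gaussian-kernel hidden layer (PyTorch; the number of kernels and the initialization are illustrative assumptions, not the values used in the paper) is given below; an instance of this module can play the role of `net` in the loss sketch above.

```python
import torch

class GaussianRBF(torch.nn.Module):
    """One hidden layer of Gaussian kernels with trainable centers, widths and weights."""
    def __init__(self, n_kernels=400, domain=1.0):
        super().__init__()
        self.centers = torch.nn.Parameter(domain * torch.rand(n_kernels, 2))
        self.log_sigma = torch.nn.Parameter(torch.full((n_kernels,), -2.0))
        self.weights = torch.nn.Parameter(0.1 * torch.randn(n_kernels, 2))  # two outputs: u and v

    def forward(self, xy):                                   # xy: (M, 2) coordinates
        dist2 = ((xy[:, None, :] - self.centers[None, :, :])**2).sum(-1)   # (M, K)
        phi = torch.exp(-dist2 / (2.0 * torch.exp(self.log_sigma)**2))
        return phi @ self.weights                            # (M, 2) approximated pattern
```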
As with our previous method, without added noise the network can approximate the pattern up to an arbitrary accuracy [24]. This means that we can reach the same levels of accuracy as with the other method, but with a caveat: we need to train the network for a long time to reach this level of accuracy and we would need a large number of nodes in the hidden layer. This is the reason behind the large error in Fig. 4D without added noise, since we use the same amount of nodes for all noise levels. Note the performance can easily be improved without noise, but as our main focus is robustness to noise, we did not tune any parameters to any specific level of noise, except for the number of nodes and the variance of the kernels, and we use exactly the same network. Once we apply noise, our results with this technique show an improved robustness when compared to LS, as can be seen in Fig. 4C. For both the FitzHugh-Nagumo (top) and Schnakenberg (bottom), we obtain very similar patterns up to a noise level of 12-16%. For the Brusselator model, shown in Fig. S1B, we can see similar results. Comparison with Fig. 4D and Fig. S2C and D shows that the relative error in parameters is higher in the Schnakenberg than in the FitzHugh-Nagumo model, but this error is much lower in the Brusselator model, indicating that this model is more robust to noise. Nevertheless, from Fig. 4C we can see that there is not a direct relationship between relative error and similarity between the patterns (RAPSD), since for low noise levels (0-4%) the relative error in the Schnakenberg model is higher and the RAPSD is lower than the FitzHugh-Nagumo model, but for higher levels the RAPSD of the Schnakenberg model becomes higher. For convergence plots of some of the model parameters and losses of the network see Fig. S4 in Supplementary Information Section 6.
By looking at the RAPS profiles we find the same discrete jumps that we saw with the LS method. In particular, a well pronounced final jump is observed, which we argued before indicates the transition from pattern to no pattern. By comparing Figs. 2D and 4D, RBF-PINNs with 0-4% noise result in RAPS values corresponding to around 0.3-0.5% for LS, while the points with 8-12% noise correspond to the ones around 0.6-0.9%. This means that RBF-PINNs can infer parameters and predict patterns with around 10-20 times more noise than LS for the Schnakenberg and FitzHugh-Nagumo models. In the case of the Brusselator this number is even higher, as can be seen by comparing Fig. S2B and D, with RBF-PINNs being able to predict patterns with up to 40 times higher noise levels.
### Application to data from chemical patterns
Having demonstrated how inference by PINNs shows strong robustness to noise, the next step is to apply this method to experimental data. We chose chemical Turing patterns as an example because patterns are more stable and robust than biological patterns, and corresponding models are well established. Specifically, we consider here the chlorine dioxide-iodine-malonic acid (CDIMA) system used to study the impact of 2D growth on patterns formation [8]. This reaction shows a photosensitivity that was utilized to produce a time series of snapshots of the pattern at different times in a radially growing domain using a mask. Since our method focuses on the steady state pattern, we discarded all the time points except for the last one, focusing on the stable central region away from the boundary.
The experiments were modeled using a Lengyel-Epstein two-variable model [29], modified to incorporate the effects of illumination [30]. Since we are only interested in the pattern, which is shown in the dark region, we will use the original model without illumination. We found that this model, although already non-dimensional, was problematic for parameter inference. This was because in one of the equations all terms have a parameter, which can lead to a trivial solution where all parameters are zero. To solve this, we derived a new non-dimensionalization:
\[u_{t}=d\Delta u+c_{1}-u-c_{2}\frac{4uv}{1+u^{2}}\qquad v_{t}=\Delta v+u-c_{2} \frac{4uv}{1+u^{2}}, \tag{9}\]
which we used to fit the data with our RBF-PINN. Before stating our results, there are a few complications worth mentioning. First and foremost, the scale of the patterns is unknown, since the original data is an image with pixel values ranging from 23 to 255. Secondly, we have two concentrations in the model (\(u\) and \(v\)) exhibiting a pattern, but only one experimental pattern. In order to solve both these problems, we define a free-scale variable \(W\), which is a rescaled version of the original pattern in the
range \([0,1]\). In order to obtain the model concentrations, we assume that there is a linear map from \(W\) to \(u\) and \(v\), which we can write as:
\[u=W\kappa_{u}+\gamma_{u}\qquad v=W\kappa_{v}+\gamma_{v}, \tag{10}\]
where \(\kappa_{x}\) and \(\gamma_{x}\) for \(x=u,v\) are scale and shift parameters respectively. The assumption of the existence of this linear map can be justified by the fact that patterns usually are either in phase (positive \(\kappa\)) or out of phase (negative \(\kappa\)).
As before, our network will consist of two independent subnetworks which will approximate \(W\) individually. From these two approximations we will recover \(u\) and \(v\) using the scaling in Eq. 10, so we will call these approximations \(W_{u}\) and \(W_{v}\), respectively. The reason for using two approximations for the same variable is that usually patterns are not perfectly in phase or out of phase. Hence, by separating these into two variables we allow for flexibility in the method to make adjustments to each pattern individually. Another difference with the network used previously is that the scaling parameters are added as part of our PDE loss, and hence are optimized with the rest of the parameters, as portrayed in Fig. 5B.
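A minimal sketch of how the scaling of Eq. 10 can be made trainable inside the PDE loss (our own illustrative code; initial values are arbitrary) is:

```python
import torch

# trainable scale and shift parameters of Eq. (10); initial values are arbitrary
kappa_u = torch.nn.Parameter(torch.tensor(1.0))
gamma_u = torch.nn.Parameter(torch.tensor(0.0))
kappa_v = torch.nn.Parameter(torch.tensor(-1.0))   # negative kappa: out-of-phase pattern
gamma_v = torch.nn.Parameter(torch.tensor(1.0))

def rescale(W_u, W_v):
    """Map the free-scale approximations W_u, W_v to model concentrations u, v."""
    return kappa_u * W_u + gamma_u, kappa_v * W_v + gamma_v

# u, v then enter the steady-state residuals of Eq. (9), so the scaling
# parameters are optimized together with the kinetic parameters c1, c2, d.
```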
We first tried our inference method using numerical 'data' from a simulation of the pattern. This gave us two patterns, but we only used one of them to make this more similar to the experimental case. As a proof of concept, we started by fixing the scaling parameters to the best combination possible for both \(u\) and \(v\), and obtaining the rest. The predicted pattern is shown in the first column of Fig. 5C (bottom), together with the original one (top), and we can see a good correspondence between them. Specifically, both show labyrinths on a similar scale. We obtained a RAPSD of 0.2941 for the two numerical patterns, which is close to the value obtained for the FitzHugh-Nagumo model in Fig. 4D below 16% of noise.
Encouraged by the numerical result, we tried our approach on the experimental pattern. We initialized
Figure 5: **Application to chemical patterns.****(A)** Explanation of scaling procedure, showing numerical and experimental patterns and how the scaling to the free-scale variable and the rescaling using the shift and scale parameters are performed. **(B)** Architecture of RBF-PINNs for the experimental case, with the division into the \(u\) and \(v\) approximation, the rescaling and the different losses. **(C)** Results from RBF-PINNs to the numerical pattern (first column) and the experimental pattern (second column). The top images show the original patterns, while the bottom images show the predicted patterns using the inferred parameters and our numerical solver. **(D)** Time evolution of simulated experimental pattern showing how labyrinths are present at the initial time points and plot of convergence of scaling parameters.
all parameters close to the values from the numerical pattern and also used the same spatial dimensions. The results are shown in the second column of Fig. 5C. We can see that the predicted pattern (bottom) is not particularly close to the experimentally observed one (top). The obtained pattern also seems to show spots instead of the labyrinths seen in the experimental one. Nevertheless, when inspecting the time evolution of this pattern in Fig. 5D, we can observe that at the beginning the spots are connected and later on they move apart. By looking at the experimental videos, we can actually see this happening too: at the beginning we can only observe labyrinths but later on spots begin to appear. This seems to point to the fact that the experimental pattern has not completely converged and that if left to evolve for longer, it would end up in a steady state with only spots. Furthermore, in Fig. 5D (bottom) we see the convergence of the scaling parameters for \(u\) (solid lines) along with the values from the original numerical pattern (dotted lines). We can notice that they match quite well, with a slight mismatch of the shift parameter which is probably due to feedback in the training with other parameters. We could also infer that the chemical pattern that was produced experimentally was most likely an activator, \(I^{-}\), which we later confirmed [31]. This is because depending on whether the initial concentration belongs to \(u\) or \(v\) in the model, one non-dimensionalization will work better than the other; in our case only \(u\) worked. The network training is further discussed in the Supplementary Information section 7.
In summary, our method can be applied to experimental data with only minor manual intervention. Furthermore, we were able to obtain insight into the chemical pattern, e.g. its closeness to a spotty steady state, from the parameters we obtained with the method. Our method also correctly identified the pattern with the activator chemical species.
## Discussion
We investigated two methods to solve the inverse problem in Turing patterns. The first was based on the least squares (LS) method, and was found to work very accurately, with relative errors under \(10^{-15}\), when no noise is added to the patterns. We also proved that it works even with very small quantities of data: 2-3 pixels from the discrete patterns were enough, and that this depends on the model equations. We also proposed another method based on neural networks, that we called RBF-PINNs since it is a mixture of PINNs and RBF neural networks. Without noise, this method can be compared with LS, since the universal approximation theorem assures that we are able to approximate any function up to any given accuracy [24]. The main advantage of this method comes when we consider a noisy pattern, since in this case our method was capable of recovering similar parameters to the original ones with up to 12-16% relative noise. This allowed us to use this method for experimental chemical patterns, and obtain a set of parameters that gave us new insight about the possible future behavior of the patterns obtained experimentally.
Each of these methods has its strengths and weaknesses. The main disadvantage of the LS method is its sensitivity to noise, which is usually present in most biological systems and data. This was remedied by our RBF-PINN method at the expense of a computational cost several orders of magnitude larger than LS, taking nearly an hour to train on a HP Z8 G4 Workstation. Interestingly, the inferred parameters without noise are not as accurate as LS, which can be observed by comparing the relative errors in Figs. 2D and 4D. This performance can be further optimized by using more nodes in the network. Also the robustness to noise can be improved. Both our approaches used a square grid of \(50\times 50\) pixels, but as we increase the number of pixels, the performance of both methods in the presence of noise becomes more robust. We can also run the optimization of the loss functions for longer to obtain a better convergence, or we can alter the architecture to make it more specific to our patterns. In this paper, we use exactly the same network architecture for all patterns, only changing the variance of the starting RBF kernels since the patterns have a different wavelength and scale, and the number of nodes. However, we could get more accurate results if we were to increase the number of nodes in the network, although we would have to be careful about overfitting to noisy data.
When comparing with other attempts at solving the inverse problem for Turing patterns, we should first make the distinction between approaches that treated noise and those which did not. For the approaches that did not consider noise [18], the simple LS performs better than this Bayesian-based approach since it does not need any training and it is more generalizable, since we only need knowledge of one pattern and not a library of patterns. Compared to the ones that did consider noise [21, 19], the main difference is that we do not require knowledge of the initial conditions or of transient data (dynamics). These are assumptions that would not hold for biological data, which usually is scarce and with high levels of noise, so our approach is to the best of our knowledge the first one that tackles all of these issues (small data and noise) at the same time. Recently, the inverse problem was addressed
with neural networks but with some limitations: in [32] PINNs were applied to Turing patterns but without considering noise and in [33] an architecture based on recurrent neural networks was developed but without testing it on actual data.
Despite the successes of our methods to deal with small data and noise, there are some limitations worth mentioning. First, we did not use domain growth for our work, which would be more biologically relevant. This could be implemented by considering more traditional PINNs that take time into account. We did not focus on time series data, although our LS approach easily extends to this (see Supplementary Information section 9), and the more traditional PINN structures are very adequate for data changing in time. Our approach to incorporate the scaling in the chemical patterns, although effective for this case, was found to be very sensitive, since parameters could interfere with other parameters during training. A more sophisticated encoding of the scaling used could potentially provide a better distinction in the parameters and yield better and more robust results. Looking at possible biological applications, a problem would be the simplicity of the models, which are just cartoon representations of the actual biological networks as encountered e.g. in developmental biology. To inform biological experiments, future models need to be complex enough to have meaningful connections to experimental observables, while being simple enough to avoid the 'curse of dimensionality'. At the same time, experiments should show a good connection with the simulations in the models, as otherwise the approaches shown in this work would not be useful. As a remark, we only focused on finding the parameters, but it would also be interesting to extend this to model selection. Existing methods include graph networks [34], embedding theory and diffusion maps [35, 36].
Our approaches open up new ways of connecting mathematical models to experimental patterns. In particular, RBF-PINNs worked for noise levels comparable to biological data, and proved to be useful at elucidating properties of chemical patterns. Hence, this method could potentially be applied to infer parameters for biological candidate models, "proving" that a model is capable of reproducing observed patterns. This type of model selection could ultimately aid the rational design of synthetic tissues with patterns for downstream templating and added functionality [37, 38]. In conclusion, we hope our machine learning approaches to solving inverse problems stimulate new research into unraveling pattern formation in biology, chemistry, and bioengineering.
## Methods
All algorithms and methods were run in Python on an HP Z8 G4 workstation with an Intel(R) Xeon(R) Gold 6128 CPU @ 3.40GHz \(\times\) 24 and a Quadro RTX 6000/PCIe/SSE2 GPU. Packages and environment requirements are described in the Supplementary Information section 8.
### Numerical simulations
All numerical simulations of the models were performed on a \(50\times 50\) discrete grid by applying a central difference approximation to the second-order derivatives in the Laplacian, hence transforming the PDE Eq. 1 into an ODE model:
\[\begin{split}\frac{du_{i,j}}{dt}=& D_{u}\frac{u_{i,j-1}+u_{i,j+1}+u_{i-1,j}+u_{i+1,j}-4u_{i,j}}{(\Delta x)^{2}}+f(u_{i,j},v_{i,j})\\ \frac{dv_{i,j}}{dt}=& D_{v}\frac{v_{i,j-1}+v_{i,j+1}+v_{i-1,j}+v_{i+1,j}-4v_{i,j}}{(\Delta x)^{2}}+g(u_{i,j},v_{i,j})\end{split} \tag{11}\]
Here, we assumed that the step in the \(x\) direction is the same as the step in the \(y\) direction, i.e. \(\Delta x=\Delta y\). With this approximation, we use a numerical solver to forward integrate these equations for a sufficiently long time to ensure that the system has converged to a steady state, which is the Turing pattern.
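As a concrete illustration, the following minimal Python sketch forward-integrates the discretized system in Eq. 11 using the Schnakenberg kinetics implied by Eq. 12 (with \(D_{u}=1\)); the parameter values, the zero-flux boundary handling and the integration time are illustrative assumptions rather than the exact settings used in this work.

```python
# Minimal sketch of the forward integration described above, assuming
# Schnakenberg-type kinetics; parameter values are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

N, dx = 50, 1.0
c1, c2, c3, d = 0.1, 1.0, 0.9, 40.0  # hypothetical parameter values

def laplacian(z):
    # five-point stencil with zero-flux (edge-replicated) boundaries
    zp = np.pad(z, 1, mode="edge")
    return (zp[:-2, 1:-1] + zp[2:, 1:-1] + zp[1:-1, :-2] + zp[1:-1, 2:] - 4 * z) / dx**2

def rhs(t, y):
    u, v = y[:N * N].reshape(N, N), y[N * N:].reshape(N, N)
    f = c1 - u + c2 * u**2 * v       # Schnakenberg kinetics (inferred from Eq. 12)
    g = c3 - c2 * u**2 * v
    du = laplacian(u) + f
    dv = d * laplacian(v) + g
    return np.concatenate([du.ravel(), dv.ravel()])

y0 = 1.0 + 0.01 * np.random.rand(2 * N * N)        # small random perturbation
sol = solve_ivp(rhs, (0.0, 1000.0), y0, method="LSODA")
u_pattern = sol.y[:N * N, -1].reshape(N, N)        # converged Turing pattern
```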
### Least squares
As described in the section on LS, in order to write down the solution we first need to write our problem in matrix form. We will do this here with the Schnakenberg model as an example. Using the same notation as before, we let \(u_{ij}\) and \(v_{ij}\) be our discretized Turing patterns in matrix form, for \(i,j=1,2...,N\), with \(N\) being the number of columns and rows (as we have a square grid). Writing Eq. 5 using this notation and at steady state we obtain:
\[u_{ij}-\Delta u_{ij}=c_{1}+c_{2}u_{ij}^{2}v_{ij}\qquad 0=d\Delta v_{ij}+c_{3}-c_{2}u_{ij}^{2}v_{ij}, \tag{12}\]
where we collected the terms not multiplied by a parameter to the left. This is a total of \(2\times N^{2}\) equations, which we can write in matrix form as:
\[\left(\begin{array}{c}u_{11}-\Delta u_{11}\\ u_{12}-\Delta u_{12}\\ \vdots\\ u_{NN}-\Delta u_{NN}\\ 0\\ \vdots\\ 0\end{array}\right)=\begin{bmatrix}1&u_{11}^{2}v_{11}&0&0\\ 1&u_{12}^{2}v_{12}&0&0\\ \vdots&\vdots&\vdots&\vdots\\ 1&u_{NN}^{2}v_{NN}&0&0\\ 0&-u_{11}^{2}v_{11}&1&\Delta v_{11}\\ 0&-u_{12}^{2}v_{12}&1&\Delta v_{12}\\ \vdots&\vdots&\vdots&\vdots\\ 0&-u_{NN}^{2}v_{NN}&1&\Delta v_{NN}\end{bmatrix}\left(\begin{array}{c}c_{1} \\ c_{2}\\ c_{3}\\ d\end{array}\right)\]
We will write this equation as \(y=X\beta\), and will call \(y\) the vector of dependent variables and \(X\) the matrix of independent variables. It is also customary when performing regression to call \(y\) the vector of outputs and \(X\) the design matrix. This is the starting form of the LS formulation [22]. By defining the error vector and minimizing its squared norm, the solution for the optimal parameters can be written as:
\[\beta=(X^{T}X)^{-1}X^{T}y\]
Note that for this approach to work we need \(X^{T}X\) to be invertible, which is equivalent to \(X\) having full rank. Based on this necessary condition, we can obtain the minimum number of pixels needed for the solution to be well defined. In the case of the Schnakenberg model, we can see that two pixels are enough, since they give us a \(4\times 4\) matrix of the form:
\[M=\begin{bmatrix}1&a&0&0\\ 1&b&0&0\\ 0&-a&1&c\\ 0&-b&1&d\end{bmatrix}\]
which for most \(a,b,c,d\) will be full rank. There are cases (e.g. \(c=d\) or \(a=b\)) where \(M\) will not be full rank, but for randomly selected pixels in the pattern these cases are unlikely. Likewise, we can easily see that this condition gives two pixels for the Brusselator model (since we have two independent \(2\times 2\) submatrices) and three for the FitzHugh-Nagumo model (since we have a \(3\times 3\) submatrix).
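To make this construction concrete, the sketch below assembles the output vector \(y\) and design matrix \(X\) for the Schnakenberg model exactly as in the matrix equation above and solves \(\beta=(X^{T}X)^{-1}X^{T}y\) with a standard least-squares routine; the Laplacian stencil and boundary handling are illustrative assumptions and need not match our implementation.

```python
# Sketch of the least-squares recovery for the Schnakenberg model (Eq. 12),
# assuming `u` and `v` are N x N steady-state patterns (e.g. from the
# simulation sketch above) and a five-point Laplacian with unit grid spacing.
import numpy as np

def laplacian(z, dx=1.0):
    zp = np.pad(z, 1, mode="edge")
    return (zp[:-2, 1:-1] + zp[2:, 1:-1] + zp[1:-1, :-2] + zp[1:-1, 2:] - 4 * z) / dx**2

def fit_schnakenberg(u, v):
    lu, lv = laplacian(u).ravel(), laplacian(v).ravel()
    u, v = u.ravel(), v.ravel()
    n = u.size
    # outputs: [u - lap(u); 0]; design-matrix columns correspond to (c1, c2, c3, d)
    y = np.concatenate([u - lu, np.zeros(n)])
    X = np.zeros((2 * n, 4))
    X[:n, 0] = 1.0
    X[:n, 1] = u**2 * v
    X[n:, 1] = -(u**2 * v)
    X[n:, 2] = 1.0
    X[n:, 3] = lv
    # beta = (X^T X)^{-1} X^T y, computed via a numerically stable solver
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return dict(zip(["c1", "c2", "c3", "d"], beta))
```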
### Neural networks
For the neural network, as described in the Results section, we combined PINNs (i.e. using the PDE equations to improve the approximation and treating the parameters as trainable variables in the network) with RBF neural networks to obtain what we call RBF-PINNs. The architecture is very simple, since the network consists of only three layers. The input of the network is a vector representing a spatial location, in our case \((x,y)\), since we only have two dimensions, and the output is \(u(x,y)\). We have one network each for \(u\) and \(v\), to allow for more flexibility in the model, but we train them together. Between the input and the output layer we have a single hidden layer, with an activation layer representing the RBF kernels; the input is fed directly to this activation layer by setting all incoming weights to 1, which means that the number of trainable parameters is of the order of \(3N\) if we have \(N\) nodes. These RBF kernels can be written as:
\[\phi_{i}(\mathbf{x})=w_{i}e^{-\beta_{i}||\mathbf{x}-\mathbf{c}_{i}||^{2}}\quad \text{for}\quad i=1,\,2,...\,M \tag{13}\]
where \(M\) is the total number of nodes in the network. As explained previously and portrayed in Fig. 4, \(\beta_{i}\) controls the variance, \(c_{i}\) is the location of the \(i^{th}\) kernel and \(w_{i}\) its weight or importance. Each node in the network represents a different kernel, so the only way to change the architecture of the network to improve the accuracy of the approximation is to increase the number of kernels by increasing the number of nodes. The kernel parameters were initialized randomly: \(\mathbf{c}\) was set to a 2D uniform random variable over the spatial range of the pattern, and the network weights were initialized using Glorot initialization. The variance was initialized depending on the pattern, since the variance of the Gaussian kernels has to be on a similar scale as the pattern wavelength.
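The following numpy sketch illustrates the forward pass of Eq. 13, i.e. evaluating \(\Phi^{u}(x,y)\) as a weighted sum of Gaussian kernels; the number of kernels, the domain size, the kernel widths and the weight initialization are simplified, illustrative choices (e.g. small random weights instead of Glorot initialization).

```python
# Minimal numpy sketch of the RBF approximation in Eq. 13: the network output
# at a location (x, y) is a weighted sum of Gaussian kernels.
import numpy as np

rng = np.random.default_rng(0)
M, L = 100, 50.0                               # number of kernels, domain size (assumed)
centers = rng.uniform(0.0, L, size=(M, 2))     # c_i ~ uniform over the domain
betas = np.full(M, 1.0 / 25.0)                 # widths on the scale of the pattern wavelength
weights = 0.1 * rng.standard_normal(M)         # simplified stand-in for Glorot initialization

def rbf_net(xy):
    """Evaluate Phi(x, y) = sum_i w_i * exp(-beta_i * ||x - c_i||^2) for a batch of points."""
    d2 = ((xy[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # squared distances to centers
    return (weights * np.exp(-betas * d2)).sum(-1)

grid = np.stack(np.meshgrid(np.arange(50), np.arange(50)), -1).reshape(-1, 2).astype(float)
phi_u = rbf_net(grid).reshape(50, 50)          # network output over the 50 x 50 grid
```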
Our network has two main losses. One is an approximation loss, which compares the output of the network to the pattern. If we consider our network as a function, we can write it as \(\Phi^{u}(\mathbf{X})\), where \(\mathbf{X}\) denotes the spatial locations at which we have the pattern values \(\mathbf{u}\). From now on we will simply write this as \(\Phi^{u}\), and similarly for \(\mathbf{v}\). Subsequently, we can write the approximation loss for \(\mathbf{u}\) as:
\[L_{app,u}=\frac{1}{N^{2}}\sum_{i,j}||\Phi^{u}_{i,j}-\mathbf{u}_{i,j}||^{2} \tag{14}\]
Note that this is equivalent to computing the mean squared error (MSE) between the data and the approximation. We mentioned the RAPS loss before, and it could be used here for the network training as well, but since we care about the pattern being accurate, the MSE works fine. Nevertheless, using the RAPS loss could be a possible improvement, since it should suppress the noise to some extent.
The second loss is the PDE loss, which enforces that the output has a small PDE residual and at the same time improves the parameters so that the data gives a better fit to the equation:
\[L_{PDE}=\frac{1}{N^{2}}\sum_{i,j}||\beta_{1}\Delta\Phi^{u}_{i,j}+f(\Phi^{u}_{i,j},\Phi^{v}_{i,j},\mathbf{\beta})||^{2}+||\Delta\Phi^{v}_{i,j}+g(\Phi^{u}_{i,j}, \Phi^{v}_{i,j},\mathbf{\beta})||^{2} \tag{15}\]
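A minimal sketch of how the two losses can be evaluated on the grid is given below, written out for the Schnakenberg case with the diffusion and kinetic parameters made explicit; in Eq. 15 these are packed into the trainable vector \(\mathbf{\beta}\), and the Laplacian here is taken by finite differences for simplicity (the derivatives of the RBF approximation can also be computed analytically from the Gaussian kernels).

```python
# Sketch of the two training losses (Eqs. 14-15) evaluated on the 50 x 50 grid,
# using finite differences for the Laplacian of the network output and the
# Schnakenberg reaction terms; the parameter packing into beta is an assumption.
import numpy as np

def losses(phi_u, phi_v, u_data, v_data, c1, c2, c3, d, dx=1.0):
    # approximation loss: mean squared error between network output and data
    L_app = np.mean((phi_u - u_data) ** 2) + np.mean((phi_v - v_data) ** 2)

    def lap(z):
        zp = np.pad(z, 1, mode="edge")
        return (zp[:-2, 1:-1] + zp[2:, 1:-1] + zp[1:-1, :-2] + zp[1:-1, 2:] - 4 * z) / dx**2

    f = c1 - phi_u + c2 * phi_u**2 * phi_v
    g = c3 - c2 * phi_u**2 * phi_v
    ru = (lap(phi_u) + f)[1:-1, 1:-1]            # PDE residuals on interior points only
    rv = (d * lap(phi_v) + g)[1:-1, 1:-1]
    L_pde = np.mean(ru**2) + np.mean(rv**2)
    return L_app, L_pde
```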
We also have a loss for the diffusion term, to make sure that the second derivatives of the approximation are accurate. The final structure used for the results in the paper has 60-120 nodes depending on the model. For training, we use batches of 128 elements for 200,000 iterations with the Adam optimizer, taking a total of less than an hour to run. The first 10,000-20,000 iterations are used to approximate the function, updating the weights using only \(L_{app}\); after this we switch to minimizing both losses, taking care to weight each loss so that the approximation is not completely changed by the addition of the new terms. When training the approximation, we use information from the whole pattern, but when we switch on \(L_{PDE}\) we only use interior points, so that inaccuracies in the diffusion term at the boundary do not present a problem.
The structure of our network is relatively simple and the RBF network guarantees a smooth output. However, to avoid overfitting we can use a small number of nodes or kernels. This is because overfitting is only a problem for the approximation part of our network (that is where our data is used). Furthermore, our patterns have a larger wavelength than the noise. By selecting a variance or amplitude parameter of the same order as our pattern, we keep the network from incorporating most of the noise. On top of that, we redefine the weights of the network every 2,000 iterations so that all of them are on a similar scale. This is useful since it is possible that some weights converge faster than others, and so would eventually have a very small loss if the original weights were used.
## Acknowledgments
We thank Roozbeh H. Pazuki for invaluable support and technical comments, Martina Oliver Huidobro for stimulating discussions and Milos Dolnik for explanations of his experiments. This research was funded through a studentship from the Department of Life Sciences at Imperial College London.
## Declaration of interests
The authors declare no competing interests.
## Author contributions
A.M.G. and R.G.E. designed, and A.M.G. implemented and performed the theoretical computational approach and data analysis. Both authors wrote the paper.
|
2301.13670 | What Makes Good Examples for Visual In-Context Learning? | Large-scale models trained on broad data have recently become the mainstream
architecture in computer vision due to their strong generalization performance.
In this paper, the main focus is on an emergent ability in large vision models,
known as in-context learning, which allows inference on unseen tasks by
conditioning on in-context examples (a.k.a.~prompt) without updating the model
parameters. This concept has been well-known in natural language processing but
has only been studied very recently for large vision models. We for the first
time provide a comprehensive investigation on the impact of in-context examples
in computer vision, and find that the performance is highly sensitive to the
choice of in-context examples. To overcome the problem, we propose a prompt
retrieval framework to automate the selection of in-context examples.
Specifically, we present (1) an unsupervised prompt retrieval method based on
nearest example search using an off-the-shelf model, and (2) a supervised
prompt retrieval method, which trains a neural network to choose examples that
directly maximize in-context learning performance. The results demonstrate that
our methods can bring non-trivial improvements to visual in-context learning in
comparison to the commonly-used random selection. | Yuanhan Zhang, Kaiyang Zhou, Ziwei Liu | 2023-01-31T14:40:05Z | http://arxiv.org/abs/2301.13670v2 | # What Makes Good Examples for Visual In-Context Learning?
###### Abstract
Large-scale models trained on broad data have recently become the mainstream architecture in computer vision due to their strong generalization performance. In this paper, the main focus is on an emergent ability in large vision models, known as in-context learning, which allows inference on unseen tasks by conditioning on in-context examples (a.k.a. prompt) without updating the model parameters. This concept has been well-known in natural language processing but has only been studied very recently for large vision models. We for the first time provide a comprehensive investigation on the impact of in-context examples in computer vision, and find that the performance is highly sensitive to the choice of in-context examples. To overcome the problem, we propose a prompt retrieval framework to automate the selection of in-context examples. Specifically, we present (1) an unsupervised prompt retrieval method based on nearest example search using an off-the-shelf model, and (2) a supervised prompt retrieval method, which trains a neural network to choose examples that directly maximize in-context learning performance. The results demonstrate that our methods can bring non-trivial improvements to visual in-context learning in comparison to the commonly-used random selection. The code and models are available at [https://github.com/ZhangYuanhan-AI/visual_prompt_retrieval](https://github.com/ZhangYuanhan-AI/visual_prompt_retrieval).
## 1 Introduction
In recent years, large-scale models have emerged in computer vision: they have enormous parameter size and are pre-trained on broad data to gain wide-ranging knowledge. These models have demonstrated remarkable generalization performance and have great potential for numerous downstream applications (Bommasani et al., 2021). However, due to the large model size and the potentially proprietary data used for training, entities able to develop large-scale models typically only provide users with APIs, known as Model-as-a-Service (MaaS). Representative examples include the prominent text-to-image generation models, DALL-E (Ramesh et al., 2021) and Imagen (Saharia et al., 2022), and OpenAI's powerful language models like GPT-3/ChatGPT (Radford et al., 2021). As a result, users are unable to apply full fine-tuning or some parameter-efficient tuning techniques, such as prompt learning (Li and Liang, 2021; Lester et al., 2021; Zhou et al., 2022c;b; Zhang et al., 2022; Pan et al., 2022), for model adaptation, largely limiting downstream performance.
_In-context learning_, which is a "hidden" capability originally found in large autoregressive language models (Radford et al., 2021), has recently been investigated for large vision models (Bar et al., 2022), and more importantly, has the potential to become the mainstream approach for MaaS applications in the near future. Without the need to update any parameter for previously unseen tasks, in-context learning simply prepends some domain-specific input-output pairs, called in-context examples or prompt,1 to a test example, which together guide the model to produce an ideal result. For instance, in natural language processing one could prepend a French-English sentence pair to a French sentence, and the model would produce an English translation of the French sentence. In computer vision, Bar et al. (2022) pre-trained a neural network to fill missing patches in grid-like images, which allows the model to perform in-context learning for unseen tasks like image segmentation (see the grid images in Fig. 1(a) bottom).
Footnote 1: These two terms are used interchangeably in this paper.
In this work, we focus on _visual in-context learning_, a relatively new concept with little existing research regarding how to better apply it in practice. We for the first time conduct a comprehensive investigation on the impact of in-context examples for large vision models, and identify a critical issue: downstream performance is highly sensitive to the choice of in-context examples. This is evidenced by the large variances observed for a variety of test examples shown in Fig. 1(a) top. By visualizing the results in Fig. 1(a) bottom, it seems to suggest that the closer the in-context
example to the query, the better the result. For example, the best prompt image is closer to the query as they are similar in object pose and background; on the other hand, the worst prompt image has a drastically different style than the query image, which might explain why the predicted mask focuses on the wrong region, i.e., the white pillar instead of the cat.
Clearly, designing a proper prompt containing the optimal in-context example(s) by hand would be extremely difficult. To overcome the problem, we propose a prompt retrieval framework where the core component is a score function, which aims to give each source instance a score to indicate the level of suitability for being included in the prompt. Once the scoring process is done, we can simply pick one or multiple examples with the highest score(s) to construct a prompt. An overview of our framework is depicted in Fig. 1(b).
We provide two implementations for the prompt retrieval framework, both interpreting the score as the cosine distance measuring similarity between a query and a source example. The first is an unsupervised method based on nearest example search using an off-the-shelf model. The second is a supervised method, which learns a neural network to choose examples that directly maximize in-context learning performance. Since there is no ground-truth score to be used as the supervisory signal, we resort to a contrastive learning paradigm: source examples that result in better (or worse) in-context learning performance should get closer (or farther) to the query in feature space.
Our contributions and the main findings are summarized as follows. (1) We present the first comprehensive study concerning how to select good examples for the emerging visual in-context learning, and reveal a critical issue that the choice of in-context examples has a huge impact on performance. (2) From the technical perspective, we present a prompt retrieval framework that can automate the prompt selection process, and provide two simple implementations: an unsupervised method and a supervised method. (3) By conducting extensive experiments on three visual in-context learning tasks (which have not been seen during pre-training), namely foreground segmentation, single object detection and image colorization, we share valuable insights with the community on how to find good visual in-context examples, e.g., the supervised method performs the best and often finds examples that are both semantically close and spatially similar to a query.
## 2 Methods
### Visual In-Context Learning
In-context learning is a new paradigm that originally emerged from large autoregressive language models pre-trained on broad data, such as GPT-3 (Brown et al., 2020). Unlike traditional learning methods, in-context learning
Figure 1: (a) Different choices of in-context examples (outlined in green) often lead to significantly different results. Here we show 30 random query images (x-axis) from Pascal-\(5^{i}\)(Shaban et al., 2017) split 0, and measure the performance range using 50 different in-context examples. (b) We propose a prompt retrieval framework aiming to automate the selection of in-context examples. We provide two implementations of the idea: one is unsupervised while the other is supervised, both outperforming random selection by a clear margin.
does not require any parameter update and instead conditions prediction on some in-context examples in the form of input-output pairs. For example, in natural language processing one might give a French-English sentence pair and a test French sentence as input to the model, which then produces the English version of the sentence. In computer vision, such a paradigm has only been studied very recently. For example, Bar et al. (2022) trained a neural network to fill missing patches in grid-like images, which in turn allows the model to perform in-context learning on unseen tasks.
Formally, given a dataset \(\mathcal{D}=\{(x_{n},y_{n})\}_{n=1}^{N}\) containing \(N\) image-label pairs (e.g., an image and its segmentation mask), a query example \(x_{q}\), and a model \(g_{\tau}\), in-context learning can be formulated as:
\[y_{q}=g_{\tau}(\mathcal{P},x_{q}), \tag{1}\]
where \(\mathcal{P}\) is called a prompt, which consists of \(K\) input-output pairs, \(\mathcal{P}=\{x_{c_{1}},y_{c_{1}},...,x_{c_{K}},y_{c_{K}}\}\subset\mathcal{D}\). In particular, the prompt \(\mathcal{P}\) provides some _context_ for guiding the model to produce the ideal \(y_{q}\) for \(x_{q}\) without updating the large model's parameters \(\tau\).
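Schematically, for a single in-context pair the prediction in Eq. 1 can be obtained by stitching the prompt and the query into a grid and letting the inpainting model fill the missing quadrant. The sketch below assumes a \(2\times 2\) grid layout and a hypothetical `inpaint` function wrapping the pre-trained model \(g_{\tau}\); it is not the actual API of Bar et al. (2022).

```python
# Schematic sketch of Eq. 1 for K = 1: build a 2x2 canvas from the in-context
# pair and the query, then let a hypothetical `inpaint` callable fill the
# missing bottom-right quadrant.
import numpy as np

def in_context_predict(inpaint, x_c, y_c, x_q):
    """x_c, y_c: in-context input/output images; x_q: query image (H x W x 3)."""
    h, w, _ = x_q.shape
    canvas = np.zeros((2 * h, 2 * w, 3), dtype=x_q.dtype)
    canvas[:h, :w] = x_c          # top-left: example input
    canvas[:h, w:] = y_c          # top-right: example output
    canvas[h:, :w] = x_q          # bottom-left: query
    mask = np.zeros((2 * h, 2 * w), dtype=bool)
    mask[h:, w:] = True           # bottom-right quadrant is filled by the model
    completed = inpaint(canvas, mask)   # hypothetical wrapper around g_tau
    return completed[h:, w:]      # y_q: the model's prediction for the query
```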
**Problem.** The most common approach for designing the prompt \(\mathcal{P}\) in the vision domain is (within-class) _random selection_ proposed by Bar et al. (2022): one or multiple image-label pairs (with the same label as the test example) are randomly chosen from the training dataset. As illustrated in Fig. 1(a), the performance is highly sensitive to the selection of in-context examples--the gap between the best and worst prompt could reach over 70% mIoU. Below we propose two automatic prompt selection methods to tackle this problem.
### Prompt Retrieval
Our goal is to automatically select the most suitable example(s) from the training dataset for a query \(x_{q}\). To this end, we propose a prompt retrieval framework in the following form,
\[x^{*}=\arg\max_{x_{n}\in\mathcal{D}}f_{\theta}(x_{n},x_{q}), \tag{2}\]
where \(f_{\theta}\) is a function parameterized by \(\theta\), aiming to produce a score for a pair of \(x_{n}\) and \(x_{q}\). When \(K=1\), we choose the optimal example pair as the prompt, \(\mathcal{P}=\{x^{*},y^{*}\}\). When \(K>1\), we rank the training examples by their scores and choose the top-\(K\) example pairs. An overview of our methods is provided in Fig. 1(b).
In this work, we implement \(f_{\theta}\) as a combination of a neural network for feature extraction and the cosine distance function for measuring similarity between two feature vectors.
#### 2.2.1 Unsupervised Prompt Retrieval
Our first method is _unsupervised prompt retrieval_ where the key idea is to use an off-the-shelf feature extractor for extracting image features so that we can compare the cosine distance between the query \(x_{q}\) and each training example \(x_{n}\in\mathcal{D}\). In this case, the parameters \(\theta\) for the score function \(f_{\theta}\) correspond to the off-the-shelf feature extractor, which are kept fixed.
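A minimal sketch of this retrieval step is given below: assuming the query and source features have already been extracted with a frozen encoder (e.g. CLIP's vision encoder) and L2-normalized, the score in Eq. 2 reduces to a dot product and the top-\(K\) source examples form the prompt.

```python
# Sketch of the unsupervised retrieval step: rank source examples by cosine
# similarity to the query in a fixed feature space and return the top-K indices.
import numpy as np

def retrieve(query_feat, source_feats, k=1):
    """query_feat: (D,), source_feats: (N, D); both assumed L2-normalized."""
    scores = source_feats @ query_feat          # cosine similarity for unit vectors
    return np.argsort(-scores)[:k]              # indices of the top-K source examples

# usage sketch: prompt = [(x[i], y[i]) for i in retrieve(f_q, f_src, k=1)]
```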
#### 2.2.2 Supervised Prompt Retrieval
The unsupervised method discussed above is not explicitly optimized for in-context learning; instead, it relies on how the feature extractor was pre-trained and the objective (function) used in pre-training may well not align with that of in-context learning. We propose a second method based on
Figure 2: Overview of the supervised prompt retrieval method. The main idea is to compute the in-context learning result for each source example, and pick those with the highest/lowest results to form a positive/negative set for contrastive learning.
_supervised prompt retrieval_ where we assume the source data contains labels. The goal is to directly optimize the score function \(f_{\theta}\) such that the chosen in-context example(s) can maximize the log-likelihood,
\[\max_{\mathcal{P}}\quad\log p(y_{q}|\mathcal{P},x_{q}). \tag{3}\]
In this work, we present a simple implementation for the supervised method, which simply turns the unsupervised method into a supervised one by making the feature extractor learnable. In other words, we directly optimize Eq. 3 with respect to the feature extractor. Below we explain in detail how we train the feature extractor (see Fig. 2 for an overview).
**Data.** Recall that we interpret the score \(f_{\theta}(\cdot,\cdot)\) as the cosine distance between two images in feature space. We would like to learn a space such that an image pair \((x_{n},x_{q})\) with high in-context learning performance is close to each other, or far away from each other if the performance is low. Since there is no label defining how close a distance should be, we resort to contrastive learning for training the feature extractor. The goal is then to find a positive and a negative set for each training example \(x_{n}\in\mathcal{D}\) treated as a query. Specifically, for each example \(x_{n}\) we compute the prediction \(\hat{y}_{n}=g_{\tau}((x_{m},y_{m}),x_{n})\) where \(g_{\tau}\) is the large vision model defined in Sec. 2.1 and \(x_{m}\in\mathcal{D}\) but \(x_{m}\neq x_{n}\). Since we have the ground truth \(y_{n}\) for \(x_{n}\), we can measure the performance by comparing the prediction \(\hat{y}_{n}\) with the ground truth \(y_{n}\). Then, for each \(x_{n}\) we choose the top-5 examples with the highest/lowest performance to form a positive/negative set.
**Training.** Let \(z_{n}\) denote the features of \(x_{n}\) extracted by the neural network we aim to optimize. At each iteration, we sample a mini-batch \(\mathcal{B}\) from the training dataset. Then, for each example in \(\mathcal{B}\), we sample one example from the top-5 positive and negative sets, respectively. The contrastive loss is computed as
\[\ell=-\frac{1}{|\mathcal{B}|}\sum_{x_{n}\sim\mathcal{B}}\log\frac{e^{cos(z_{n},z_{n}^{+})}}{e^{cos(z_{n},z_{n}^{+})}+\sum\limits_{z_{n}^{-}\in\mathcal{N}}e ^{cos(z_{n},z_{n}^{-})}}, \tag{4}\]
where \(cos(\cdot,\cdot)\) is the cosine distance function, \(z_{n}^{+}\) denotes the feature representation of a positive example, and \(z_{n}^{-}\) denotes the feature representation of a negative example. It is worth noting that for mini-batch training, the negative set \(\mathcal{N}\) contains a negative example of \(x_{n}\) sampled from the top-5 negative set and other examples within the same mini-batch.
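The following PyTorch-style sketch illustrates one way to compute this contrastive loss for a mini-batch; the sampling of positives and hard negatives from the top-5 sets is assumed to happen outside the function, and details such as normalization and batching are simplified relative to the actual training code.

```python
# Sketch of the contrastive loss in Eq. 4: each anchor is pulled towards its
# sampled positive and pushed away from its sampled hard negative and from the
# other anchors in the mini-batch.
import torch
import torch.nn.functional as F

def contrastive_loss(z, z_pos, z_neg):
    """z, z_pos, z_neg: (B, D) features of anchors, sampled positives and
    sampled hard negatives; other anchors in the batch also act as negatives."""
    z, z_pos, z_neg = (F.normalize(t, dim=-1) for t in (z, z_pos, z_neg))
    pos = (z * z_pos).sum(-1, keepdim=True)           # (B, 1) cosine to the positive
    hard = (z * z_neg).sum(-1, keepdim=True)          # (B, 1) cosine to the hard negative
    batch = z @ z.t()                                 # (B, B) in-batch similarities
    batch = batch[~torch.eye(len(z), dtype=torch.bool)].view(len(z), -1)  # drop self terms
    logits = torch.cat([pos, hard, batch], dim=1)     # the positive sits in column 0
    labels = torch.zeros(len(z), dtype=torch.long, device=z.device)
    return F.cross_entropy(logits, labels)            # softmax form of Eq. 4
```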
## 3 Experiments
In this section we conduct a comprehensive evaluation using different prompt selection methods (Sec. 3.1) and compare their robustness to distribution shifts (Sec. 3.2). We also provide extensive quantitative and qualitative analyses in Sec. 3.3 to help understand why our methods work and how to better apply them in practice. Source code will be released to the community for reproducing the full experiments.
**Methods.** All experiments are based on the image inpainting model pre-trained by Bar et al. (2022) on a dataset consisting of academic figures.2 We mainly compare the following methods: (1) _Random_, the baseline method that randomly samples in-context examples from the source training dataset; (2) _Unsupervised prompt retrieval (UnsupPR)_, our first proposed method that uses off-the-shelf features for nearest example search. The main experiments are based on CLIP's vision encoder (Radford et al., 2021), which was pre-trained using multimodal contrastive learning; (3) _Supervised prompt retrieval (SupPR)_, our second proposed method that fine-tunes CLIP's vision encoder by directly optimizing in-context learning performance on downstream datasets. A variety of backbones are evaluated in Sec. 3.3.
Footnote 2: [https://github.com/amirbar/visual_prompting](https://github.com/amirbar/visual_prompting)
**Training details for the supervised model.** The supervised model is trained for 200 epochs using SGD. The initial learning rate is set to 0.005, decayed by the cosine annealing rule.
### Main Results
**Setup.** Following Bar et al. (2022), we evaluate our methods on three computer vision tasks, which have not been seen during the training of the image inpainting model. We provide the details about the datasets used for these tasks as follows. (1) _Foreground segmentation_: We use Pascal-\(5^{i}\)(Shaban et al., 2017), which has four non-overlapping splits each containing five categories. The results are averaged over all splits. (2) _Single object detection_: The experiments are done on Pascal VOC (Everingham et al., 2015). (3) _Colorization_: We use ImageNet-2012 (Russakovsky et al., 2015), where the original validation set containing 50,000 images is used as our test set. The training data used to learn our supervised prompt retrieval model is created by randomly sampling 50,000 images from ImageNet's 1.2M training set. For all experiments, in-context examples come from the training set.
**Results.** Table 1 shows the results on the three benchmarks covering foreground segmentation, single object detection, and colorization. We summarize our findings as follows. _First, prompt retrieval clearly outperforms random selection._ In particular, the improvements of prompt retrieval over random selection are significant in foreground segmentation and single object detection: more than 6% on the former
and 1% on the latter. However, the gains on colorization are only marginal (0.63 vs. 0.67), suggesting that the image inpainting model is probably weak at image colorization. _Second, the supervised prompt retrieval method performs the best_. This is not surprising as the supervised method optimizes in-context learning performance concerning the prompt selection module. In contrast, the unsupervised method relies more on the off-the-shelf feature extractor. Overall, the results well justify the design of the prompt retrieval framework, which can serve as a strong baseline for future research.
### Experiments on Distribution Shifts
**Setup.** Distribution shifts are commonly seen in real-world applications, and therefore AI models need to be robust to distribution shifts (Zhou et al., 2022). To test this ability in visual in-context learning, we create a new protocol focusing on foreground segmentation where the source dataset is Pascal while the target dataset is MSCOCO (Lin et al., 2014). Specifically, we follow the design of Pascal-\(5^{i}\) and create MSCOCO-\(5^{i}\), which also has four splits, each having the same set of categories as in the corresponding split in Pascal-\(5^{i}\). Note that such a shift mainly affects the supervised prompt retrieval method that requires training but not the unsupervised UnsupPR and Random.
**Results.** The results are shown in Table 2. First of all, the unsupervised prompt retrieval method beats the random selection method by a clear margin. By comparing the two prompt retrieval methods, we find that the supervised method again performs better than the unsupervised one despite being a learning-based approach--this is an exciting finding as it means the supervised method does not have the overfitting problem here. Nonetheless, we observe that the gains achieved by the prompt retrieval methods here are generally smaller than the gains achieved on the standard foreground segmentation benchmark: here SupPR is only around 3% better on average than Random (19.95% vs. 16.78%) while the improvement in Table 1 reaches 8% (35.56% vs. 27.56%). One potential solution to reduce the gap might be to improve the image inpainting model, which is beyond the scope of this paper.
### Further Analysis
**What are good in-context examples?** To answer this question, we visualize the in-context examples found by UnsupPR and SupPR in Fig. 3. We focus on foreground segmentation and choose two categories from Pascal (person and cow).3 In each grid, the first row corresponds to the retrieved in-context example (i.e., an input-output pair) while the second row contains the query and model prediction. By comparing the in-context examples picked by UnsupPR and those picked by SupPR, we find the reason why SupPR performs better than UnsupPR: the examples found by SupPR are more similar to the queries in terms of semantics (e.g., Fig. 3(e)), background (e.g., Fig. 3(a)), object pose (e.g., Fig. 3(b)), object appearance (e.g., Fig. 3(i)), viewpoint (e.g.,
\begin{table}
\begin{tabular}{c|c c c c c|c} \hline \hline & \multicolumn{5}{c|}{**Seg. (mIoU) \(\uparrow\)**} & \multicolumn{1}{c|}{} & \multicolumn{1}{c}{} \\ & Split-0 & Split-1 & Split-2 & Split-3 & Avg & **Det. (mIoU) \(\uparrow\)** & **Color. (mse) \(\downarrow\)** \\ \hline Random & 28.66 & 30.21 & 27.81 & 23.55 & 27.56 & 25.45 & 0.67 \\ UnsupPR & 34.75 & 35.92 & 32.41 & 31.16 & 33.56 & 26.84 & **0.63** \\ SupPR & **37.08** & **38.43** & **34.40** & **32.32** & **35.56** & **28.22** & **0.63** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Main results. The two prompt retrieval methods outperform random selection, and the supervised method achieves the best performance.
\begin{table}
\begin{tabular}{c|c|c c c c} \hline \hline & \multicolumn{5}{c}{**Seg. (mIoU) \(\uparrow\)**} \\ & Split-0 & Split-1 & Split-2 & Split-3 & Avg \\ \hline Random & 12.17 & 18.47 & 20.55 & 15.94 & 16.78 \\ UnsupPR & 12.67 & 19.62 & 21.33 & 18.44 & 18.02 \\ SupPR & **13.62** & **21.25** & **24.46** & **20.44** & **19.95** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results on distribution shifts (from Pascal to MSCOCO). Despite being a learning-based approach, SupPR shows stronger robustness than UnsupPR and Random, which do not require any training.
\begin{table}
\begin{tabular}{c|c|c c c c c} \hline \hline & \multicolumn{5}{c}{**Seg. (mIoU) \(\uparrow\)**} \\ & & Split-0 & Split-1 & Split-2 & Split-3 & Avg \\ \hline \multirow{3}{*}{UnsupPR} & CLIP & 34.75 & 35.92 & **32.41** & 31.16 & 33.56 \\ & EVA & 34.75 & 36.09 & 32.11 & **31.61** & 33.64 \\ & ViT & **35.10** & **37.37** & 32.05 & 30.80 & **33.83** \\ \hline \multirow{3}{*}{SupPR} & CLIP & **37.08** & 38.43 & 34.40 & 32.32 & 35.56 \\ & EVA & 36.11 & 39.14 & 34.31 & **33.30** & 35.71 \\ \cline{1-1} & ViT & 36.80 & **39.70** & **34.71** & 33.25 & **36.12** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison between different backbones pretrained using different methods: multimodal contrastive learning for CLIP, self-supervised learning for EVA, and supervised learning for ViT. Overall, the performance is insensitive to the choice of different backbones.
Fig. 3(k)), and so on. We also observe similar patterns in other categories/tasks (please refer to the supplementary).
**Backbone.** To understand if using a different backbone than CLIP would make a big difference, we further evaluate our prompt retrieval methods, UnsupPR and SupPR, on the foreground segmentation benchmark using two other backbones: EVA (Fang et al., 2022) pre-trained using self-supervised learning (i.e., masked image modeling) and ViT (Dosovitskiy et al., 2020) pre-trained using supervised learning. The results are reported in Table 3. Although these three backbones perform differently on image recognition under the fine-tuning setting--EVA performed the best--the gap between them for both UnsupPR and SupPR is less than 1%. Therefore, we can conclude that the backbone for visual in-context learning does not matter much.
**Size of retrieval set.** Recall that in-context examples are sampled from the training dataset, namely the retrieval set. We are interested to know whether the size has any impact on performance, especially for the supervised prompt retrieval method. To this end, we build seven subsets for each split in Pascal-\(5^{i}\), which cover a wide range of sizes (see the x-axis in Fig. 4 left). The results are plotted in Fig. 4 left. For random selection, the size does not matter at all. In contrast, the two prompt retrieval methods clearly benefit from a bigger size. But their performance plateaus when the size reaches a certain level. It is worth noting that for the supervised method, 20% of the total data is sufficient for achieving a decent performance.
Figure 3: In-context examples retrieved by UnsupPR and SupPR. In each grid, the first row contains the prompt while the second row contains the query and prediction. The in-context examples found by SupPR are more similar than those found by UnsupPR to the queries in a number of ways: semantics (e.g., (e)), background (e.g., (a)), object pose (e.g., (b)), object appearance (e.g., (i)), viewpoint (e.g., (k)), etc. More examples can be found in the supplementary.
**Number of in-context examples.** We follow Bar et al. (2022) and create a grid large enough to fit at most 8 examples (as shown in Fig. 5 right). By varying the number of in-context examples from 1 to 7, we obtain a set of results and plot them in Fig. 5 left. Clearly, more in-context examples lead to better performance for all three methods, including SupPR, UnsupPR, and Random. This is probably because in-context examples can be viewed as "training data", and having more training data typically benefits performance--in visual in-context learning, more training data gives a more comprehensive "context." We show a few example cases in Fig. 5 right to explain this observation.
**Order of in-context examples.** To understand if changing the order of in-context examples makes a difference, we fix the number of in-context examples to 3, evaluate all possible combinations, and compute the mean and standard deviation. As shown in Table 4, the standard deviation is generally small, so the order is not a concern as long as good examples are chosen.
**Distance metric.** We use the cosine distance by default to compute the score function in Eq. 2. Here we evaluate other design choices including Euclidean distance and Manhattan distance. As shown in Fig. 4 right, the results are very similar for different distance metrics.
## 4 Related Work
### In-Context Learning
In-context learning is a novel paradigm that emerged in large language models, such as GPT-3 (Brown et al., 2020). It allows an autoregressive language model to perform inference on unseen tasks by conditioning the input on some target-specific input-output pairs serving as "context." Such a powerful paradigm allows users to customize a model's output according to their downstream datasets without changing the internal model parameters, which are often inaccessible. Recent research in natural language processing has shown that in-context learning can be applied to numerous language tasks, such as machine translation (Garcia and Firat, 2022), sentiment analysis (Min et al., 2021), and question answering (Press et al., 2022).
In computer vision, in-context learning is still a relatively new concept. One of the earliest works tackling in-context learning is Flamingo (Alayrac et al., 2022), a large visual language model taking language as instruction and allowing the processing of both images and videos. More relevant to our work is a pure vision model developed by Bar et al. (2022), which was pre-trained to fill missing patches in images made of academic figures and infographics. Bar et al. (2022) found that such an image inpainting model can solve problems unseen during training, like foreground segmentation and image colorization.
Our work follows Bar et al. (2022) but studies visual in-context learning from a different dimension: how to find
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline & \multicolumn{5}{c}{**Seg. (mIoU) \(\uparrow\)**} \\ & Split-0 & Split-1 & Split-2 & Split-3 & Avg \\ \hline Random & 17.93 \(\pm\) 0.20 & 25.48 \(\pm\) 0.27 & 21.34 \(\pm\) 0.73 & 21.12 \(\pm\) 0.53 & 21.46 \(\pm\) 0.43 \\ UnsupPR & 20.22 \(\pm\) 0.31 & 27.58 \(\pm\) 0.40 & 22.42 \(\pm\) 0.38 & 23.36 \(\pm\) 0.42 & 23.39 \(\pm\) 0.37 \\ SupPR & **20.74**\(\pm\) 0.40 & **28.19**\(\pm\) 0.37 & **23.09**\(\pm\) 0.34 & **24.22**\(\pm\) 0.48 & **24.06**\(\pm\) 0.40 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Impact of the order of in-context examples.
Figure 4: (Left) Impact of the size of retrieval set. (Right) Ablation study on distance metric used to compute the score function in Eq. 2. It can be observed that different metrics perform similarly.
good visual in-context examples that benefit downstream performance.
### Prompt Retrieval in NLP
The natural language processing community has found that the choice of in-context examples has a huge impact on performance (Agrawal et al., 2022; Liu et al., 2021). Moreover, the way in-context examples, also called prompts, are constructed can also affect performance, e.g., prompt length and the order of in-context examples, as reported in the literature (Agrawal et al., 2022). These findings prompted the community to study how to find good in-context examples for large language models, which has inspired our research. Liu et al. (2021) assumed that good in-context examples should be semantically close to query sentences, based on which they proposed to select nearest neighbors in the training set measured by a sentence encoder like RoBERTa (Liu et al., 2019). Rubin et al. (2021) first used an unsupervised method to retrieve some candidates, among which top examples were chosen using a supervised prompt retriever to maximize downstream performance.
## 5 Discussion and Conclusion
Our research presents a timely study on an emergent ability termed in-context learning for large vision models. We systematically investigate how the choice of in-context examples impacts downstream performance, exposing a critical issue that different in-context examples could lead to drastically different results. We then propose an effective prompt retrieval framework for visual in-context learning, with two simple implementations provided: one based on unsupervised learning and the other based on supervised learning. Our methods obtain significant improvements over random selection under various problem settings, showing the potential of using prompt retrieval in vision applications with a Model-as-a-Service (MaaS) business structure.
Our research also unveils some intriguing phenomena. For instance, we show that a good in-context example should be semantically similar to the query and close in context, _e.g._, viewpoint, background, and appearance. As such, state-of-the-art vision models like CLIP would not be sufficient because these models often emphasize semantics but not other elements critical to finding good visual in-context examples. A model that can better balance spatial and semantic closeness in feature space would be more suitable for visual in-context learning. We hope the insights presented in this work could pave the way for developing more effective prompt retrieval methods.
Our experiments show that our methods are not strong enough to cope with distribution shifts. Though our methods outperform random selection under distribution shifts, the gap is much smaller than that on a standard benchmark, suggesting huge room for improvement.
Figure 5: (Left) Impact of the number of in-context examples. (Right) More in-context examples can lead to better performance. The query in each grid is shown in the bottom right. |
2309.07027 | High performance Boson Sampling simulation via data-flow engines | In this work, we generalize the Balasubramanian-Bax-Franklin-Glynn (BB/FG)
permanent formula to account for row multiplicities during the permanent
evaluation and reduce the complexity of permanent evaluation in scenarios where
such multiplicities occur. This is achieved by incorporating n-ary Gray code
ordering of the addends during the evaluation. We implemented the designed
algorithm on FPGA-based data-flow engines and utilized the developed accessory
to speed up boson sampling simulations up to $40$ photons, by drawing samples
from a $60$ mode interferometer at an averaged rate of $\sim80$ seconds per
sample utilizing $4$ FPGA chips. We also show that the performance of our BS
simulator is in line with the theoretical estimation of Clifford \& Clifford
\cite{clifford2020faster} providing a way to define a single parameter to
characterize the performance of the BS simulator in a portable way. The
developed design can be used to simulate both ideal and lossy boson sampling
experiments. | Gregory Morse, Tomasz Rybotycki, Ágoston Kaposi, Zoltán Kolarovszki, Uroš Stojčić, Tamás Kozsik, Oskar Mencer, Michał Oszmaniec, Zoltán Zimborás, Péter Rakyta | 2023-09-13T15:32:07Z | http://arxiv.org/abs/2309.07027v2 | # High performance Boson Sampling simulation via data-flow engines
###### Abstract
In this work, we generalize the Balasubramanian-Bax-Franklin-Glynn (BB/FG) permanent formula to account for row multiplicities during the permanent evaluation and reduce the complexity of permanent evaluation in scenarios where such multiplicities occur. This is achieved by incorporating n-ary Gray code ordering of the addends during the evaluation. We implemented the designed algorithm on FPGA-based data-flow engines and utilized the developed accessory to speed up boson sampling simulations up to 40 photons, by drawing samples from a 60 mode interferometer at an averaged rate of \(\sim 80\) seconds per sample utilizing 4 FPGA chips. We also show that the performance of our BS simulator is in line with the theoretical estimation of Clifford & Clifford [1] providing a way to define a single parameter to characterize the performance of the BS simulator in a portable way. The developed design can be used to simulate both ideal and lossy boson sampling experiments.
## 1 Introduction
The main idea behind quantum supremacy [2, 3, 4] is to use a quantum processor to solve a mathematical problem that is intractable for classical computers, that is, solving the problem on classical hardware would require resources (such as the execution time) scaling exponentially with the problem size [5, 6, 7, 8, 9, 10, 11, 12]. Undoubtedly, it is fundamental for researchers to ascertain that the quantum device used in the experiment indeed solves its designated task in the expected way. Without such confirmation, one could not measure the strength of quantum supremacy claims. For this reason, the development of powerful validation protocols is of high importance.
One of the proposed quantum algorithms to surpass classical computing devices is the so-called boson sampling (BS) [5, 6], in which samples are drawn from the output distribution of indistinguishable bosons. In case the relation \(m\gg n^{2}\) between the number of output modes \(m\) and the number of input photons \(n\) holds, the total execution time of the simulation scales exponentially with \(n\), since there are no efficient classical methods to simulate the quantum correlations originating from the peculiar nature of multi-body bosonic systems consisting of indistinguishable particles. For this reason, BS has been the subject of numerous theoretical and experimental investigations. Due to the intensive interest in this topic, several flavors of BS have been formulated, such as the
Gaussian BS [13, 14, 15], scattershot BS [16, 17, 18], translating BS into the time domain [19, 20], or implementing BS with trapped-ion technology [21, 22], ultracold atoms [23] and microwave cavities [24]. From an experimental point of view, after the proof-of-principle realizations of small-scale photonic interferometers [25, 26, 27, 28], researchers worked to increase the number of modes on the photonic chip [29] and to multiplex on-chip single-photon sources [30, 31, 32]. Currently, the largest conventional BS experiment was reported in the work of Wang et al. [33] via a 60 mode interferometer with 20 input photons and 14 measured photons. In 2020, a quantum computational advantage was claimed via sampling from a Gaussian state at the output of a 100-mode ultralow-loss interferometer with threshold detection [34] and an average of around 45 photons. Subsequently, this was extended to 144 modes with (partially) programmable input states [35]. However, these experiments were shown to be vulnerable to spoofing by drawing samples from distributions governed by classical heuristics [36]. In 2022, a further important milestone was achieved on a fully programmable photonic chip by carrying out Gaussian BS on 216 squeezed modes using a time-multiplexed and photon-number-resolving architecture [37] that was resistant against the spoofing approach of [36]. Because of the dedicated race to experimentally demonstrate a proven quantum advantage, validation protocols have also increased in importance [38]. Shortly after the BS proposal, researchers tried to find ways to verify the results of the samplers. Initially, the validators used statistical tests to show that the samplers draw samples from a distribution different from classical counterparts [39], such as the mean-field approach or the uniform distribution. During recent years, we have seen the emergence of more sophisticated BS validation approaches [40, 41]. Some of these new ideas arise from techniques used in other branches of computer science, such as pattern recognition [42] or computer vision [43]. These validators require permanent computation or access to samples drawn from a bona fide boson sampler, which means they can benefit from a high-performance permanent computation technique.
From a mathematical point of view, the key element in the simulation of BS is the fast evaluation of the permanent function. The reason for this is the connection between the permanents of specific matrices and the probabilities of BS outcomes. Thus, BS simulation requires many permanent computations even with the most efficient known algorithm of Clifford & Clifford [44, 1]. The permanent of an \(n\times n\) matrix \(A\) is defined by the formula
\[\text{perm}(A)=\sum_{\sigma\in S_{n}}\prod_{i=1}^{n}a_{i,\sigma(i)}, \tag{1}\]
where \(S_{n}\) is the set of permutations of \(S=\{1,...,n\}\). The formula has a factorial time complexity of \(\mathcal{O}(n!n)\leq\mathcal{O}(n^{n+1})\). Valiant et al. [45] proved that the evaluation of the permanent of a matrix containing only zeros and ones belongs to the complexity class #P (sharp P)-complete, a class which is at least as hard as NP-complete and contains many counting problems, e.g. counting perfect matchings, graph colorings of size \(k\), satisfiable assignments to a general Boolean formula, etc. Unlike the determinant, which has a definition quite similar to Eq. (1), there is no known linear-algebra simplification to calculate the permanent in polynomial time. Currently, the most efficient approaches to calculate the permanent have a computational complexity of \(\mathcal{O}(n^{2}\cdot 2^{n})\), which can be further reduced to \(\mathcal{O}(n\cdot 2^{n})\) if data recycling of intermediate results is implemented via Gray code ordering. The \(\mathcal{O}(n^{2}\cdot 2^{n})\) variants of the so-called Ryser [46] and the Balasubramanian-Bax-Franklin-Glynn (BB/FG) [47] formulas were benchmarked on the Tianhe 2 supercomputer in Ref. [48], implying better numerical accuracy of the BB/FG method.
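For reference, Eq. (1) can be evaluated directly by enumerating all permutations, as in the short Python sketch below; this is practical only for very small matrices because of the factorial scaling.

```python
# Direct implementation of Eq. (1); useful only as a reference for small
# matrices, since its cost grows factorially with n.
from itertools import permutations
import numpy as np

def permanent_naive(A):
    n = A.shape[0]
    return sum(
        np.prod([A[i, sigma[i]] for i in range(n)])
        for sigma in permutations(range(n))
    )
```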
In this work, we designed a novel scalable implementation to calculate the permanent via the BB/FG formula. In order to account for a collision of photons on the optical modes of the interferometer we introduce an extended version of the BB/FG formula based on the concept of the n-ary Gray code ordering [49] to take into account data repetition during the permanent evaluation and improve the performance of the BS simulation. We also implemented the designed computational model on FPGA-based data-flow engines (DFEs) further improving the computational performance of the BS simulation provided by the Piquasso package [50]. The so-called data-flow programming model [51] utilizes a fundamentally different approach to increase the computational concurrency than traditional parallel programming models on GPU and CPU hardware. The basic concept of data-flow programming can be explained by thinking of a stream of computational data flowing through hardware. Each of the utilized resources performs a fixed logical transformation
on the elements of the stream, transforming a single data element in each clock cycle. By chaining up hardware resources we end up with a data-flow computing model: while one hardware element is transforming the \(i\)-th element in the data stream, the next hardware element in the chain is already working on the previously transformed, \(i-1\)-th element of the stream. By providing a long enough data stream to the hardware one can realize an efficient parallel execution, even if the computational tasks have a high degree of data dependency.
The high-level DFE development framework of Maxeler Technologies provides an efficient programming background to design data-flow applications in terms of high-level building blocks (such as support for fixed-point arithmetic operations with complex numbers, automatic pipeline scheduling, stream-holds, memory controllers, etc.) instead of the tedious work of programming low-level VHSIC Hardware Description Language (VHDL) components. By combining a novel permanent computation approach based on the BB/FG formula with the data-flow programming model, we developed a permanent computing DFE capable of supporting exact BS simulations (with and without photon losses) up to 40 photons. The computational complexity of the BS simulation can be significantly reduced if photon occupation multiplicities on the optical modes are taken into consideration. Our DFE permanent calculator is adapted to this generalization by streaming the addends in n-ary Gray code ordering directly on the FPGA chip. With this accessory, it took \(\sim 90\) seconds per sample to draw 40 photon samples from a random 60 mode interferometer via four concurrently operating DFEs.
The manuscript is organized as follows: in Sec. 2 we discuss the n-ary Gray code implementation to evaluate the permanent function with the BB/FG formula, accounting for photon occupation multiplicities on the photonic modes. Then, in Sec. 3 we describe the DFE implementation of the permanent calculation engine developed for FPGA chips. Finally, in Sec. 4 we provide the results of our numerical experiments on the classical simulation of BS incorporating DFE permanent calculation implementations.
## 2 Evaluation of the permanent function: a novel scalable approach
The two lowest complexity methods of computing the permanent involve the formulas of Ryser [46] and BB/FG [47] when using Gray code ordering of the addends. Ryser's approach computes the permanent as
\[\text{perm}(A)=(-1)^{n}\sum_{S\subseteq\{1,\ldots,n\}}(-1)^{|S|}\prod_{i=1}^{n }\sum_{j\in S}a_{i,j}. \tag{2}\]
The formula has an exponential computational complexity of \(\mathcal{O}(n^{2}\cdot 2^{n})\), which is significantly reduced when the subsets \(S\subseteq\{1,...,n\}\) are ordered such that only a single element changes between subsequent subsets \(S\). In this case, the value of the inner sum of the matrix elements can be reused, i.e. only a single matrix element needs to be added to or subtracted from the reused sum to obtain the new value corresponding to the next \(S\). By reducing the complexity of the inner sum from \(\mathcal{O}(n)\) to \(\mathcal{O}(1)\) this way, the Ryser formula can be evaluated with an overall \(\mathcal{O}(n\cdot 2^{n})\) complexity. By a later technique reported in Ref. [52], the complexity can be further reduced by a factor of 2 in the outer sum using the expression:
\[\text{perm}(A)=(-1)^{n-1}\cdot 2\cdot\sum_{S\subseteq\{1,\ldots,n-1\}}(-1)^{|S| }\prod_{i=1}^{n}\left(x_{i}+\sum_{j\in S}a_{i,j}\right), \tag{3}\]
where \(x_{i}\) is defined by \(x_{i}=a_{i,n}-(a_{i,1}+a_{i,2}+\cdots+a_{i,n})/2\). Another highly efficient method to calculate the permanent is provided by the BB/FG formula:
\[\text{perm}(A)=\frac{1}{2^{n-1}}\sum_{\delta}\left(\prod_{k=1}^{n}\delta_{k} \right)\prod_{j=1}^{n}\sum_{i=1}^{n}\delta_{i}a_{i,j}, \tag{4}\]
where \(\delta=(\delta_{1},\delta_{2},...,\delta_{n})\) with \(\delta_{1}=1\) and \(\delta_{i}=\pm 1\) for \(1\leq i\leq n\). Notice that, in contrast with the traditional definition of the BB/FG formula, in the inner sum of Eq. (4) we calculate the column sums of the input matrix instead of the row sums. This choice is motivated by a practical reason: regarding the unitary matrix describing the scattering in the optical interferometer, the rows of the unitary are associated with the output states, while the columns are related to the input modes. Non-trivial photon multiplicities on the output modes (expected to occur more often than multiplicities in the input modes) are described by repeated rows in the unitary. As we will show in Sec. 2.2, by accounting for these multiplicities one can significantly reduce the complexity of the permanent evaluation.
The computational complexity of the BB/FG formula is \(\mathcal{O}(n^{2}\cdot 2^{n-1})\), while it can be reduced to \(\mathcal{O}(n\cdot 2^{n-1})\) if Gray code ordering is applied in the evaluation of the outer sum. The Ryser and the BB/FG formulas follow quite different approaches to evaluate the permanent, also resulting in dissimilar numerical properties. In the benchmark comparison of the two methods reported in [48] the authors found that the BB/FG formula gives numerically more precise results than Ryser's formula in the context of bounded bit-sized data types. This finding was further justified by our numerical analysis reported in Sec.3.2 comparing the numerical accuracy of the individual algorithms with the numerical results obtained by multiple-precision floating-point number arithmetics provided by the GNU MPFR library [53].
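As an illustration, the BB/FG formula in Eq. (4) can be evaluated directly with \(\mathcal{O}(n^{2}\cdot 2^{n-1})\) operations as in the Python sketch below; our production code is written in C++, and this version only illustrates the structure of the sum.

```python
# Straightforward evaluation of the BB/FG formula in Eq. (4): delta_1 is fixed
# to +1 and the remaining n-1 signs are read off the bits of an integer counter.
import numpy as np

def permanent_bbfg(A):
    A = np.asarray(A)
    n = A.shape[0]
    if n == 0:
        return 1.0
    total = 0.0
    for k in range(2 ** (n - 1)):
        # build the sign vector delta from the bits of k (delta_1 = +1)
        delta = np.ones(n)
        for bit in range(n - 1):
            if (k >> bit) & 1:
                delta[bit + 1] = -1.0
        col_sums = delta @ A                 # inner sums over rows, one per column
        total += np.prod(delta) * np.prod(col_sums)
    return total / 2 ** (n - 1)
```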
### Parallel BB/FG implementation to calculate the permanent
In this section we discuss the technical foundations of our CPU implementation to calculate the permanent of a square matrix, which can be further improved to account for the photon multiplicities discussed in Sec. 2.2. Our algorithm implementing the Gray code ordered BB/FG formula (4) has a computational complexity of \(\mathcal{O}(n\cdot 2^{n-1})\) while minimizing the overhead of processing the logic associated with the generation of auxiliary data needed for the Gray code ordering. Table 1 shows an example of a so-called binary reflected Gray code sequence encoded by 3 bits. With Gray code ordering, in each cycle of the outer sum of Eq. (4) only one element of the \(\boldsymbol{\delta}\) vector is changed, making it possible to reuse the column sum (i.e. the inner sum in Eq. (4)) in the next cycle and subtract/add only elements of a single row of the input matrix (see Ref. [48] for details).

| decimal index | Gray code |
| --- | --- |
| 0 | 000 |
| 1 | 001 |
| 2 | 011 |
| 3 | 010 |
| 4 | 110 |
| 5 | 111 |
| 6 | 101 |
| 7 | 100 |

Table 1: An example showing a 3-bit reflected Gray code sequence. In each row a single bit is changed in the Gray code compared to the previous row.

Here we note some important properties of reflected Gray code counting that allow us to efficiently implement the algorithm in a parallel environment. First of all, the Gray code corresponding to a decimal index \(0\leq i<2^{n-1}\) is \(g_{i}=i\oplus(i>>1)\), where \(\oplus\) is a bit-wise logical XOR operation and \(>>\) is a bitshift operation. The ability to determine the Gray code corresponding to any decimal index \(i\) in constant time implies an efficient way to evaluate the permanent in parallel. Simply, we divide the set of \(0\leq i<2^{n-1}\) decimal indices into smaller contiguous subsets that can be processed concurrently on the available CPU threads. For the first element in the subset, the column sum is initialized with \(n\) arithmetic operations. Whether an \(a_{i,j}\) element is subtracted or added to the column sum is determined by the elements \(\delta_{i}\), which are mapped to the individual bits of the Gray code: \(\delta_{i}\) is considered to be \(+1\) (\(-1\)) if the corresponding Gray code bit is 0 (1). After initializing the column sums on each CPU thread, additional addends to the permanent are calculated via sequential iterations over the elements of the given subset of decimal indices \(\{i_{min},\ldots,i_{max}\}\). To this end, we need to determine the changing bit in the Gray code and its position. In principle, the bit mask of the changed bit is given by a bit-wise comparison of the Gray code with its prior value, \(g_{i}\oplus g_{i-1}=\left(i\oplus(i>>1)\right)\oplus\left((i-1)\oplus((i-1)>>1)\right)\). However, according to the generation rule of the Gray code \(g_{i}\) from the decimal index \(i\), we can use a more efficient way to determine the changed bit. Namely, in each cycle, the position of the changed bit in the Gray code is given by the position of the lowest set bit of the decimal index. Also, since in each iteration only one element of the Gray code is changed, we can keep track of the "parity" of the \(\mathbf{\delta}\) vector (i.e. whether the number of \(-1\) elements is odd or even) from cycle to cycle in an efficient way, without iterating over the elements of the vector. We provide a reference implementation of our recursive permanent algorithm in the Piquasso Boost library [54], which offers high-performance C++ computing engines for the Python-based universal bosonic quantum computer simulator framework Piquasso. We also notice that the described method can be used to scale up the computation of the permanent over multiple computing nodes (using the Message Passing Interface (MPI) protocol), so in contrast with the claim of Ref. [48] the Gray coded permanent evaluation can be efficiently parallelized.
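A compact Python sketch of the Gray-code-ordered evaluation just described, together with a partial-sum variant illustrating how the index range can be split across workers (an illustrative, single-threaded stand-in for the C++ engine; function names are ours):

```python
import numpy as np

def permanent_bbfg_gray(A):
    """Gray-code ordered BB/FG formula, O(n * 2^(n-1)) arithmetic operations."""
    A = np.asarray(A, dtype=complex)
    n = A.shape[0]
    col_sums = A.sum(axis=0)              # delta = (+1, ..., +1) gives the first addend
    sign = 1
    total = np.prod(col_sums)
    for i in range(1, 2 ** (n - 1)):
        k = (i & -i).bit_length() - 1     # position of the lowest set bit of i
        row = k + 1                       # bit k of the Gray code drives delta of row k+1
        if (i ^ (i >> 1)) >> k & 1:       # bit flipped 0 -> 1: delta goes +1 -> -1
            col_sums = col_sums - 2 * A[row]
        else:                             # bit flipped 1 -> 0: delta goes -1 -> +1
            col_sums = col_sums + 2 * A[row]
        sign = -sign                      # exactly one delta changed, so the parity flips
        total += sign * np.prod(col_sums)
    return total / 2 ** (n - 1)

def bbfg_partial(A, i_min, i_max):
    """Partial sum over the contiguous index range [i_min, i_max); because
    g = i ^ (i >> 1) is available in O(1), each worker can start anywhere."""
    A = np.asarray(A, dtype=complex)
    n = A.shape[0]
    g = i_min ^ (i_min >> 1)
    delta = np.array([1] + [1 - 2 * ((g >> k) & 1) for k in range(n - 1)])
    col_sums = delta @ A                  # O(n^2) initialization once per chunk
    sign = 1 if bin(g).count("1") % 2 == 0 else -1
    total = sign * np.prod(col_sums)
    for i in range(i_min + 1, i_max):
        k = (i & -i).bit_length() - 1
        row = k + 1
        if (i ^ (i >> 1)) >> k & 1:
            col_sums = col_sums - 2 * A[row]
        else:
            col_sums = col_sums + 2 * A[row]
        sign = -sign
        total += sign * np.prod(col_sums)
    return total
```

Summing `bbfg_partial` over disjoint chunks covering \([0,2^{n-1})\) and dividing by \(2^{n-1}\) reproduces `permanent_bbfg_gray`; in the library the chunks are distributed over CPU threads or MPI ranks.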
### Handling row and column multiplicities in the input matrix
In the general case, multiple photons might occupy the same optical mode at the output of the interferometer. Since the output modes are associated with the row indices of the unitary describing the scattering process of the photons, the multi-photon collision at the output modes is mathematically described by repeated rows in the unitary. In particular, the number of times a specific row is repeated in the unitary is identical to the number of photons occupying the corresponding optical mode. Following the permanent evaluation strategy encoded by the BB/FG formula it becomes clear that by having row multiplicities in the input matrix some of the addends to the permanent would show up multiple times during the calculation. Such a complexity reduction was already reported in several previous works [55, 56, 1] by generalizing the Ryser formula, or in Ref. [57] by turning the permanent calculation into the evaluation of the Hafnian function accounting for multiplicities [58]. Here we argue that it is possible to make use of the addend multiplicities and reduce the overall complexity of the BB/FG permanent calculation as well. To the best of our knowledge, this improvement to the BB/FG algorithm was not reported before.
For example, let's assume that the \(k\)-th output mode (\(k>1\)) is occupied by \(M_{k}\) photons, resulting in \(M_{k}\) identical rows in the unitary at row indices \(i_{k},i_{k}+1,\ldots,i_{k}+M_{k}-1\). Consequently, the
\[\sum_{i=i_{k}}^{i_{k}+M_{k}-1}\delta_{i}a_{i,j}=\left(\sum_{i=i_{k}}^{i_{k}+M _{k}-1}\delta_{i}\right)a_{i_{k},j}=\left(M_{k}-2\Delta_{k}\right)\ a_{i_{k},j} \tag{5}\]
column sum might result in \(M_{k}+1\) different outcomes depending on the number \(\Delta_{k}\) of \(-1\) values and the number \(M_{k}-\Delta_{k}\) of \(+1\) values among the \(\delta_{i}\) (\(i_{k}\leq i\leq i_{k}+M_{k}-1\)) elements of the \(\mathbf{\delta}\) vector. The individual outcomes occur explicitly \(\binom{M_{k}}{\Delta_{k}}\) times during the permanent evaluation. Taking the \(M_{k}\) multiplicities for each optical output mode labeled by \(k\), in total
\[C=\prod_{k=1}^{\#modes}(M_{k}+1) \tag{6}\]
different outcomes of the inner column sum show up during the calculation, determining the overall complexity of the permanent evaluation. According to the BB/FG formula, the first row (\(k=1\)) of the matrix is always taken with the coefficient \(\delta_{1}=1\), hence we always treat the first row with a multiplicity of 1, even if it is repeated in the matrix. Computationally the best practice is to move one of the non-repeating rows (if one exists) to the top of the input matrix. The possible computational speedup achieved via the outlined combinatorial simplification depends on the specific use case. The exponential factor in the evaluation complexity is determined by the number of involved rows of the input matrix. In general, the complexity of the calculations can be reduced to
\[\mathcal{O}\left(n\cdot 2^{M^{*}}\right),\qquad M^{*}=\log_{2}(C)=\sum_{k=1}^{ \#modes}\log_{2}(M_{k}+1), \tag{7}\]
with \(n=\sum M_{k}\) over the optical modes. In order to account for row multiplicities within the BB/FG formula we make use of reflected n-ary Gray code counting instead of the binary Gray
code counting described in Sec. 2.1:
\[\text{perm}(\mathbf{A},\mathbf{M},\mathbf{N})=\frac{1}{2^{n-1}}\sum_{\mathbf{\Delta}}\bigg{(} \prod_{k=1}^{\#modes}\,(-1)^{\Delta_{k}}\binom{M_{k}}{\Delta_{k}}\bigg{)}\prod_ {j=1}^{\#modes}\,\bigg{(}\sum_{k=1}^{\#modes}\,(M_{k}-2\Delta_{k})\,a_{k,j} \bigg{)}^{N_{j}}, \tag{8}\]
where \(\mathbf{A}=a_{ij}\) is a square matrix describing the interferometer, \(\mathbf{M}\) and \(\mathbf{N}\) are the row and column multiplicities respectively, such that the photon count \(n=\sum M_{i}=\sum N_{j}\), and \(\mathbf{\Delta}\) is the n-ary Gray code, required for efficient computation. [For the physical meaning of \(\Delta_{k}\) see Eq. (5).] The n-ary Gray code is also known as the non-Boolean Gray code or Guan code [59]. As the name implies, the individual "digits" of the n-ary Gray code are encoded by numbers from \(0,1,\ldots,n-1\) instead of the binary values \(0\) and \(1\). While the limits of the individual Gray code digits can be different from each other, one can construct a specific n-ary Gray code counter in which the \(k\)-th digit counts the number of \(-1\) values of the \(\delta_{i}\) elements corresponding to the repeated rows describing the \(k\)-th output mode. Thus, according to our reasoning, the \(k\)-th digit of the Gray code counts from \(0\) to \(M_{k}\). In order to construct such a reflected n-ary Gray code counter, we followed the work of Kurt et al. [49]. In each iteration, only a single digit changes its value, enabling one to reuse the calculated column sums in the next iterations, similarly to the case of binary-reflected Gray code ordering.
Due to the reflected nature of the counter, it is possible to determine the Gray code corresponding to a specific decimal index \(0\leq i<C\) in a constant time (for details see [49]). Thus, the algorithm can be executed in parallel in a similar fashion as described above for the binary-reflected Gray code counter. In the Piquasso Boost library, we provide our implementation of the BB/FG algorithm accounting for row multiplicities.
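As a cross-check of the multiplicity reduction, the reduced sum can also be evaluated directly, simply enumerating the \(\mathbf{\Delta}\) vectors with itertools instead of an n-ary Gray code counter. The sketch below is illustrative only (it is not the library implementation) and assumes the first listed mode is occupied, i.e. \(M_{1}\geq 1\), so that the fixed \(\delta_{1}=+1\) is handled as discussed above.

```python
import itertools
import math
import numpy as np

def permanent_with_multiplicities(A, M, N):
    """Multiplicity-reduced BB/FG sum for a (#modes x #modes) matrix A with row
    multiplicities M and column multiplicities N, sum(M) == sum(N) == n photons.
    Assumes M[0] >= 1 (reorder the modes so an occupied row comes first)."""
    A = np.asarray(A, dtype=complex)
    M, N = list(M), list(N)
    n = sum(M)
    # Delta_k counts the number of -1 values in row group k; the first group keeps
    # delta_1 = +1 fixed, so it contributes binom(M_1 - 1, Delta_1) assignments.
    ranges = [range(M[0])] + [range(m_k + 1) for m_k in M[1:]]
    total = 0
    for deltas in itertools.product(*ranges):
        weight = (-1) ** sum(deltas) * math.comb(M[0] - 1, deltas[0])
        for m_k, d_k in zip(M[1:], deltas[1:]):
            weight *= math.comb(m_k, d_k)
        coeffs = np.array([m_k - 2 * d_k for m_k, d_k in zip(M, deltas)], dtype=complex)
        col_sums = coeffs @ A                        # sum_k (M_k - 2 Delta_k) a_{k,j}
        total += weight * np.prod(col_sums ** np.array(N))
    return total / 2 ** (n - 1)
```

Expanding the matrix explicitly (repeating row \(k\) \(M_{k}\) times and column \(j\) \(N_{j}\) times) and feeding it to any plain BB/FG routine gives the same value, which is a convenient unit test; the number of addends drops from \(2^{n-1}\) to roughly \(\prod_{k}(M_{k}+1)\), in line with Eq. (6).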
## 3 DFE Design and Implementation to evaluate the permanent
In order to further increase the speed of permanent calculations, we developed a DFE implementation of the algorithm described in the previous section. In this section, we discuss the technical aspects of the developed FPGA (Field Programmable Gate Array) based permanent calculation engines realized via a static data-flow programming model. Since the configuration of the programmable hardware elements on the FPGA chip (or in other words the uploading of the program to the FPGA card) is time-consuming, the implementation needs to be general enough to avoid the need to re-configure the FPGA card during the BS runtime. In addition, by accounting for the particular needs implied by the physics behind the simulated BS architecture, we encountered further optimization requirements for the DFE implementation, such as the possibility to calculate the permanents of multiple matrices in one shot to amortize the initialization overhead even at small matrix sizes. In order to provide a thorough description of our implementation, while sparing the reader from too many low-level technical details, we will discuss these optimization details separately in the Appendix. Here we focus on the basic concept of evaluating permanents on DFEs.
Due to the high resource requirements associated with floating point operations in the FPGA chip, we designed a scheme in which fixed point number representation is used for the arithmetic operations, gradually increasing the bitwidth of the number representation from the initial 64 bits (sufficient to represent the elements of the input unitary) up to 128 bits used to derive an accurate final result up to a unitary size of \(40\times 40\). (While the floating point units on modern CPUs are highly specialized and optimized, the FPGA with the aid of generic look-up tables (LUTs) and digital signal processing (DSP) units scales better with fixed point operations. For example, double precision floating point additions require a delay of 14 clock ticks, while 1 tick is sufficient for fixed point addition.) Starting with the 64 bit double precision format of the input unitary on the CPU side, we perform a conversion of floating point numbers (\(f=s\cdot 2^{c}\)) into fixed point representation encoded by a \(b=64\) bit variable \(a\), such that \(a\) is the nearest integer to \(f\cdot 2^{b_{f}}\), with \(b_{f}=b-2\) standing for the number of fractional bits. The remaining two bits are used to store the integer part (for the case when a matrix element is unity) and the sign of \(a\) in two's complement encoding. (Since the input matrix is expected to be unitary, the magnitude of the matrix elements would never be larger than unity.)
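The float-to-fixed conversion can be illustrated in a few lines of Python (a sketch of the idea only; the widths follow the description above with \(b=64\) and \(b_{f}=62\), while the on-chip rounding and packing details are not reproduced here):

```python
FRAC_BITS = 62          # 64-bit word: sign bit + 1 integer bit + 62 fractional bits

def float_to_fixed(x: float, frac_bits: int = FRAC_BITS) -> int:
    """Nearest fixed-point integer to x * 2^frac_bits (valid for |x| <= 1)."""
    return round(x * (1 << frac_bits))

def fixed_to_float(a: int, frac_bits: int = FRAC_BITS) -> float:
    return a / (1 << frac_bits)

# Example: an entry of a unitary, converted and read back.
u = -0.7071067811865476
assert abs(fixed_to_float(float_to_fixed(u)) - u) < 1e-15
```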
In order to ease the congestion on the FPGA chip and avoid timing issues associated with long routing paths, we split the input matrix into four blocks (see the left side of Fig. 1), and stream the column blocks into 4 distinct "column summing" DFE kernels (indicated by green colored boxes in Fig. 1). This way the computations could be partitioned over the Xilinx Alveo U250 FPGA chip used in our numerical experiments while increasing the computational concurrency at the same time. The purpose of these kernels is to (i) calculate the initial column sums \(\sum\delta_{i}a_{i,j}\) for each column in the first \(n\) ticks of the kernel (here \(n\) is the matrix size). Secondly, (ii) as the initial column sums are calculated, a Gray code counter \(g^{(k)}\in(0,2^{n-3}-1)\) is launched (in the \(n\)-th clock tick) to stream the last \(n-3\) elements of the \(\mathbf{\delta}^{(k)}\) vectors defined in the BB/FG permanent formula of Eq. (4). (The corresponding \(\delta_{i}\) elements are given by the individual bits of \(g^{(k)}\). Further details on generating the Gray code counter logic via bit-wise operations are discussed in Sec. 2.1). The first element of each \(\mathbf{\delta}^{(k)}\) vector is 1 by definition, while the second and third elements become fixed by the following reasoning: in order to increase the computational concurrency on the DFE the Gray code counter is multiplexed in order to create 4 concurrent streams of the \(\mathbf{\delta}^{(k)}\) vectors by fixing the second and third elements to \((0,0)\), \((0,1)\), \((1,0)\) or \((1,1)\) as indicated by the colored \(\mathbf{\delta}^{(k)}\) streams in Fig. 1. Consequently, in each of the 4 "column sum" kernels 4 concurrent \(\mathbf{\delta}^{(k)}\) vector streams are used to calculate the \(\mathbf{\delta}^{(k)}\) weighted column sums. In each clock cycle 4 new column sums are concurrently calculated for each column index \(0\leq j<n\) (distributed between the 4 column sum kernels) corresponding to the 4 multiplexed \(\mathbf{\delta}^{(k)}\) streams: from the value of the most recent column sum the new value is determined by adding or subtracting twice the matrix element taken from row \(i\). The row index \(i\) corresponds to the element index where a change occurred in \(\mathbf{\delta}^{(k)}\) compared to \(\mathbf{\delta}^{(k-1)}\) (due to the design this index is the same for all of the multiplexed \(\mathbf{\delta}^{(k)}\) streams).
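The multiplexing of the Gray code counter into four \(\mathbf{\delta}\)-streams can be mimicked in software: fixing \((\delta_{2},\delta_{3})\) to the four sign combinations splits the outer sum into four independent partial sums of \(2^{n-3}\) addends each, analogous to the four concurrent streams in the kernels. The following is a behavioural sketch only (assuming \(n\geq 3\); it is not the data-flow code itself, and the function names are ours):

```python
import numpy as np

def _delta_stream_sum(A, d2, d3):
    """Partial BB/FG sum with delta_1 = +1 and (delta_2, delta_3) = (d2, d3) fixed,
    sweeping the remaining n-3 deltas in binary-reflected Gray code order."""
    n = A.shape[0]
    delta = np.ones(n)
    delta[1], delta[2] = d2, d3
    col_sums = delta @ A
    sign = int(round(np.prod(delta)))
    total = sign * np.prod(col_sums)
    for i in range(1, 2 ** (n - 3)):
        k = (i & -i).bit_length() - 1
        row = k + 3                            # rows 0..2 carry the fixed deltas
        if (i ^ (i >> 1)) >> k & 1:
            col_sums = col_sums - 2 * A[row]
        else:
            col_sums = col_sums + 2 * A[row]
        sign = -sign
        total += sign * np.prod(col_sums)
    return total

def permanent_four_streams(A):
    """Four-way multiplexed BB/FG evaluation (n >= 3), mirroring the DFE layout."""
    A = np.asarray(A, dtype=complex)
    n = A.shape[0]
    total = sum(_delta_stream_sum(A, d2, d3)
                for d2 in (+1, -1) for d3 in (+1, -1))
    return total / 2 ** (n - 1)
```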
The calculated column sum streams are gathered into groups according to the streams \(\mathbf{\delta}^{(k)}\) to provide an input for the next layer of computing kernels. The red-colored kernels in Fig. 1 calculate the products of the incoming column sum streams (each of them corresponding to a specific column index \(0\leq j<n\)), i.e. the stream data arriving at the same clock cycle are multiplied with each other over a binary tree reduction with a depth of \(\lceil\log_{2}n\rceil\).
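The pairwise product reduction itself is straightforward; a scalar software stand-in for the on-chip binary tree (depth \(\lceil\log_{2}n\rceil\)) looks like this:

```python
def tree_product(values):
    """Multiply a list of numbers pairwise, level by level (binary-tree reduction)."""
    vals = list(values)
    while len(vals) > 1:
        nxt = [vals[i] * vals[i + 1] for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:                  # an odd element is carried to the next level
            nxt.append(vals[-1])
        vals = nxt
    return vals[0] if vals else 1
```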
The most expensive operations in the design are the multiplications of the complex numbers associated with the column sums. According to the most widely used approach, the complex multiplications are broken down as 4 fixed point multiplications and 2 additions as \((a+bi)(c+di)=(ac-bd)+(ad+bc)i\). The formula suggested by Knuth in Ref. [61] allows the computation of the product as \((c(a+b)-b(c+d))+(c(a+b)+a(d-c))i\) which can be implemented with 3 multiplications and 5 additions. There is also the Karatsuba-style formula of \((ac-bd)+((a+b)(c+d)-ac-bd)i\) which Knuth credits to Peter Ungar from 1963. This approach uses the same amount of operations
as the prior one. From a pipelining perspective on the FPGA chip, the first formula is more balanced as the multiplications and additions to get the real and imaginary parts can be implemented concurrently. However, it was shown in Ref. [62] that the numeric stability of the Ungar formula is better, as only the imaginary part would be affected by additional inaccuracy. Fortunately, in the context of fixed point numbers used in our design, this aspect is of less importance. It should also be noted that the addition resulting in the final real and imaginary components would fall within the range \([-2,2]\), requiring an extra leading bit in the intermediate computations. In addition, the fixed point multiplications need to be further broken down to deal with efficient tiling on the hardware units of the FPGA chip, such as digital signal processing (DSP) multipliers.

Figure 1: The structure of the computing kernels realized on the FPGA chip. The Xilinx Alveo U250 chip is organized into four super logic regions (SLRs) [60], with limited inter-SLR connectivity. The kernels are organized to account for this specific hardware setup, introducing 4 concurrent kernel blocks (mostly) bound to the individual SLRs. The individual kernels are indicated with colored regions, also showing the mathematical operation they are evaluating. Further details of the data-flow scheme are discussed in Sec. 3.
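Returning to the complex-product formulas discussed above, the three variants can be written out explicitly (an illustrative Python check; they agree exactly in exact arithmetic):

```python
def cmul_schoolbook(a, b, c, d):
    """(a + bi)(c + di) with 4 multiplications and 2 additions."""
    return a * c - b * d, a * d + b * c

def cmul_knuth(a, b, c, d):
    """Knuth's 3-multiplication variant."""
    t = c * (a + b)
    return t - b * (c + d), t + a * (d - c)

def cmul_ungar(a, b, c, d):
    """Karatsuba-style (Ungar) 3-multiplication variant."""
    ac, bd = a * c, b * d
    return ac - bd, (a + b) * (c + d) - ac - bd

assert cmul_schoolbook(1, 2, 3, 4) == cmul_knuth(1, 2, 3, 4) == cmul_ungar(1, 2, 3, 4)
```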
In our implementation, we followed the pioneering results of Refs. [63, 64] and [65]. According to [63] one can use the Karatsuba multiplication formula by optimally splitting the input multiplicands into smaller bitwidth parts being the least common multiple of the width of the utilized input bit ports of the DSP units. (Thus, the most optimal selection of the tiling factors depends on the width of the input multiplicands.) A slightly different approach is to work out a DSP-number optimized recursive Karatsuba-like formula for the individual input bit size configurations by following the reasoning of [64] and [65] being applicable for rectangular-sized tilings. (The Virtex 5 DSP units embedded inside our Xilinx Alveo U250 FPGA cards have \(18\times 25\) wide input ports to perform signed multiplications.) Since our DFE implementation is designed to handle at most \(40\times 40\) matrices, one needs to calculate the product reduction of 40 complex numbers. The tree-like reduction is performed over 6 levels, providing a balance between resource utilization and pipelining. On the first level 20 products are calculated from 64 bit wide input fixed point values (streamed from the columns sum kernels) resulting in 79 bit results (the remaining bits are discarded). On the second level, the 20 results are paired up to 10 pairs. The multiplications are performed again with 79 bit-wide results. The third level calculates the pair-wise product of 10 numbers resulting in 93 bit results. The remaining 3 levels to reduce the final 5 numbers are done with results of \(110,158\) and 189 bit precision. All values are fixed points with 2 integer bits (including the sign bit) and the remaining as fractional bits. The final values are all accumulated at 192 (or 256) bits of precision, 6 of which are integer bits. During the development, numerous technical details were addressed to end up with an efficient strategy to multiply large bitwidth complex numbers implemented in our calculations. We solved many issues related to the correct management of the fan-outs of intermediate computations or to the careful choice between LUTs or DSP units to perform multiplications on small bitwidth tiles in order to reach the limits of our design being still decomposable on the FPGA chip. The most cumbersome limiting factor was the requirement to keep up with the numerical accuracy comparable to the CPU implementations using extended precision floating point number representation (see the Appendix for the comparison of DFE implementation to the CPU ones). To this end, we needed to increase the bitwidth of the fixed point numbers in the multiplication binary tree as much as possible by utilizing more and more hardware resources, while keeping up with the timing constraints by designing a suitable pipeline structure on the chip.
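The splitting idea behind the DSP tiling is the classic Karatsuba recursion. A plain-integer sketch is shown below; the actual tiling on the chip is rectangular and matched to the \(18\times 25\) DSP input ports, which this simplified, square-splitting version does not reproduce.

```python
def karatsuba(x: int, y: int, base_bits: int = 18) -> int:
    """Multiply non-negative integers by recursively splitting them into halves
    until the operands fit a small, 'DSP-sized' width."""
    if x.bit_length() <= base_bits or y.bit_length() <= base_bits:
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> m, x & ((1 << m) - 1)
    yh, yl = y >> m, y & ((1 << m) - 1)
    z2 = karatsuba(xh, yh, base_bits)
    z0 = karatsuba(xl, yl, base_bits)
    z1 = karatsuba(xh + xl, yh + yl, base_bits) - z2 - z0   # cross term with one multiply
    return (z2 << (2 * m)) + (z1 << m) + z0
```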
Finally, the results of the four product kernels are streamed toward the final "sum up" kernel indicated by the blue-colored region in Fig. 1. Due to the four multiplexed Gray code streams, four different column product reductions enter the kernel in each tick. The incoming streams are summed up and added to the partial permanent labeled by \(Perm\) in Fig. 1. To sum up the permanent addends into a single scalar result, a single clock tick deep data-flow loop is designed (corresponding to the addition of two fixed point numbers) by removing any registers from the underlying logic. Finally, the result encoded by a \(128+128\) bit complex number is streamed back to the CPU.
Beyond the discussed layout issues, many other optimizations were implemented to increase the potential usability of the developed permanent calculator DFE in quantum computing simulations. The most important issue in our work was to generalize the implementation for matrices of variable size. Since the initial uploading of the DFE program onto the FPGA card takes about \(\sim 1\) minute, using the engine in realistic BS simulations without this generalization is infeasible. In Sec. A.1, we provide the details of our solution to generalize the DFE implementation to calculate the permanent of arbitrary-sized matrices. By preserving the computational accuracy (see Sec. 3.2), however, the maximal size of the input matrix for which the layout could still fit onto the chip turned out to be \(48\times 48\) at a clock frequency of 300 MHz (and 330 MHz at a maximal matrix size of \(40\times 40\)). For cases
where the FPGA would not tick enough to get a result, i.e. when the input matrix is smaller than \(3\times 3\), the permanent computations are performed solely on the CPU side. (While such cases could be computed in one tick on the FPGA with some additional logic, the communication delay with the DFE would likely exceed the cost of the computation on the CPU side.) Also, we would like to highlight at this point another important optimization of the design. By feeding the permanent calculator DFE with multiple matrices sequentially (i.e. one after another) in a single execution, one can retrieve the calculated permanents for all of the matrices in one DFE execution. Since the execution of our program on the DFE takes up an overhead of about a millisecond (including I/O data transfer and other initialization overhead while the DFE program is already uploaded to the FPGA card), calculating the permanents of multiple matrices in a single execution is expected to provide a further advantage for the DFE by amortizing the overhead between the matrices. Following this idea, in Sec. A.4 we show that significant speedup can be realized in permanent evaluation even for small matrices, speeding up the simulation of BS if a large number of smaller matrices needs to be processed.
Depending on the specific needs of individual simulation tasks, it might be advantageous to introduce more concurrency into a single evaluation of permanents by splitting the calculations between multiple DFEs. This might be especially useful when dealing with larger matrices (up to \(48\times 48\) in our case) during the calculations. In Sec. A.2 we provide further details on the technical background related to scaling up permanent calculation over multiple DFEs. In order to utilize DFEs to support the simulation of photonic quantum computers (when multiple photons can collide onto the same output mode), one also needs to implement the necessary logic accounting for row multiplicities reducing the permanent calculation complexity. This optimization is discussed in Sec. 2.2.
Finally, we cannot finish this section without mentioning that in order to prevent integer overflow during the arithmetic operations on the DFE, we need to keep all of the intermediate results of calculations within certain bounds corresponding to the limits of the fixed point number representations used. This can be achieved by a well-formulated normalization strategy of the input matrices, while the normalization factors can be used to re-scale the output result received from the DFE to end up with the correct value of the permanent. We provide further details on the applied normalization strategy in Sec. A.5.
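Because the permanent is multilinear in the rows of the matrix, any per-row scaling can be undone after the evaluation; this is the essence of such a normalization. The following is a generic sketch only (the exact strategy used on the DFE is the one described in Sec. A.5):

```python
import numpy as np

def normalized_permanent(A, perm_fn):
    """Scale every row so its largest |entry| is 1, evaluate, then undo the scaling."""
    A = np.asarray(A, dtype=complex)
    scales = np.abs(A).max(axis=1)
    scales = np.where(scales == 0, 1.0, scales)   # a zero row makes the permanent 0 anyway
    value = perm_fn(A / scales[:, None])
    return value * np.prod(scales)
```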
### Performance benchmark of the permanent calculator implementations
In this section, we provide the details regarding our performance benchmark of the DFE permanent calculator implementation compared to the fastest CPU implementations available today. The numerical experiments were conducted using Xilinx Alveo U250 FPGA cards implementing a bitstream generated from Very High Speed Integrated Circuit Hardware Description Language (VHDL) generated from Java source by Maxeler's MaxCompiler 2021.1. Thus the heart of the developed permanent calculator DFE is a static data-flow computation model translated into VHDL by the MaxCompiler. In the performance benchmark, we compared the DFE implementation to the Piquasso Boost library's BB/FG parallel C++ implementation and TheWalrus v0.16.2 library's Ryser double and quad precision (implementing 128 bit floats when GNU or Intel compiler is used to build the source, and long double precision for other compilers). The measurements were computed by averaging the execution time of 5 computations for matrices of size smaller than \(24\times 24\), otherwise, just a single execution measurement was taken.
The CPU computations were performed on a two-socket CPU system consisting of AMD EPYC 7542 32-Core processors with two logical threads on each core and two non-uniform memory access (NUMA) nodes, where we bound the computations to one of the NUMA nodes. (Thus the CPU benchmark was measured by the performance on 64 computing threads.) The FPGA communication uses a high-speed PCI Express 3.0 channel over which the input matrix and a series of other input parameters are uploaded, like scalar initialization parameters for each kernel, clock synchronization data, etc. The frequency of the DFE implementation was 320 MHz. The initialization time of programming the DFE when uploading the compiled circuit from the generated VHDL program takes 56.2 seconds for a single DFE, while it takes 112.9 seconds for a dual DFE mode (i.e. splitting the permanent evaluation over two DFEs). The uploaded bitstream program is preserved on the DFE until it is reprogrammed, hence the initial uploading time is needed to
be spent only at the very start of the usage. The initialization of the TheWalrus library is around 6 milliseconds, while the BB/FG implementation of the Piquasso Boost library exhibits a negligible loading delay of about 19.5 microseconds.
We compared the performance of our implementation provided in the Piquasso Boost library to the implementation of the TheWalrus version 0.16.2 [57] package, which also provides parallelized C++ engines to evaluate the permanent function. (Newer versions of TheWalrus do not contain C++ engines, nor extended precision implementations.) The results are plotted in Fig. 2, showing the performance of the different libraries. The red-colored data points show the execution time of the single and dual DFE implementations. (For details related to the dual mode see Sec. A.2.) For matrices of size less than \(20\times 20\) the initialization overhead dominates the execution of the DFE, resulting in a constant execution time. (The permanent evaluation of the smallest matrices is done solely on the CPU side, explaining the first 3 data points in the set.) Above the crossover region, the advantage of the DFE execution becomes evident. Only the double precision BB/FG implementation comes close to the performance of the DFE. However, while the single DFE implementation is only about 1.4 times faster than the double precision BB/FG implementation, the accuracy of the DFE implementation is significantly better, having better or identical accuracy compared to the long double precision BB/FG implementation up to a matrix size \(40\times 40\). (For details see Sec. 3.2.) The long double precision CPU implementations are significantly slower than the DFE (see the indicated speedup factors in the legend of Fig. 2), implying the advantage of the DFE over CPU implementations when high numerical accuracy is required. The total run-time to evaluate the permanent of an \(n\times n\) unitary on a single (\(k=2\)) and dual (\(k=3\)) DFE can be calculated as \(t=t_{0}+\frac{n-1+2^{n-1-k}}{f}\), where \(t_{0}\) is a small PCI Express communication delay plus the pipeline delay of the kernels (approximately half a millisecond), and \(f\) is the frequency in Hertz. The \(n-1\) clock cycles are spent to initialize the column sum, and in the remaining \(2^{n-1-k}\) ticks the permanent is evaluated. Our design is compiled for a 330 MHz frequency, so the execution time for a \(40\times 40\) input matrix on the dual DFE build is in total 207 seconds (equivalent to 337 GOPS, i.e. \(10^{9}\) operations per second). For such "long-run" execution, the \(t_{0}\) initialization overhead is negligible. For comparison, Ref. [48] reported the required time to compute the permanent of a \(40\times 40\) matrix on 98304 CPU cores of the Tianhe 2 supercomputer in 24 seconds using Ryser's formula in double precision. Consequently, one CPU server with two DFE engines calculates a \(40\times 40\) permanent only \(8.6\times\) slower than 4096 CPU nodes (each node containing 24 cores) in the benchmark of Ref. [48].
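The run-time model above is easy to evaluate; with the quoted clock frequency it reproduces the reported time scale (the value of `t0` below is the approximate half-millisecond overhead mentioned in the text):

```python
def dfe_runtime_seconds(n, k=3, f_hz=330e6, t0=5e-4):
    """t = t0 + (n - 1 + 2^(n-1-k)) / f, with k = 2 for a single and k = 3 for a dual DFE."""
    return t0 + (n - 1 + 2 ** (n - 1 - k)) / f_hz

# dfe_runtime_seconds(40, k=3) -> about 2.1e2 s, i.e. the ~207 s quoted for the dual-DFE build
```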
Figure 2: Performance comparison of the permanent calculator DFE to the CPU side BB/FG implementations accounting for row and column multiplicities in the input unitary. The input unitaries, row, and column multiplicities are constructed randomly, such that they describe \(20\) (left), 30 (middle), and 40 photons (right) in total. As the photon count increases above \(20\), the DFE implementation becomes systematically faster than the CPU implementations at matrix sizes larger than \(n\approx 13\). The speedup factors compared to the long double precision BB/FG implementation are indicated in the legend.
From a practical usability point of view, we also examine the performance of the permanent calculation implementations when photon multiplicities on the photonic modes induce repeated rows and columns in the unitary describing the photonic scattering process. (See details in Sec. 2.2.) In this case, the average photon multiplicity on the photonic modes is controlled by the unitary size \(n\) and the total photon number. In Fig. 3, we compare the performance of the DFE engine to the CPU implementations of the Piquasso Boost library for photon counts \(20\) and \(30\). The figure shows the execution time measured for permanent evaluation on random unitaries, with the photons randomly distributed over the photonic modes. From our numerical results, we see that at a photon count larger than \(20\) the DFE turns out to be more efficient than the CPU implementations in the evaluation of permanents. While keeping up with the numerical accuracy of the long double precision BB/FG implementation, the DFE is about \(3.6\) times faster than the double precision BB/FG implementation at photon count \(30\) and about \(7.4\) times faster at photon count \(40\). (The speedups compared to the long double precision implementation at the same photon counts are \(35.4\times\) and \(27.9\times\), respectively.) By using the dual DFE implementation the speedup is further doubled. At lower photon count, however, the PCIe delay time of the DFE dominates the execution. (See the left side of Fig. 3.) In the repeated row DFE variant of the permanent calculator, the column sum initialization achieves the same performance by staggering the computations via a loop tiling/staggering technique. This requires the CPU to pre-compute and upload some additional data, which scales with the loop length; in practice the loop length is small, \(9\) cycles. The cycle delay is determined by the multiplication in computing the binomial updates. To keep the parallelism of \(4\) or \(8\), we also force the first \(3\) or \(4\) row multiplicities to be \(1\) (the original formula already requires one such reduction and \(2\) or \(3\) extra for parallelism), which can slightly increase the operation count. However, by choosing the smallest multiplicities for row expansion, the consequence of maintaining power-of-\(2\) parallelism is largely mitigated. The DFE implementation accounting for row/column multiplicities is also compiled for a \(330\) MHz frequency.
Finally, in Sec. A.4, we show that by streaming multiple input matrices to the DFE in a single execution, one needs to pay the PCIe delay time only once and divide it between the permanent calculation instances, thus lowering the crossover in the DFE-CPU performance down to a matrix size of \(n\approx 13\).
### Numerical accuracy of permanent calculator engines
As pointed out earlier by Ref. [48] the numerical accuracy becomes a notable issue with increasing the size of the input matrix. The final numerical precision of an implementation depends on the interplay of various factors. The number representation used in the individual arithmetic operations and the conversion between them, or the computational design of mathematical operations have
a great effect on the accuracy of the final result. For example, Ref. [48] found that the BB/FG formula shows higher numerical fidelity than the Ryser formula when reproducing the analytical result for the identity matrix. According to our reasoning, the different numerical properties of the two approaches can be explained by the difference in the number of addends in the inner sums of Eqs. (2) and (4). While in Eq. (4) the sum always involves the same number of matrix elements before calculating the products of the column sums, in Eq. (2) the number of summed matrix elements varies with the actual partitioning \(S\), resulting in a possibly wider range of magnitudes in the calculated products before summing them up. In order to increase the accuracy of the calculated permanent in the Piquasso Boost library, we evaluate the outer sum of the BB/FG formula by classifying the individual addends according to their magnitude, splitting them into "pockets", and always calculating the sum of those partial results that are close to each other in magnitude. (We also applied this strategy in our previous work [66].)

Figure 3: Performance comparison of the permanent calculator DFE to the CPU side BB/FG implementations accounting for row and column multiplicities in the input unitary. The \(n\times n\) input unitaries, row, and column multiplicities are constructed randomly, such that they describe \(20\) (left) and \(30\) photons (right) in total. As the photon count increases above \(20\), the DFE implementation becomes systematically faster than the CPU implementations at matrix sizes larger than \(n\approx 13\). The speedup factors compared to the long double precision BB/FG implementation are indicated in the legend.
Now we examine the numerical accuracy of the individual calculation methods. We implemented the designed scalable BB/FG algorithm with several levels of floating point number representations provided by the Piquasso Boost library and compared the result to the Ryser formula implemented in TheWalrus package (version 0.16.2). Among them, the least accurate variant turned out to be the Ryser formula implemented by double precision (i.e. 64 bit) floating point arithmetic operations, followed by the double-precision BB/FG formula evaluated by the Gray code strategy. As default, the Piquasso Boost library implements extended precision floating point arithmetic operations to calculate the permanent providing high numerical accuracy for photonic quantum computer simulations up to photon numbers still accessible on traditional HPC nodes.
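The magnitude-"pocket" summation mentioned above can be sketched as follows; grouping by binary exponent is one possible choice of pockets, and the exact bucketing in the library may differ.

```python
import math
from collections import defaultdict

def pocket_sum(addends):
    """Sum complex addends grouped by the binary exponent of their magnitude,
    combining the pockets from the smallest magnitude upwards to limit
    rounding/cancellation error when the magnitudes span a wide range."""
    pockets = defaultdict(complex)
    for z in addends:
        mag = abs(z)
        key = math.frexp(mag)[1] if mag != 0.0 else 0   # binary exponent as pocket index
        pockets[key] += z
    return sum(pockets[k] for k in sorted(pockets))
```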
To establish a proper ground truth, we also incorporated the GNU Multiple Precision (GMP) arithmetic library's [53] extension of Multiple Precision Floating-Point Reliability (MPFR) to compute the permanent with "infinite precision" using the designed recursive BB/FG algorithm. (The correctness of the "infinite precision" implementation was tested on special input cases: namely, when evaluating the BB/FG formula on \(m\times n\) rectangular shaped matrices with \(m\geq n+2\), the final result should inevitably be 0 due to the exact cancellation of the addends. Such a computation, where approximation with normal floating or fixed point numbers would never return a proper 0 result, provides very convincing evidence for the correctness of the implementation.)
Figure 4: The numerical accuracy of different implementations to calculate the permanent function. The relative error defined by Eq. (9) was calculated for the permanent calculator engines: the Ryser formula implemented in the TheWalrus package, the recursive BB/FG formula implemented in the Piquasso Boost library, and the BB/FG formula implemented on the DFE. (For details of the DFE implementation see Sec. 3.) Among the CPU implementations, the highest accuracy was achieved by the recursive BB/FG implementation utilizing extended precision floating point number representation. The implementations designed for the DFE achieve almost identical precision; we experienced deviation from the ideal result only at larger matrices. However, even in this regime, the relative error is smaller than \(\varepsilon\approx 10^{-8}\). The CPU benchmark was done on an _AMD EPYC 7542 32-Core Processor_ platform, while for DFE calculations Xilinx Alveo U250 FPGA cards were used.
Figure 4 shows the relative error
\[\varepsilon=\frac{\text{abs}\big{(}\text{perm}_{INF}(\mathbf{A})-\text{perm}( \mathbf{A})\big{)}}{\text{abs}(\text{perm}_{INF}(\mathbf{A}))} \tag{9}\]
of several benchmarked permanent calculation implementations with \(\text{perm}_{INF}(\dots)\) standing for the infinite precision implementation. Our numerical experiments carried out on random unitary matrices justify our expectations. (This choice is justified as our primary goal is to use the developed permanent calculation engines to support photonic quantum computer simulations, where the physical system is described by unitary matrices.) The 64 bit floating point representation is significantly less accurate than the extended precision counterparts. Secondly, our results revealed that the Ryser formula is less accurate than the BB/FG variant. (The accuracy of the double precision Gray coded BB/FG implementation is close to the Ryser formula evaluated with extended precision.) A reduction in accuracy associated with the extended precision Ryser method first appears at a matrix size of \(n\approx 20\) and the difference increases with the matrix size. (See the green circles in Fig. 4.) Though we leave the discussion of the implementation details of the DFE permanent calculator engine for Sec. 3, here we just notice that the numerical accuracy of the DFE implementation stays close to the extended precision CPU implementation of the BB/FG formula, we experienced a deviance only at matrix size \(n\geq 36\). However, even at such matrix size the accuracy of the DFE implementation still remains better than the extended precision Ryser formula.
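In Python, the same kind of reference comparison can be set up with the mpmath package as a stand-in for MPFR; the relative error of Eq. (9) is then computed against the high-precision value. This is an illustrative sketch (the working precision `dps` and the function names are ours), where `value` can be the output of any of the fast routines above.

```python
from mpmath import mp, mpc, fabs

def permanent_reference(A, dps=60):
    """Naive BB/FG evaluation of Eq. (4) with dps decimal digits of working precision."""
    mp.dps = dps
    n = len(A)
    B = [[mpc(complex(A[i][j]).real, complex(A[i][j]).imag) for j in range(n)]
         for i in range(n)]
    total = mpc(0)
    for mask in range(2 ** (n - 1)):
        delta = [1] + [1 - 2 * ((mask >> k) & 1) for k in range(n - 1)]
        sign = 1
        for d in delta:
            sign *= d
        term = mpc(1)
        for j in range(n):
            term *= sum(delta[i] * B[i][j] for i in range(n))
        total += sign * term
    return total / 2 ** (n - 1)

def relative_error(value, reference):
    """Relative error of Eq. (9) with respect to the high-precision reference."""
    value = mpc(complex(value).real, complex(value).imag)
    return float(fabs(reference - value) / fabs(reference))
```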
According to our reasoning, the main source of the numerical error in the CPU implementations can be explained by the fact that in Gray code variants of the Ryser and BB/FG formula, some computational data are reused from cycle to cycle, modifying their value in each turn by addition/subtraction. Consequently, some addends to the permanent are derived via exponentially many arithmetic operations. This results in an accumulation of numerical error compared to the naive \(\mathcal{O}(n^{2}\cdot 2^{n})\) implementations, where each addend to the permanent is a result of only \(n^{2}\) arithmetic operations. This reasoning leads us to the conclusion that the \(\mathcal{O}(n^{2}\cdot 2^{n})\) implementations are expected to be more accurate than the Gray-coded \(\mathcal{O}(n\cdot 2^{n})\) variants. We justified our expectation by evaluating the numerical accuracy of the BB/FG formula without the Gray code strategy. The associated orange-colored data points in Fig. 4 revealed that the double precision, non-Gray-coded BB/FG formula indeed gives as good accuracy as the Gray coded BB/FG variant evaluated with extended precision. The Gray-coded implementations, in turn, perform so much faster that executing them in extended precision (to obtain equivalent numerical precision) is still favorable.
## 4 Classical simulation of Boson Sampling including photon losses
The classic experimental setup of BS is shown in Fig. 5. Given an \(m\)-mode \(n\)-particle Fock state \(|\vec{S}\rangle=|1...10...0\rangle\), and the interferometer, described by an \(m\times m\) matrix \(U\), one performs the passive (particle-number preserving) linear-optical evolution of \(|\vec{S}\rangle\) with the interferometer and then measures the outcome using particle-number resolving detectors.
In subsequent formulas, we use \(s_{i}\) and \(t_{i}\) to label the components of the occupation vectors describing the input state \(\vec{S}\) and the output state \(\vec{T}\) as \(\vec{S}=(s_{1}=1,s_{2}=1,...,s_{n}=1,s_{n+1}=0,...,s_{m}=0)\) and \(\vec{T}=(t_{1},t_{2},\dots,t_{m})\). In the particle-conserving case, \(\sum_{i=1}^{m}s_{i}=\sum_{i=1}^{m}t_{i}\). The output probability \(p_{U}\) associated with a given process \(\vec{S}\rightarrow\vec{T}\) is then given by
\[p_{U}(\vec{S}\rightarrow\vec{T})=\frac{|\text{perm}(U_{ST})|^{2}}{\prod_{i=1 }^{m}s_{i}!t_{i}!}, \tag{10}\]
where \(U_{ST}\) is an \(n\times n\) matrix constructed from the unitary \(U\) describing the interferometer as follows. First, we construct an \(m\times n\) matrix \(U_{S}\) by taking the \(i\)-th column of \(U\)\(s_{i}\) times; then we take the \(j\)-th row of \(U_{S}\)\(t_{j}\) times. The hardness of BS is a consequence of the fact that the probability amplitude in Eq. (10) is proportional to a matrix permanent. The situation is especially clear in the regime \(m\gg n\), where for typical Haar random \(U\) the probability distribution given in Eq. (10) is concentrated on collision-free sub-spaces spanned by states \(|\vec{T}\rangle\) with \(t_{i}\leq 1\). This is the
regime in which arguments for hardness put forward in the original BS paper [5] hold (see also [67] for a more refined analysis of hardness of this problem). The hardness of BS in the regime when the number of photons is comparable to the number of modes requires a separate analysis and is a subject of an upcoming work [68].
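Putting Eq. (10) into code is a one-to-one transcription (a sketch; `perm_fn` can be any of the permanent routines above):

```python
import math
import numpy as np

def boson_sampling_probability(U, S, T, perm_fn):
    """p_U(S -> T) = |perm(U_ST)|^2 / prod_i (s_i! t_i!), Eq. (10)."""
    U = np.asarray(U, dtype=complex)
    cols = [i for i, s in enumerate(S) for _ in range(s)]   # column i taken s_i times
    rows = [j for j, t in enumerate(T) for _ in range(t)]   # row j of U_S taken t_j times
    U_ST = U[np.ix_(rows, cols)]
    norm = math.prod(math.factorial(s) for s in S) * math.prod(math.factorial(t) for t in T)
    return abs(perm_fn(U_ST)) ** 2 / norm
```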
From an experimental point of view, it should be noted that in real-world devices realizing BS photon losses occur on a regular basis. Thus,
\[n=\sum_{i=1}^{m}s_{i}\geq\sum_{i=1}^{m}t_{i} \tag{11}\]
where we previously defined all of the symbols. Lossy BS introduces many new challenges in simulations. In [69] and [70] the authors discuss loss handling and a classical simulation algorithm for lossy BS simulation. In general, interferometers are a net of interconnected beam splitters. In the lossy scenario, the lossy beam splitter transfers the particle into an inaccessible (unmeasurable) mode. One can easily see the practicality of this approach as its implementation doesn't need any new tools besides beam splitters. Graphically, one usually presents it as in the right panel of Fig. 5. The authors of [70] also noticed that the uniform part of the losses (i.e. loss that applies uniformly to all of the modes) can be extracted from the interferometer and applied as a pure-loss channel at the beginning of the simulation leading to a new instance of the matrix \(U\) describing the interferometer. One may still incorporate further, non-uniform losses into the model, while the simulation will be computationally less demanding due to the lower number of particles at its input. It's worth pointing out that the matrix describing a lossy interferometer is no longer unitary as its eigenvalues can be lower than unity.
The most popular algorithm of BS simulation is the Clifford & Clifford algorithm [44, 1]. Although it was designed for the exact simulation of BS, the algorithm can be adapted for lossy BS simulations as well by virtually doubling the number of optical modes according to the reasoning of Ref. [69], while taking the sample only from the first half of the outputs.
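One standard way of embedding uniform loss into an enlarged lossless interferometer is sketched below; this particular block construction is our illustration and is not necessarily the exact matrix used in Refs. [69, 70], but it is unitary for any unitary \(U\), and sampling only the first \(m\) output modes reproduces the uniformly lossy statistics.

```python
import numpy as np

def embed_uniform_loss(U, eta):
    """Embed an m-mode interferometer U with uniform transmission eta into a
    2m-mode lossless unitary; the last m modes play the role of the loss modes."""
    U = np.asarray(U, dtype=complex)
    m = U.shape[0]
    a, b = np.sqrt(eta), np.sqrt(1.0 - eta)
    eye = np.eye(m)
    W = np.block([[a * U, b * eye],
                  [-b * U, a * eye]])
    return W            # W @ W.conj().T equals the identity for unitary U
```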
We implemented the Clifford & Clifford algorithm in the Piquasso Boost high-performance C++ library linked against traditional CPU and DFE permanent calculation engines. The BS experiment can be designed by the high-level programming interface of the Piquasso framework in Python language. The BS simulation can be scaled up over multiple servers via MPI communication protocol. In our numerical experiments, we used _AMD EPYC 7542 32-Core Processor_ servers, each server containing 2 CPU nodes and two Xilinx Alveo U250 FPGA cards. We executed the BS simulation on 4 MPI processes, each process utilizing one CPU node with one FPGA card. Figure 6 shows the results of our BS simulation performance measurement carried out on host servers equipped with the DFE permanent evaluation engines. Following the reasoning in Clifford & Clifford [1], the theoretical prediction of the average sampling time (i.e. averaged over many samples) in the BS simulation (proportional to the average-case sampling complexity) can be given
by the expression
\[T_{\rm sampling}=T_{0}\left[\frac{n(m+n)}{m}\binom{m+n}{n+1}^{-1}\binom{2m+n}{n+1}+n^{2}m\right] \tag{12}\]
in the \(n,m\rightarrow\infty\) limit. The expression contains a single parameter \(T_{0}\) providing a portable measure to capture the performance of a BS simulator. We fitted Eq. (12) to three data sets obtained from the averaged sampling time of a 60-mode interferometer. One data set corresponds to an ideal BS without any photon loss in the simulation. The second and third data sets were obtained by simulations assuming 30% and 50% losses. (The 60-mode interferometer and 30% loss correspond to the experimental setup of Ref. [33].) In the case of lossy BS, the simulation implies a doubling of the photonic modes, resulting in the increased sampling times also reported in Fig. 6. We found that Eq. (12) describes the complexity of BS simulations very well at larger input photon counts. For the developed BS simulator design we obtained \(T_{0}=7.5\times 10^{-11}\) sec from the fitting. At lower input photon counts, the measured sampling performance differs from the prediction of Eq. (12). At the corresponding \(10^{-3}\)-\(10^{-6}\) s timescale the CPU-DFE performance crossover, the CPU parallelization overhead, and the initialization cost of the computational model (such as the Python-C++ binding, library loading, etc.) dominate the execution time of the BS simulator.

Figure 5: _Left: A schematic of a standard BS experiment. The blue elements denote beam splitters inside the interferometer. A net of interconnected beam splitters is the usual physical realization of the interferometer. Right: A schematic representation of the loss model we use in this work. On the left side of the diagrammatic equation, we see a lossy channel on a single mode, with particle transmission probability \(\eta\). On the right side of the equation, one can see the applied beam splitter that mimics such a channel. The transmitted beam labeled by \(\sqrt{1-\eta}\) is not measured._
Finally, we notice that the obtained \(T_{0}\) fitting parameter is expected to be inversely proportional to the number of the incorporated DFEs, scaling ideally up to a large number of accelerators. Since the CPU servers hosting the DFEs are collecting the samples on their own, the gathering of the samples over the host servers will pose a single-time communication overhead that can be neglected compared to the overall sampling time. We have successfully demonstrated this design using two CPU servers hosting 4 FPGA cards in total.
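For completeness, Eq. (12) is simple to evaluate numerically, e.g. when fitting \(T_{0}\) to measured per-sample times (a small helper sketch; `math.comb` keeps the binomials exact):

```python
import math

def predicted_sampling_time(n, m, T0):
    """Average per-sample time of Eq. (12) for n photons on m modes."""
    term = (n * (m + n) / m) * math.comb(2 * m + n, n + 1) / math.comb(m + n, n + 1)
    return T0 * (term + n ** 2 * m)

# With the fitted T0 = 7.5e-11 s one can tabulate the expected sampling time,
# e.g. for m = 60 and a range of photon numbers n, and compare with Fig. 6.
```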
## 5 Discussion
In this work, we described a novel scalable approach to evaluate the permanent function based on the BB/FG formula. Utilizing the mathematical properties of reflected Gray code ordering one may concurrently evaluate partitions of the outer sum, paving the way for parallel computation of the permanent. We further generalized the algorithm for cases when columns or row multiplicities occur in the input matrix (corresponding to multiple occupations of photonic modes) and the complexity of the permanent calculation might be reduced. We achieved this by the utilization of generalized n-ary Gray code ordering of the outer sum in the BB/FG permanent formula, with digits varying between zero and the occupation number of the individual optical modes. This
generalization makes the BB/FG formula applicable for high-performance BS simulation, utilizing a significant reduction in the computational complexity, as was previously demonstrated using the Ryser formula as well [55, 56, 1]. The main advantage of the BB/FG formula as opposed to the Ryser variant lies in the numerical accuracy of the calculated permanent value. Our numerical experiments using the MPFR multi-precision library showed that Ryser's formula loses against the BB/FG method by several orders of magnitude in accuracy in both the double and extended precision calculations.

Figure 6: Performance of the developed BS simulation design executed on two _AMD EPYC 7542 32-Core Processor_ servers, each server containing \(2\) CPU nodes and two Xilinx Alveo U250 FPGA cards. The averaged sampling time \(T_{\rm sampling}\) of 60-mode BS simulations was calculated over 1000, 300, and 100 samples for input photon counts of at most 34, 38, and 40, respectively.
We also implemented the BB/FG method on FPGA-based data-flow engines. Our implementations are capable of handling matrices of arbitrary size (up to 40 columns) without the overhead of reprogramming the FPGA chips and accounting for row multiplicities in the input matrix via the n-ary Gray code ordering ported to the FPGA chips. The throughput of the DFEs was further increased by organizing the input matrices into batches and executing multiple permanent calculations on the FPGA chips in a single execution. Finally, the fixed-point number representation implemented on the FPGA chips provides competitive accuracy to the extended precision BB/FG CPU implementation in the evaluation of the permanent of unitary matrices. The accuracy of the DFE implementation - equivalent to extended precision - holds up to matrices of size \(n=40\).
These features turned out to be essential to achieve an efficient BS simulator design supported by high-performance DFEs. We integrated our permanent evaluation DFEs into the computation flow of both the ideal and lossy BS simulation. Since the simulation of lossy BS involves twice as many photonic modes as the ideal variant of the same BS design, the simulation of lossy BS takes more time in general. On average, our setup of 4 Alveo U250 FPGA cards made it possible to take a single sample from a 60-mode interferometer with 40 input photons in \(\sim 80\) seconds without photon losses. Introducing photon losses into the design, our numerical experiments could draw a single sample in \(\sim 360\) seconds. The theoretical description of Ref. [1] fits the measured performance data very well by fitting a single parameter (labeled by \(T_{0}\) in Eq. (12)) to all data points. The fitting parameter provides a portable measure to compare different BS simulator implementations. However, we did not find any competitive work in the literature on BS simulation providing similar performance measurements to ours. In turn, we can compare the performance of our simulator to a real BS experiment involving 60 photonic modes, 20 input photons, and 14 measured photons (i.e. loss of 30% on average). We could simulate the described design in \(\sim 0.8\) milliseconds per sample, while in Ref. [33] the authors detected 150 valid samples of 14-photon coincidence measurements in 26 hours.
Finally, we notice that the BS simulation capabilities described in this work can be further improved by utilizing the concept of approximate BS described in Ref. [70], in which part of the optical modes are treated with MF approximation. In this approach, the number of the approximated modes is a hyper-parameter in the algorithm controlling both the speed and the fidelity of the BS simulation. We leave the study of approximate BS simulation with DFEs for future work.
## 6 Acknowledgements
This research was supported by the Ministry of Culture and Innovation and the National Research, Development and Innovation Office within the Quantum Information National Laboratory of Hungary (Grant No. 2022-2.1.1-NL-2022-00004), by the UNKP-22-5 New National Excellence Program of the Ministry for Culture and Innovation from the source of the National Research, Development and Innovation Fund, and by the Hungarian Scientific Research Fund (OTKA) Grants No. K134437 and FK135220. RP. acknowledges support from the Hungarian Academy of Sciences through the Bolyai Janos Stipendium (BO/00571/22/11) as well. We acknowledge the computational resources provided by the Wigner Scientific Computational Laboratory (WSCLAB) (the former Wigner GPU Laboratory). TR and MO acknowledge financial support by the Foundation for Polish Science through TEAM-NET project (contract no. POIR.04.04.00-00-17C1/18-00).
2309.15423 | Prosumers Participation in Markets: A Scalar-Parameterized Function Bidding Approach | Abdullah Alawad, Muhammad Aneeq uz Zaman, Khaled Alshehri, Tamer Başar | 2023-09-27T06:20:28Z | http://arxiv.org/abs/2309.15423v3

# Prosumers Participation in Markets: A Scalar-Parameterized Function Bidding Approach
###### Abstract
In uniform-price markets, suppliers compete to supply a resource to consumers, resulting in a single market price determined by their competition. For sufficient flexibility, producers and consumers prefer to commit to a function as their strategies, indicating their preferred quantity at any given market price. Producers and consumers may wish to act as both, i.e., prosumers. In this paper, we examine the behavior of profit-maximizing prosumers in a uniform-price market for resource allocation with the objective of maximizing the social welfare. We propose a scalar-parameterized function bidding mechanism for the prosumers, in which we establish the existence and uniqueness of Nash equilibrium. Furthermore, we provide an efficient way to compute the Nash equilibrium through the computation of the market allocation at the Nash equilibrium. Finally, we present a case study to illustrate the welfare loss under different variations of market parameters, such as the market's supply capacity and inelastic demand.
## I Introduction
The competition for a divisible resource between selfish agents has made game-theoretic methods useful tools for the design of resource allocation mechanisms. For such mechanisms design, several metrics have been investigated in the literature, such as fairness and social welfare efficiency of the allocation as well as the computational cost for finding the allocation where strategy space plays a significant role. One measure of fairness was discussed in [1] for a proportionally fair (PF) pricing mechanism where the resulting allocation makes it impossible to increase the sum of weighted proportional gains. Another design metric is the efficiency of allocation with respect to social welfare maximization, i.e., to what extent the sum of agents' utilities is close to the maximum possible value. Efficiency of the aforementioned PF pricing mechanism, however, is undermined when agents behave strategically, i.e., when their strategies incorporate the relationship of the price to the bids--this turns the mechanism into an auction. In a competitive formulation, [2] studied the efficiency loss of this PF auction and showed that it is 25% in the worst case.
Vickrey-Clarke-Groves (VCG) is a well-known class of mechanisms [3, 4, 5] for resource allocation which ensures that truthful reporting of each agent is a dominant strategy. However, it may not be practical for some domains because of shortcomings such as providing a different price to different agents for the same resource. Other mechanisms similar to VCG were studied for pricing divisible resources with scalar strategy spaces such as [6, 7]; see section 2 of [8] for an extended list. [6] investigated a PF divisible auction in which the notions of price and demand functions were introduced to characterize optimal response functions of the agents. A unique Nash equilibrium was proven to exist for agents with heterogeneous quasilinear utilities and a decentralized iterative algorithm was described to converge to the equilibrium. A class of mechanisms with single-dimensional signaling (bidding strategy) was studied in [7] where the PF auction was shown to be inefficient in general. An infinite subclass of efficient signal proportional allocation (ESPA) mechanisms was shown to maximize the social welfare for agents with quasi-linear utilities. Besides efficiency, computational cost and signaling space are other design metrics in which ESPA is an optimal allocation mechanism.
The analysis of supply function equilibria (SFE) for uniform-price markets is closely related to this growing literature on efficiency guarantees in market design. In such bidding mechanisms and for sufficient flexibility, competing suppliers prefer to commit to (offer) supply functions as their strategies, indicating their preferred supply quantity at any given market price. This is in contrast to committing to a scalar strategy, such as a fixed price (Bertrand model) or a fixed quantity (Cournot model). [9] investigated the existence of Nash equilibria resulting from supply function bids, more precisely supply offers, and demonstrated that they can be highly inefficient.
In a centralized uniform-price market-clearing mechanism for supply-quantity allocation of an infinitely divisible resource, [8] proposed a restriction on the class of supply functions, limiting the supplier's strategy to scalar-parameterized functions. Under a fixed, inflexible (inelastic) demand and suppliers with maximum production capacity, the paper studied the existence of Nash equilibrium and the efficiency of its associated market allocation. This formulation was extended in [10] to study capacity constrained suppliers and inelastic demands that are spread throughout a transmission constrained power network. By studying the efficiency of Nash equilibrium's market allocation, the paper explained how certain market structures, such as market share and residual supply index, can be useful in predicting the extent to which each supplier can exert market power to influence the market outcome to its advantage. In [11], the assumption of total inelastic demand in [8] was relaxed to study two-sided markets (producers and consumers) with multiple strategic consumers having both elastic (flexible)
and inelastic (minimum) demands, where the strategies of both producers and consumers are scalar-parameterized functions. In some markets, the distinction between producers and consumers may not be needed such that the participants may wish to act as both. These participants are often called prosumers. This paper proposes a scalar-parameterized function for the prosumer markets.
Given that electricity is a prime example of an infinitely divisible resource with ample research on its prosumer markets, the remaining part of this introduction reviews the literature on the participation of prosumers in electricity markets. Different structures for prosumer electricity markets were discussed in [12]. From game-theoretic point of view, these structures can be classified into three main typologies: peer-to-peer model in which competitive prosumers interact directly with each other within a region (neighborhood), prosumer-to-microgrids model in which competitive prosumers interact under a central microgrid-operator which also interacts with other neighbor microgrid-operators, and organized-prosumer-group model in which a group of prosumers cooperate to form a virtual power plant which is connected to the main grid.
Prosumers, as individual agents involved in distributed decision making, are closely related to other similar agents discussed in the electricity markets literature such as interconnected microgrids and controllable loads [13]. The aforementioned scalar-parameterized supply functions were reformulated in [14] such that they are used as bidding mechanism for the consumers who are equipped with controllable loads. The paper studied the existence of Nash equilibrium and the efficiency of its associated allocation of the consumers' load adjustment capacities. Prosumers and interconnected microgrids, however, differ from controllable demand in their objective functions, as they are agents equipped (directly or by proxy) with production capability that comes with an associated cost. Given the various possible structures in which these competitive or cooperative agents can participate in electricity markets, game-theoretic methods are attractive tools for such applications due to their effectiveness in analyzing multi-agent strategic decision making.
In non-cooperative settings, whether the typology of prosumers is peer-to-peer or prosumer-to-microgrids, there are several solution concepts in game theory that are useful for analyzing such typologies [13]. For example, Nash equilibrium is suitable for non-cooperative games where no player dominates the decision process whereas Stackelberg equilibrium is useful when hierarchy is allowed in the decision process [15].
Compared to a large producer in the wholesale electricity market (within the main grid), the size of a prosumer is often too small to participate in such a market. Hence, retail aggregators offer a reasonable solution to enable the participation of prosumers in the wholesale market. In this trading setting where a profit-maximizing retail aggregator sets a uniform price for its competitive prosumers, [16] formulated a Stackelberg game between the aggregator and the prosumers and characterized the Stackelberg equilibrium. The goal was to quantify the loss in efficiency (e.g. welfare loss) that may result from the strategic incentive of the aggregator, when compared to the benchmark efficiency in which the prosumers directly participate in the wholesale market. [17] introduced a framework where an aggregator incentivizes its prosumers to produce or consume energy over a period of time by setting two prices: one for production and another for consumption. The paper focuses on the prosumer's strategic decision-making process which was formulated as a game with the aggregator. For such a game, sufficient conditions on the aggregator's pricing strategy were established for the existence of a unique Nash equilibrium. Also, a distributed algorithm was proposed which enables the prosumers to seek this Nash equilibrium, relying on local information acquired through exchange of information with neighboring prosumers. Furthermore, the algorithm was proven to be asymptotically convergent, when the network of prosumers is connected and undirected.
Among the several typologies above for the prosumer markets, this paper restricts the analysis on a peer-to-peer model. We generally seek to have an efficient Nash equilibrium in the design of uniform-price market where strategic (price-anticipating) market participants compete for an infinitely divisible resource. In other words, we want the market's aggregate cost at the Nash equilibrium to be close to the minimum possible cost (i.e. the socially-optimal cost or cost at the price-taking competitive equilibrium). Equivalently, we seek to have a social welfare at the Nash equilibrium that is close to the socially-optimal welfare. Our goal is to study the market design question of how to formulate a bidding mechanism that provides sufficient flexibility for the prosumers to declare their bidding preferences in a way that yields an efficient allocation of productions and consumptions, minimizing the "welfare loss" that occurs due to their strategic behavior. In other words, are there any restrictions that can be placed on the declared bidding functions to ensure such an efficient allocation?
Due to its ability to simultaneously produce and consume, a prosumer's consumption volume from other market participants may not be zero even when the net supply is positive (i.e., the prosumer supplies more than it consumes). In this case, the prosumer still has consumption utility and cost. This feature is known as prosumer duality. By expanding the traditional Cournot model, [18] demonstrated by simulations that the dual nature of prosumers can lead to more competitive behavior than pure producers in a traditional producer/consumer system. In other words, the prosumers' best-response supply quantities are closer to the competitive levels than those of the traditional producers, under the same game-theoretic scenario. The design that we propose considers the net quantity of supply/demand but does not consider the prosumer's duality; which is recommended for future research.
The rest of the paper is organized as follows: Section II introduces the bidding mechanism for the prosumers and the market model (optimization of resource allocation) for the central clearinghouse. Section III discusses the competitive
equilibrium where the prosumers are price-takers. Section IV investigates the Nash equilibrium where the prosumers are price-makers (strategic). Section V outlines a case study where the market outcome (allocation) is examined with respect to the welfare loss when different market parameters (e.g. supply capacity or inelastic demand) are varied. Section VI concludes and provides future directions.
## II Bidding Mechanism and Market Model
To investigate the posed market design question, we consider profit maximizing prosumers having production costs and utilities characterized, respectively, by convex cost functions and concave utility functions in the output quantity. Each prosumer has a maximum production capacity and a minimum inelastic demand. In the search for a restriction on general bidding functions to scalar-parameterized ones, analogous to [8] for one-sided market of suppliers and [11] for two-sided market of suppliers and consumers, it is natural to first examine the possibility of using the supply offers and demand bids proposed in [11] so that together they represent the prosumer's strategy--using this bidding mechanism enables the dual nature of prosumers. Denote the set of prosumers by \(\mathcal{N}\)=\(\{1,2,\ldots,N\}\). Let \(d_{min}\in\mathbb{R}^{+}\) represent the minimum inelastic (inflexible) demand for each prosumer, and \(s_{max}\in\mathbb{R}^{+}\) denote the maximum supply capacity. Their values are assumed to be identical for all prosumers without loss of generality. In the paper, we emphasize the distinction between the supply capacity \(s_{max}\) and the production capacity. The production capacity includes not only the supply capacity but also the capacity used by each prosumer to meet its own demand, i.e., the production capacity is \(s_{max}+d_{min}\). In addition, \(\theta_{i}^{s}\in\mathbb{R}^{+}\) and \(\theta_{i}^{d}\in\mathbb{R}^{+}\) are the parameters to be chosen by prosumer \(i\) to declare its preferred quantity of supply and demand, respectively. Also, \(p\in\mathbb{R}^{+}\) is the price to be determined by the market operator (central clearinghouse) to clear the market. The bidding functions are:
\[\begin{split}& s_{i}=S(\theta_{i}^{s},p)\coloneqq s_{max}-\frac{ \theta_{i}^{s}}{p},\text{ and}\\ & d_{i}=D(\theta_{i}^{d},p)\coloneqq d_{min}+\frac{\theta_{i}^{ d}}{p},\ i\in\mathcal{N}\end{split} \tag{1}\]
Furthermore, each prosumer has a production cost function \(C_{i}(s_{i})\) that is continuously differentiable, convex, and increasing as well as a utility function \(U_{i}(d_{i})\) that is continuously differentiable, concave, and increasing. In a perfect competition setting where the prosumers are price takers, given a market price \(\mu\)\(>\)\(0\) each prosumer maximizes a payoff function given by:
\[\begin{split}\pi_{i}^{p}(\theta_{i}^{d},\theta_{i}^{s},\mu)=& U_{i}(D(\theta_{i}^{d},\mu))-\mu D(\theta_{i}^{d},\mu)\\ &+\mu S(\theta_{i}^{s},\mu)-C_{i}(S(\theta_{i}^{s},\mu))\end{split} \tag{2}\]
The market operator chooses the price \(p(\boldsymbol{\theta}^{d},\boldsymbol{\theta}^{s})>0\) to clear the market, i.e., so that the supply/demand balance equation \(\sum_{i=1}^{N}D(\theta_{i}^{d},p(\boldsymbol{\theta}^{d},\boldsymbol{\theta} ^{s}))=\sum_{i=1}^{N}S(\theta_{i}^{s},p(\boldsymbol{\theta}^{d},\boldsymbol{ \theta}^{s}))\) is satisfied in which case:
\[p(\boldsymbol{\theta}^{d},\boldsymbol{\theta}^{s})=\frac{\sum_{i=1}^{N}( \theta_{i}^{d}+\theta_{i}^{s})}{N(s_{max}-d_{min})} \tag{3}\]
In an oligopoly setting where the prosumers are price-anticipating, each prosumer maximizes the following payoff function which is similar to (2) except that now the prosumer realizes that the price is set as a function of all prosumers' actions according to (3), i.e., \(\mu=p(\boldsymbol{\theta}^{d},\boldsymbol{\theta}^{s})\). Consequently, this incentivizes the prosumers to strategically adjust their payoffs to gain more profits.
\[\begin{split}\pi_{i}^{p}(\theta_{i}^{d},\theta_{-i}^{d},\theta_{i}^{s},\theta_{-i}^{s})=& U_{i}(D(\theta_{i}^{d},p(\boldsymbol{\theta}^{d},\boldsymbol{\theta}^{s})))-p(\boldsymbol{\theta}^{d},\boldsymbol{\theta}^{s})\\ &\times D(\theta_{i}^{d},p(\boldsymbol{\theta}^{d},\boldsymbol{\theta}^{s}))+p(\boldsymbol{\theta}^{d},\boldsymbol{\theta}^{s})\\ &\times S(\theta_{i}^{s},p(\boldsymbol{\theta}^{d},\boldsymbol{\theta}^{s}))-C_{i}(S(\theta_{i}^{s},p(\boldsymbol{\theta}^{d},\boldsymbol{\theta}^{s})))\end{split} \tag{4}\]
Given the game \(\mathcal{G}\) defined by the set of prosumers \(\mathcal{N}\), their payoffs given by (4) and their action spaces \(\boldsymbol{\Theta}_{i}^{d}=\mathbb{R}^{+}\) and \(\boldsymbol{\Theta}_{i}^{s}=\mathbb{R}^{+}\), a bidding profile \((\boldsymbol{\tilde{\theta}}^{d},\boldsymbol{\tilde{\theta}}^{s})\) is a Nash equilibrium if:
\[\pi_{i}^{p}(\tilde{\theta}_{i}^{d},\tilde{\theta}_{-i}^{d},\tilde{\theta}_{i }^{s},\tilde{\theta}_{-i}^{s})\geq\pi_{i}^{p}(\theta_{i}^{d},\tilde{\theta}_{ -i}^{d},\theta_{i}^{s},\tilde{\theta}_{-i}^{s}),\forall\theta_{i}^{d},\theta_ {i}^{s},i\in\mathcal{N} \tag{5}\]
Unfortunately, the payoff function (4) increases without bound since it is not concave in the pair \((\theta_{i}^{d},\theta_{i}^{s})\). Therefore, the problem is not well defined. As a result, we propose an alternative bidding mechanism to (1) for the prosumers. Let \(q_{i}\in\mathbb{R}\) denote the desired quantity of demand (positive) or supply (negative) for each prosumer \(i\), \(d_{min}>0\) be the minimum inelastic demand, \(s_{max}>0\) be the maximum supply capacity, and \(p>0\) represent the market's price. We propose the following bidding mechanism which consists of a scalar-parameterized function representing the quantity \(q_{i}\) and a scalar representing \(s_{max}\):
\[q_{i}=Q(\theta_{i},p)\coloneqq d_{min}+\frac{\theta_{i}}{p},\text{ and }s_{max},\ \ i\in\mathcal{N} \tag{6}\]
Note that \(d_{min}\) is included in the scalar-parameterized function. Also, \(q_{i}\) and \(s_{max}\) in the bidding mechanism (6) must satisfy \(s_{max}\geq-q_{i}\). Prosumer \(i\) chooses the parameter \(\theta_{i}\)\(\in\)\(\mathbb{R}\) such that \(\theta_{i}\)\(>\)\(0\) and \(\theta_{i}\)\(\leq\)\(0\) represent, respectively, the pure-consumption and prosumption modes. Thus, \(q_{i}\)\(>\)\(d_{min}\) is the pure-consumption mode such that \(q_{i}\) is the quantity to be consumed by prosumer \(i\) including the inelastic demand \(d_{min}\), \(q_{i}\)\(<0\) is the supply mode such that \(q_{i}\) is the quantity to be produced by prosumer \(i\) including the quantity produced to meet its own inelastic demand \(d_{min}\) (local inelastic demand), and \(0\leq q_{i}\leq d_{min}\) is the prosumption mode such that \(q_{i}\) is the locally unmet portion of the inelastic demand \(d_{min}\) to be consumed by prosumer \(i\) from other prosumers (hence, \(d_{min}-q_{i}\) is prosumer \(i\)'s produced quantity which is consumed locally). Note that using this bidding mechanism, each prosumer cannot supply to other prosumers until it produces its own entire inelastic demand, i.e., the mechanism does not allow the duality of prosumers. The market operator solves the following convex optimization problem to maximize the aggregate social welfare defined in (7a):
\[\underset{\boldsymbol{q}}{\text{maximize}}\ \ \mathcal{W}(\boldsymbol{q})\coloneqq\sum_{i=1}^{N}S_{i}(q_{i}), \tag{7a}\]
subject to
\[\sum_{i=1}^{N}q_{i}=0, \tag{7b}\]
\[s_{max}\geq-q_{i},\ \forall i\in\mathcal{N} \tag{7c}\]
where \(S_{i}(q_{i})\) is the utility/cost function for prosumer \(i\), i.e., utility function when it is positive or cost function when it is negative. Any solution \(\mathbf{q}\) (i.e. allocation profile) to (7) is referred to as an efficient allocation. We impose the following assumption on \(S_{i}(q_{i})\):
**Assumption 1**.: _Let \(N\geq 2\) and for \(\forall i\in\mathcal{N}\), \(q_{i},q^{\prime}_{i}\in\mathbb{R}\), \(d_{min}>0\), \(s_{max}>0\), and \(-s_{max}\leq q_{i}<q^{\prime}_{i}\), \(S_{i}(q_{i})\) is twice continuously differentiable, strictly increasing, \(S_{i}(d_{min})=0\) and satisfies the following condition which is stricter than strict concavity:_
\[(1+\frac{q^{\prime}_{i}}{(N-1)d_{min}})\frac{\partial S_{i}(q^{\prime}_{i})}{ \partial q^{\prime}_{i}}<(1+\frac{q_{i}}{(N-1)d_{min}})\frac{\partial S_{i}(q _{i})}{\partial q_{i}} \tag{8}\]
The market operator chooses the price \(p(\mathbf{\theta})>0\) to clear the market, i.e., so that the supply/demand balance constraint (7b) \(\sum_{i=1}^{N}Q(\theta_{i},p(\mathbf{\theta}))=0\) is satisfied in which case:
\[p(\mathbf{\theta})=-\frac{\sum_{i=1}^{N}\theta_{i}}{Nd_{min}} \tag{9}\]
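This expression follows directly from the balance condition after substituting the bids (6):
\[\sum_{i=1}^{N}\Big{(}d_{min}+\frac{\theta_{i}}{p(\boldsymbol{\theta})}\Big{)}=0\;\Longrightarrow\;Nd_{min}=-\frac{\sum_{i=1}^{N}\theta_{i}}{p(\boldsymbol{\theta})}\;\Longrightarrow\;p(\boldsymbol{\theta})=-\frac{\sum_{i=1}^{N}\theta_{i}}{Nd_{min}}.\]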
\(p(\mathbf{\theta})\geq 0\) is only possible if \(\sum_{i=1}^{N}\theta_{i}\leq 0\) (assumed). If the latter is zero then \(q_{i}=d_{min}\) regardless of the value of \(p\). Hence, the following conventions are adopted which make the price continuous in \(\mathbf{\theta}\):
\[Q(0,0)=d_{min},\ \ \ \mathrm{and}\ \ p(\mathbf{0})=0 \tag{10}\]
Due to the assumption \(\sum_{j=1}^{N}\theta_{j}\leq 0\), the action parameter \(\theta_{i}\) for each prosumer \(i\in\mathcal{N}\) must stay within \(\theta_{i}\leq-\sum_{j\neq i}^{N}\theta_{j}\) which is enforced by the market operator.
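As a concrete illustration of the clearing step, the following minimal sketch (Python; not the paper's implementation) evaluates the uniform price (9) and the induced allocations (6) for a given bid profile, applying the convention (10) when the price is zero; the function name and the example bid values are illustrative only.

```
import numpy as np

def clear_market(theta, d_min):
    """Uniform-price clearing for the bidding mechanism (6) and price rule (9)."""
    theta = np.asarray(theta, dtype=float)
    N = theta.size
    p = -theta.sum() / (N * d_min)        # clearing price (9); requires sum(theta) <= 0
    if p == 0.0:                          # convention (10)
        q = np.full(N, d_min)
    else:
        q = d_min + theta / p             # allocations (6); s_max >= -q_i is not checked here
    return p, q

# Example with three prosumers and d_min = 1
p, q = clear_market([-2.0, 0.5, -1.5], d_min=1.0)
print(p, q, q.sum())                      # price 1.0, allocations [-1.0, 1.5, -0.5], balance 0
```

In this example the second prosumer is in the pure-consumption mode (\(q_{2}>d_{min}\)) while the other two are suppliers, and the allocations sum to zero as required by the balance constraint.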
## III Perfect Competition and Competitive Equilibrium
In this section, we present the case where all prosumers are price takers and the goal is to analyze the market outcome by establishing the existence and characterization of a unique competitive market equilibrium. Therefore, we can conclude that the allocation at the competitive equilibrium is efficient, which is established by the first fundamental theorem of welfare economics. Given the market price \(\mu>0\), prosumer \(i\) maximizes the following payoff function:
\[\pi^{p}_{i}(\theta_{i},\mu)=S_{i}(Q(\theta_{i},\mu))-\mu Q(\theta_{i},\mu) \tag{11}\]
Based on the definition in (6), let \(Q(\theta_{i},\mu)=q_{i}\) in (11). When \(q_{i}>d_{min}\), then \(S_{i}(q_{i})>0\) represents the utility gained from consuming the amount \(q_{i}\). When \(q_{i}<0\), then \(S_{i}(q_{i})<0\) and \(|S_{i}(q_{i})|\) represents the cost incurred from producing the amount \(|q_{i}|\). When \(0\leq q_{i}\leq d_{min}\), then \(S_{i}(q_{i})\leq 0\) (assumed) and \(|S_{i}(q_{i})|\) represents the cost incurred from consuming the amount \(q_{i}\) (i.e. consuming \(q_{i}\) from other prosumers while producing \(d_{min}-q_{i}\) for local consumption). Also based on whether \(q_{i}\) is positive or negative, the second term in (11) represents the cost of consumption or the revenue from supply, respectively. It is worth noting that this formulation allows the payoff (11) to be negative, e.g., when \(0\leq q_{i}\leq d_{min}\), both terms in (11) are negative. Furthermore, the optimal social welfare (7a) can be negative depending on the structure of the functions \(S_{i}(q_{i}),i\in\mathcal{N}\); we will see in Section V that the case study results in negative optimal social welfare since the values of the example functions are larger in magnitude over the negative domains than the positive counterparts. The following theorem states the result characterizing the unique competitive equilibrium, and makes a conclusion about the corresponding allocation. The proof is provided in Appendix I.
**Theorem 1**.: _Suppose Assumption 1 holds. Then, there exists a unique competitive equilibrium, i.e., a scalar \(\mu\) given by (10) and a vector \(\mathbf{\theta}^{*}\), satisfying:_
\[\pi^{p}_{i}(\theta^{*}_{i},\mu)\geq\pi^{p}_{i}(\theta_{i},\mu),\forall\ \theta_{i}\in\mathbb{R},i\in\mathcal{N} \tag{12}\]
_Also, the allocation profile \(\mathbf{q}^{*}\) is efficient where \(\mathbf{q}^{*}\) is defined by \(q^{*}_{i}=Q(\theta^{*}_{i},\mu)\)._
## IV Strategic Prosumers and Nash Equilibrium
In this section, we analyze the oligopoly case where prosumers are price-anticipating. Each prosumer maximizes the following payoff function which is the same as (11) except that now the prosumer realizes that the price is set as a function of all prosumers' actions according to (9), i.e., \(\mu=p(\mathbf{\theta})\):
\[\pi^{p}_{i}(\theta_{i},\theta_{-i})=S_{i}(Q(\theta_{i},p(\mathbf{\theta})))-p(\bm {\theta})Q(\theta_{i},p(\mathbf{\theta})) \tag{13}\]
Since the prosumer's payoff is a function of the actions of all prosumers, this incentivises the prosumers to strategically adjust their payoff functions. Let \(\mathcal{G}\) denote the game defined by the set of prosumers (players) \(\mathcal{N}\), their payoffs given by (13) and their action space \(\mathbf{\Theta}_{i}=\mathbb{R}\). Our goal is to demonstrate that the game \(\mathcal{G}\) has a Nash equilibrium and that the corresponding market allocation is unique, providing an efficient way to compute it. This can be achieved by showing that at a Nash equilibrium, the resulting allocation is obtained by solving a modified version of the convex optimization problem (7) where the prosumers modify their utility/cost functions \(S_{i}(q_{i})\). For notational simplicity, we use slight abuse of notations to refer to \(Q(\theta_{i},p(\mathbf{\theta}))\) or \(Q(\theta_{i},\mu)\) as \(q_{i}\) and \(\pi^{p}_{i}(\theta_{i},\theta_{-i})\) or \(\pi^{p}_{i}(\theta_{i},\mu)\) as \(\pi^{p}_{i}\).
The collection of parameters \(\mathbf{\tilde{\theta}}\) (i.e. bidding profile) constitutes a Nash equilibrium for the game \(\mathcal{G}\) if:
\[\pi^{p}_{i}(\tilde{\theta}_{i},\tilde{\theta}_{-i})\geq\pi^{p}_{i}(\theta_{i},\tilde{\theta}_{-i}),\forall\theta_{i}\in\mathbb{R},\ i\in\mathcal{N} \tag{14}\]
First, we state some conditions on the prosumers' action spaces in which the existence of Nash equilibrium for the game \(\mathcal{G}\) is ruled out:
**Lemma 1**.: _If \(\tilde{\mathbf{\theta}}\) is a Nash equilibrium for the game \(\mathcal{G}\), then the following cannot hold: \(\tilde{\mathbf{\theta}}=\mathbf{0}\) or \(\forall i\in\mathcal{N},\sum_{j\neq i}^{N}\tilde{\theta}_{j}\geq 0\)._
The proof is given in Appendix II. It is worth noting from the proof that prosumer \(i\) can exert market power if \(\sum_{j\neq i}^{N}\tilde{\theta}_{j}>0\) since its payoff would increase without a bound. Next, we state a sufficient condition on the prosumers' action spaces for the existence of Nash equilibrium for the game \(\mathcal{G}\):
**Lemma 2**.: _Assume that \(N\geq 2\), and suppose that Assumption 1 holds. Then, \(\mathcal{G}\) admits a Nash equilibrium \(\tilde{\mathbf{\theta}}\) where
the following condition is satisfied for all \(i\in\mathcal{N}\):_
\[-(\sum_{j\neq i}^{N}\tilde{\theta}_{j})\Big{(}\frac{Nd_{min}}{2}\frac{(S_{i})_{\tilde{q}_{i}\tilde{q}_{i}}(\tilde{q}_{i})}{(S_{i})_{\tilde{q}_{i}}(\tilde{q}_{i})}+1\Big{)}\leq\tilde{\theta}_{i}\leq-(\sum_{j\neq i}^{N}\tilde{\theta}_{j})-\epsilon \tag{15}\]
_where \(\tilde{q}_{i}=Q(\tilde{\theta_{i}},p(\boldsymbol{\tilde{\theta}}))\) as defined in (6) and \(\epsilon>0\) is any infinitesimal constant._
The proof is given in Appendix III. It is worth noting from the proof that the left inequality of condition (15) represents the interval in which the prosumer's payoff (13) is concave in \(\theta_{i}\) and the right inequality represents the interval in which the payoff is continuous in \(\theta_{i}\)--at \(\theta_{i}=-(\sum_{j\neq i}^{N}\theta_{j})\), the market price (9) is zero and the payoff (13) is undefined; hence, \(\epsilon>0\) is a technical requirement enforced by the market operator which guarantees the continuity of (13) over a compact subset of \(\mathbb{R}\) for \(\theta_{i}\), and ensures a positive market price. We can now state the main result, concluding the uniqueness of Nash equilibrium and characterizing its corresponding market allocation. To prove this result, we construct a convex optimization problem by modifying (7) such that we replace the utility/cost functions \(S_{i}(q_{i})\) by modified utility/cost functions \(\tilde{S}_{i}(q_{i})\):
\[\underset{\boldsymbol{q}}{\text{maximize}}\ \ \tilde{\mathcal{W}}(\boldsymbol{q})\coloneqq\sum_{i=1}^{N}\tilde{S}_{i}(q_{i}), \tag{16a}\]
subject to
\[\sum_{i=1}^{N}q_{i}=0, \tag{16b}\]
\[s_{max}\geq-q_{i},\ \forall i\in\mathcal{N} \tag{16c}\]
where
\[\tilde{S}_{i}(q_{i})=\begin{cases}(1+\frac{q_{i}}{(N-1)d_{min}})S_{i}(q_{i})- \frac{\int_{d_{min}}^{q_{i}}S_{i}(z)\,dz}{(N-1)d_{min}},q_{i}{\geq}d_{min}\\ (1+\frac{q_{i}}{(N-1)d_{min}})S_{i}(q_{i})+\frac{\int_{q_{i}}^{d_{min}}S_{i}(z )\,dz}{(N-1)d_{min}},q_{i}{<}d_{min}\end{cases} \tag{17}\]
Theorem 2's proof is given in Appendix IV.
**Theorem 2**.: _Assume that \(N\geq 2\) and suppose that Assumption 1 holds. Let \(\boldsymbol{\tilde{q}}\) be an allocation profile corresponding to a Nash equilibrium \(\boldsymbol{\tilde{\theta}}\) for the game \(\mathcal{G}\), i.e., \(\forall i\in\mathcal{N},\tilde{q}_{i}=Q(\tilde{\theta_{i}},p(\boldsymbol{ \tilde{\theta}}))\) as defined in (6). If_
\[\tilde{q}_{i}\geq-(N-1)d_{min}-\frac{(S_{i})_{\tilde{q}_{i}}(\tilde{q}_{i})}{ (S_{i})_{\tilde{q}_{i}\tilde{q}_{i}}(\tilde{q}_{i})},\forall i\in\mathcal{N} \tag{18}\]
_then \(\boldsymbol{\tilde{q}}\) is the unique solution to the convex optimization problem (16) and \(\boldsymbol{\tilde{\theta}}\) is unique._
It is worth noting from the proof that the condition (18) guarantees the strict concavity of the modified utility/cost function (17) in \(q_{i}\). Theorem 2 provides an efficient way of computing the Nash equilibrium for the game \(\mathcal{G}\). Rather than solving \(N\) prosumer problems in the action variables \(\boldsymbol{\theta}\), we can compute the solution of the optimization problem (16), providing the market allocation \(\boldsymbol{\tilde{q}}\). This, in turn, allows the computation of the Nash equilibrium \(\boldsymbol{\tilde{\theta}}\) directly using (6). To understand the rationale for constructing the optimization problem (16), first note that it is similar to (7) except for the objective function, where the utility/cost functions of the prosumers are modified. Therefore, (16a) represents the maximization of a welfare at the Nash equilibrium which is not the true welfare maximized in (7a). This means that at the Nash equilibrium, the true utility/cost functions \(S_{i}(q_{i})\) are strategically misrepresented by the prosumers such that they declare untruthful utility/cost functions \(\tilde{S}_{i}(q_{i})\) to maximize their profits.
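To make this computational route concrete, the sketch below solves program (16) numerically for generic callables representing the modified functions (17); it is only an illustration (SLSQP is one of several possible solvers, and the helper name nash_allocation is ours), not the authors' implementation. Once the equilibrium price is known, the corresponding bids follow from inverting (6), i.e., \(\tilde{\theta}_{i}=p(\boldsymbol{\tilde{\theta}})(\tilde{q}_{i}-d_{min})\).

```
import numpy as np
from scipy.optimize import minimize

def nash_allocation(S_mod, s_max, q0=None):
    """Solve program (16): maximize sum_i S_mod[i](q_i) subject to
    sum_i q_i = 0 and q_i >= -s_max.  S_mod is a list of callables
    implementing the modified utility/cost functions (17)."""
    N = len(S_mod)
    if q0 is None:
        q0 = np.zeros(N)                      # feasible starting point
    objective = lambda q: -sum(S_mod[i](q[i]) for i in range(N))
    constraints = [{"type": "eq", "fun": lambda q: q.sum()},        # balance (16b)
                   {"type": "ineq", "fun": lambda q: q + s_max}]    # capacity (16c): q_i + s_max >= 0
    res = minimize(objective, q0, method="SLSQP", constraints=constraints)
    return res.x                              # allocation at the Nash equilibrium
```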
## V Case Study
In this section, our goal is to examine the welfare loss due to the strategic behavior of prosumers when the market's supply capacity or inelastic demand are varied. To achieve this, we compute the social welfare and contrast its behavior under two scenarios: first with the optimal allocation resulting from the perfect competition of the prosumers, given by the program (7), and second with the optimal allocation resulting from the prosumers' strategic interaction, given by the program (16). In both cases, we calculate the true social welfare from (7a). We vary the market's supply capacity or inelastic demand by changing the supply \(s_{max}\) or demand \(d_{min}\) parameters in the proposed bidding mechanism (6), respectively, while fixing the other parameters. We select the range in which we vary these parameters such that the welfare of the perfect competition plateaus. We consider the following example of a strictly concave, strictly increasing utility/cost function \(S_{i}(q_{i})\) for each prosumer \(i\in\mathcal{N}\):
\[S_{i}(q_{i})=e^{\frac{-\beta_{i}}{5}}-e^{\frac{-\beta_{i}q_{i}}{5d_{min}}} \tag{19}\]
Using this example function, we can compute the true welfare (7a). Also, to calculate the "modified" welfare (16a), we write \(\tilde{S}_{i}(q_{i})\), defined in (17), as follows:
\[\begin{split}\tilde{S}_{i}(q_{i})=&(1+\frac{q_{i}}{(N-1)d_{min}})\cdot(e^{\frac{-\beta_{i}}{5}}-e^{\frac{-\beta_{i}q_{i}}{5d_{min}}})-\frac{1}{(N-1)d_{min}}\\ &\cdot\left(e^{\frac{-\beta_{i}}{5}}(q_{i}-d_{min})+\frac{5d_{min}}{\beta_{i}}(e^{\frac{-\beta_{i}q_{i}}{5d_{min}}}-e^{\frac{-\beta_{i}}{5}})\right)\end{split} \tag{20}\]
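As a quick consistency check, the closed form (20) can be compared against a direct numerical evaluation of the generic definition (17) for the example function (19); the sketch below does this with SciPy, using parameter values in the range of the case study (an illustration only, not the simulation code).

```
import numpy as np
from scipy.integrate import quad

def S(q, beta, d_min):                        # example utility/cost function (19)
    return np.exp(-beta / 5) - np.exp(-beta * q / (5 * d_min))

def S_mod_closed(q, beta, d_min, N):          # closed form (20)
    lead = (1 + q / ((N - 1) * d_min)) * S(q, beta, d_min)
    integral = (np.exp(-beta / 5) * (q - d_min)
                + (5 * d_min / beta) * (np.exp(-beta * q / (5 * d_min)) - np.exp(-beta / 5)))
    return lead - integral / ((N - 1) * d_min)

def S_mod_numeric(q, beta, d_min, N):         # generic definition (17), integral evaluated numerically
    lead = (1 + q / ((N - 1) * d_min)) * S(q, beta, d_min)
    val, _ = quad(lambda z: S(z, beta, d_min), d_min, q)   # signed integral covers both branches
    return lead - val / ((N - 1) * d_min)

beta, d_min, N = 2.0, 4.0, 11                  # values in the range used in Section V
for q in (-3.0, 0.5, 4.0, 7.0):
    assert abs(S_mod_closed(q, beta, d_min, N) - S_mod_numeric(q, beta, d_min, N)) < 1e-8
```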
To compare the values of the two resulting welfares (welfare at the competitive equilibrium and welfare at the Nash equilibrium) when the market's supply/demand parameters are changed, we solve both programs (7) and (16) several times, first varying the total supply capacity (i.e. \(s_{max}\) for all \(i\in\mathcal{N}\)) and second changing the total inelastic demand (i.e. \(d_{min}\) for all \(i\in\mathcal{N}\)). In both simulations and using an ad-hoc technique, we investigate two cases. In the first case, we make sure that the conditions in Lemma 1 and Lemma 2 are satisfied. That is, we check in each simulation if \(\forall i\in\mathcal{N}\), \(\sum_{j\neq i}^{N}\tilde{\theta}_{j}<0\) and (15) are satisfied by tuning the parameters \(\beta_{i}\), \(d_{min}\), and \(s_{max}\). In the second case, we carry out other simulations such that we allow the left inequality of (15) to not be satisfied; our goal is to observe whether the welfare loss would be the same as in the first case when (15) is always satisfied--recall that the left inequality of (15) is a sufficient condition for the existence of Nash equilibrium since it guarantees concavity of each prosumer's payoff in \(\theta_{i}\). Also, recall that (18) guarantees strict concavity of each prosumer's modified utility/cost function in \(q_{i}\), and hence it is a sufficient condition for existence and uniqueness of
the market allocation at Nash equilibrium. It is worth noting that the set \(\mathcal{B}\), constituting all possible \(\theta_{i}\)'s defined by the inequality in \(\theta_{i}\) that is obtained from substituting \(Q(\theta_{i},p(\boldsymbol{\theta}))\) in (18), is contained within the set \(\mathcal{A}\) defined by the left inequality of (15)--see Appendix IV for more details. Given our example function (19), the condition (18) yields:
\[\tilde{q}_{i}\geq\frac{5d_{min}}{\beta_{i}}-d_{min}(N-1),i\in\mathcal{N} \tag{21}\]
(21) indicates that to guarantee existence and uniqueness of Nash equilibrium, the optimal allocation for each prosumer \(i\) must be above a certain value which depends on \(\beta_{i}\), \(d_{min}\), and \(N\). In all the simulations, we fix the number of prosumers \(N\) to 11. By fixing \(d_{min}>0\), the right-hand side of (21) decreases as \(\beta_{i}\) increases. Similarly by fixing \(\beta_{i}>0.5\), it decreases as \(d_{min}\) increases. Therefore, the minimum optimal allocation that is sufficient for existence and uniqueness of Nash equilibrium moves further left in the real line as we increase \(d_{min}\) and/or \(\beta_{i}\). When the absolute value of this minimum value is less than the maximum supply capacity \(s_{max}\) for all prosumers, existence and uniqueness of Nash equilibrium is guaranteed.
First, we compare the two resulting welfares and observe the gap between them when the prosumers' supply capacities \(s_{max}\) are varied while fixing their inelastic demands \(d_{min}\). On the top of Fig. 1, \(s_{max}\) is increased gradually from 0.1 to 3 while \(d_{min}=4\) and \(\beta_{i}=\{1.9+0.1i\mid i=1,...,N\}\) are fixed. As mentioned earlier, the values of the latter two are selected using an ad-hoc technique to guarantee existence and uniqueness of Nash equilibrium by making sure that the resulting optimal allocations of the prosumers always satisfy (21). The figure shows that if a Nash equilibrium exists, the welfare loss does not grow unbounded when the total supply capacity is increased. Similarly, Fig. 1 on the bottom shows the welfare gap when \(s_{max}\) is increased from 0.1 to 4.5 while \(d_{min}=1\) and \(\beta_{i}=\{0.5+0.1i\mid i=1,...,N\}\) are fixed. In contrast to the previous simulations, in this case, the values of \(d_{min}\) and \(\beta_{i}\) are selected such that the resulting optimal allocations of the prosumers do not all necessarily satisfy (21) when their supply capacities exceed a certain threshold. Consequently, a Nash equilibrium may not exist. The figure shows that the welfare loss grows unbounded when the total supply capacity is increased. In such simulations, the optimal allocations of prosumers 1 and 2 do not satisfy (21) when the total supply capacity approximately exceeds 18.5 and 31.5, respectively.
Second, we examine the welfare loss between the two resulting welfares when the prosumers' inelastic demands \(d_{min}\) are varied while having their supply capacities \(s_{max}\) fixed. On the top of Fig. 2, \(s_{max}=0.7\) and \(\beta_{i}=\{1.9+0.1i\mid i=1,...,N\}\) are fixed while \(d_{min}\) is decreased gradually from 5 to 0.7. The values of the latter two are selected using an ad-hoc technique so that the condition (21) for the existence and uniqueness of Nash equilibrium is satisfied for all prosumers. The figure shows that when the total inelastic demand is decreased, the welfare loss does not grow unbounded if a Nash equilibrium exists. Similarly, Fig. 2 on the bottom shows the welfare gap when \(s_{max}=3\) and \(\beta_{i}=\{0.5+0.1i\mid i=1,...,N\}\) are fixed while \(d_{min}\) is decreased from 5 to 0.7. The values of the latter two are selected such that, in this case, the resulting optimal allocations of the prosumers do not all necessarily satisfy (21) when their inelastic demands are below a certain threshold. Thus, a Nash equilibrium may not exist. The figure shows that the welfare loss grows unbounded when the total inelastic demand is decreased. In these simulations, the optimal allocations of prosumers 1 and 2 do not satisfy (21) when the total inelastic demand drops approximately below 20 and 11.5, respectively.
## VI Conclusion and Future Directions
In this paper, a scalar-parameterized bidding mechanism has been proposed for the prosumers in a uniform-price peer-to-peer market. A competitive equilibrium and the associated efficient allocation have been established. When certain conditions on the action spaces of the prosumers are satisfied, we have shown that a unique Nash equilibrium exists. In addition, we have provided an efficient way to compute the market allocation at the Nash equilibrium, and in turn, the Nash equilibrium itself. Finally, a case study was given where we have shown that the welfare gap between the welfare at the competitive equilibrium and the welfare at the Nash equilibrium is bounded when the market supply or inelastic demand are varied. On the contrary, when the existence of Nash equilibrium is not guaranteed, the welfare loss grows unbounded as the market supply is increased or the market inelastic demand is decreased. A future research direction would be to characterize a bound for the welfare loss. Also, given that the proposed mechanism does not allow the prosumers to choose their preferred quantity of supply and demand separately, a future research direction would be to develop a mechanism which captures the dual nature of the prosumers.

Fig. 1: Welfare loss with increasing supply capacity and fixed inelastic demand. (Top) Nash equilibrium exists and the welfare loss does not grow unbounded. (Bottom) Nash equilibrium may not exist and the welfare loss grows unbounded.
|
2309.05630 | Boundary Peeling: Outlier Detection Method Using One-Class Peeling | Unsupervised outlier detection constitutes a crucial phase within data
analysis and remains a dynamic realm of research. A good outlier detection
algorithm should be computationally efficient, robust to tuning parameter
selection, and perform consistently well across diverse underlying data
distributions. We introduce One-Class Boundary Peeling, an unsupervised outlier
detection algorithm. One-class Boundary Peeling uses the average signed
distance from iteratively-peeled, flexible boundaries generated by one-class
support vector machines. One-class Boundary Peeling has robust hyperparameter
settings and, for increased flexibility, can be cast as an ensemble method. In
synthetic data simulations One-Class Boundary Peeling outperforms all state of
the art methods when no outliers are present while maintaining comparable or
superior performance in the presence of outliers, as compared to benchmark
methods. One-Class Boundary Peeling performs competitively in terms of correct
classification, AUC, and processing time using common benchmark data sets. | Sheikh Arafat, Na Sun, Maria L. Weese, Waldyn G. Martinez | 2023-09-11T17:19:07Z | http://arxiv.org/abs/2309.05630v2 | # Boundary Peeling: Outlier Detection Method Using One-Class Peeling
###### Abstract
Unsupervised outlier detection constitutes a crucial phase within data analysis and remains a dynamic realm of research. A good outlier detection algorithm should be computationally efficient, robust to tuning parameter selection, and perform consistently well across diverse underlying data distributions. We introduce One-Class Boundary Peeling, an unsupervised outlier detection algorithm. One-class Boundary Peeling uses the average signed distance from iteratively-peeled, flexible boundaries generated by one-class support vector machines. One-class Boundary Peeling has robust hyperparameter settings and, for increased flexibility, can be cast as an ensemble method. In synthetic data simulations One-Class Boundary Peeling outperforms all state of the art methods when no outliers are present while maintaining comparable or superior performance in the presence of outliers, as compared to benchmark methods. One-Class Boundary Peeling performs competitively in terms of correct classification, AUC, and processing time using common benchmark data sets.
benchmark datasets, isolation forest, multi-modal data, unsupervised
## I Introduction
Outlier detection is important in many applications such as fraud detection [1], medicine [2] and network intrusion [3]. Although there exists a large body of literature on outlier detection methods, there is no consensus on a single best method for outlier identification [4]. Since there is no consensus, it is advantageous to continue to improve upon current methods and develop new methods. In all of the literature outliers have many definitions, but broadly speaking, an outlier is an object that deviates sufficiently from most observations, enough to consider it as being generated by another process [5, 6].
In this paper we introduce an unsupervised method for outlier detection which uses the average signed distance from iteratively-peeled, flexible boundaries, generated by one-class support vector machines (OCSVM) [7]. One-Class Boundary Peeling (OCBP) provides robust default hyperparameters (unlike \(k\) in nearest neighbor methods) and performs well when implemented with a simple threshold for outlier identification. We show that the proposed method is computationally efficient and outperforms many current state of the art methods on benchmark and synthetic data sets. This is especially the case when there are no outliers present, which is particularly appealing when the sample size, \(N\), is small. Unlike outlier detection methods based on covariance estimation, OCBP performs well even when the dimension of the data, \(p\), exceeds \(N\) [8]. One-Class Boundary Peeling performs well regardless of the number of modes and with various percentages of outliers. Additionally, for multimodal data, OCBP requires no pre-specification of the number of modes.
Real-world data might be high dimensional and have distributions with multiple modes. Multiple modes in data are generated from processes that operate under different modalities [9, 10, 11, 12]. Examples of multimodal data can be found in chemical engineering, environmental science, biomedicine, and the pharmaceutical business [13, 14]. In fact, all business processes that involve transitions or depend on time might be categorized as multimodal. According to [15], a multimode process is one that operates properly at various operational points. It is important to identify and separate defects from modes in order to preserve data quality. In this work we assess the performance of our method, as well as others, in unimodal and multimodal settings. Ideally, a method should work well regardless of the number of modes and not require the specification of the number of modes _a priori_.
Outlier detection methods can be supervised, semi-supervised, or unsupervised. Unsupervised outlier detection is the most challenging, but most realistic case, as labeled data is often unavailable. Unsupervised outlier detection methods include ensemble-based, probabilistic methods, proximity-based methods, deep learning methods, and graph-based methods [5]. An ideal method will scale to problems with high-dimensional and/or large data, perform regardless of the shape of the data, consistently identifies outliers in uni- or multimodal data, have minimal hyperparameter tuning, and be computationally efficient. A popular ensemble method that checks many of those boxes is the Isolation Forest [16].
The Isolation Forest (ISO) algorithm identifies outliers
by generating recursive random splits on attribute values which isolate anomalous observations using the path length from the root node. The popularity of ISO is due to its accuracy, flexibility, and its linear time complexity. [5] shows the ISO to be the preferable method among other unsupervised methods using real and synthetic data sets. [17] points out that the ISO has weaknesses, including finding anomalies in multimodal data with a large number of modes and finding outliers that exist between axis parallel clusters. [17] addresses these weaknesses with a supervised method which we do not consider in this work.
Proximity-based or distance-based methods find outliers either based on a single distance measure, such as Mahalanobis distance, from some estimated center or using a local neighborhood of distances. Local Outlier Factor (LOF) [18] is a popular distance-based method that finds outliers based on Euclidean distance between a point \(x_{i}\) and its \(k^{th}\) closest neighbor. The implementation of LOF requires the specification of \(k\), the number of neighbors. Another widely implemented distance-based method is k-nearest neighbors (kNN) [19] which ranks each point on the basis of its distance to its \(k^{th}\) nearest neighbor. From this ranking, one can declare the top \(n\) points to be outliers. The kNN algorithm is very scalable and easy to understand but also requires the specification of \(k\).
Graph-based Learnable Unified Neighbourhood-based Anomaly Ranking (LUNAR) is a graph-neural network based outlier detection method that is more flexible and adaptive to a given set of data than local outlier methods such as kNN and LOF. [20]. Different from the proximity based methods kNN and LOF, LUNAR learns to optimize model parameters for better outlier detection. LUNAR is more robust to the choice of \(k\) compared to other local outlier methods and shows good performance when compared to other deep methods.
Deep learning methods such as Autoencoders or adversarial neural networks are increasingly used for unsupervised outlier detection [21, 22, 23, 24]. Deep-learning methods help to overcome the curse of dimensionality by learning the features of the data while simultaneously flagging anomalous observations. A competitive, deep-learning, unsupervised outlier detection method is Deep SVDD [25], which combines kernel-based SVDD with deep learning to simultaneously learn the network parameters and minimize the volume of a data-enclosing hypersphere. Outliers are identified as observations whose distance are far from the estimated center.
Deep autoencoders [26] are the predominant approach used in deep outlier detection. Autoencoders are typically trained to minimize the reconstruction error, the difference between the input data and the reconstruction of the input data with the latent variable. A Variational AutoEncoder (VAE) [27] is a stochastic generative model in which the encoder and decoder are generated by probabilistic functions. In this manner VAE does not represent the latent space with a simple value but maps input data to a stochastic variable. Observations with a high reconstruction error are candidates to be considered anomalous in VAEs [28].
Probabilistic methods for unsupervised outlier detection estimate the density function of a data set. One such method, Empirical Cumulative Distribution Functions (ECOD), [29] estimates the empirical distribution function of the data. ECOD performs this estimation by computing an empirical cumulative distribution function for each dimension separately. [29] shows ECOD to be a competitive method for outlier detection. ECOD does not require hyperparameter specification and is a scalable algorithm.
The sampling of outlier detection methods mentioned above, and others, from a variety of different approaches are implemented in the PyOD package [28]. Details of each implementation and the methods available can be found here: [https://github.com/yzhao062/pyod](https://github.com/yzhao062/pyod). We have used the PyOD implementations of the above-mentioned algorithms for ease of comparison.
This paper is organized as follows: Section II describes One-Class Boundary Peeling (OCBP) and the ensembled version (EOCBP). Section III compares the performance of our method with other benchmark methods using synthetic data. Section IV compares the performance of Boundary Peeling with other benchmark methods on semantically constructed benchmark datasets. Section V discusses the limitations, contributions and future research.
## II Boundary Peeling
[30] introduced One Class Peeling (OCP), a proximity-based method, for outlier detection using Support Vector Data Description (SVDD) [31]. The OCP method involves fitting a SVDD boundary and then removing the points identified as support vectors from the data. The process is then repeated until a small number of points, say 2, remain. These final points are used to estimate the multivariate center of the data. From that center, Gaussian kernel distances are used to calculate the distance of each observation from the estimated center, and large distances are flagged as outliers. OCP is not dissimilar from convex hull peeling [32], but unlike convex hull peeling, OCP is scalable and flexible [30]. The OCP method works well, but only on data with a single mode. While [30] does provide empirical threshold values to control the false positive rate, those values are highly dependent on the data itself.
Similar to the OCP method [30], the One-Class Boundary Peeling method uses successively peeled boundaries created by one-class support vector machines (OCSVM) [7]. Interestingly, [33] and [34] show that One-Class Support Vector Machines (OCSVM) [7] and SVDD are equivalent as long as the data is transformed to unit variance. Instead of successively peeling boundaries to estimate a mean and then calculating distances, the OCBP method calculates the signed distance from _each observation to each successive OCSVM separating hyperplane_. From each boundary a group of support vectors is obtained, and the radius, \(R\), is calculated using
\[\begin{split} R^{2}=&\sum_{i,j}\alpha_{i}\alpha_{j}K (\mathbf{x_{i}}\cdot\mathbf{x_{j}})+K(\mathbf{x_{sv}}\cdot\mathbf{x_{sv}})\\ &-2\sum_{i}\alpha_{i}K(\mathbf{x_{i}}\cdot\mathbf{x_{sv}}),\end{split} \tag{1}\]
where \(x_{sv}\) is any support vector. Similar to OCP, the boundaries are constructed to be as large as possible and made flexible by employing the Gaussian kernel and the boundaries are intentionally large for sensitivity to individual observations. The bandwidth of the Gaussian kernel function is set \(s=p\), which is the recommended setting in [35] and [33].
Suppose the decision function \(f(x)=\|x-\mathbf{a}\|^{2}-R^{2}\) measures the signed distance to the separating hyperplane generated by the OCSVM fit. If \(f(x)>0\), the sample will be outside the hyperplane, and if \(f(x)<0\), the sample will be inside the hyperplane. \(f(x)=0\) only for support vectors. As successive peels are made, the decision function's signed distance values are stored in an \(N\times\textit{peel}\) matrix. In other words, every observation receives a decision function value from each successively created boundary, \(f(\hat{x})_{ij}\), even if the observation has been removed from a previous boundary. In total, \(j\) boundaries are created and peeled until only a small number of observations (at most \(n=2\)) remain. The \(N\times\textit{peel}\) decision function values are averaged over each \(i^{th}\) observation, \(\overline{f(x)}_{i}\).
Observations with \(f(\hat{x})_{ij}\geq 0\) in the early peels are more likely outliers and therefore we inversely weight the final average decision function values. For example, if observation \(i\) is labeled as an outlier or support vector on the first peel then observation \(i\) would have a \(depth_{i}=1\). If a total of 10 peels were constructed to eliminate all but 2 observations then \(peel=10\). The weight, \(d\), for this observation would be \(d=\frac{peel}{depth_{i}}=\frac{10}{1}=10\), a relatively large weight since an early peel indicates a potentially anomalous observation. Combining the weights and the averaged decision function values creates a vector of length \(N\) of final kernel distance scores, \(\text{KDS}_{i}=d_{i}*\overline{f(x)}_{i}\). An observation is flagged as an outlier if its \(\text{KDS}_{i}\) is outside of a threshold defined as \(h=Q_{3}(\text{KDS})+1.5\cdot\text{IQR}(\text{KDS})\).
```
1: procedure OCBP(\(S,q=0.01,n=2\))
2:   \(r\gets N\)
3:   \(s\gets p\)
4:   \(S_{r}\gets S\)
5:   \(peel\gets 0\)
6:   while \(r>n\) do
7:     \(X_{SV}\leftarrow\text{OCSVM}(S_{r},q,s)\)
8:     \(S_{r}\gets S_{r}\backslash X_{SV}\)
9:     \(D_{r}\gets f(S)\)
10:    \(r\leftarrow\text{rows}(S_{r})\)
11:    \(peel\gets peel+1\)
12:    \(depth_{i}\gets peel\)
13:  end while
14:  \(\text{d}\gets peel/depth_{i}\)
15:  \(\text{KDS}_{i}\gets d\cdot mean(D_{r})\)
16:  \(h\gets Q_{3}(\text{KDS})+1.5\cdot\text{IQR}(\text{KDS})\)
17:  \(\text{flag}_{\text{OCBP}}\gets I(\text{KDS}_{i}>h)\)
18:  return \(\text{KDS}_{i},\text{flag}_{\text{OCBP}}\)
19: end procedure
```
**Algorithm** OCBP
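For readers who wish to experiment, the following Python sketch mirrors Algorithm OCBP using scikit-learn's OneClassSVM. It is an illustrative re-implementation, not the authors' code: we assume the paper's \(q=0.01\) plays the role of \(\nu\), map the bandwidth setting \(s=p\) to \(\texttt{gamma}=1/p\), and flip the sign of scikit-learn's decision function so that, as in the paper, positive values indicate points outside the boundary.

```
import numpy as np
from sklearn.svm import OneClassSVM

def ocbp(X, q=0.01, n_stop=2):
    """Illustrative One-Class Boundary Peeling (cf. Algorithm OCBP)."""
    N, p = X.shape
    remaining = np.arange(N)          # indices of observations not yet peeled
    depth = np.zeros(N)               # peel at which each observation was removed
    dists = []                        # signed distances of every observation, one column per peel
    peel = 0
    while remaining.size > n_stop:
        peel += 1
        oc = OneClassSVM(kernel="rbf", nu=q, gamma=1.0 / p).fit(X[remaining])
        # flip sign: the paper's f(x) is positive outside the boundary,
        # whereas sklearn's decision_function is positive inside
        dists.append(-oc.decision_function(X))
        sv = remaining[oc.support_]                 # support vectors of this peel
        depth[sv] = peel                            # record when they were peeled off
        remaining = np.setdiff1d(remaining, sv)
    depth[depth == 0] = peel                        # points never peeled get the final depth
    d = peel / depth                                # inverse weight: early peel -> large weight
    kds = d * np.mean(np.column_stack(dists), axis=1)
    q1, q3 = np.percentile(kds, [25, 75])
    flags = kds > q3 + 1.5 * (q3 - q1)              # threshold h
    return kds, flags
```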
To illustrate how the OCBP method works, we have plotted in Figure 1 the boundaries generated at some iterations on 100 observations sampled from the multivariate T distribution (\(df=10\)) as inliers, and 10 observations from a uniform distribution \(U(-10,10)\) as outliers. The picture in the top row, center, shows the first flexible boundary; notice that every outlier is surrounded by a blue contour, indicating a positive distance, i.e., that it lies outside the hyperplane. In the second iteration (Figure 1, top row, far right graph) the outliers and a few inliers in the outer regions of all four quadrants have no contours surrounding them. Those observations were all identified as support vectors and were "peeled" off prior to the creation of the second boundary. In the bottom row, left and center pictures of Figure 1 we observe that peels 3 and 4 each remove only a small number of points, and the final plot (bottom row, far right) shows the final peel where only a few points are left.
Similarly, Figure 2 illustrates how the OCBP method works in a bimodal data set. 50 inlier observations are generated from a normal distribution with mean vector \(\boldsymbol{\mu}=-\mathbf{3}\) and off-diagonal elements of \(\Sigma\) equal to 0.5. For the second mode, 50 observations are generated from a normal distribution with mean vector \(\boldsymbol{\mu}=\mathbf{3}\) and off-diagonal elements of \(\Sigma\) equal to 0.5. The 20 outlying observations are generated from \(U(-10,10)\). We can see how outlying observations are selected as support vectors on the first two iterations. Subsequently, we can see how the inlier observations have lower kernel distances than outlying observations as they are peeled last. The average kernel distances for outlying observations are therefore higher as they are far away from the modes.
### _Ensembled Boundary Peeling_
To increase the sensitivity of the OCBP method, we computed the average distance calculated after fitting the OCBP method on feature sampled data, say \(Y_{r}\). We call this approach the ensemble OCBP (EOCBP). While the OCBP algorithm is relatively fast compared to other methods (see Table VII), the EOCBP algorithm is slower, but still a relatively strong-performing and feasible algorithm. The additional set of steps is most prominent in line 5 of the EOCBP algorithm, where a subset of \(\sqrt{p}\) features is selected and the algorithm is iterated \(c\) times. In our implementation we chose \(c=50\) to limit the computational time in simulations. EOCBP flags outliers using an average of \(\text{KDS}_{i}=d_{i}*\overline{f(x)}_{i}\) from each of the \(c\) ensembles compared to a robust threshold, \(h\), computed from the \(\text{KDS}_{i}\) averaged over all \(c\) iterations.
```
1: procedure EOCBP(\(S,q=0.01,n=2,c=50\))
2:   \(r\gets N\)
3:   \(S_{r}\gets S\)
4:   for \(i\gets 1\) to \(c\) do
5:     \(Y_{r}\leftarrow\text{colsample}(S_{r},int(\sqrt{p}))\)
6:     \(r\gets N\); \(peel\gets 0\)
7:     while \(r>n\) do
8:       \(X_{SV}\leftarrow\text{OCSVM}(Y_{r},q,s)\)
9:       \(Y_{r}\gets Y_{r}\backslash X_{SV}\)
10:      \(D_{r}\gets f(S)\)
11:      \(r\leftarrow\text{rows}(Y_{r})\)
12:      \(peel\gets peel+1\)
13:      \(depth_{i}\gets peel\)
14:    end while
15:    \(d\gets peel/depth_{i}\)
16:    \(\text{KDS}_{i}\leftarrow\text{mean}(D_{r})\)
17:  end for
18:  \(\text{EKDS}_{i}\leftarrow\text{mean}(\text{KDS}_{i})\times\text{mean}(d)\)
19:  \(h\gets Q_{3}(\text{EKDS})+1.5\cdot\text{IQR}(\text{EKDS})\)
20:  \(\text{flag}_{\text{EOCBP}}\gets I(\text{EKDS}_{i}>h)\)
21:  return \(\text{EKDS}_{i},\text{flag}_{\text{EOCBP}}\)
22: end procedure
```
**Algorithm** The Ensemble OCBP Method
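A corresponding sketch of the ensemble variant repeats the peeling on randomly sampled feature subsets and averages the resulting scores before thresholding. It reuses the ocbp function from the previous sketch and is again only an illustration; as a simplification, the depth weighting is applied inside each ensemble member rather than once at the end as in the pseudocode.

```
import numpy as np

def eocbp(X, q=0.01, n_stop=2, c=50, seed=None):
    """Illustrative ensemble OCBP: average OCBP scores over c feature-sampled fits."""
    rng = np.random.default_rng(seed)
    N, p = X.shape
    k = max(2, int(np.sqrt(p)))                      # number of sampled features (line 5)
    scores = np.zeros((N, c))
    for j in range(c):
        cols = rng.choice(p, size=k, replace=False)  # colsample step
        scores[:, j], _ = ocbp(X[:, cols], q=q, n_stop=n_stop)
    ekds = scores.mean(axis=1)                       # ensembled kernel distance scores
    q1, q3 = np.percentile(ekds, [25, 75])
    return ekds, ekds > q3 + 1.5 * (q3 - q1)         # threshold h
```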
## III Synthetic Data Comparison
To explore the behavior of the OCBP and EOCBP methods under controlled circumstances, we conducted simulations of 1,000 data sets of size \(n=50\) and dimension \(p=100\), with sample observations drawn from the multivariate normal, \(\text{T}(df=5)\), Lognormal and Wishart distributions on unimodal and multimodal data. For each of the 1,000 iterations we randomly generated sample data with different correlation structures (none, medium and high) for data with no outliers and data with 10% outliers. For unimodal data with a single mode, data was generated with a \(p\)-dimensional mean vector \(\boldsymbol{\mu}=\mathbf{0}\). For bimodal data the second mode has mean vector \(\boldsymbol{\mu}=\mathbf{2}\) or \(\boldsymbol{\mu}=\mathbf{5}\). When the data has no correlation, \(\Sigma=I\). For moderate or high correlation the off-diagonal elements of \(\Sigma\) are equal to 0.5 or 0.75, respectively. Outliers for the Normal, T, and Wishart distributions are generated uniformly using \(U(-10,10)\). For the lognormal distribution the outliers are generated using \(U(-20,20)\). There is also a mixed distribution case where the correlation of each mode is chosen at random from 0, 0.5 and 0.75 and its distribution from the three distributions.

Fig. 1: OCBP Example on a 2-dimensional unimodal data set. 100 inlier observations generated from \(T(df=5)\), and 10 outlier observations generated from \(U(-10,10)\). Contours indicate kernel signed distances from separating hyperplane. Blue contours indicate positive distances (darker blue indicates distance closer to zero), while red indicate negative distances. Support vectors are marked with an X.
For a more realistic scenario we also conduct a simulation where the number of modes, the distribution of the modes, the means and covariances of the modes, and the percentage of outliers were randomly generated for each iteration. For each iteration \(N\) was chosen randomly from \([50,150]\) and \(p\) from \([50,300]\). In each case the number of modes was randomly selected from 1 to 5, and \(N\) was divided equally among the modes. The off-diagonal elements of \(\Sigma\) were chosen uniformly between 0 and 1. For each mode the data was randomly generated from the multivariate normal, T(\(df=5\)), Lognormal or Wishart distribution. The percentage of contamination was randomly selected from three different settings that include no contamination, 1 to 10%, and 10 to 20%. The outlying observations were generated from the \(U(-20,20)\) distribution.
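To make the simulation design concrete, the following sketch generates one bimodal data set of the kind described above (equicorrelated normal modes plus uniform outliers); the seed, mode means, and outlier count are illustrative choices rather than the exact simulation code.

```
import numpy as np

def make_bimodal(n_per_mode=25, p=100, rho=0.5, mu2=5.0, n_out=5, seed=0):
    """One bimodal data set: two equicorrelated normal modes plus uniform outliers."""
    rng = np.random.default_rng(seed)
    sigma = np.full((p, p), rho) + (1 - rho) * np.eye(p)   # unit variances, off-diagonals rho
    mode1 = rng.multivariate_normal(np.zeros(p), sigma, n_per_mode)
    mode2 = rng.multivariate_normal(np.full(p, mu2), sigma, n_per_mode)
    outliers = rng.uniform(-10, 10, size=(n_out, p))
    X = np.vstack([mode1, mode2, outliers])
    y = np.r_[np.zeros(2 * n_per_mode), np.ones(n_out)]    # 1 = outlier
    return X, y
```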
For all scenarios we compare OCBP and EOCBP with ISO, ECOD, KNN, LOF, LUN, VAE and DSVDD. We implement each competing method using the default/recommended settings listed in the PyOD package [28], and the OCBP and EOCBP parameters are as shown in Algorithms 1 and 2. The contamination ratio (\(cr\)) is kept at a default value of 10% for all competing algorithms.
We measure performance using detection rate (DR), correct classification rate (CC), area under the curve, and precision. Using the usual measures of True Positive (TP), True Negative (TN), False Positive (FP) and False Negative (FN), we define the detection rate as \(DR_{i}=\frac{TP_{i}}{TP_{i}+FN_{i}}\times 100\). We define correct classification for a dataset \(i\) as CC\({}_{i}=\frac{TP_{i}+TN_{i}}{N}\times 100\) and Precision (PREC) is defined as PREC\({}_{i}=\frac{TP_{i}}{TP_{i}+FP_{i}}\times 100\). The Area Under the Curve (AUC) of the Receiver Operating Characteristic curve measures the probability of correctly separating inliers and outliers. For brevity, only the tables for CC and AUC are shown. Tables for DR and PREC can be found in the supplementary materials.
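For concreteness, the four summary measures can be computed from ground-truth labels, binary flags, and continuous outlier scores as in the small helper below (scikit-learn is used only for the AUC; function and variable names are ours).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def summary_measures(y_true, y_flag, y_score):
    """y_true, y_flag: 1 = outlier, 0 = inlier; y_score: continuous outlier score."""
    y_true, y_flag = np.asarray(y_true), np.asarray(y_flag)
    tp = np.sum((y_true == 1) & (y_flag == 1))
    tn = np.sum((y_true == 0) & (y_flag == 0))
    fp = np.sum((y_true == 0) & (y_flag == 1))
    fn = np.sum((y_true == 1) & (y_flag == 0))
    dr = 100.0 * tp / (tp + fn) if (tp + fn) else np.nan      # detection rate
    cc = 100.0 * (tp + tn) / y_true.size                      # correct classification rate
    prec = 100.0 * tp / (tp + fp) if (tp + fp) else np.nan    # precision
    auc = 100.0 * roc_auc_score(y_true, y_score) if 0 < y_true.sum() < y_true.size else np.nan
    return dr, cc, prec, auc
```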
The results in Table I show that when no outliers are present OCBP has the best, most consistent correct classification rate. This is the case for unimodal data and for bimodal data with two modes (\(\mathbf{\mu}_{1}=0\) and \(\mathbf{\mu}_{2}=5\)). For small samples, this property is a protection against data loss. Note that for many of the methods, the necessity of having to provide a percentage of outliers ahead of time forces the identification of 10% of points as outliers.
Table II shows the CC rate for unimodal data with \(\mathbf{\mu}=0\) and bimodal data with \(\mathbf{\mu}_{1}=0\) and \(\mathbf{\mu}_{2}=5\). ISO has the highest CC for most levels of correlation
Fig. 2: OCBP Example on a 2-dimensional bimodal data set. 50 inlier observations generated from \(N(-3,1)\) and 50 inlier observations generated from \(N(3,1)\). 20 outlier observations generated from \(U(-10,10)\). Contours indicate kernel signed distances from separating hyperplane. Blue contours indicate positive distances (darker blue indicates distance closer to zero), while red indicate negative distances. Support vectors are marked with an X.
and distribution, closely followed by OCBP for the zero correlation case. For moderate correlation OCBP, EOCBP and ISO all have a perfect CC for the bimodal Wishart distribution case, and EOCBP and ISO for the unimodal case. LOF produces the second highest CC for the bimodal moderately correlated Normal data. When the correlation is high we observe that the OCBP and EOCBP methods fare the best when the data is non-normal. In general we do not observe a difference between unimodal and bimodal data in terms of CC. We do observe that EOCBP has a slightly lower CC for highly-correlated lognormal data. When the correlation and distribution of the two modes are mixed, EOCBP has the highest CC rate. Overall ISO produces the highest average CC followed by OCBP then EOCBP. DSVDD had the lowest average CC overall.
Table III shows that all of the methods except for DSVDD have high values of AUC, indicating that, at some cut-off value, each method will be able to almost perfectly identify outliers in this simulated data. LOF has the highest overall average AUC, followed by KNN, then OCBP, then VAE. ISO and EOCBP have the same performance in the mixed distribution case, closely followed by OCBP.
Although not shown in the main body of the paper, the DR is high for most methods and for many cases (see Supplementary Materials). Data with two normally distributed modes, regardless of correlation structure, leads to the highest DR. ISO has a perfect DR except for the mixed distributions. For every case where ISO has a DR of 100%, OCBP or EOCBP either also have a DR of 100% or have a DR of 99.272% or higher. EOCBP has the highest DR for the case when the modes have random distributions. ISO has the highest average DR=99.493 overall, followed by OCBP=99.469 in close second. DSVDD has the lowest DR overall. ISO has the highest average precision followed by VAE, LUN and KNN (see Supplementary Materials). Interestingly,
several of the methods, including OCBP and EOCBP, have perfect precision when the modes are generated from the Wishart distribution and the correlation is high. In general a method that has a lower precision but a high detection rate tends to flag more inliers as outliers.
Table IV gives the results of the random simulation. When no outliers are present, OCBP has the highest CC followed by EOCBP. This is also the case when outliers are present between 1% and 10%. At the higher percentage of outliers, ISO has the best CC followed by LOF. When data contain 1-10% outliers, ISO, LOF, KNN, LUN and VAE all have a DR=100.00%. For the higher percentage of outliers only KNN has a DR=100.00%. OCBP and EOCBP have a mid-level DR at 98.208 and 98.734, respectively. This performance is consistent with what we have observed with OCBP and EOCBP identifying fewer observations as outliers overall. This conservative behaviour is reflected in the high precision observed in Table IV, with both methods having the highest precision. Many of the methods have an extremely high AUC, EOCBP having the highest for outliers between 1 and 10% and KNN having the highest overall. EOCBP has the second highest AUC, whereas ECOD and DSVDD have the lowest AUC.
## IV Example Data Comparison
[36] utilized semantically meaningful datasets to evaluate multiple outlier detection methods. The original datasets include a labeled class that can be assumed to be rare and therefore outliers. For example, a class of sick patients within a population dominated by healthy patients. The prepared datasets are from benchmark data commonly used in the outlier literature [36]. To
\begin{tabular}{l l l l l l l l l l l}
\hline \hline
 & Cor & OCBP & EOCBP & ISO & ECOD & LOF & KNN & DVSDD & LUN & VAE \\
\hline
N(**0**) & 0 & **100.00** & **100.00** & **100.00** & **100.00** & **100.00** & **100.00** & 46.798 & 99.736 & 99.652 \\
N(**0**,**5**) & 0 & **100.00** & **100.00** & **100.00** & **100.00** & **100.00** & 100.00 & 31.707 & 99.573 & **100.00** \\
LN(**0**) & 0 & **100.00** & **100.00** & **100.00** & **100.00** & **100.00** & **100.00** & 49.665 & 99.822 & 99.734 \\
LN(**0**,**5**) & 0 & 99.984 & **100.00** & **100.00** & **100.00** & 99.996 & 99.996 & 27.119 & 99.804 & **100.00** \\
T(**0**) & 0 & 99.998 & **99.99** & 99.994 & 99.992 & 99.998 & 99.998 & 51.463 & 99.676 & 99.668 \\
T(**0**,**5**) & 0 & **99.989** & 99.950 & 99.950 & 99.986 & 99.988 & 99.988 & 36.929 & 99.700 & 99.945 \\
W(**0**) & 0 & **100.00** & **100.00** & **100.00** & **100.00** & **100.00** & **100.00** & 30.444 & 99.400 & 98.656 \\
W(**0**,**5**) & 0 & **100.00** & **100.00** & **100.00** & **100.00** & **100.00** & **100.00** & 30.347 & 99.036 & 99.981 \\
\hline
N(**0**) & 0.5 & **100.00** & **100.00** & **100.00** & 99.994 & **100.00** & **100.00** & 47.930 & 99.978 & 99.979 \\
N(**0**,**5**) & 0.5 & **100.00** & **100.00** & **100.00** & 99.973 & **100.00** & **100.00** & 38.245 & 99.844 & **100.00** \\
LN(**0**) & 0.5 & 99.991 & 99.900 & 99.999 & **100.00** & 99.996 & 99.991 & 63.800 & 99.881 & 99.811 \\
LN(**0**,**5**) & 0.5 & 99.813 & 98.789 & 98.789 & 99.961 & **99.998** & 99.975 & 43.632 & 99.827 & 99.978 \\
T(**0**) & 0.5 & **99.977** & 99.970 & 99.960 & 99.844 & 99.984 & **99.977** & 51.867 & 99.737 & 99.747 \\
T(**0**,**5**) & 0.5 & 99.991 & 99.899 & 99.899 & 97.589 & 99.998 & **100.00** & 45.483 & 99.869 & 99.820 \\
W(**0**) & 0.5 & **100.00** & **100.00** & **100.00** & **100.00** & **100.00** & **100.00** & 29.668 & 98.548 & 99.024 \\
W(**0**,**5**) & 0.5 & **100.00** & **100.00** & **100.00** & **100.00** & **100.00** & **100.00** & 26.775 & 99.363 & **100.00** \\
\hline
N(**0**) & 0.75 & **100.00** & **100.00** & **100.00** & 99.892 & **100.00** & 100.00 & 63.888 & 99.990 & 99.987 \\
N(**0**,**5**) & 0.75 & **100.00** & **100.00** & **100.00** & 99.696 & **100.00** & **100.00** & 49.808 & 99.953 & **100.00** \\
LN(**0**) & 0.75 & 99.946 & 99.535 & 99.969 & **99.979** & 99.987 & 99.943 & 73.388 & 99.872 & 99.853 \\
LN(**0**,**5**) & 0.75 & 99.668 & 98.252 & 98.252 & 99.759 & **99.987** & 99.956 & 53.806 & 99.811 & 99.942 \\
T(**0**) & 0.75 & 99.998 & 99.992 & 99.977 & 99.504 & 99.998 & **99.999** & 64.051 & 99.908 & 99.956 \\
T(**0**,**5**) & 0.75 & 99.990 & 99.859 & 99.859 & 94.978 & 99.992 & **99.994** & 49.713 & 99.908 & 99.956 \\
W(**0**) & 0.75 & **100.00** & **100.00** & **100.00** & **100.00** & **100.00** & **100.00** & 31.076 & 99.040 & 98.552 \\
W(**0**,**5**) & 0.75 & **100.00** & **100.00** & **100.00** & **100.00** & **100.00** & **100.00** & 28.714 & 98.592 & **100.00** \\
\hline
Mixed(**0**,**5**) & & 99.880 & 99.884 & 99.8818 & **99.944** & 99.920 & 67.938 & 99.806 & 99.939 \\
\hline
Average & 99.968 & 99.834 & 99.855 & 99.582 & **99.994** & 99.989 & 45.311 & 99.622 & 99.761 \\
\hline \hline
\end{tabular}

TABLE IV Summary of measures for each method on data with a randomly generated number of modes from random distributions and random correlation structure. Bold indicates the best performance.

\begin{tabular}{l|l|l l l l l l l l l}
\hline \hline
Measure & \% Out & OCBP & EOCBP & ISO & ECOD & LOF & KNN & DVSDD & LUN & VAE \\
\hline
CC & none & **97.878** & 94.050 & 87.986 & 89.645 & 91.243 & 91.015 & 89.650 & 89.650 & 89.650 \\
 & 1 to 10 & **99.040** & 96.881 & 93.504 & 90.395 & 93.078 & 92.743 & 88.392 & 91.559 & 91.559 \\
 & 10 to 20 & 96.401 & 94.502 & **97.035** & 91.060 & 96.646 & 96.505 & 85.982 & 95.499 & 95.482 \\
\hline
DR & 1 to 10 & 99.803 & 99.608 & **100.00** & 99.350 & **100.00** & **100.00** & 98.233 & **100.00** & **100.00** \\
 & 10 to 20 & 98.208 & 98.734 & 99.998 & 97.455 & 99.950 & **100.00** & 94.614 & 99.927 & 99.917 \\
\hline
PREC & 1 to 10 & **99.211** & 97.233 & 93.456 & 90.822 & 92.967 & 92.622 & 89.789 & 91.422 & 91.422 \\
 & 10 to 20 & **98.076** & 95.505 & 96.879 & 92.941 & 96.524 & 96.326 & 90.214 & 95.334 & 95.325 \\
\hline
AUC & 1 to 10 & 99.680 & 99.966 & 99.9
prepare the datasets, the authors sampled (where possible) the outlying class at different rates: 20%, 10%, 5%, and 2%. Because the datasets were sampled, to avoid bias, 10 different versions for each percentage of outliers for each data set were created. Note that not every dataset was created with all four different percentages of outliers due to the number of observations in the outlier class. The authors give versions of these sample datasets that are normalized to [0,1] with duplicates removed, and non-normalized versions that contain duplicate observations. To emulate the most realistic scenario, where data pre-processing has taken place, we use the normalized datasets with no duplicates, for a total of 1700 datasets. The datasets are available here: [https://www.dbs.ifi.lmu.de/research/outlier-evaluation/](https://www.dbs.ifi.lmu.de/research/outlier-evaluation/) and the attribute type, number of attributes, and sample size of each original dataset are summarized in Table 1. Regarding the shape of the data, there is no way to tell if this data is multi-modal, but several of the datasets contain data belonging to more than two classes, and therefore it is reasonable to assume that some of these datasets contain more than a single mode.
Table V shows the average CC across all versions of the datasets. This includes datasets with 2-20% outliers. EOCBP has the highest CC in 7 of the 11 datasets, with ISO having 1 of the highest and KNN having 3 of the highest.
Table VI shows the average AUC across all versions of the data. EOCBP has the highest average AUC in 5 cases, whereas ISO has the highest AUC in 4 cases. OCBP has the highest average AUC in two cases. Since AUC is independent of the specific choice of cutoff, this indicates that the EOCBP method is better at separating the outliers from the inliers for these datasets.
Although not shown in the main body of the paper, the DR is sporadic across the methods and the datasets, with ISO and OCBP having 3 and 4 of the top DR values, respectively. In general the DR for these datasets is low and variable. For example, for the Hepatitis data ISO had a detection rate of 85% whereas OCBP had a detection rate of 0%. Likewise, ISO has a 0% detection rate for the InternetAds data whereas LOF has a 55% DR.
In 5 out of the 8 datasets EOCBP has the highest precision. In two cases OCBP has a precision of 0% and in one case ISO has a precision of 0%. Lastly, Table VII shows the average processing time over all versions of each of the datasets. OCBP had four of the fastest processing times, but KNN had the fastest average processing time overall. Processing times were measured in seconds using a workstation with an Intel Xeon Processor E5-2687W and dual SLI NVIDIA Quadro P5000 graphical processing units (GPUs).
In the implementation of the methods in the PyOD package, the users must specify the percentage of outliers in the data. We set that to be 10% for all of the methods. We show the DR, CC, AUC and Precision for only the sampled datasets with 10% outliers. Generally we see similar patterns as before. DR is fairly sporadic with OCBP having three of the highest values. EOCBP dominates correct classification for the datasets with only 10% outliers. EOCBP had four of the highest AUC values followed by ISO with three of the highest values. ISO shows a precision of 100% for the Arrhythmia datasets with 10% outliers, however, EOCBP has four of the highest precision values for the remainder of the datasets.
## V Discussion
We have introduced a novel method for outlier detection based on iteratively peeled observations and signed distances. We have compared our method with state-of-the-art methods on unimodal and multimodal synthetic data and on semantically meaningful benchmark data. From the synthetic data studies we can make the following conclusions. When no outliers are present, OCBP has a higher CC rate than ISO in all but a single case. All methods have high CC and DR rates for unimodal and bimodal data with 10% outliers, OCBP and EOCBP included. LOF has the highest average AUC for all synthetic data, with OCBP second highest. In the case of completely randomly generated synthetic data, OCBP has the highest CC rate for data with 0-10% outliers and the highest precision. KNN has the highest AUC, but OCBP and EOCBP are on par with the AUCs of all of the other methods.
From the synthetic data comparison we conclude that the performance of OCBP and EOCBP will be equivalent to that of the other methods when outliers are present and better when no outliers are present. For small-sample data, unnecessary data loss from eliminating inliers is potentially a problem. Using OCBP or EOCBP will protect against unnecessary data loss in the case of a small sample.
The comparison of the methods on the semantically created benchmark datasets illustrates the following. EOCBP had the highest overall CC but not the highest overall DR, indicating it is conservative in its identification of outliers. The method with the highest average DR is ISO, followed closely by VAE. VAE never had the highest DR for a single dataset, but had the most consistent performance. Although EOCBP did not have the highest DR, it did have the highest AUC and precision. This indicates that the default threshold for outlier identification might not be optimal. The most consistent, computationally efficient method is KNN. In
its current implementation, EOCBP is the least efficient. However, OCBP is about average compared to the other methods. We can conclude that EOCBP performs better on a variety of data types. Further work is required to increase the computational efficiency of the method. If computational efficiency is necessary, the OCBP method will perform equivalently to the other methods.
One weakness of OCBP and EOCBP involves the relative sizes of the modes. In the synthetic random-mode simulation the modes created had different sizes, but not disparately different ones. If one mode contained almost all of the data and another was small, OCBP and EOCBP would probably categorize the observations in the smaller mode as outliers. We tested the performance of OCBP and EOCBP against the other algorithms in this setting and, although the performance in terms of the correct classification rate is lower compared to cases where the mode sizes are similar, they perform competitively and most often outperform competing methods.
OCBP and EOCBP were implemented with baseline parameters, and the performance of both could be improved by tuning, specifically of the outlier threshold \(h\); further research is warranted here since OCBP proves to be a computationally efficient method (see Table VII). In the implementation of EOCBP, the parameter \(c\) and the feature subset size \(\sqrt{p}\) were set to fixed values. Changing these might improve the computational efficiency and sensitivity of the method.
For both methods the boundaries created by OCSVM were set to be as large as possible and hence peel a small number of observations at a time. For a dataset with a large number of observations, adjusting \(q\) so that smaller boundaries are created at each peel would improve the computational efficiency by requiring fewer peels.
|
2309.09022 | gym-saturation: Gymnasium environments for saturation provers (System
description) | This work describes a new version of a previously published Python package -
gym-saturation: a collection of OpenAI Gym environments for guiding
saturation-style provers based on the given clause algorithm with reinforcement
learning. We contribute usage examples with two different provers: Vampire and
iProver. We also have decoupled the proof state representation from
reinforcement learning per se and provided examples of using a known ast2vec
Python code embedding model as a first-order logic representation. In addition,
we demonstrate how environment wrappers can transform a prover into a problem
similar to a multi-armed bandit. We applied two reinforcement learning
algorithms (Thompson sampling and Proximal policy optimisation) implemented in
Ray RLlib to show the ease of experimentation with the new release of our
package. | Boris Shminke | 2023-09-16T15:25:39Z | http://arxiv.org/abs/2309.09022v1 | # gym-saturation: Gymnasium environments for saturation provers (System description) +
###### Abstract
This work describes a new version of a previously published Python package -- gym-saturation: a collection of OpenAI Gym environments for guiding saturation-style provers based on the given clause algorithm with reinforcement learning. We contribute usage examples with two different provers: Vampire and iProver. We also have decoupled the proof state representation from reinforcement learning per se and provided examples of using a known ast2vec Python code embedding model as a first-order logic representation. In addition, we demonstrate how environment wrappers can transform a prover into a problem similar to a multi-armed bandit. We applied two reinforcement learning algorithms (Thompson sampling and Proximal policy optimisation) implemented in Ray RLlib to show the ease of experimentation with the new release of our package.
Keywords: Automated theorem proving · Reinforcement learning · Saturation-style proving · Machine learning
## 1 Introduction
This work describes a new version (0.10.0, released 2023.04.25) of a previously published [28] Python package -- gym-saturation1: a collection of OpenAI Gym [6] environments for guiding saturation-style provers (using the given clause algorithm) with reinforcement learning (RL) algorithms. The new version partly implements the ideas of our project proposal [29]. The main changes from the previous release (0.2.9, on 2022.02.26) are:
Footnote 1: [https://pypi.org/project/gym-saturation/](https://pypi.org/project/gym-saturation/)
* guiding two popular provers instead of a single experimental one (Section 3)
* pluggable first-order logic formulae embeddings support (Section 4)
* examples of experiments with different RL algorithms (Section 5)
* following the updated Gymnasium [35] API instead of the outdated OpenAI Gym
gym-saturation works with Python 3.8+. One can install it by pip install gym-saturation or conda install -c conda-forge gym-saturation. Then, provided Vampire and/or iProver binaries are on PATH, one can use it as any other Gymnasium environment:
```python
import gymnasium
import gym_saturation

# v0 here is a version of the environment class, not the prover
env = gymnasium.make("Vampire-v0")  # or "iProver-v0"
# edit and uncomment the following line to set a non-default problem
# env.set_task("a-TPTP-problem-path")
observation, info = env.reset()
print("Starting proof state:")
env.render()
# truncation means finishing an episode in a non-terminal state
# e.g. because of the externally imposed time limit
terminated, truncated = False, False
while not (terminated or truncated):
    # apply policy (e.g. a random available action)
    action = env.action_space.sample(mask=observation["action_mask"])
    print("Given clause:", observation["real_obs"][action])
    observation, reward, terminated, truncated, info = env.step(action)
print("Final proof state:")
env.render()
env.close()
```
## 2 Related work
Guiding provers with RL is a hot topic. Recent projects in this domain include TRAIL (Trial Reasoner for AI that Learns) [2], FLoP (Finding Longer Proofs) [37], and lazyCoP [26]. We will now compare the new gym-saturation features with these three projects.
Usually, one guides either a new prover created for that purpose (lazyCoP; FLoP builds on fCoP [14], an OCaml rewrite of older leanCoP [19]) or an experimental patched version of an existing one (TRAIL relies on a modified E [27]). Contrary to that, gym-saturation works with unmodified stable versions of Vampire [15] and iProver [10].
In addition, known RL-guiding projects are prover-dependent: FLoP could, in principle, work with both fCoP and leanCoP but reported only fCoP experiments. TRAIL claims to be reasoner-agnostic, but to our best knowledge, no one has tried it with anything but a patched E version it uses by default. [26] mentions an anonymous reviewer's suggestion to create a standalone tool for other existing systems, but we are not aware of further development in this direction. Quite the contrary, we have tested gym-saturation compatibility with two different provers (Vampire and iProver).
Deep learning models expect their input to be real-valued tensors and not, for example, character strings in the TPTP [32] language. Thus, one always uses a _representation_ (or _embeddings_) -- a function mapping a (parsed) logic formula to a real vector. In lazyCoP and FLoP parts of embedding functions belong to the underlying provers, making it harder to vary and experiment with (e.g., one needs Rust or OCaml programming skills to do it). gym-saturation leaves the choice of representation open and supports any mapping from TPTP-formatted string to real vectors. The version described in this work also provides a couple of default options.
## 3 Architecture and implementation details
### Architecture
gym-saturation is compatible with Gymnasium [35], a maintained fork of now-outdated OpenAI Gym standard of RL-environments, and passes all required environment checks. As a result of our migration to Gymnasium, its maintainers featured gym-saturation in a curated list of third-party environments 2.
Footnote 2: [https://gymnasium.farama.org/environments/third_party_environments/](https://gymnasium.farama.org/environments/third_party_environments/)
Previously, gym-saturation guided an experimental pure Python prover [28] which happened to be too slow and abandoned in favour of existing highly efficient provers: Vampire and iProver.
Although the gym-saturation user communicates with both iProver and Vampire in the same manner, under the hood, they use different protocols. For Vampire, we relied on the so-called manual (interactive) clause selection mode implemented several years ago for an unrelated task [11]. In this mode, Vampire interrupts the saturation loop and listens to standard input for a number of a given clause instead of applying heuristics. Independent of this mode, Vampire writes (or not, depending on the option show_all) newly inferred clauses to its standard output. Using Python package pexpect, we attach to Vampire's standard input and output, pass the action chosen by the agent to the former and read observations from the latter. In manual clause selection mode, Vampire works like a server awaiting a request with an action to which it replies (exactly what an environment typically does).
iProver recently added support of being guided by external agents. An agent has to be a TCP server satisfying a particular API specification. So, iProver behaves as a client which sends a request with observations to some server and awaits a reply containing an action. To make it work with gym-saturation, we implemented a _relay server_. It accepts a long-running TCP connection from a running iProver thread and stores its requests to a thread-safe queue, and pops a response to it from another such queue filled by gym-saturation thread. See Figure 1 for a communication scheme.
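A stripped-down illustration of such a relay server is given below. The two module-level queues play the roles described above; the line-based message framing, the port number, and the class names are our own simplifications, and the actual implementation shipped with the package differs in these details.

```python
import queue
import socketserver

request_queue = queue.Queue()   # observations coming from the iProver connection
response_queue = queue.Queue()  # actions produced by the gym-saturation thread

class RelayHandler(socketserver.StreamRequestHandler):
    """Keeps one long-running TCP connection with iProver and shuttles messages."""

    def handle(self):
        while True:
            request = self.rfile.readline()   # one message per line (a simplifying assumption)
            if not request:
                break                         # iProver closed the connection
            request_queue.put(request)        # hand the observation to the environment thread
            response = response_queue.get()   # block until the agent has chosen an action
            self.wfile.write(response)

if __name__ == "__main__":
    with socketserver.ThreadingTCPServer(("localhost", 8000), RelayHandler) as server:
        server.serve_forever()
```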
### Implementation details
#### 3.2.1 Clause class
A clause is a Python data class having the following keys and respective values:
* literals -- a string of clause literals in the TPTP format, e.g.'member(X0,bb) | member(X0,b)'
* label -- a string label of a clause, e.g. '21'. Some provers (e.g. Vampire) use integer numbers for labelling clauses, but others (e.g. iProver) use an alphanumeric mixture (e.g. 'c_54')
* role -- a string description of a clause role in a proof (hypothesis, negated conjecture, axiom, et cetera)
* inference_rule -- a string name of an inference rule used to produce the clause. It includes not only resolution and superposition but also values like 'axiom' and 'input' (for theorem assumptions)
* inference_parents -- a tuple of clause labels if needed by the inference rule ('axiom' doesn't need any, 'factoring' expects only one,'resolution' -- two, et cetera)
* birth_step -- an integer step number when the clause appeared in the proof state. Axioms, assumptions, and the negated conjecture have birth step zero.
All these fields except the birth_step (computed by the environment itself) are already available as separate entities (and not parts of TPTP-formatted strings) in iProver and Vampire output.
Figure 1: gym-saturation interacting with iProver
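In Python terms, the clause described above corresponds roughly to the following data class; the field names follow the list above, while the exact definition inside the package may differ (for instance, clauses may equally well be plain dictionaries with these keys).

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class Clause:
    literals: str                            # e.g. "member(X0,bb) | member(X0,b)"
    label: str                               # e.g. "21" (Vampire) or "c_54" (iProver)
    role: str                                # hypothesis, negated conjecture, axiom, ...
    inference_rule: str                      # e.g. "resolution", "factoring", "axiom", "input"
    inference_parents: Tuple[str, ...] = ()  # labels of the premises, if the rule needs any
    birth_step: Optional[int] = None         # computed by the environment; 0 for input clauses
```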
#### 3.2.2 Environment class
Observation is a Python dictionary with several keys:
* real_obs is a tuple of all clauses (processed and unprocessed). It can be transformed to tensor representation by so-called observation wrappers 1. The gym-saturation provides several such wrappers for cases of external embeddings service or hand-coded feature extraction function
Footnote 1: [https://gymnasium.farama.org/api/wrappers/observation_wrappers/](https://gymnasium.farama.org/api/wrappers/observation_wrappers/)
* action_mask is a numpy [13] array of the size max_clauses (a parameter which one can set during the environment object instantiation) having a value 1.0 at index \(i\) if and only if a clause with a zero-based order number \(i\) currently exists and can be a given clause (e.g. not eliminated as redundant). All other values of action_mask are zeros. This array simplifies tensor operations on observation representations.
Limiting the total number of clauses in a proof state is a proxy of both random-access memory (each clause needs storage space) and time (a prover has to process each clause encountered) limits typical for the CASC [33] competition. One can add a standard Gymnasium time-limit wrapper to limit the number of steps in an episode. Setting wall-clock time and RAM limits is not typical for RL research.
Action is a zero-based order number of a clause from real_obs. If a respective action_mask is zero, an environment throws an exception during the execution of the step method.
Reward is 1.0 after a step if we found the refutation at this step and 0.0 otherwise. One can change this behaviour by either Gymnasium reward wrappers or by collecting trajectories in a local buffer and postprocessing them before feeding the trainer.
Episode is terminated when an empty clause $false appears in the proof state or if there are no more available actions.
Episode is truncated when there are more than max_clauses clauses in the proof state. Since the state is an (extendable) tuple, we don't raise an exception when a prover generates a few more clauses.
Info dictionary is always empty at every step by default.
Render modes of the environment include two standard ones ('human' and 'ansi'), the first one printing and the second one returning the same TPTP-formatted string.
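As an illustration of the hand-coded feature extraction mentioned above, an observation wrapper could map every clause to a few cheap syntactic features and keep the action mask untouched. The feature choice, the class name, and the assumption that clauses are exposed as dictionaries with the fields listed earlier are ours.

```python
import gymnasium as gym
import numpy as np

class SimpleClauseFeatures(gym.ObservationWrapper):
    """Replaces ``real_obs`` by a ``(max_clauses, 3)`` array of crude clause features."""

    def __init__(self, env, max_clauses=20):
        super().__init__(env)
        self.max_clauses = max_clauses
        self.observation_space = gym.spaces.Dict(
            {
                "features": gym.spaces.Box(-np.inf, np.inf, (max_clauses, 3)),
                "action_mask": gym.spaces.Box(0.0, 1.0, (max_clauses,)),
            }
        )

    def observation(self, observation):
        features = np.zeros((self.max_clauses, 3), dtype=np.float32)
        for i, clause in enumerate(observation["real_obs"][: self.max_clauses]):
            features[i] = (
                len(clause["literals"]),             # a proxy for clause weight
                clause["literals"].count("|") + 1,   # number of literals
                clause["birth_step"] or 0,           # a proxy for clause age
            )
        return {"features": features, "action_mask": observation["action_mask"]}
```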
#### 3.2.3 Multi-task environment
The latest gym-saturation follows a Meta-World benchmark [36] style and defines set_task method with one argument -- a TPTP problem full path. If one resets an environment without explicitly setting a task in advance, the environment defaults to a simple group theory problem (any idempotent element equals the identity). Having a default task helps us keep compatibility with algorithms not aware of multi-task RL. One can inherit from gym-saturation environment classes to set a random problem at every reset or implement any other desirable behaviour.
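For example, a thin wrapper that draws a random TPTP problem before every reset could look as follows; the problem path shown is a placeholder, and whether ``set_task`` must be called on the unwrapped environment depends on the wrappers applied by ``gymnasium.make``.

```python
import random
import gymnasium as gym
import gym_saturation  # registers the Vampire-v0 and iProver-v0 environments

class RandomProblemWrapper(gym.Wrapper):
    """Calls ``set_task`` with a randomly chosen TPTP problem before every reset."""

    def __init__(self, env, problem_paths):
        super().__init__(env)
        self.problem_paths = list(problem_paths)

    def reset(self, *, seed=None, options=None):
        self.env.unwrapped.set_task(random.choice(self.problem_paths))
        return self.env.reset(seed=seed, options=options)

env = RandomProblemWrapper(
    gym.make("Vampire-v0"),
    problem_paths=["/path/to/TPTP/Problems/SET/SET001-1.p"],  # placeholder path
)
```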
## 4 Representation subsystem
### Existing first-order formulae representations and related projects
As mentioned in Section 2, to apply any deep reinforcement learning algorithm, one needs a representation of the environment state in a tensor form first. There are many known feature engineering procedures. It can be as simple as clause age and weight [25], or information extracted from a clause syntax tree [18] or an inference lineage of a clause [30]. Representing logic formulae as such is an active research domain: for example, in [23], the authors proposed more than a dozen different embedding techniques based on formulae syntax. In communities other than automated deduction, researchers also study first-order formulae representation: for example, in [5], the authors use semantics representation rather than syntax. One can also notice that first-order logic (FOL) is nothing more than a formal language, so abstract syntax trees of FOL are not, in principle, that different from those of programming language statements. And of course, encoding models for programming languages (like code2vec[4] for Java) exist, as well as commercially available solutions as GPT-3 [7] generic code embeddings and comparable free models like LLaMA [34].
To make the first step in this direction, we took advantage of existing pre-trained embedding models for programming languages and tried to apply them to a seemingly disconnected domain of automated provers.
### ast2vec and our contributions to it
In [20], the authors proposed a particular neural network architecture they called _Recursive Tree Grammar Autoencoders (RTG-AE)_, which encodes abstract syntax trees produced by a programming language parser into real vectors. Being interested in education applications, they also published the pre-trained model for Python [21]. To make use of it for our purpose, we furnished several technical improvements to their code (our contribution is freely available 1):
Footnote 1: [https://gitlab.com/inpefess/ast2vec](https://gitlab.com/inpefess/ast2vec)
* a TorchServe [24] handler for HTTP POST requests for embeddings
* request caching with the Memcached server [9]
* Docker container to start the whole subsystem easily on any operating system
To integrate the ast2vec server with gym-saturation environments, we added Gymnasium observation wrappers, one of them mapping a clause in the TPTP language to a boolean-valued statement in Python (in particular, by replacing logic operation symbols, e.g. = in TPTP becomes == in Python). See Figure 2 for a communication diagram. In principle, since a clause doesn't contain any quantifiers explicitly, one can rewrite it as a boolean-valued expression in many programming languages for which pre-trained embeddings might exist.
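The flavour of that mapping can be conveyed by a deliberately naive translation function; the replacement table below is only indicative (it handles equality, inequality, disjunction, and negation), and the wrapper shipped with the package covers more of the TPTP syntax.

```python
def tptp_clause_to_python(literals: str) -> str:
    """Naively rewrite a quantifier-free TPTP clause as a Python boolean expression."""
    python_code = literals
    for tptp_symbol, python_symbol in [
        ("!=", "@NEQ@"),   # protect inequality before rewriting bare "="
        ("=", "=="),
        ("@NEQ@", "!="),
        ("|", " or "),
        ("~", " not "),
    ]:
        python_code = python_code.replace(tptp_symbol, python_symbol)
    return python_code

print(tptp_clause_to_python("member(X0,bb) | member(X0,b)"))
# member(X0,bb)  or  member(X0,b)
```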
### Latency considerations
Looking at Figure 2, one might wonder how efficient is such an architecture. The average response time observed in our experiments was \(2ms\) (with a \(150ms\) maximum). A typical natural language processing model which embeds whole texts has a latency from \(40ms\) to more than \(600ms\)[17] (depending on the model complexity and the length of a text to embed) when run on CPU, so there is no reason to believe that ast2vec is too slow. When evaluating a prover, one usually fixes the time limit: for example, \(60s\) is the default value for Vampire. Being written in C++ and with a cornucopia of optimisation tweaks, Vampire can generate around a million clauses during this relatively short timeframe.
Figure 2: gym-saturation communication with ast2vec
Thus, to be on par with Vampire, a representation service must have latency around \(60\mu s\) (orders of magnitude faster than we have). There can be several ways to lower the latency:
* inference in batches (one should train the embedding model to do it; ast2vec doesn't do it out of the box). The improvement may vary
* use GPU. NVIDIA reports around 20x improvement vs CPU [16]. However, throwing more GPUs won't be as efficient without batch inference from the previous point
* request an embedding for a binary object of an already parsed clause instead of a TPTP string. It means not repeating parsing already done by a prover, which might lower the latency substantially. To do this, one will have to patch an underlying prover to return binary objects instead of TPTP strings
* use RPC (remote procedure call) instead of REST protocol. TorchServe relies on REST and parcels in JSON format, and in gRPC [12], they prefer the binary protobuf format. One rarely expects sub-millisecond latency from REST, although for RPC, \(150\mu s\) is not unusual. This point doesn't make much sense without the previous one
## 5 Usage examples
We provide examples of experiments easily possible with gym-saturation as a supplementary code to this paper 1. We don't consider these experiments as being of any scientific significance per se, serving merely as illustrations and basic usage examples. Tweaking the RL algorithms' meta-parameters and deep neural network architectures is out of the scope of the present system description.
Footnote 1: [https://github.com/inpefess/ray-prover/releases/tag/v0.0.3](https://github.com/inpefess/ray-prover/releases/tag/v0.0.3)
We coded these experiments in the Ray framework, which includes RLlib -- a library of popular RL algorithms. Ray is compatible with the Tensorflow [1] and PyTorch [22] deep learning frameworks, so it doesn't limit a potential gym-saturation user to one of them.
In the experiments, we try to solve SET001-1 from the TPTP with max_clauses=20 (having no more than twenty clauses in the proof state) for guiding Vampire and max_clauses=15 for iProver. This difference is because even a random agent communicating to iProver manages to always solve SET001-1 by generating no more than twenty clauses. We wanted training to start, but keep the examples as simple as possible, so we chose to harden the constraints instead of moving on to a more complicated problem.
In one experiment, we organise clauses in two priority queues (by age and weight) and use an action wrapper to map from a queue number (0 or 1) to the clause number. That means we don't implant these queues inside provers but follow a Gymnasium idiomatic way to extend environments. Of course, Vampire and iProver have these particular queues as part of their implementation, but our illustration shows one could use any other priorities instead. It transforms our environment into a semblance of a 2-armed bandit, and we use Thompson
sampling [3] to train. This experiment reflects ideas similar to those described in [31].
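The queue-selection wrapper used in this experiment can be sketched as follows. The age and weight proxies (birth step and literal string length), the class name, and the dictionary access to clause fields are simplifications on our part; the actual wrapper in the supplementary code may differ.

```python
import gymnasium as gym
import numpy as np

class TwoQueueWrapper(gym.Wrapper):
    """Exposes two arms: 0 selects the oldest available clause, 1 the lightest one."""

    def __init__(self, env):
        super().__init__(env)
        self.action_space = gym.spaces.Discrete(2)

    def reset(self, **kwargs):
        self._obs, info = self.env.reset(**kwargs)
        return self._obs, info

    def step(self, arm):
        candidates = np.flatnonzero(self._obs["action_mask"])
        clauses = self._obs["real_obs"]
        if arm == 0:   # age queue: smallest birth step first
            keys = [clauses[i]["birth_step"] for i in candidates]
        else:          # weight queue: shortest literal string as a crude weight proxy
            keys = [len(clauses[i]["literals"]) for i in candidates]
        action = int(candidates[int(np.argmin(keys))])
        self._obs, reward, terminated, truncated, info = self.env.step(action)
        return self._obs, reward, terminated, truncated, info
```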
In another experiment, we use ast2vec server for getting clause embeddings and train a Proximal Policy Optimisation (PPO) algorithm as implemented in the Ray RLlib. The default policy network there is a fully connected one, and we used \(256\times 20\) tensors as its input (256 is an embedding size in ast2vec, and 20 is the maximal number of clauses we embed). So, the policy chooses a given clause given the embeddings of all clauses seen up to the current step (including those already chosen or judged to be redundant/subsumed). Such an approach is more similar to [37].
We provide Figure 3 as a typical training process chart.
## 6 Conclusion and future work
We contributed a new version of gym-saturation, which continued to be free and open-source software, easy to install and use while promising assistance in setting up experiments for RL research in the automated provers domain. In the new version, we enabled anyone interested to conduct experiments with RL algorithms independently of an underlying prover implementation. We also added the possibility of varying representations as external plug-ins for further experimentation. We hope that researchers having such an instrument can focus on more advanced questions, namely how to generate and prioritise training problems to better transfer search patterns learned on simpler theorems to harder ones.
Our experience with adding Vampire and iProver support to gym-saturation shows that working tightly with corresponding prover developers is not mandatory, although it might help immensely.
Figure 3: Episode reward mean vs the total number of steps. The blue line is for a random agent and the orange one — for the PPO. Both agents guide Vampire
Implementing the prover guidance through the standard I/O (as in Vampire) seems to be relatively easy, and we hope more provers will add similar functionality in future to be more ML-friendly. Such provers could then profit from using any other external guidance (see [8] for a different system using the same iProver technical features as we did).
We identify a discerning and computationally efficient representation service as a bottleneck for our approach and envision an upcoming project of creating a universal first-order logic embedding model usable not only by saturation-style provers but also tableaux-based ones, SMT-solvers, semantic reasoners, and beyond.
#### Acknowledgements
We would like to thank Konstantin Korovin for the productive discussion and for adding the external agents' communication feature to iProver, without which this work would not have been possible. We also thank the anonymous reviewers for their meticulous suggestions on improving the present paper.
|
2302.14467 | Automatic Internal Stray Light Calibration of AMCW Coaxial Scanning
LiDAR Using GMM and PSO | In this paper, an automatic calibration algorithm is proposed to reduce the
depth error caused by internal stray light in amplitude-modulated continuous
wave (AMCW) coaxial scanning light detection and ranging (LiDAR). Assuming that
the internal stray light generated in the process of emitting laser is static,
the amplitude and phase delay of internal stray light are estimated using the
Gaussian mixture model (GMM) and particle swarm optimization (PSO).
Specifically, the pixel positions in a raw signal amplitude map of calibration
checkboard are segmented by GMM with two clusters considering the dark and
bright image pattern. The loss function is then defined as L1-norm of
difference between mean depths of two amplitude-segmented clusters. To avoid
overfitting at a specific distance in PSO process, the calibration check board
is actually measured at multiple distances and the average of corresponding L1
loss functions is chosen as the actual loss. Such loss is minimized by PSO to
find the two optimal target parameters: the amplitude and phase delay of
internal stray light. According to the validation of the proposed algorithm,
the original loss is reduced from tens of centimeters to 3.2 mm when the
measured distances of the calibration checkboard are between 1 m and 4 m. This
accurate calibration performance is also maintained in geometrically complex
measured scene. The proposed internal stray light calibration algorithm in this
paper can be used for any type of AMCW coaxial scanning LiDAR regardless of its
optical characteristics. | Sung-Hyun Lee, Wook-Hyeon Kwon, Yoon-Seop Lim, Yong-Hwa Park | 2023-02-28T10:19:52Z | http://arxiv.org/abs/2302.14467v2 | # Automatic Internal Stray Light Calibration of AMCW Coaxial Scanning LiDAR Using GMM and PSO
###### Abstract
In this paper, an automatic calibration algorithm is proposed to reduce the depth error caused by internal stray light in amplitude-modulated continuous wave (AMCW) coaxial scanning light detection and ranging (LiDAR). Assuming that the internal stray light generated in the process of emitting laser is static, the amplitude and phase delay of internal stray light are estimated using the Gaussian mixture model (GMM) and particle swarm optimization (PSO). Specifically, the pixel positions in a raw signal amplitude map of calibration checkboard are segmented by GMM with two clusters considering the dark and bright image pattern. The loss function is then defined as L1-norm of difference between mean depths of two amplitude-segmented clusters. To avoid overfitting at a specific distance in PSO process, the calibration check board is actually measured at multiple distances and the average of corresponding L1 loss functions is chosen as the actual loss. Such loss is minimized by PSO to find the two optimal target parameters: the amplitude and phase delay of internal stray light. According to the validation of the proposed algorithm, the original loss is reduced from tens of centimeters to 3.2 mm when the measured distances of the calibration checkboard are between 1 m and 4 m. This accurate calibration performance is also maintained in geometrically complex measured scene. The proposed internal stray light calibration algorithm in this paper can be used for any type of AMCW coaxial scanning LiDAR regardless of its optical characteristics.
_Index Terms_ -- Amplitude-modulated continuous wave (AMCW), Coaxial-scanning light detection and ranging (LiDAR), Internal stray light calibration, Gaussian mixture model (GMM), Particle swarm optimization.
## I Introduction
Light detection and ranging (LiDAR) technology has enormously improved the 3D recognition performance of various intelligent systems such as drones, autonomous vehicles, and robots [1, 2]. Compared to conventional stereo vision, which takes relatively long time for disparity calculation, the LiDAR can provide precise 3D depth information in real time with relatively low calculation loads. In addition to the highly precise measurement performance, the relatively low cost of LiDAR compared to structured light (SL)-based active illumination method also increases the versatility of LiDAR in various engineering applications [3, 4]. Based on these advantages of LiDAR, many researchers have developed the LiDAR systems and related recognition technologies.
Among LiDAR systems, there exist two main methods used to measure the distance: the direct time-of-flight (ToF) method and the indirect ToF method. The direct ToF measurement method utilizes a highly precise time-to-digital converter (TDC) to directly measure the travel time of emitted light signals [5]. Due to the simplicity of the measurement principle, the post-processing algorithm for direct ToF LiDAR sensors is generally simpler than that of indirect ToF sensors. Meanwhile, the indirect ToF method, also referred to as the amplitude-modulated continuous wave (AMCW) method, estimates the phase delay of light signals reflected from an object using signal demodulation [6, 7]. The AMCW ToF sensor is widely used for relatively short ranges, up to 10 m, due to its high measurement precision and low cost compared to the direct ToF LiDAR sensor. For every application situation, there is an appropriate distance measurement method in terms of maximum range, object properties, frame rate, etc.
Aforementioned LiDAR systems have a common measurement error source, the internal stray light. Although the optical components in LiDAR systems such as beam splitter (BS) and focusing lens are coated by anti-reflection materials, 100 % penetration or reflection of light is impossible. Namely, unwanted scattered or multi-reflected light inevitably exists inside the optical lenses of LiDAR systems which results in depth distortion [5, 8, 9]. Internal stray light is mainly affected by the structure of the optical components in LiDAR system, _i.e._, the relative orientation/position of each lens.
Considering the intrinsic model of LiDAR, many researchers have developed internal stray light mitigation methods [5, 8, 9, 10, 11]. For the scanning type LiDAR, many researchers have used single ultra-high precision TDC or multi-channel of TDC to directly estimate the parameters of internal stray light, _i.e._, the time delay and amplitude of stray light [5, 10]. Since TDC directly records the arrival time of light in real-time, it is possible to separate the original reflected light from multiple stray lights. However, this TDC-based approach is generally associated with high costs. Although there exists a signal processing-based approach to estimate the scattered light, this method is not applicable in real-time fast imaging due to its complex feedback algorithm structure [11]. For the flash type ToF sensors (ToF cameras), there are two main approaches to estimate stray light information: heterodyne modulation and hardware optimization of the optical layout. The heterodyne modulation method utilizes multiple modulation frequencies to increase the information of acquired cross-correlations including the stray light [9]. After post-processing the data acquired by heterodyne mixing, the undistorted distance can be estimated. The coded modulation method, similar to amplitude modulation with multiple frequencies, has also been used to estimate the stray light information in previous research works [9]. For the hardware optimization, some researchers have tried
to modify the layout of optical components using optical path simulation to minimize the stray light effects in ToF sensors [8]. Likewise, for flash-type ToF sensors, there exist stray light mitigation methods mainly related to the modification of modulation source or optical layout. These hardware modifications inevitably increase the cost of sensors.
To mitigate internal stray light effects without aforementioned high-cost hardware modifications, this paper proposes an automatic internal stray light calibration method targeting coaxial scanning type AMCW LiDAR [6]. Due to the existence of static-internal stray light generated in the process of emitting laser, the ratio of directly reflected light from an object to the internal stray light varies with the reflectivity of the object even at same distance. Consequently, the measured distance is inevitably changed with the amplitude of received light even at same object distance. If the exact stray light information can be estimated, then the depth distortion caused by stray light can be mitigated [12]. To precisely estimate the amplitude and phase delay of internal stray light using a single modulation frequency, the Gaussian mixture model (GMM) and particle swarm optimization (PSO) are used in this paper [13, 14]. Specifically, a calibration checkboard is measured with previously developed AMCW coaxial scanning LiDAR at multiple fixed distances [6]. The image pixel positions in each raw amplitude (amplitude of cross-correlation) map are then segmented into two clusters: a pixel position group of bright pattern, and that of dark pattern. This amplitude-based segmentation is processed using GMM, since the data distributions of measured depth and raw amplitude maps follow the Gaussian distribution [15]. After such amplitude-based segmentation, the L1-norm of difference between the mean depths of corresponding two amplitude-segmented clusters is calculated for each measured distance case. The average of all these aforementioned L1-norms is then chosen as actual loss to be minimized by PSO. By the optimization process of PSO, the optimal values of the two target parameters, _i.e.,_ the amplitude and phase delay of internal stray light, are extracted. Using these estimated stray light parameters, the cross-correlations caused by stray light are calculated and subtracted from the raw measured cross-correlations to result in corrected cross-correlations. Using these corrected cross-correlations, post-corrected depth maps can then be generated. Consequently, this optimization process is the same as finding the internal stray light parameters which make all post-corrected depth maps of the checkboard as flat as possible. After finding out the correct stray light parameters, these values can also be used to correct depth maps of other object scenes. Experimental validation in this paper showed that there was a decrease in loss from tens of centimeters to 3.2 mm when the distance of the calibration checkboard ranged from 1 m to 4 m. Such highly precise depth error correction is also maintained in complex multi-objects images. The main advantages of the proposed internal stray light calibration method can be summarized as follows:
1. As there is no systematic assumption for stray light parameter identification, the proposed calibration method can be utilized in any type of AMCW coaxial scanning LiDAR.
2. The proposed calibration method utilizes only some depth and raw amplitude maps of checkboard with single modulation frequency.
This paper is organized as follows: Section II presents the problem statement related to internal stray light. Section III presents the internal stray light calibration method using GMM and PSO. Section IV presents the validation results of the proposed stray light calibration method including parametric study and experimental results. Section V presents the conclusion of this paper.
## II Problem Definition: Internal Stray Light in AMCW Coaxial Scanning LiDAR
In AMCW coaxial scanning LiDAR, the light source, which is generally a laser diode, is amplitude-modulated in sinusoidal waveform [6, 16]. The modulated light signal is then collimated and emitted to the measurement point through optical components such as BS, wave plate, and scanner. After the emitted light signal is reflected from the object, it is focused on the active area of the photodetector, such as the avalanche photodiode (APD). Using the received light signal and the demodulation (reference) signal, the cross-correlation samples are calculated. Based on these correlation samples, the phase, amplitude, and offset of the original cross-correlation function can be estimated [6, 16]. However, in the process of emitting laser signal, unintended light signals are generated by inner multi-reflection in optics and sensed by APD. Such internal stray light results in distortion of cross-correlation as shown in Fig. 1.
In Fig. 1, multi-reflected light exists in the optics between the reflective facets while the laser signal is emitted toward the object. Although only one stray light signal is presented in Fig. 1 as an example, there actually exist many stray light rays which are multi-reflected or scattered inside the coaxial optics while the laser signal is emitted. All these internal stray light signals can be assumed to be static if the measurement conditions, such as the layout of the optical components, the laser power, and the modulation frequency, are fixed. Consequently, for homodyne
Figure 1: Example of internal stray light in homodyne AMCW LiDAR optical system. QWP is quarter wave plate, PBS is polarizing beam splitter, HWP is half wave plate.
mixing which modulates signal in sinusoidal waveform with single frequency, the net static internal stray light signal can be modeled as a single sinusoidal waveform following trigonometric characteristics [12]. Based on this fact, the related mathematical expressions in the time domain are as follows:
\[\hat{C}(\varphi_{n})=\frac{1}{T_{\text{int}}}\int_{0}^{T_{\text{int}}}\left(r(t)+s(t)\right)\cdot m_{\varphi_{n}}(t)\,dt=\frac{1}{T_{\text{int}}}\int_{0}^{T_{\text{int}}}r(t)\cdot m_{\varphi_{n}}(t)\,dt+\frac{1}{T_{\text{int}}}\int_{0}^{T_{\text{int}}}s(t)\cdot m_{\varphi_{n}}(t)\,dt=C_{r}(\varphi_{n})+C_{s}(\varphi_{n}) \tag{1}\]
\[\tilde{d}=\frac{c}{4\pi\cdot f}\cdot\arctan\left(\frac{C_{r}(\varphi_{2})-C_{r}(\varphi_{4})}{C_{r}(\varphi_{1})-C_{r}(\varphi_{3})}\right) \tag{8}\]
where \(\tilde{d}\) is the corrected depth, _i.e._, estimated true depth corresponding to \(\varphi_{r}\), based on the identified parameters of internal stray light. To acquire the correct depth corresponding to directly reflected light in (8), the accurate estimation of internal stray light parameters, _i.e._, \(A_{s}\) and \(\varphi_{s}\), is important.
In this paper, to estimate the aforementioned parameters of net internal stray light and to correct the distorted correlation samples, a novel calibration method based on GMM and PSO is proposed. The key idea of the proposed method is that the signal-to-noise ratio (SNR) of directly reflected light to static internal stray light changes with the reflectivity of an object even at same distance. This different SNR directly results in the different measured distances. Following this measurement property, the raw measured depth map of the calibration checkboard has two distinguished depth clusters due to the repetitive dark and bright image pattern. By finding the target parameters of stray light that make the post-corrected depth map of calibration checkboard flat, the true depth induced by only reflected light can be properly estimated using (8). The detailed explanation of the stray light calibration method is presented in the following Section.
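As a numerical illustration of the correction step, given a candidate amplitude and phase delay of the internal stray light one can subtract the stray-light contribution from the four measured correlation samples and recompute the depth. The cosine model of the stray-light correlation, the sampling phases \(0,\pi/2,\pi,3\pi/2\), the 20 MHz modulation frequency, and the sign conventions below are assumptions made for this sketch; they stand in for the detailed correlation model and the correction steps referred to as (7) and (8) in the text.

```python
import numpy as np

C_LIGHT = 299_792_458.0                            # speed of light [m/s]
F_MOD = 20e6                                       # assumed modulation frequency [Hz]
PHASES = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi    # assumed demodulation phase offsets

def corrected_depth(c_hat, a_s, phi_s):
    """Subtract a static stray-light correlation from the samples and convert phase to depth.

    c_hat  : (4, H, W) measured cross-correlation sample maps
    a_s    : candidate amplitude of the internal stray light correlation
    phi_s  : candidate phase delay of the internal stray light
    """
    c_stray = a_s * np.cos(PHASES + phi_s)[:, None, None]   # assumed stray-light correlation samples
    c_r = c_hat - c_stray                                    # corrected samples, cf. (7)
    phase = np.arctan2(c_r[1] - c_r[3], c_r[0] - c_r[2])     # four-phase demodulation, cf. (8)
    return C_LIGHT / (4.0 * np.pi * F_MOD) * np.mod(phase, 2.0 * np.pi)
```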
## III Internal Stray Light Calibration in AMCW Coaxial Scanning LiDAR Using GMM and PSO
In this Section, the details of the stray light calibration method are presented. The block diagram of the proposed calibration method is shown in Fig. 2. According to Fig. 2, four cross-correlation sample maps and one raw amplitude (amplitude of cross-correlation) map of the checkboard are measured at \(N\) different object distances. The GMM-based segmentation of image pixel positions in a raw amplitude map is then processed \(N\) times to classify two different reflectivity regions (dark and bright) for each raw amplitude image. The reason GMM is used in this paper is mainly attributed to the inherent Poisson distributions of the measured depth and raw amplitude data. As the number of data points is over 500, this Poisson distribution can be approximated as a Gaussian distribution [15, 17]. Namely, it can be intuitively deduced that the raw amplitude data distribution of the checkboard measured by AMCW coaxial scanning LiDAR is composed of two Gaussian clusters corresponding to the dark and bright repetitive image pattern. After the amplitude-based segmentation of image pixel positions, the loss for a specific measured distance case is defined as the L1 norm of the difference between the mean depths of the corresponding two pixel position clusters. Likewise, \(N\) different losses are calculated, one for each measured distance case. The average of these L1 losses is then chosen as the actual loss. This loss is minimized by the PSO algorithm, an iterative optimization method suited to non-convex problems [13, 20]. Considering multiple local minimums of the searching points and the global minimum simultaneously, the PSO algorithm can find the exact global optimal point in a relatively fast calculation time. The estimated stray light parameters are then used to correct the cross-correlation samples. At the final step, the post-corrected depth maps are generated based on (8).
To explain the background of Fig. 2 in detail, the GMM, the PSO, and the depth correction method are presented below.
### _Gaussian Mixture Model (GMM)_
GMM is generally used to cluster a given dataset assuming that the distribution of the data follows a linear combination of multiple Gaussian distribution functions [14, 18, 19]. If the number of clusters is fixed in advance, the GMM can classify each data point into one of the Gaussian-distributed clusters. To determine the class of a given input data point, the responsibility, _i.e._, the posterior probability that the input data point belongs to a specific cluster, is calculated for each cluster. According to the GMM principle, the input data point is determined to belong to the class for which the responsibility is maximal [14, 18, 19]. The detailed expression for the pixel position segmentation based on GMM in Fig. 2 is as follows [14, 18, 19]:
Fig. 4: Histogram of the raw amplitude map measured by the AMCW scanning LiDAR at a distance of 2.3 m.
Fig. 3: Raw amplitude map of the calibration checkerboard measured by the AMCW scanning LiDAR at a distance of 2.3 m.
\[\gamma_{c}^{k}\left(\hat{\Gamma}_{k}(u,v)\right)=\frac{\pi_{c}^{k}\cdot\mathrm{N}(\hat{\Gamma}_{k}(u,v)\mid\mu_{c}^{k},\sigma_{c}^{k})}{\sum_{i=1}^{2}\pi_{i}^{k}\cdot\mathrm{N}(\hat{\Gamma}_{k}(u,v)\mid\mu_{i}^{k},\sigma_{i}^{k})} \tag{9}\]
where \(\hat{\Gamma}_{k}(u,v)\) is the raw amplitude of the cross-correlation at pixel position \((u,v)\) in the \(k^{\text{th}}\) (\(k=1,2,...,N\)) raw amplitude map, \(\gamma_{c}^{k}(\bullet)\) is the responsibility corresponding to cluster \(c\) (1: dark pattern, 2: bright pattern) for the input data in the \(k^{\text{th}}\) raw amplitude map, \(\pi_{c}^{k}\) is the weight factor of cluster \(c\) in the \(k^{\text{th}}\) raw amplitude map, \(\mu_{c}^{k}\) is the mean raw amplitude of the image pixels belonging to cluster \(c\) in the \(k^{\text{th}}\) raw amplitude map, \(\sigma_{c}^{k}\) is the variance of the raw amplitude of the image pixels belonging to cluster \(c\) in the \(k^{\text{th}}\) raw amplitude map, and \(\mathrm{N}(\mu_{c}^{k},\sigma_{c}^{k})\) is the Gaussian distribution function of the raw amplitude corresponding to class \(c\) in the \(k^{\text{th}}\) raw amplitude map. For each \(k^{\text{th}}\) (\(k=1,2,...,N\)) raw amplitude map, every image pixel is assigned either to the pixel position cluster of the dark pattern, _i.e._, \(R_{1}^{k}\), or to the pixel position cluster of the bright pattern, _i.e._, \(R_{2}^{k}\), according to the maximum value of (9). Meanwhile, to estimate the proper parameters, _i.e._, \(\pi_{c}^{k}\), \(\mu_{c}^{k}\), and \(\sigma_{c}^{k}\) for all \(c\) and \(k\), Expectation-Maximization (EM) is used in this paper, as in many previous works [14, 18, 19].
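As a concrete illustration of this segmentation step, a minimal sketch is given below: it fits a two-component GMM to one raw amplitude map and returns the dark and bright pixel masks \(R_{1}^{k}\) and \(R_{2}^{k}\). This is only an illustrative Python sketch, not the Matlab implementation used in this paper; the array name `amp_map` and the use of scikit-learn's EM-based `GaussianMixture` are our own assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_checkerboard(amp_map):
    """Split the pixels of one raw amplitude map into dark (R1) and bright (R2) clusters."""
    samples = amp_map.reshape(-1, 1)                     # one feature per pixel: raw amplitude
    gmm = GaussianMixture(n_components=2, random_state=0).fit(samples)  # EM estimates pi_c, mu_c, sigma_c
    resp = gmm.predict_proba(samples)                    # responsibilities gamma_c of Eq. (9)
    labels = resp.argmax(axis=1).reshape(amp_map.shape)  # hard assignment by maximum responsibility
    dark = int(np.argmin(gmm.means_.ravel()))            # cluster with the lower mean amplitude = dark pattern
    R1 = labels == dark                                  # boolean mask of dark-pattern pixels
    R2 = ~R1                                             # boolean mask of bright-pattern pixels
    return R1, R2
```

The per-cluster mean depths entering the L1 loss are then simply `depth_map[R1].mean()` and `depth_map[R2].mean()` for the depth map measured at the same distance.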
As presented above, the GMM assumes that the dataset is distributed as a linear summation of Gaussian clusters. Because of this assumption, the GMM is not always the ideal solution for clustering and segmentation problems. For the data measured by an AMCW LiDAR, however, the GMM fits well. Since the inherent distribution of a large number of data points measured by an AMCW LiDAR closely follows a Gaussian distribution [6, 15, 17], the raw amplitude map of the calibration checkerboard can be divided into two main Gaussian clusters, _i.e._, the dark pattern and the bright pattern. To validate this tendency, Figs. 3 and 4 are presented. According to Fig. 3, the raw amplitude map of the checkerboard at a distance of 2.3 m shows a clear separation between the two reflectivity regions. The histogram of the corresponding amplitude data in Fig. 4 likewise shows two main Gaussian-like clusters. Considering these backgrounds, the GMM was utilized in this paper to segment the raw amplitude map of the calibration checkerboard.
### _Particle Swarm Optimization (PSO)_
PSO is an iterative optimization method mimicking the natural movement of a flock of birds or a school of fish [13, 20]. PSO randomly initializes a number of searching points, also called _particles_, in the data feature space. Each particle moves through the feature space in the direction that minimizes the designed loss. This direction is directly governed by the velocity vector at the current optimization iteration step, so the velocity vectors are updated at every iteration to move the particles. The distinguishing point is that the velocity vector is updated considering not only the local minimum found by the specific particle, but also the global minimum found by all particles up to the current iteration step. This means that each particle shares its current feature space information at every iteration step, which resembles the natural social behavior of animals. Such local-global iterative optimization has robust convergence performance even for non-convex optimization problems such as the complex calibration problems of LiDARs and cameras [21, 22, 23, 24]. PSO is therefore adopted as the optimization method in this paper to estimate the internal stray light parameters, _i.e._, \(\hat{A}_{s}\) and \(\hat{\varphi}_{s}\) in Fig. 2. The overall block diagram of the PSO process is shown in Fig. 5. As shown in Fig. 5, there are \(h\) candidate stray light parameter vectors in total. For each candidate (particle), post-correction based on (7) and (8) is performed for the \(N\) different measured distance cases in Fig. 2. Consequently, \(N\) corrected depth maps are generated for each particle. Based on these \(N\) corrected depth maps, the local loss is calculated for each particle. After finding the local and global minima and updating the particles, the aforementioned process is repeated until the total PSO iteration number reaches a predetermined threshold.
For the precise estimation of the internal stray light in Fig. 5, the loss should be defined so as to reduce the mean depth discrepancy between \(R_{1}^{k}\) and \(R_{2}^{k}\) in the corresponding \(k^{\text{th}}\) (\(k=1,2,...,N\)) raw amplitude map, considering all cases of \(k\). For a specific particle, the loss and related parameters in Figs. 2 and 5 are defined as follows:
\[x_{i}^{j}=\left(A_{s,i}^{j},\ \varphi_{s,i}^{j}\right)^{T} \tag{10}\]
Fig. 5: Flow architecture representing the estimation of the stray light parameters based on PSO in Fig. 2. \(h\) is the number of particles, and \(N\) is the number of measured distances in Fig. 2.
Based on the particle and loss defined above, the simplified mathematical expressions of the PSO are given as follows:
\[{v_{i}}^{j+1}={w^{j}}\cdot{v_{i}}^{j}+{c_{1}}\cdot{r_{1}}\cdot\left({{p_{i}}^{j}-{x_{i}}^{j}}\right)+{c_{2}}\cdot{r_{2}}\cdot\left({{g^{j}}-{x_{i}}^{j}}\right) \tag{13}\]
\[{x_{i}}^{j+1}={x_{i}}^{j}+{v_{i}}^{j+1} \tag{14}\]
where \({v_{i}}^{j}\) is the velocity vector of the \(i^{\text{th}}\) particle at iteration step \(j\), \(w^{j}\) is the inertia weight, which decays exponentially with base 0.5, \({p_{i}}^{j}\) is the local minimum point found by the \(i^{\text{th}}\) particle up to iteration step \(j\), \({g^{j}}\) is the global minimum point found by all particles up to iteration step \(j\), \({c_{1}}\) and \({c_{2}}\) are weight factors, and \({r_{1}}\) and \({r_{2}}\) are random numbers between 0 and 1. The PSO was utilized to find the final global-minimum loss and the corresponding internal stray light parameters, as shown in Figs. 2 and 5.
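For concreteness, a minimal Python sketch of the update rules (13)-(14) applied to this problem is given below. The loss follows the verbal description of the checkerboard-flatness criterion (the explicit loss equations are not reproduced above), the exponential decay schedule assumed for \(w^{j}\) and the values of \(c_{1}\), \(c_{2}\), and the particle count are illustrative, and `correct_depth_map` is a placeholder for the correction of (7) and (8); none of the names below are taken from the paper.

```python
import numpy as np

def l1_flatness_loss(params, measurements, correct_depth_map):
    """Average over the N calibration distances of |mean depth(R1) - mean depth(R2)|."""
    A_s, phi_s = params
    losses = []
    for corr_samples, R1, R2 in measurements:          # one tuple per measured distance k
        depth = correct_depth_map(corr_samples, A_s, phi_s)
        losses.append(abs(depth[R1].mean() - depth[R2].mean()))
    return float(np.mean(losses))

def pso(loss_fn, bounds, n_particles=30, n_iter=100, c1=2.0, c2=2.0, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T         # bounds = [(A_min, A_max), (phi_min, phi_max)]
    x = rng.uniform(lo, hi, size=(n_particles, 2))     # particle positions x_i
    v = np.zeros_like(x)                               # particle velocities v_i
    p_best = x.copy()                                  # per-particle best positions p_i
    p_loss = np.array([loss_fn(xi) for xi in x])
    g_best = p_best[p_loss.argmin()].copy()            # global best position g
    for j in range(n_iter):
        w = 0.5 ** (j / n_iter)                        # assumed exponentially decaying inertia weight
        r1, r2 = rng.random((2, n_particles, 1))
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)   # Eq. (13)
        x = np.clip(x + v, lo, hi)                                    # Eq. (14), kept inside the search box
        cur = np.array([loss_fn(xi) for xi in x])
        better = cur < p_loss
        p_best[better], p_loss[better] = x[better], cur[better]
        g_best = p_best[p_loss.argmin()].copy()
    return g_best, float(p_loss.min())

# Example call (illustrative bounds):
# pso(lambda p: l1_flatness_loss(p, measurements, correct_depth_map),
#     bounds=[(0.0, 1.0), (0.0, 2.0 * np.pi)])
```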
### _Depth Correction Using Corrected Cross-Correlation_
Using the optimal stray light parameters, the cross-correlation samples generated by the stray light, _i.e._, \(C_{s}(\varphi_{n})\), can be calculated based on (7). Each stray light correlation sample is subtracted from the corresponding raw measured cross-correlation sample, _i.e._, \(\hat{C}(\varphi_{n})\), to extract the corrected cross-correlation sample. The corrected cross-correlation samples then correspond to the cross-correlation samples of the directly reflected light signal alone. The corrected depth map is acquired by (8) in the final step of Fig. 2.
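A compact sketch of this final step, and a possible realization of the `correct_depth_map` placeholder used in the PSO sketch above, is shown below. Because (7) and (8) appear earlier in the paper and are not reproduced here, the sinusoidal form assumed for the stray-light samples and the four-phase arctangent depth retrieval are taken from the standard AMCW relations; the sample ordering and normalization may differ from the authors' exact definitions.

```python
import numpy as np

C_LIGHT = 2.998e8            # speed of light [m/s]
F_MOD = 31.25e6              # modulation frequency used in this paper [Hz]

def correct_depth_map(C_raw, A_s, phi_s, phases=np.deg2rad([0.0, 90.0, 180.0, 270.0])):
    """C_raw: (4, H, W) raw cross-correlation samples; returns the corrected depth map (H, W)."""
    # Assumed Eq. (7)-like stray-light samples, one per sampling phase (sinusoidal model).
    C_stray = A_s * np.cos(phi_s + phases)[:, None, None]
    C_corr = C_raw - C_stray                                        # corrected correlation samples
    phi = np.arctan2(C_corr[3] - C_corr[1], C_corr[0] - C_corr[2])  # assumed Eq. (8)-like phase retrieval
    phi = np.mod(phi, 2.0 * np.pi)                                  # wrap into [0, 2*pi)
    return C_LIGHT * phi / (4.0 * np.pi * F_MOD)                    # phase to distance
```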
## IV Validation of Internal Stray Light Calibration Method
To validate the calibration performance of the proposed algorithm in Fig. 2, a parametric study and an experimental validation are presented in this Section. In the parametric study, the calibration performance of the proposed algorithm is analyzed for different numbers of input images in terms of convergence speed and loss value. To validate the actual calibration performance, experimental results are also presented and analyzed. A calibration checkerboard and sculptures were used as measurement objects.
### _Parametric Study_
To utilize the GMM and PSO, the related hyperparameters should be properly determined before executing the algorithm. The main hyperparameters of the GMM and PSO are listed in Table I and Table II, respectively. Details of these hyperparameters are explained in previous works [13, 14, 20, 25].
The image data of the calibration checkerboard were acquired at four distances: 1.75 m, 2.3 m, 3.0 m, and 4.0 m. To analyze the convergence speed, the minimized loss, and the overfitting behavior of the proposed calibration method in Fig. 2, the optimization was run multiple times while varying the number of input images. The CPU used in this paper is an Intel(R) Core i5-8500 with 16 GB of RAM, and the software used is Matlab 2022b. The loss curves over the optimization iterations for different numbers of input images are shown in Fig. 6. The loss in Fig. 6 is the L1 loss in (12) corresponding to the global optimal point
at each PSO iteration step in (13). According to Fig. 6, a smooth convergence of the loss minimization is achieved for all optimization trials. From Fig. 6(a) to Fig. 6(d), the minimized losses are 5.2310\(\cdot\)10\({}^{-9}\) m, 0.0022 m, 0.0012 m, and 0.0032 m, respectively. However, these values alone do not guarantee the absence of overfitting at a specific distance. To examine the overfitting behavior, an additional parametric study was conducted, as shown in Table III. For each set of input images listed in the left column of Table III, the corresponding L1 loss in (12), calculated using only the checkerboard images measured at 3.0 m (\(N=1\)), is presented. For each calculation of the L1 loss at 3.0 m, the internal stray light parameters used, _i.e._, \(\hat{A}_{\mathrm{s}}\) and \(\hat{\varphi}_{\mathrm{s}}\) in Fig. 2, are those estimated from the input images measured at the corresponding distance conditions on the left side of Table III. According to Table III, there exists a tendency of overfitting confined to the inner region of the input distances. Specifically, if the input images are measured only at 1.75 m, the corresponding L1 loss at 3.0 m in Table III is 0.3128 m, which is much larger than the minimized loss of 5.2310\(\cdot\)10\({}^{-9}\) m in Fig. 6(a). Such a large discrepancy between the loss in Table III and the corresponding minimized loss in Fig. 6(a) means that the estimated stray light parameters are overfitted at 1.75 m. To avoid such overfitting at a specific distance, enough input images measured over a wide range of distances are necessary. For the case of two input distances, 1.75 m and 4.0 m, the corresponding L1 loss in Table III is about 0.1560 m, which is still not small enough to rule out overfitting. However, if the input images measured at 1.75 m, 2.3 m, and 4.0 m are used, the corresponding L1 loss at 3.0 m is drastically reduced to 0.0151 m. The gap between the averaged loss in Fig. 6(c) and the corresponding loss at 3.0 m in Table III is about 1 cm. This gap is quite tolerable in that the depth deviation due to random shot noise in this paper is at the cm scale at 3.0 m, as described in detail in the following subsection. To secure a margin against overfitting, a total of 4 input images measured at 1.75 m, 2.3 m, 3.0 m, and 4.0 m are used to estimate the parameters of the internal stray light in this paper. The averaged loss in Fig. 6(d) is 0.0032 m. Meanwhile,
Fig. 6: Loss along the optimization iterations for different numbers of input images: (a) input images measured at 1.75 m, (b) input images measured at 1.75 m and 4.0 m, (c) input images measured at 1.75 m, 2.3 m, and 4.0 m, (d) input images measured at 1.75 m, 2.3 m, 3.0 m, and 4.0 m.
to assess the computational load, the total processing times for the different numbers of input images are also presented in Table IV. Table IV shows that the total calculation time for the optimization is generally proportional to the number of input images. For the case of 4 input images, the total processing time is 172.038 s, which is quite tolerable. According to the parametric study, the final optimized phase delay of the stray light using 4 input images is 0.3509 rad, and the corresponding optimized amplitude is 0.0976 V.
Fig. 8: Depth maps of the calibration checkerboard at a distance of 2.3 m: (a) raw depth map in front view, (b) corrected depth map in front view, (c) raw depth map in side view, (d) corrected depth map in side view.
Fig. 10: Depth maps of the calibration checkerboard at a distance of 4.0 m: (a) raw depth map in front view, (b) corrected depth map in front view, (c) raw depth map in side view, (d) corrected depth map in side view.
Fig. 7: Depth maps of the calibration checkerboard at a distance of 1.75 m: (a) raw depth map in front view, (b) corrected depth map in front view, (c) raw depth map in side view, (d) corrected depth map in side view.
Fig. 9: Depth maps of the calibration checkerboard at a distance of 3.0 m: (a) raw depth map in front view, (b) corrected depth map in front view, (c) raw depth map in side view, (d) corrected depth map in side view.
### _Experimental Validation of Internal Stray Light Calibration Method_
The actual calibration checkerboard images measured at four different distances of 1.75 m, 2.3 m, 3.0 m, and 4.0 m were used as the input images in Fig. 2. After the final optimization process, the optimal internal stray light parameters were utilized to correct the depth error of each checkerboard image. The measurement conditions are as follows: 20 mW of laser power, 16 \(\mu\)s of integration time, a 31.25 MHz single modulation frequency, and a bright indoor room (300 lx). The AMCW coaxial scanning LiDAR used in this paper has the optical layout shown in Fig. 1 and scans the laser signal through a two-axis fast galvo scanner [6]. All the results are shown in Figs. 7 to 10.
According to Fig. 7(a), the raw depth map, with a resolution of 100\(\times\)100, has two main clusters due to the internal stray light, as explained in the previous Sections. This tendency can be identified more easily in Fig. 7(c). The depth distortion due to the internal stray light is corrected as shown in Fig. 7(b) and Fig. 7(d). The standard deviation of the raw depth map in Fig. 7(a) is about 0.4474 m due to the abrupt depth variation. This large deviation is reduced to 0.0142 m, as shown in Fig. 7. For Figs. 8, 9, and 10, the qualitative backgrounds are the same as for Fig. 7. The standard deviations of the raw depth maps in Figs. 8, 9, and 10 are 0.7730 m, 1.9248 m, and 2.2190 m, respectively. After the optimization in Fig. 2, these original standard deviations are reduced to 0.0135 m, 0.0441 m, and 0.0755 m, respectively. According to these results, as the measured distance increases, the depth error due to the internal stray light also increases. This tendency is mainly attributed to the SNR of the directly reflected light to the internal stray light, which decreases as the measured distance increases. Meanwhile, as shown in Fig. 8(b) and (d), the random deviation in the dark regions is much larger than in the relatively bright regions. This difference in deviation can be explained by a basic property of AMCW LiDAR: the depth deviation is generally proportional to the inverse of the amplitude of the received light. Since the checkerboard has a repetitive black and white pattern, the standard deviation of the depth map inevitably varies with the image pixel position.
To validate the versatility of the proposed internal stray light calibration method, several sculptures were measured at a resolution of 300\(\times\)200, as shown in Fig. 11. According to Fig. 11(a), (b), and (c), the heavily distorted raw depth maps can be corrected, restoring the original geometry of the objects. To analyze the depth variation in detail, a zoomed-in view of the Julien bust is shown in Fig. 11(d). The depth map in Fig. 11(d) shows naturally smooth depth gradients at every image pixel compared with Fig. 11(a).
In summary, the depth error correction performance of the proposed calibration algorithm was validated in terms of depth deviation and versatility using checkerboard images and sculptures. With suitable input images of the checkerboard over a wide range of distances, the proposed stray light calibration method is anticipated to maintain accurate error correction even in other types of AMCW coaxial scanning LiDAR.
## V Conclusion
In this paper, a novel internal stray light calibration method based on GMM and PSO is proposed and demonstrated. Owing to the inherent distribution of AMCW LiDAR data, the GMM can properly segment the raw amplitude map of the calibration checkerboard. Using the clustered map, the depth loss is calculated and then minimized by the PSO algorithm to find the optimal internal stray light parameters. The raw depth map is then corrected based on the estimated stray light information. All of these processes are conducted with single modulation frequency data. According to the validation results, the average depth discrepancy induced by the stray light can be reduced to 3.2 mm. Using the proposed stray light calibration method, the raw depth maps of geometrically complex objects could be restored based on the estimated stray light parameters. The proposed calibration algorithm can be utilized in various AMCW coaxial scanning LiDARs, since the method does not depend on system-specific information of the LiDAR.
|
2309.03269 | The Sphinx Public Data Release: Forward Modelling High-Redshift JWST
Observations with Cosmological Radiation Hydrodynamics Simulations | The recent launch of JWST has ushered in a new era of high-redshift astronomy
by providing detailed insights into the gas and stellar populations of galaxies
in the epoch of reionization. Interpreting these observations and translating
them into constraints on the physics of early galaxy formation is a complex
challenge that requires sophisticated models of star formation and the
interstellar medium (ISM) in high-redshift galaxies. To this end, we present
Version 1 of the Sphinx$^{20}$ public data release. Sphinx$^{20}$ is a full box
cosmological radiation hydrodynamics simulation that simultaneously models the
large-scale process of cosmic reionization and the detailed physics of a
multiphase ISM, providing a statistical sample of galaxies akin to those
currently being observed by JWST. The data set contains $\sim14,000$ mock
images and spectra of the stellar continuum, nebular continuum, and 52 nebular
emission lines, including Ly$\alpha$, for each galaxy in Sphinx$^{20}$ with a
star formation rate $\geq0.3\ {\rm M_{\odot}\ yr^{-1}}$. All galaxy emission
has been processed with dust radiative transfer and/or resonant line radiative
transfer, and data is provided for ten viewing angles for each galaxy.
Additionally, we provide a comprehensive set of intrinsic galaxy properties,
including halo masses, stellar masses, star formation histories, and ISM
characteristics (e.g., metallicity, ISM gas densities, LyC escape fractions).
This paper outlines the data generation methods, presents a comparative
analysis with JWST ERS and Cycle 1 observations, and addresses data set
limitations. The Sphinx$^{20}$ data release can be downloaded at the following
URL: https://github.com/HarleyKatz/SPHINX-20-data | Harley Katz, Joki Rosdahl, Taysun Kimm, Jeremy Blaizot, Nicholas Choustikov, Marion Farcy, Thibault Garel, Martin G. Haehnelt, Leo Michel-Dansac, Pierre Ocvirk | 2023-09-06T18:00:01Z | http://arxiv.org/abs/2309.03269v2 | The Sphinx Public Data Release: Forward Modelling High-Redshift JWST Observations with Cosmological Radiation Hydrodynamics Simulations
###### Abstract
The recent launch of JWST has ushered in a new era of high-redshift astronomy by providing detailed insights into the gas and stellar populations of galaxies in the epoch of reionization. Interpreting these observations and translating them into constraints on the physics of early galaxy formation is a complex challenge that requires sophisticated models of star formation and the interstellar medium (ISM) in high-redshift galaxies. To this end, we present Version 1 of the Sphinx\({}^{20}\) public data release. Sphinx\({}^{20}\) is a full box cosmological radiation hydrodynamics simulation that simultaneously models the large-scale process of cosmic reionization and the detailed physics of a multiphase ISM, providing a statistical sample of galaxies akin to those currently being observed by JWST. The data set contains \(\sim 14,000\) mock images and spectra of the stellar continuum, nebular continuum, and 52 nebular emission lines, including Ly\(\alpha\), for each galaxy in Sphinx\({}^{20}\) with a star formation rate \(\geq 0.3\) M\({}_{\odot}\) yr\({}^{-1}\). All galaxy emission has been processed with dust radiative transfer and/or resonant line radiative transfer, and data is provided for ten viewing angles for each galaxy. Additionally, we provide a comprehensive set of intrinsic galaxy properties, including halo masses, stellar masses, star formation histories, and ISM characteristics (e.g., metallicity, ISM gas densities, LyC escape fractions). This paper outlines the data generation methods, presents a comparative analysis with JWST ERS and Cycle 1 observations, and addresses data set limitations. The Sphinx\({}^{20}\) data release can be downloaded at the following URL: [https://github.com/HarleyKatz/SPHINX-20-data](https://github.com/HarleyKatz/SPHINX-20-data).
Subject headings: high-redshift galaxies, ISM, galaxy spectra, reionization, galaxy formation
## 1. Introduction
Elucidating the underlying physics that governs galaxy formation at high-redshift is one of the primary goals of extragalactic spectroscopic and imaging surveys with JWST (Gardner et al., 2006). However, understanding how to map the features of a galaxy image or spectra to the underlying properties of the stellar populations and gas as well as the physical processes driving the evolution of the interstellar and circumgalactic medium (ISM, CGM) represents a key theoretical challenge.
Prior to JWST, high-redshift space-based observations were limited to primarily photometric surveys with the Hubble Space Telescope (HST) and Spitzer. The occasional follow-up observation with the WFC grism on HST or ground-based telescopes (e.g. ALMA, MOSFIRE on Keck, or MUSE on the VLT) provided additional spectroscopic redshift confirmation for some of the high-redshift candidates (e.g. Stark et al., 2017; Hashimoto et al., 2018; Jiang et al., 2021; Inami et al., 2017). These telescopes have been used predominantly to constrain high-redshift population statistics such as the UV luminosity function (Bouwens et al., 2015; Livermore, Finkelstein & Lotz, 2017), the global growth of star formation or UV luminosity density as a function of redshift (Ellis et al., 2013; Oesch et al., 2018), and galaxy morphology (e.g. Kawamata et al., 2018; Bouwens et al., 2022). The photometry of individual galaxies has been used to measure early star formation histories (SFHs) with spectral energy distribution (SED) fitting codes (e.g. Chevallard and Charlot, 2016; Carnall et al., 2018; Johnson et al., 2021) or to infer the presence of strong emission lines (e.g. by IR excesses, Roberts-Borsani et al., 2016), which can be used to measure quantities such as the ionizing photon production efficiency (\(\xi_{\rm ion}\), e.g. Bouwens et al., 2016; Stefanon et al., 2022). JWST significantly improves upon HST+Spitzer data for numerous reasons. Not only is JWST more sensitive and has higher spatial resolution compared to its predecessors, but it also has significantly more filters that probe a wavelength range overlapping and between HST and Spitzer that is ideal for capturing rest-frame UV and optical photons at high-redshift. Hence the accuracy by which one can constrain quantities such as the SFH or stellar mass of a galaxy, purely from photometry is greatly improved with JWST. Likewise, the additional spatial resolution and sensitivity allows for spatially resolved constraints on these properties, deeper |
2309.05618 | Progress in Direct Measurements of the Hubble Constant | One of the most exciting and pressing issues in cosmology today is the
discrepancy between some measurements of the local Hubble constant and other
values of the expansion rate inferred from the cosmic microwave background
(CMB) radiation. Resolving these differences holds the potential for the
discovery of new physics beyond the standard model of cosmology: Lambda Cold
Dark Matter (LCDM), a successful model that has been in place for more than 20
years. Given both the fundamental significance of this outstanding discrepancy,
and the many-decades-long effort to increase the accuracy of the extragalactic
distance scale, it is critical to demonstrate that the local measurements are
convincingly free from residual systematic errors. We review the progress over
the past quarter century in measurements of the local value of the Hubble
constant, and discuss remaining challenges. Particularly exciting are new data
from the James Webb Space Telescope (JWST). JWST is delivering high-resolution
near-infrared imaging data to both test for and to address directly several of
the systematic uncertainties that have historically limited the accuracy of the
extragalactic distance scale. We present an overview of our new JWST program to
observe Cepheids, TRGB and JAGB stars. For the first galaxy in our program, NGC
7250, the high-resolution JWST images demonstrate that many of the Cepheids
observed with the Hubble Space Telescope (HST) are significantly crowded by
nearby neighbors. Avoiding the more significantly crowded variables, the
scatter in the JWST near-infrared (NIR) Cepheid period-luminosity relation is
decreased by a factor of two compared to those from HST, illustrating the power
of JWST for improvements to local measurements of Ho. Ultimately, these data
will either confirm the standard model, or provide robust evidence for the
inclusion of additional new physics. | Wendy L. Freedman, Barry F. Madore | 2023-09-11T17:05:49Z | http://arxiv.org/abs/2309.05618v2 | # Progress in Direct Measurements of the Hubble Constant
###### Abstract
One of the most exciting and pressing issues in cosmology today is the discrepancy between some measurements of the local Hubble constant and other values of the expansion rate inferred from the observed temperature and polarization fluctuations in the cosmic microwave background (CMB) radiation. Resolving these differences holds the potential for the discovery of new physics beyond the standard model of cosmology: Lambda Cold Dark Matter (\(\Lambda\)CDM), a successful model that has been in place for more than 20 years. Given both the fundamental significance of this outstanding discrepancy, and the many-decades-long effort to increase the accuracy of the extragalactic distance scale, it is critical to demonstrate that the local measurements are convincingly free from residual systematic errors. We review the progress over the past quarter century in measurements of the local value of the Hubble constant, and discuss remaining challenges. Particularly exciting are new data from the James Webb Space Telescope (_JWST_), for which we present an overview of our program and first results. We focus in particular on Cepheids and the Tip of the Red Giant Branch (TRGB) stars, as well as a relatively new method, the JAGB (J-Region Asymptotic Giant Branch) method, all methods that currently exhibit the demonstrably smallest statistical and systematic uncertainties. _JWST_ is delivering high-resolution near-infrared imaging data to both test for and to address directly several of the systematic uncertainties that have historically limited the accuracy of extragalactic distance scale measurements (e.g., the dimming effects of interstellar dust, chemical composition differences in the atmospheres of stars, and the crowding and blending of Cepheids contaminated by nearby previously unresolved stars). For the first galaxy in our program, NGC 7250, the high-resolution _JWST_ images demonstrate that many of the Cepheids observed with the Hubble Space Telescope (HST) are significantly crowded by nearby neighbors. Avoiding the more significantly crowded variables, the scatter in the _JWST_ near-infrared (NIR) Cepheid PL relation is decreased by a factor of two compared to those from HST, illustrating the power of _JWST_ for improvements to local measurements of \(H_{0}\). Ultimately, these data will either confirm the standard model, or provide robust evidence for the inclusion of additional new physics.
Wendy L. Freedman and Barry F. Madore
The Department of Astronomy & Astrophysics, and the Kavli Institute for Cosmological Physics, University of Chicago, 5640 S. Ellis Ave., Chicago, IL, 60637
[email protected]
###### Contents
* 1 Introduction
* 2 The Landscape at the Turn of the Century: The Hubble Key Project
* 3 Progress Since the Key Project: The Cepheid Distance Scale: 2001-2023
* 3.1 Chicago Carnegie Hubble Program (CCHP)
* 3.2 Supernova Ho for the Equation of State (SHoES)
* 4 Tip of the Red Giant Branch (TRGB) Distance Scale: 1993-2023
* 4.1 Chicago Carnegie Hubble Program (CCHP) and the TRGB
* 4.2 Other Determinations of the Hubble Constant based on the TRGB
* 5 Anchors to the Distance Scale
* 5.1 Large Magellanic Cloud (LMC)
* 5.2 Milky Way Parallaxes: Hipparcos, HST and Gaia
* 5.3 NGC 4258
* 6 Type Ia Supernovae
* 6.1 Carnegie Supernova Project (CSP)
* 6.2 Pantheon+
* 7 J-Region Asymptotic Giant Branch (JAGB) Distance Scale: 2000-2023
* 8 Other Methods
* 8.1 Surface Brightness Fluctuations (SBF)
* 8.2 Masers
* 8.3 Strong Gravitational Lensing
* 8.4 Gravitational Wave Sirens
* 9 The Hubble Constant and the Impact of the James Webb Space Telescope (JWST)
* 9.1 JWST Cepheid Program
* 9.2 JWST Tip of the Red Giant Branch Program
* 9.3 JWST Resolved Carbon-Rich AGB Stars Program
* 10 Is There a Crisis in Cosmology?
* 11 Appendix: Intercomparison of the Cepheid, TRGB and JAGB Methods and Sensitivities
* 11.1 General Remarks
* 11.2 Photometry
* 11.3 Crowding
* 11.4 Mass
* 11.5 Evolutionary Status
* 11.6 Spatial Distribution
* 11.7 Metallicity
* 11.8 Binarity
* 11.9 Mass Loss
* 11.10 Boundary Conditions
* 11.11 Correlated Variance
* 11.12 Mean Magnitudes
* 11.13 Optimal Bandpasses
* 11.14 Comments
## 1 Introduction
The year 2023 marks 100 years since Edwin Hubble's famous discovery of a single Cepheid variable in the Andromeda galaxy. Hubble's subsequent measurements of extragalactic distances were based (in part) on the Cepheid Period-Luminosity (PL) relation, aka the Leavitt Law [1]. Correlating these distances with spectral measurements of radial (line-of-sight) velocities [2] ultimately led to the discovery of the expansion of the universe in 1929 [3], and ushered in modern cosmology.1
Footnote 1: It is now appreciated that Lemaître [4] had earlier found a mathematical solution for an expanding universe, recognizing that it provided a natural explanation for the observed recession velocities of galaxies, but these results were published in French in the Annals of the Scientific Society of Brussels, and at that time were not widely accessible.
At the time of its launch in 1990, one of the highest priorities for the Hubble Space Telescope (HST) was to convincingly measure the current rate of the expansion of the universe, the Hubble constant (\(H_{0}\)), to an accuracy of 10%. In a Cepheid-based calibration, the Hubble Key Project team in 2001 obtained a value of \(H_{0}\) = 72 \(\pm\) 3 (statistical) \(\pm\) 7 (systematic) [5]. Two additional decades of effort with HST, \(Spitzer\), and many additional ground-based telescopes, subsequently improved the measurements of \(H_{0}\), with estimated accuracies currently falling in the 2-5% range [6]. The Cepheid calibration of \(H_{0}\)[7, 8] continues to yield values of \(H_{0}\sim 73\,\,\,{\rm km\,s^{-1}\,Mpc^{-1}}\), whereas measurements using the tip of the red giant branch (TRGB) [9, 10] yield slightly lower values, closer to 70 \(\,\,{\rm km\,s^{-1}\,Mpc^{-1}}\). Recent estimates of \(H_{0}\) from CMB measurements have extremely high precision, with results from the Planck satellite [11] yielding \(H_{0}=67.4\,\pm\,0.5\,\,\,{\rm km\,s^{-1}\,Mpc^{-1}}\) (better than 1%)2.This level of precision is new for the field of observational cosmology, where until as recently as a couple of decades ago, a factor-of-two uncertainty had persisted for several decades. In a sense, this new level of precision has led to high expectations for other types of cosmological measurements. Yet, obtaining equally high precision observations of the local measurements of \(H_{0}\) remains a formidable challenge. At face value, the inconsistency between the local value of the Hubble constant and the cosmologically modeled value could be interpreted as an inadequacy in the theory, thereby begging the questions: Is cosmology in a crisis? And is our current model of the universe now in need of new physics?
Footnote 2: For a discussion of possible systematics in the CMB analysis see [12] and references therein.
While acoustic oscillations of the ionized plasma in the early universe are well understood and based on linear physics, it is important to keep in mind that the astrophysics of stellar distance indicators is less predictive from first principles; and the requirements of accurate _absolute_ calibrations of the local distance scale at a comparable (1%) level, together with the identification and elimination of systematic effects for evolving stars (which may be located in dusty, crowded regions), are tall orders. Given the current challenges in obtaining percent
level accuracy in the local distance measurements, it may be premature to be claiming either confirmation, or the refutation, of the need for physics beyond the standard model [13]. These remaining challenges underscore the need for a definitive measure of \(H_{0}\) locally, which in turn demands a complete and independently confirmed assessment of its total (statistical and systematic) uncertainties [14].
from _JWST_. In an appendix, we compare and contrast the strengths and weaknesses of the most promising methods in use today for measuring distances in the local universe. The prospects are good for a resolution to the _local_ (distance scale) version of the \(H_{0}\) tension. The past 20 years have been referred to as the era of 'precision cosmology'. We must now ensure that we have convincingly entered the era of 'accurate cosmology'.
## 2 The Landscape at the Turn of the Century: The Hubble Key Project
The launch of HST in 1990 provided the opportunity to undertake a major program to calibrate the extragalactic distance scale. The HST Key Project was designed to measure the Hubble constant to a total (statistical plus systematic) uncertainty of \(\pm 10\%\)[5]. Given that the dominant sources of error were clearly systematic in nature, the approach taken in the Key Project was to measure \(H_{0}\) by intercomparing several different methods, each having minimally overlapping systematics. The goal was to extend and apply the Cepheid distance scale beyond what could be achieved from the ground, and then to assess and to quantify the overall systematic errors in the measurement of \(H_{0}\). Observations were obtained in the V band (F555W; 12 epochs within a 60-day window + 1 additional epoch a year later, to avoid aliasing effects) and the I band (F814W; 4 epochs). The roll angle of the telescope was held fixed for all of the observations to maximize overlap of the different epochs and to facilitate the photometric measurements. Data were taken with a power-law spacing to minimize aliasing effects [15]. In addition, a test for the metallicity dependence of the Cepheid PL relation was undertaken.
Cepheids are supergiants, but they are still not sufficiently bright that they can be used to determine distances far enough away to sample the unperturbed cosmic Hubble flow. Large-scale flows generated by major clusters, filaments and voids induce so-called "peculiar velocities" on one another and on individual field galaxies. This ubiquitous source of noise in the velocity field must either be modelled out, averaged over large samples, or diminished in its relative impact by going out to distances where the Hubble flow is dominant. To make that leap, secondary distance indicators of higher luminosity (but often of lower precision and accuracy) were invoked. The secondary distance indicators specifically targeted by the Key Project for zero-point calibration by the Cepheids were the Tully-Fisher relation, the Surface Brightness Fluctuation method and the Fundamental Plane of galaxies, as well as two types of extremely bright explosive events, Type I and Type II supernovae.
None of the secondary distance indicators have first-principles physics backing them up; they are largely empirical distance indicators. Type Ia supernovae have most recently become the secondary indicator of choice because of 1) their brightness, which allows them to probe cosmological distances, 2) their standardizable maximum-light luminosities and 3) their low scatter in the Hubble diagram. At lower redshifts these candles are found to have absolute magnitudes with a dispersion of less than 5-6 percent per event [16]. Establishing the absolute zero point of Type Ia supernovae quickly became the _de facto_ standard means of deriving the local value of the expansion rate of the universe, with Cepheids providing the zero point calibration.
The final result from the HST Key Project, \(H_{0}=72\,\pm 3\) (stat) \(\pm\) 7 (sys) \(\,\rm km\,s^{-1}\,Mpc^{-1}\), was based on Cepheid distances to 31 galaxies, 18 of which were newly measured as part of the Key Project. The largest contribution to the systematic uncertainty (5%), at that time, was that of the distance to the calibrating galaxy, the Large Magellanic Cloud (LMC), to which the distance had been measured using a wide variety of independent techniques.
In what follows, we discuss in detail the two currently highest-precision methods for measuring distances to nearby galaxies, and for providing a tie-in to SNe Ia: Cepheids and the Tip of the Red Giant Branch (TRGB) method. For nearby galaxies, these two methods currently have the lowest measured scatter, their distances can be compared _galaxy by galaxy within the same galaxies_, and they can be applied individually to samples of dozens of galaxies, in sharp contrast to other techniques at the moment.
We pay particular attention to systematic uncertainties, the essential issue in the measurement of galaxy distances, the determination of \(H_{0}\), and for settling the question of whether there is additional physics beyond \(\Lambda\)CDM.
## 3 Progress Since the Key Project: The Cepheid Distance Scale: 2001-2023
Cepheids have held the place of being the gold standard for the measurement of extragalactic distances ever since Edwin Hubble's discovery of the expansion. A recent review of Cepheids as distance indicators is given by Freedman & Madore [17]. For more details on the nature of Cepheid variables themselves, the reader is also referred to some earlier reviews[18, 19, 20, 21].
Following on the Key Project, Macri et al. [22] obtained H band (F160W) observations of a subset of the Key Project galaxies using NICMOS on HST. Their findings supported the assumption of universality for the extinction law for Cepheids: the VI photometry used in the Key Project Cepheid distance scale agreed with the augmented VIH distances employing the additional near-infrared observations. This result suggested that there is no (extinction law) advantage in going to the extra effort to move the Cepheid calibration and its application into the IR. The study additionally showed that the lower spatial resolution in the H band imaging data led to more serious crowding effects than in the optical, an issue of even more concern as the sample of galaxies is augmented to include galaxies farther away.
### Chicago Carnegie Hubble Program (CCHP)
The goal of the Chicago Carnegie Hubble Program (CCHP) is to increase the accuracy of measurements of \(H_{0}\). Initially begun 15 years ago (as the Carnegie Hubble Program), the program was designed as a follow-up to the HST Key Project, taking advantage of the mid-infrared capabilities of the Infrared Array Camera (\(IRAC\)) on \(Spitzer\), and was undertaken in anticipation of the launches of _Gaia_ and _JWST_[7]. It followed up on HST \(NICMOS\) observations made in the F160W bandpass [22] for Cepheids in 12 nearby galaxies, and the detailed JHK (complete lightcurve coverage) near-infrared, ground-based study of 92 Cepheids in the LMC[23]. Over time, the program was expanded to include not only Cepheids, but also TRGB [24, 25] and J-region Asymptotic Giant Branch (_JAGB_) stars [26, 27, 28], each of these being independent means of calibrating Type Ia supernovae (SNe Ia) and thereby, \(H_{0}\). The current focus of the CCHP is directed at exploiting the superb infrared sensitivity and high spatial resolution of the _JWST_ to improve the accuracy and precision of all three of these methods.
### Supernova Ho for the Equation of State (SHoES)
The SHoES program [29, 8] has the goal of using _HST/ACS_ and _HST/WFC3_ to extend and improve the Cepheid calibration of SNe Ia for a measurement of \(H_{0}\). Most recently, they have obtained NIR observations of Cepheids in 42 SN Ia host galaxies with the aim of reducing the systematic uncertainties due to reddening and metallicity. Reddening corrections are obtained using a small number of (2 to 3) single-phase observations in the F814W and F555W
bands from _HST/ACS_. The distances are based primarily on \(\sim\)6 low-signal-to-noise observations taken in the F160W (\(H\)) band. The resulting scatter in the F160W period-luminosity relations is typically of order \(\pm\)0.4 - 0.5 mag, which is about a factor of four greater than the intrinsic dispersion observed in the uncrowded sample of Cepheids in the LMC, for instance. The zero-point calibration is set by Early Data Release 3 (EDR3) geometric parallaxes, masers in the galaxy NGC 4258, and detached eclipsing binaries in the LMC. Their most recent result quotes a 1% uncertainty with \(H_{0}=73.04\pm 1.04\) \(\,{\rm km\,s^{-1}\,Mpc^{-1}}\), based on their sample of 42 galaxies with distances in the range from 7 out to 80 Mpc.
Figure 2: A sampling of reddening-free near-IR Wesenheit magnitude PL relations adapted from [8]. Note the four-times larger scatter seen in virtually all of the SHoES galaxies compared to the fiducial scatter seen in the LMC and M31 (top row).
## 4 Tip of the Red Giant Branch (TRGB) Distance Scale: 1993-2023
The TRGB provides one of the most precise and accurate means of measuring distances in the local universe. Observed color-magnitude diagrams of the Population II stars in halos of nearby galaxies reveal a sharp discontinuity in the red giant branch (RGB) luminosity function at a well-determined magnitude. This feature is easily identified and corresponds to the core helium-flash luminosity at the end phase of RGB evolution for low-mass stars. As a result, the TRGB provides a superb standard candle in the I band [30; 31; 32; 33; 24; 34], and it is a standardizable candle in the near infrared [35; 36; 37; 38]. The method is described in more detail in a number of reviews [39; 20; 40].
In brief, the underlying theory for why the TRGB is an excellent standard candle is well-developed [41; 42; 43; 44; 45]. For low-mass stars with masses \(M\lesssim 2M_{\odot}\), their evolution ascending the red giant branch consists of a shell that is burning hydrogen immediately above a degenerate helium core. The mass of the helium core increases with freshly formed helium from the shell burning, until the core mass reaches a threshold value of about 0.5 M\({}_{\odot}\)**independent of the initial mass of the star**. At this stage the core will have reached a temperature of about \(10^{8}\) degrees, at which point the triple-alpha process (helium burning) can commence. Because the core is degenerate and cannot expand, a thermonuclear runaway ensues, injecting energy that overcomes the core degeneracy, and changing the equation of state. The star then rapidly evolves off the red giant branch to the (lower-luminosity) horizontal branch or the red clump, thereafter undergoing sustained core helium burning.
Figure 3: Left Panel – An example of a halo field chosen to be along the minor axis of the galaxy NGC 4258. Right Panel – The I-band vs (V-I) color-magnitude diagram for the RGB stars detected in the halo of NGC 4258. To the far right is the Sobel filter edge-detector response function applied to the RGB luminosity function. The peak in the edge detector indicates the discontinuity defining the TRGB. Adapted from [46].
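To make the edge-detection step concrete, the following minimal sketch bins the I-band magnitudes of the halo stars into a luminosity function and applies a simple first-derivative (Sobel-type) kernel; the peak response marks the apparent TRGB magnitude. This is only an illustration: the array names and bin width are our own choices, and the actual CCHP measurement additionally smooths the luminosity function before filtering and accounts for photometric errors.

```python
import numpy as np

def detect_trgb(i_mags, bin_width=0.02):
    """Return the apparent I-band magnitude of the TRGB from an array of halo-star magnitudes."""
    i_mags = np.asarray(i_mags, dtype=float)
    bins = np.arange(i_mags.min(), i_mags.max() + bin_width, bin_width)
    counts, edges = np.histogram(i_mags, bins=bins)      # luminosity function N(m)
    # A [1, 0, -1] kernel under np.convolve yields N(m + dm) - N(m - dm),
    # which peaks where the star counts jump sharply faintward, i.e. at the tip.
    response = np.convolve(counts, [1, 0, -1], mode="same")
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers[np.argmax(response)]
```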
The TRGB method has been used widely for the determination of distances to galaxies of various types in the local universe. The number of applications of the TRGB method far exceeds the number of Cepheid distance measurements3. The reason is practical: Cepheids are variable stars requiring observations at many epochs to determine periods, amplitudes and light curves for the construction of time-averaged period-luminosity relations. In contrast, TRGB stars are non-variable and have constant I-band magnitudes as a function of color and metallicity, requiring only a single-epoch observation. In addition, TRGB stars can be observed in galaxies of all morphological types, whereas Cepheids are present only in late-type galaxies.
Footnote 3: Approximately 1,000 TRGB distances to about 300 galaxies are compiled in NED; less than 70 galaxies have Cepheid distances to date.
### Chicago Carnegie Hubble Program (CCHP) and the TRGB
One of the primary goals of the Chicago Carnegie Hubble Program (CCHP) is to pursue an alternative route to the calibration of SNe Ia and thereby provide an independent determination of \(H_{0}\) via measurements of the TRGB in nearby galaxies. This method has a precision equal to or better than the Cepheid Leavitt law, and its current accuracy is also comparable. The calibration of the zero point of the I-band TRGB method and its application to the extragalactic distance scale has recently been reviewed by Freedman [25].
Freedman et al. [24] presented a determination of \(H_{0}\) based on TRGB distances to 15 galaxies that were hosts to 18 Type Ia supernovae (SNe Ia). The _HST/ACS_ fields were selected to target the halos of the galaxies where the effects of dust are minimal, and, at the same time, to specifically avoid contamination by younger and brighter disk asymptotic giant branch (AGB) stars. This calibration was then applied to a sample of 99 significantly more distant SNe Ia that were observed as part of the Carnegie Supernova Project (CSP)[47]. The calibration has been updated [9; 25], and is currently based on our independent calibrations of the TRGB absolute magnitude that are internally self consistent at the 1% level. The method yields a value of \(H_{0}\) = 69.8 \(\pm\) 0.6 (stat) \(\pm\) 1.6 (sys) \(\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\). This value differs only at the 1.2\(\sigma\) level from the most recent Planck Collaboration [11] value of \(H_{0}\). It is smaller than previous estimates of the Cepheid calibration of SNe Ia [7; 8] but still agrees well, at better than the 2\(\sigma\) level. Alternatively, adopting the SNe Ia catalog from the _SHoES_ collaboration [48] results in little change with \(H_{0}\) = 70.4 \(\pm\) 1.4 \(\pm\) 1.6 \(\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\)[24].
### Other Determinations of the Hubble Constant based on the TRGB
Recently members of the _SHoES_ collaboration [49] have undertaken to provide an 'optimized unsupervised algorithm' (called CATS) to measure TRGB distances and determine \(H_{0}\). They find a value of \(H_{0}\) = 73.22 \(\pm\) 2.06 \(\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\), apparently in better agreement with the Cepheid calibration. However, it can be easily demonstrated that this approach currently contains serious flaws.
1) Take, for example, two of the galaxies in their sample, NGC 4038 and NGC 4536. The TRGB distances that their unsupervised algorithm gives the highest weight to (they allow for several TRGB distances to an individual galaxy and employ various means of combining them) are significantly closer than their own published _SHoES_ Cepheid distance measurements [8] by 0.7 and 0.6 mag, that is 30% and 40% offsets in distance, respectively, ultimately pulling \(H_{0}\) to higher values. These TRGB distances are the ones that their unsupervised method ranks as "better measurements". This result stands in stark contrast to the
excellent agreement between the published Cepheid distances in Riess et al. [8] and TRGB distances in Freedman et al. [24], which in the mean, agree to 0.007 mag.
2) In many cases, the unsupervised method mistakes the well-known asymptotic giant branch (AGB) for the RGB (e.g., NGC 4038). This is a documented problem that has been addressed by many authors previously in the literature [50; 51; 52].
3) That these distances are problematic is additionally demonstrated by the fact that in adopting their unsupervised TRGB distances, the \(rms\) dispersion in the SNe Ia peak magnitude for the TRGB SNe Ia host galaxies increases from \(\pm 0.12\) mag [24] to \(\pm 0.346\) mag (see Figure 4, Hoyt private communication). Their re-analysis increases the scatter in the SNe Ia peak magnitudes to a level that is more than a factor of three greater than that seen in the distant SNe Ia, \(\pm 0.10\) mag [53].
Although an algorithmic method might be desirable in future studies, it is unambiguously clear that this current unsupervised method itself necessitates supervision, and the results cannot be used to claim that this approach brings the Cepheid and TRGB distance scales into better agreement.
Comparisons of the CCHP TRGB distances with those in the Extragalactic Distance Database (EDD) were undertaken by [54] and [55]. These comparisons provide an important external check of the TRGB method since different approaches to the analysis were taken, and they were carried out completely independently by separate research groups. For example, EDD used DOLPHOT and applied a maximum likelihood fitting method, whereas the CCHP used DAOPHOT and an edge-detection (Sobel filter) algorithm. Adopting a consistent NGC 4258 calibration, the difference is only 0.001 \(\pm 0.048\) mag [54] (see Figure 5); i.e., the analyses and the relative distances agree remarkably well. A remaining discrepancy is the absolute calibration of the TRGB at the 0.06 mag level (see [25] and [55], line 1, Table 4). This difference results from the choice to calibrate either in the outer halo (CCHP) or the disk of NGC 4258 (EDD). The outer halo provides a dust-free and uncrowded environment.
Figure 4: Histogram of the peak SNe Ia magnitudes from Tables 1 and 2 and Equation 3 of Scolnic et al. [49] (Hoyt, private comm.) The cases where the AGB has been mistakenly identified for the RGB are shown in orange. The dispersion in the SNe Ia peak magnitudes is erroneously increased as a result of these anomalously fainter magnitudes, biasing the result and leading to a higher value of \(H_{0}\).
## 5 Anchors to the Distance Scale
At present the overall accuracy in the determination of \(H_{0}\) is limited by the small number of galaxies that 'anchor' the Cepheid and TRGB distance scales; that is, galaxies for which there are geometric distances, acting as the first stepping stones out to the more distant galaxies. In the case of the Cepheid distance scale, there are only three such anchors: the Milky Way, the Large Magellanic Cloud (LMC) and the maser galaxy NGC 4258. In the case of the TRGB, there is one additional anchor, the Small Magellanic Cloud (SMC). JAGB stars also have the Milky Way, LMC, SMC and NGC 4258 as anchors.
### Large Magellanic Cloud (LMC)
At the conclusion of the Key Project, the largest component of the systematic error budget was the contribution from the adopted uncertainty to the distance of the LMC. A distance modulus to the LMC of 18.5 mag was adopted, with a very conservative uncertainty of \(\pm\) 0.1 magnitudes, reflecting the wide range of published distance moduli at the time (18.1 to 18.7 mag) [5].
The distance modulus to the LMC has been improved significantly since the time of the Key Project, based on measurements of 20 detached eclipsing binary (DEB) stars in the LMC [57]. This method gives a distance modulus of 18.477 \(\pm\) 0.004 (stat) \(\pm\) 0.026 (sys), corresponding to a distance uncertainty of only 1.2%. The DEB value is in exact agreement with measurements of the Cepheid Leavitt law based on 3.6 \(\mu\)m mid-infrared measurements from the Spitzer Space Telescope [58, 59]. Furthermore this value is only 0.023 mag different from the Key Project value, meaning that the LMC zero-point calibration adopted at that juncture has withstood the test of time, at a \(\sim\)1% level of accuracy.
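For scale, the standard relation \(\mu_{0}=5\log_{10}(d/10\,{\rm pc})\) converts the DEB modulus into a linear distance of

\[d_{\rm LMC}=10^{\,\mu_{0}/5+1}\ {\rm pc}=10^{\,18.477/5+1}\ {\rm pc}\simeq 49.6\ {\rm kpc}.\]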
Figure 5: A comparison of TRGB distances from the EDD [55] and the CCHP [24, 34, 54, 56]. The distances are calibrated relative to NGC 4258 (blue star). A line of unit slope is shown in the top panel. In the bottom panel, the median offset value is shown (dashed line), as well as at zero offset (solid line). These two independent analyses show excellent agreement.
### Milky Way Parallaxes: Hipparcos, HST and Gaia
There have also been enormous gains in the measurement of parallaxes to Cepheids in the Milky Way in the past 20 years, from _Hipparcos_[60] to HST and its Fine Guidance Sensor [61, 62, 63] (which provided the calibration for the Spitzer Cepheid PL relation [7]), culminating most recently with measurements from _Gaia_[64, 65]. The _Gaia_ measurements are revolutionizing studies of the Milky Way; for example, see [66].
The _Gaia_ Early Data Release 3 (EDR3) database [67] contains parallaxes, proper motions, positions and photometry for 1.8 billion sources brighter than G = 21 mag [65]. At the end of its mission, _Gaia_ is expected to provide astrometry reaching tens of microarcsecond accuracy. For Milky Way Cepheids, TRGB stars and other distance indicators, this level of accuracy will ultimately set the absolute calibration to an accuracy of \(<\)1%, an accuracy critical for helping to resolve the \(H_{0}\) tension. However, this challenging high accuracy has not yet been achieved owing to a zero-point offset [68] resulting from the fact that the basic angle between the two _Gaia_ telescopes is varying. There is a variance in the parallaxes (the systematic uncertainty measured relative to the background-quasar reference frame, defined by 550,000 quasars in the International Celestial Reference System) and a zero-point offset of -17 \(\mu\)as (in the sense that the _Gaia_ parallaxes are too small). Unfortunately this offset results in a degeneracy with the absolute parallax, and is limiting the ultimate accuracy required to reach the 1% target. In addition, these variations lead to zero-point corrections that are a function of the magnitude, color, and position of the star on the sky [69, 70]. The _Gaia_ Collaboration has emphasized [71, 72] that not only is there a significant variance in these measured offsets over the sky, but the EDR3 uncertainties in the parallaxes for different objects are correlated as a function of their angular separations [64, 65].
Furthermore, _Gaia_ EDR3 parallax uncertainties have also been shown to be underestimated [72], with the 'unit weight' uncertainties of the catalog (the factors by which the formal errors need to be increased to reflect the actual level of uncertainty) having a multiplicative factor of \(\sim\)1.2 for the majority of stars, but in some instances rising to a factor of more than 2. Unfortunately, the most significant underestimates occur for brighter stars [73], including the magnitude range over which many of the Milky Way field Cepheids lie. Of additional and serious concern for the Cepheid distance scale, the parallax offset adopted turns out to be degenerate with the metallicity coefficient adopted [74], which, together with the uncertainties in the measured parallaxes, leads to a systematic floor at the 4% level. With the exception of [8], these studies agree that a 1% calibration based on _Gaia_ parallaxes has not yet been established.
### Ngc 4258
The nearby spiral galaxy NGC 4258, at a distance of 7.6 Mpc, provides an additional anchor or zero-point calibration for the local distance scale. This galaxy is host to a sample of H\({}_{2}\)O megamasers within an accretion disk that is rotating about a supermassive black hole, from which a geometric distance to the galaxy has been measured [75, 76]. (For more details on the method, see Section 8.2.) The geometric distance modulus measured most recently to NGC 4258 is \(\mu_{o}\) = 29.397 \(\pm\) 0.033 mag [76], a 1.5% measurement.
As a consistency check, the distance to NGC 4258 can be determined based on HST measurements of the TRGB in its outer halo, calibrated by the LMC [77]. Adopting the measured apparent TRGB magnitude of \(m^{N4258}_{0,F814W}=25.347\pm 0.014\pm 0.005\) [34] results in a distance modulus of \(\mu_{o}\) = 29.392 \(\pm\) 0.018 \(\pm\) 0.032 mag, which agrees with the maser distance modulus of 29.397 \(\pm\) 0.033 mag at a level of better than 1% (\(<\)0.2\(\sigma\)).
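As a worked check of the numbers above, the modulus follows from \(\mu = m - M\) and the distance from \(d = 10^{\mu/5 - 5}\) Mpc; the absolute TRGB magnitude used below is an assumed LMC-based value chosen only to be consistent with the modulus quoted here, not an independent calibration.

```python
# Worked check: apparent TRGB magnitude -> distance modulus -> distance, vs the maser.
m_trgb = 25.347    # apparent F814W TRGB magnitude in NGC 4258
M_trgb = -4.045    # assumed LMC-calibrated absolute magnitude (illustrative value)

mu_trgb = m_trgb - M_trgb              # distance modulus
d_mpc = 10 ** (mu_trgb / 5.0 - 5.0)    # mu = 5 log10(d / 10 pc)
mu_maser = 29.397

print(f"TRGB : mu = {mu_trgb:.3f} mag,  d = {d_mpc:.2f} Mpc")
print(f"maser: mu = {mu_maser:.3f} mag")
print(f"difference: {mu_trgb - mu_maser:+.3f} mag "
      f"= {100 * (10 ** ((mu_trgb - mu_maser) / 5.0) - 1):+.2f}% in distance")
```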
The Cepheid calibration, however, does not yield as good agreement with that of the maser distance, and ultimately depends on the sensitivity of Cepheid luminosities to metallicity. A calibration of the Cepheid distance to NGC 4258 based on the LMC differs from the maser distance by 2.0-3.5\(\sigma\), adopting different published slopes for the metallicity correction [78]. However, since the Milky Way and NGC 4258 metallicities are very similar, a calibration of NGC 4258 based on the Milky Way should be independent of a metallicity effect. Yet, if the Milky Way is adopted as the anchor galaxy to determine the Cepheid distance to NGC 4258, a distance modulus of 29.242 \(\pm\) 0.052 is obtained, which differs from the maser distance by 7% at a 2\(\sigma\) level of significance. These kinds of differences in the anchors to the distance scale are very important to resolve in the context of assuring that a 1% \(H_{0}\) value is in hand.
## 6 Type Ia Supernovae
The number of well-observed SNe Ia useful for measuring \(H_{0}\) has continued to grow with time [79]. These include the nearby SNe Ia out to distances of \(\sim\)30-40 Mpc that can be calibrated using HST distances from the TRGB or Cepheids. If systematic effects due to crowding can be established to be small (but see Section 9.1 below), perhaps the calibration can be reliably extended to \(\gtrsim\)50 Mpc. The _SHoES_ collaboration now has 42 galaxies for which Cepheids have been discovered, out to a distance of 80 Mpc. Nearby SNe Ia that occur in galaxies for which the TRGB or Cepheids can be measured with HST appear only about once per year [8].
We discuss below two programs that currently calibrate the Cepheid and TRGB distance scales: the Carnegie Supernova Project (CSP) and Pantheon+.
### Carnegie Supernova Project (CSP)
The goal of the CSP was to provide a homogeneous, intensive, high-cadence, multi-wavelength (\(uBVgriYJH\)) follow-up of nearby SNe Ia and SN II [80]. Rather than a survey program, the aim was to obtain a consistent data set with careful attention to photometric precision and systematics, critical for applications to cosmology, as well as for studying the physical properties of the supernovae themselves4. The program utilized a fixed set of instruments, photometric standard stars, and instrumental reduction procedures, catching most of the supernovae well before maximum, and with high signal to noise, avoiding the challenges otherwise faced in minimizing systematic differences between multiple data sets/instruments/etc. [47]. Optical spectra were also obtained with high cadence [81]. The bulk of the observations were carried out at Las Campanas Observatory using the 1-m Swope and 2.5-m du Pont telescopes. The first part of the CSP (CSP-I) was carried out from 2004-2009. A second phase of the CSP (CSP-II) was carried out from 2011-2015, and was optimized for the near-infrared [82, 83].
Footnote 4: The CSP data are available at [http://csp.obs.carnegiescience.edu/data](http://csp.obs.carnegiescience.edu/data).
The reduction of the CSP light-curve photometry was undertaken using an analysis package called SNooPy [53]. The Hubble diagram for the CSP-I SNe Ia sample, calibrated by Cepheid distances from [84] was presented in [53]. These authors found a value of \(H_{0}\) = 73.2 \(\pm\) 2.3 \(\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\) based on H-band data; and a value of \(H_{0}\) = 72.7 \(\pm\) 2.1 \(\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\) using B-band data. A TRGB calibration of the CSP-I sample was given by [24] and updated in [9, 25]. As discussed in Section 4.1 above, the TRGB calibration gives a slightly lower value of \(H_{0}\) = 69.8 \(\pm\) 0.6 (stat) \(\pm\) 1.6 (sys) \(\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\).
Recently [85] have used the SNe Ia data from the CSP-I and II (an increase by a factor of three in the numbers of SNe Ia over CSP-I alone) to calibrate the Cepheid distance scale, as well as the TRGB (and Surface Brightness Fluctuations, a secondary distance indicator). Using B-band light-curve fits, they find \(H_{0}\) = 73.38 \(\pm\) 0.73 \(\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\) based on a calibration of Cepheids. For the TRGB calibration, they find \(H_{0}\) = 69.88 \(\pm\) 0.76 \(\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\), both in good agreement with previously published Cepheid and TRGB studies. They conclude that the differences amongst the various calibrators can be explained as a result of systematic errors, and that taking these into account removes the existing \(H_{0}\) tension (see also Section 10).
### Pantheon+
The Pantheon+ analysis [86] currently consists of 1550 individual SNe Ia, superseding earlier Pantheon [87] and Joint Light-Curve [88] analyses. The analysis knits together and standardizes the B-band photometry from 18 individual surveys obtained with a wide variety of telescopes and instruments5. The sample includes SNe Ia in the redshift range 0 \(<\) z \(<\) 2.3; the subset used for constraining \(H_{0}\) are those for which 0.023 \(<\) z \(<\) 0.15.
Footnote 5: The Pantheon+ catalog is available at [https://github.com/PantheonPlusSH0ES/DataRelease](https://github.com/PantheonPlusSH0ES/DataRelease).
The _SHoES_ Cepheid calibration of the Pantheon+ SNe Ia sample from [86] results in a value of \(H_{0}\) = 73.04 \(\pm\) 1.04 \(\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\) for the 277 SNe Ia with 0.023 \(<\) z \(<\) 0.15, as noted previously in Section 3.2.
Ultimately, it is expected that the SNe Ia samples will continue to grow as future large-scale (and homogeneous) surveys like the Legacy Survey of Space and Time (LSST) [89] and the Nancy Grace Roman Space Telescope [90] become available.
## 7 J-Region Asymptotic Giant Branch (JAGB) Distance Scale: 2000-2023
The JAGB method is emerging as one of the most promising methods for measuring the distances to galaxies in the local universe. JAGB stars were first identified as a distinct class of objects in the LMC [91, 92], demonstrated to be very high precision distance indicators, and then successfully used to map out the back-to-front geometry of the LMC. Two decades later the method was applied in an extragalactic distance scale context [26, 27, 93]. Together, these studies have demonstrated that there is a well-defined class of carbon stars with a nearly constant luminosity in the near-infrared; i.e., an excellent standard candle for distance measurements. These (thermally-pulsating AGB) stars have a low intrinsic dispersion, specifically in the near-infrared J band, of only \(\pm\)0.2 mag [91], and they can be identified on the basis of their near-infrared colors alone, being distinguished from bluer O-rich AGB stars, as well as being segregated from redder, extreme carbon stars (see Figure 6).
Freedman & Madore (2020) measured JAGB carbon-star distances to a sample of 14 galaxies out to 27 Mpc, calibrated using the LMC and the SMC, and compared them to previously published distances using the TRGB. They found that the distance moduli agreed extremely well (at the 1% level), with a (combined) scatter amounting to only \(\pm\)4%. The good agreement with the TRGB distances suggests that the effects of metallicity for this well-defined color-range of carbon stars are small. A number of additional extensive tests of this method have recently been carried out by Lee and collaborators [28, 94, 95] as well as Zgirski et al. [96] in several nearby galaxies, confirming the excellent agreement with distances measured with the TRGB and Cepheid distance scales, and again indicating that metallicity and star formation effects are small (see Figure 7).
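A minimal sketch of how such a JAGB distance is extracted is given below: select stars in the carbon-star color window (as in Figure 6) and locate the mode of their J-band luminosity function. The synthetic catalog, the smoothing-free mode estimate, and the adopted zero point of \(M_{J}\approx-6.2\) mag are illustrative assumptions only; the real analyses use GLOESS-smoothed luminosity functions and an LMC/SMC-based calibration.

```python
# Illustrative JAGB measurement on a synthetic catalog (not the CCHP pipeline).
import numpy as np

rng = np.random.default_rng(1)
mu_true, M_J_jagb = 29.0, -6.2      # assumed true modulus and illustrative zero point

# Fake catalog: JAGB stars with ~0.3 mag scatter plus an unrelated background.
jagb_J = M_J_jagb + mu_true + rng.normal(0.0, 0.3, 2000)
jagb_col = rng.uniform(1.5, 2.0, 2000)
bkg_J, bkg_col = rng.uniform(22.0, 27.0, 8000), rng.uniform(0.2, 3.0, 8000)
J = np.concatenate([jagb_J, bkg_J])
JK = np.concatenate([jagb_col, bkg_col])

# 1) color selection of the JAGB branch, 2) mode of the J-band luminosity function.
sel = (JK > 1.5) & (JK < 2.0)
counts, edges = np.histogram(J[sel], bins=np.arange(20.0, 28.0, 0.05))
m_jagb = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])

print(f"JAGB magnitude {m_jagb:.2f} -> mu = {m_jagb - M_J_jagb:.2f} (input {mu_true:.2f})")
```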
Figure 6: Ground-based (Magellan/FourStar) near-infrared CMDs of four nearby galaxies observed by Lee et al. (in preparation), illustrating the well defined, single peaked J-band luminosity functions characteristic of the JAGB population in the color range 1.5 \(<\) (J-K) \(<\) 2.0.
Figure 7: Comparison of ground-based JAGB and TRGB distance moduli to a dozen nearby galaxies (Lee et al. in preparation). The lower panel shows the magnified differences between the moduli which have a combined scatter of only \(\pm\)0.06 mag. For this sample of galaxies this scatter puts upper limits (of a few percent) on the impact of metallicity differences, differential internal reddening and potential star formation history differences between these galaxies.
Recent modeling of AGB star evolution has been carried out by many authors [97; 98; 99]. Significant challenges remain in the detailed modeling (e.g., treatment of convection, overshoot, winds and mass loss), but the broad outlines are well-characterized. A carbon star is defined such that the atmosphere contains more carbon than oxygen; i.e., a ratio of C/O \(>\)1. The path to becoming a carbon star occurs during the thermally pulsing evolutionary phase for AGB stars. As a result, carbon can be brought to the surface, particularly during the third and later (generally deeper) dredge-up phases [100; 101; 102]. For stars with solar metallicity, recent studies conclude that the initial mass for carbon-star formation lies between about 1.5 and 3-4 M\({}_{\odot}\)[103], with a similar range for stars with Z = 0.008 [99].
The reason for the well-constrained luminosity of carbon stars is two-fold: (1) younger, more massive (hotter) AGB stars burn their carbon at the bottom of the convective envelope before it can reach the surface of the star [104], whereas (2) for the oldest, less massive AGB stars, there is no third, deep dredge-up phase. Thus, carbon stars are formed only in the intermediate mass range where carbon-rich material can both be dredged up and survives so that it can be mixed into the outer envelope.
In summary, the JAGB method offers a number of advantages for distance measurement, as previously enumerated [27]. (1) They are easily identified by their colors and magnitudes in the infrared. (2) They have a low intrinsic dispersion in the J band of only \(\pm 0.2\) mag. (3) They are about one magnitude brighter than those defining the TRGB. (4) They are found in all galaxies that have intermediate-age populations, and the JAGB method is, therefore, applicable to a wide range of galaxy types. (5) Near-infrared observations offer the advantage of reduced intrinsic variability and reduced reddening. (6) No multi-epoch observations are required to determine periods as, for example, is the case for Cepheid and Mira variables; observations of JAGB stars in two infrared bands, at a single epoch, are all that is needed.
With further development, testing and application, the JAGB method has the potential to provide an independent calibration of Type Ia supernovae (SNe Ia), especially with \(JWST\). JAGB stars are brighter than the TRGB and thus can be detected at greater distances, allowing greater numbers of calibrating galaxies for the determination of \(H_{0}\). As is the case for the TRGB and Cepheids, JAGB stars are amenable to theoretical understanding and further improved empirical calibration. Early tests show little dependence, if any, of the JAGB magnitude on the metallicity of the parent galaxy (see Lee et al. [95] and Figure 9), and therefore suggest that the JAGB method has considerable promise for providing high-precision distances to galaxies in the local universe that are largely independent of distances derived from the Leavitt Law and/or the TRGB method.
## 8 Other Methods
### Surface Brightness Fluctuations (SBF)
For most distance indicators, crowding of individual stars by the surrounding stellar population is a major source of systematic uncertainty, one whose effects grow as the targets being measured lie at increasing distances. Thirty-five years ago Tonry & Schneider [105] introduced a novel technique, called the Surface-Brightness Fluctuation (SBF) method, that takes crowding (a systematic effect that depends on distance) and turns a quantitative measure of it into a means of measuring distances. The method has recently been extensively reviewed in [106].
The SBF method applies best to elliptical galaxies, and with caution, to the bulges of bright, early-type spiral galaxies, where the effects of dust and recent star formation can be
mostly avoided. At a given surface brightness (which is by definition independent of distance) the degree of crowding of any pre-specified population of stars will increase/degrade with distance as the mean separation of those same stars also decreases inversely with distance. A measure of the observed granularity in the image, which is used to determine a distance, is found in the power spectrum of the targeted field of view.
Recent applications of the SBF method [107, 108] have led to values of \(H_{0}\) = 73.3 \(\pm\) 0.7 (stat) \(\pm\) 2.4 (sys) and \(H_{0}\) = 70.50 \(\pm\) 2.37 (stat) \(\pm\) 3.38 (sys) km s\({}^{-1}\) Mpc\({}^{-1}\). The most important error terms [106] are (i) sky background subtraction [0.02 mag], (ii) characterization of the point spread function [0.03 mag], (iii) details of the power spectrum fitting [0.02 mag], (iv) residual variance in the power spectrum, due to globular clusters and background galaxies too faint to be detected and masked directly [0.05 mag], and (v) extinction. Values in square brackets are the errors due to these terms as estimated by [106] Section 1.4.1.
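Taken at face value, the bracketed per-term uncertainties can be added in quadrature to see how they map onto a distance error; extinction is omitted from the sketch below only because no number is quoted for it here.

```python
# Combine the quoted SBF error terms (magnitudes) in quadrature.
import math

terms = {
    "sky subtraction": 0.02,
    "PSF characterization": 0.03,
    "power-spectrum fitting": 0.02,
    "residual GC/background variance": 0.05,
}
sigma_mag = math.sqrt(sum(v ** 2 for v in terms.values()))
sigma_dist = 10 ** (sigma_mag / 5.0) - 1    # fractional distance error

print(f"quadrature sum: {sigma_mag:.3f} mag  (~{100 * sigma_dist:.1f}% in distance)")
```

This amounts to roughly 0.065 mag, or about 3% in distance per galaxy, before extinction and calibration terms are included.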
Finally, it should be noted that the SBF method is a secondary distance indicator (as are other notable examples, including Type Ia supernovae and the Tully-Fisher relation) given that it is not calibrated from first principles, nor is it calibrated from geometric/parallax methods. Rather, SBF is currently being calibrated using (primarily) Cepheid and (a small number of) TRGB distances to galaxies close enough for those methods to provide a tie-in.
The very strong intrinsic-color dependence of the SBF characteristic magnitude is assumed to be due to the effects of the metallicity distribution on the RGB colors, in combination with differing contributions of AGB stars due to different star formation histories. Uncrowded, high signal-to-noise color-magnitude diagrams of the stellar populations underwriting the SBF method would be important to have for a range of integrated colors in nearby elliptical galaxies so as to quantitatively constrain any potential systematic effects.
With _JWST/NIRCam_ and other upcoming facilities, it will be possible to surmount the current 100 Mpc distance limit for SBF distances, perhaps taking it out to 300 Mpc, thus reducing the uncertainty from peculiar motions, as well as improving the statistical precision.
### Masers
H\({}_{2}\)O mega-masers provide a powerful geometric tool for measuring extragalactic distances. These astrophysical masers, often found in the accretion disks around supermassive black holes, are akin to lasers, instead operating in the microwave regime. Water molecules in these disks amplify background radiation and produce coherent emission. The radial velocity shifts exhibited by the megamaser sources, observed with high-resolution radio interferometry, allow for the detailed mapping of the rotational dynamics of the maser-bearing accretion disk. By applying Kepler's laws to the derived rotation curve, the mass of the central supermassive black hole can be determined. A direct geometric distance to the galaxy can be obtained making use of the constrained orbital dynamics and precise angular measurements provided by Very Long Baseline Interferometry (VLBI) [109]. Allowing for warps and radial structure, the approximately Keplerian rotation curve for the disk can be modeled. The nearest and best-studied galaxy, NGC 4258, at a distance of about 7.5 Mpc, is too close to provide an independent measurement of the Hubble constant (i.e., free from local velocity-field perturbations) but it serves as a geometric anchor for the distance scale.
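The geometric idea can be sketched with round numbers: the rotation speed of the high-velocity maser features and the line-of-sight acceleration of the systemic features give the physical radius of the disk (from \(a=v^{2}/R\)), and dividing by the observed angular radius gives the distance. The values below are illustrative placeholders, not the published NGC 4258 disk fit.

```python
# Schematic maser-disk distance with round, illustrative numbers (not a published fit).
import math

v_rot = 1.0e3      # km/s, rotation speed of the high-velocity maser features
accel = 9.0        # km/s per year, acceleration of the systemic maser features
theta_mas = 3.0    # milliarcseconds, observed angular radius of the maser ring

year_s = 3.156e7
R_km = v_rot ** 2 / (accel / year_s)        # physical radius from a = v^2 / R
theta_rad = theta_mas * 1e-3 / 206265.0     # arcsec -> radians
D_Mpc = R_km / theta_rad / 3.086e19         # small-angle distance in Mpc

print(f"disk radius ~ {R_km / 3.086e13:.2f} pc, distance ~ {D_Mpc:.1f} Mpc")
```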
The _Megamaser Cosmology Project_ has measured maser distances to 6 galaxies within 130 Mpc [110]. Adopting an average peculiar velocity uncertainty of \(\pm\)250 km/s they determine a value of \(H_{0}\) = 73.9 \(\pm\) 3.0 km s\({}^{-1}\) Mpc\({}^{-1}\), with a range of values spanning 71.8 to 76.9 km s\({}^{-1}\) Mpc\({}^{-1}\), allowing for different means of correcting for peculiar velocities. Sadly, the number of galaxies to which this technique can be applied turns out to be very small;
hence, it will never rival, for example, SNe Ia (for which there are upwards of 1,000 host galaxies) in statistical precision.
### Strong Gravitational Lensing
Strong gravitational lensing offers an independent route for determining \(H_{0}\) with the advantage that it can be carried out at cosmological distances (a one-step method), providing crucial cross-checks against measurements of the local distance scale and CMB measurements. In a gravitational lensing event, a massive foreground object (like a galaxy cluster) distorts the light from a background source (such as a more distant galaxy or quasar), resulting in multiple, often distorted, images of the source. The time delay between the arrival of light in these images, the "time-delay distance," is inversely proportional to the value of \(H_{0}\), with a smaller dependence on \(\Omega_{m}\) and \(\Omega_{\Lambda}\). Time-delay distances are derived by combining detailed modeling of the gravitational potential of the lens with precise measurements of the time delays between the multiple images [111, 112].
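A minimal sketch of why the time delays constrain \(H_{0}\): in a flat \(\Lambda\)CDM model the time-delay distance \(D_{\Delta t}=(1+z_{d})D_{d}D_{s}/D_{ds}\), built from angular diameter distances, scales as \(1/H_{0}\) at fixed redshifts. The lens and source redshifts below are hypothetical, and \(\Omega_{m}\) is held fixed purely for illustration.

```python
# Minimal sketch: the time-delay distance scales as 1/H0 in flat LambdaCDM.
import numpy as np
from scipy.integrate import quad

C_KMS = 299792.458

def comoving_distance(z, H0, Om=0.3):
    integral, _ = quad(lambda zp: 1.0 / np.sqrt(Om * (1 + zp) ** 3 + (1 - Om)), 0.0, z)
    return C_KMS / H0 * integral        # Mpc

def ang_diam_distance(z1, z2, H0, Om=0.3):
    # Flat universe: D_A(z1, z2) = [D_C(z2) - D_C(z1)] / (1 + z2)
    return (comoving_distance(z2, H0, Om) - comoving_distance(z1, H0, Om)) / (1 + z2)

def time_delay_distance(z_d, z_s, H0, Om=0.3):
    D_d = ang_diam_distance(0.0, z_d, H0, Om)
    D_s = ang_diam_distance(0.0, z_s, H0, Om)
    D_ds = ang_diam_distance(z_d, z_s, H0, Om)
    return (1 + z_d) * D_d * D_s / D_ds  # Mpc

for H0 in (67.4, 73.3):                  # hypothetical lens at z=0.5, source at z=2.0
    print(f"H0 = {H0}: D_dt = {time_delay_distance(0.5, 2.0, H0):.0f} Mpc")
```

A measured time delay plus a lens model fixes \(D_{\Delta t}\), so a \(\sim\)10% smaller \(D_{\Delta t}\) maps onto a \(\sim\)10% larger \(H_{0}\).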
In practice, several key steps are involved in this method. First, high-quality imaging data of the lensing system must be obtained, most recently using HST or ground-based telescopes equipped with adaptive optics. These imaging data are then used to model the mass distribution of the lens, taking into account both luminous and dark matter components. In addition, photometric or spectroscopic monitoring of the background source is conducted to measure the time delays between the arrival of photons in the multiple images. This is a labor-intensive step, requiring observations over several months to years in order to accurately measure the variability and time delays [113].
Advancements in lens modeling techniques and the quality of data are continually improving [114, 115]. Uncertainties in the gravitational lens method arise from the complexity of the lens model, whether the lens is located in a group or cluster, or whether there is mass along the line of sight, as well as from the assumptions on the cosmological model. An inherent challenge for the method is the 'mass-sheet degeneracy', where an additional underlying mass density (mass sheet) can produce the same deflection angles and magnifications. Recently, a joint analysis of six gravitationally lensed quasars with measured time delays [115] resulted in a value of \(H_{0}=73.3^{+1.7}_{-1.8}\) km/s/Mpc (a 2.4% uncertainty), assuming a flat \(\Lambda\)CDM cosmology. However, this result is dependent on assumptions about the mass-density radial distribution (e.g., a power-law mass profile) [116]. Dropping the assumptions about the mass profile, and instead using velocity dispersion measurements to break the mass-sheet degeneracy [117], the precision then drops to 8%, with \(H_{0}=74.5^{+5.6}_{-6.1}\) km/s/Mpc. Additional imaging and spectroscopic data for 33 lenses then result in \(H_{0}=67.4^{+4.1}_{-3.2}\) km/s/Mpc, improving the precision to 5%. Observations and analysis of the multiply lensed SN Refsdal result in values of \(H_{0}=64^{+11}_{-9}\)\(\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\)[118], or \(64.8^{+4.4}_{-4.3}\) and \(66.6^{+4.1}_{-3.3}\)\(\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\), depending on the model adopted [119]. Lensed SNe Ia offer an advantage over lensed quasars due to the increased precision in the time delay measurements, as well as smaller uncertainties in the lens models.
Future improvements to this method will come with larger samples of lenses and measured time delays (to improve the statistical precision), for example, from the Vera Rubin Observatory, Euclid and the Nancy Grace Roman Observatory, and will require high signal-to-noise kinematic measurements to address the issue of the mass-sheet degeneracy, as well as detailed simulations [120].
### Gravitational Wave Sirens
Inspiraling neutron star - neutron star binary systems have offered a new means of measuring \(H_{0}\) that is completely independent of the local distance scale. In analogy with the astrophysical standard candles described earlier, the detection of gravitational waves from these systems provides a 'standard siren' that can be used to estimate the luminosity distance of the system out to cosmological distances, without the need for a local (astrophysical distance scale) calibration. The method requires both the detection of gravitational, as well as electromagnetic, radiation (the latter providing the redshift).
The method was first applied with stunning success to the event GW170817, located in a galaxy at 43 Mpc [121]. The authors determined a value of \(H_{0}=70^{+12}_{-8}\,\,\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\) (see Figure 8). A number of factors contribute to the 15% uncertainty: detector noise, instrumental calibration uncertainties, uncertainty in the peculiar velocity of the host galaxy, and a geometrical factor dependent upon the covariance of distance with inclination angle. At a distance of 43 Mpc, the peculiar velocity is about 10% of the measured recessional velocity.
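The size of that uncertainty can be reproduced at the back-of-the-envelope level: \(H_{0}\approx v_{\rm Hubble}/d_{\rm GW}\), with the fractional errors on the GW luminosity distance (dominated by the distance-inclination degeneracy) and on the Hubble-flow velocity added in quadrature. All numbers below are round placeholders rather than the published posterior.

```python
# Back-of-the-envelope standard-siren estimate (round, illustrative numbers only).
import math

d_gw, sigma_d = 43.0, 6.0        # Mpc; GW distance with a ~15% placeholder uncertainty
v_hub, sigma_v = 3000.0, 300.0   # km/s; Hubble-flow velocity, ~10% peculiar-velocity error

H0 = v_hub / d_gw
sigma_H0 = H0 * math.hypot(sigma_d / d_gw, sigma_v / v_hub)

print(f"H0 ~ {H0:.0f} +/- {sigma_H0:.0f} km/s/Mpc")
```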
GW170817 was detected with high signal to noise almost immediately after LIGO was turned on in 2017. It led to the expectation that many more sources were likely to follow and that a value of \(H_{0}\) to 2% accuracy would be possible by 2023 [122] with the detection of 50 events, assuming that redshifts could be measured for each object. Sadly, as of summer 2023, there have not yet been any comparable events, and an accurate measurement of \(H_{0}\) with this technique will require patience. Ultimately, it will provide a critical independent means of comparison with the local distance scale.
Figure 8: The marginalized posterior density distribution (blue line) for \(H_{0}\) derived from the gravitational wave detection of GW170817. Constraints from Planck and _SHoES_ are shown in green and orange, respectively. The TRGB value of \(H_{0}=69\,\,\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\) is shown in red. Figure adapted from [121]. See text for details.
## 9 The Hubble Constant and the Impact of the James Webb Space Telescope (JWST)
In this section, we provide an overview, as well as a current status report, of a new CCHP long-term program using _JWST_. This program is aimed at reducing the current systematics in the local extragalactic distance scale and the measurement of \(H_{0}\). Specifically our goals are to: 1) exploit the high resolution of _JWST_ to understand and reduce the possible effects of crowding and blending of Cepheids previously observed with HST, 2) improve the corrections for dust, 3) improve the constraints on the metallicity of Cepheids and 4) provide three independent measures (Cepheids, TRGB, JAGB) of the distances to the same galaxies, thereby reducing the overall systematic distance uncertainties.
The blue sensitivity and high spatial resolution of HST made it an ideal facility for the discovery of Cepheid variables. At bluer (optical) wavelengths, the amplitudes of Cepheid variables are larger than at longer wavelengths due to the greater sensitivity of the surface brightness to temperature, thus facilitating their discovery [5]. HST's high resolution allowed Cepheids to be discovered in galaxies over a larger volume of space than could be accomplished from the ground, most recently out to distances of \(\gtrsim\)40 Mpc [8].
The superb science performance of _JWST_ has greatly exceeded early expectations in terms of sensitivity, stability, image quality, as well as spectral range [123]. Two key features make _JWST_ the optimal telescope for addressing the _accuracy_ of measurements of \(H_{0}\): its red sensitivity and higher spatial resolution. The extinction is significantly lower: A\({}_{J}\) and A\({}_{[4.4]}\) are smaller by factors of 4 and 20\(\times\) respectively, relative to the visual extinction, A\({}_{V}\); and factors of 2 and 10\(\times\) lower relative to the I-band extinction A\({}_{I}\)[124, 125]. _NIRCam_ (F115W) imaging from _JWST_[126] has a sampling resolution four times that of HST \(WFC3\) (F160W), with a FWHM of 0.04 arcsec on the former telescope and imager, versus 0.151 arcsec on the latter. In addition, in the near infrared (NIR), the objects that are causing contamination and crowding of the Cepheids are red giant and bright asymptotic giant branch stars, exacerbating crowding effects in the red, compared to optical wavelengths. _Importantly, with 4 times better resolution than HST, crowding effects are decreased by more than an order of magnitude in flux using JWST_.
We have been awarded time in Cycle 1 of _JWST_ (JWST-GO-1995: P.I. W. L. Freedman; co-I B. F. Madore) to obtain observations of 10 nearby galaxies that are hosts to SNe Ia, as well as observations of NGC 4258, a galaxy that provides an absolute calibration through its geometric distance based on H\({}_{2}\)O megamasers (see Sections 5.3, 8.2). There are three components to the program: three independent distances to each galaxy will be measured using Cepheids, the TRGB and _JAGB_ stars, with a particular emphasis on testing for, and decreasing, the systematic uncertainties that have often historically plagued distance scale determinations. The program is designed to deal specifically with known systematic effects in the measurement of distances to nearby galaxies: extinction and reddening by dust, metallicity effects and crowding/blending of stellar images. Simply getting more nearby galaxy distances (decreasing the statistical uncertainties) is insufficient to confirm or refute whether new physics beyond the standard cosmological model is required. At this time, systematic uncertainties are (and have historically always been) the dominant component of the error budget. Our goal is to decrease the systematic errors to the 2%-level, for each of the three methods.
The primary sample for the program is a subset of the nearest galaxies that have both reliable SN Ia photometry and previously-discovered Cepheid variables [5, 127], and for which
TRGB and carbon-star distances can now also be measured. All three of these methods individually have high precision and can be independently used to calibrate SNe Ia. The observations are being carried out in the NIR at \(F115W\) (or \(J\) band) and mid-infrared F356W at 3.6\(\mu\)m with the _JWST_ Near-infrared Camera (_NIRCam_), and in parallel at \(F115W\)-band with the Near-infrared Imager and Slitless Spectrograph (_NIRISS_) [128]. Our first observations for NGC 7250 were carried out with the F444W filter at 4.4\(\mu\)m, but we have switched to F356W for the rest of the sample, owing to its higher sensitivity and better sampling. However, the F444W filter contains a CO bandhead that is sensitive to metallicity [129], and it is being used to carry out a test for metallicity effects in the galaxies, M101 and NGC 4258, as discussed further in Section 9.1 below.
Our target fields were chosen to maximize inclusion of the largest possible number of known Cepheids in the inner disk, as well as the inclusion of the outer disk to detect carbon stars, and with a rotation angle optimized for the detection of halo red giants. The disk observations are being carried out with _NIRCam_; the outer halo observations with either _NIRCam_ or parallel observations with _NIRISS_. We are carrying out the analysis using two independent software packages, DAOPHOT [130] and DOLPHOT [131], in order to provide a quantitative constraint on photometric errors that might arise due to differences in point-spread-function fitting in crowded fields.
In brief, we find (1) The high-resolution _JWST_ images of NGC 7250 demonstrate that many of the Cepheids observed with HST are significantly crowded by nearby neighbors. (2) The scatter in the _JWST_ NIR Cepheid PL relation is decreased by a factor of two compared to those from HST. (3) The TRGB and carbon stars are well-resolved, and with the Cepheid measurements, will allow measurement of three independent distances to each of these galaxies. These new results illustrate the power of _JWST_ to improve the measurement of extragalactic distances, and specifically, to address remaining systematics in the determination of \(H_{0}\).
In Figure 9 we show a color-magnitude diagram (F115W versus [F115W - F444W]) for the galaxy NGC 7250, which shows at a glance, the Cepheid instability strip, the position of the TRGB, the location of the JAGB stars, and the power of this three-in-one program. These three distance scales, all on a common photometric scale, contain valuable quantitative information as to potential systematic differences among the methods. The magnitudes are shown on an arbitrary scale, since at this stage of the analysis, the photometry is blinded. In addition, the current absolute flux calibration for _NIRCam_ is only at a level of 5% (M. Rieke, private communication); however, the desired future goal is a 1-2% absolute calibration tied to laboratory-standard measurements [132].
### JWST Cepheid Program
The _JWST_ Cepheid sample for NGC 7250 was selected based on a completely new (end to end) re-analysis of the archival _SHoES_ data [133]. This archival sample is comprised of 11 epochs of 'white light' (F350LP) photometry, with smaller numbers of (significantly lower signal-to-noise) phase points at three additional wavelengths (three at F555W, two at F814W and six at F160W). Periods and light curves were measured directly and independently using the F350LP photometry, using templates derived from well-measured Cepheids in the LMC [134]. Cepheid variable candidates were selected according to the following criteria: 1) optical colors consistent with known Cepheid variables; 2) optical amplitudes \(>0.4\) mag; 3) classified according to their light curve quality (requiring a classical'saw-tooth' shape) at F350LP; 4) the light curves and images of the Cepheid candidates were independently inspected by eye by
four team members. If there was disagreement about the quality of the candidate, it did not make the final cut; and 5) having no comparably bright nearby companions within the point spread function (PSF) at F350LP, as determined from the higher-resolution F115W data.6 These stringent criteria were chosen to reduce the uncertainties due to crowding and low signal to noise. They result in a final sample of 16 uncrowded Cepheids with well-determined light curves. The photometry for all of the Cepheid candidate variables, both before and after final selection, will be made available on github [133].
Footnote 6: If the summed flux from resolved sources in the JWST F115W images within 4 NIRCAM pixels of the Cepheid candidate (0.124 arcsec or approximately one HST WFC3 IR pixel or 0.13 arcsec) was equal to or greater than the measured flux of the star itself, the candidate was considered to be crowded and not included in the final sample.
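A minimal sketch of the neighbor-flux criterion described in the footnote is given below: a candidate is flagged as crowded if the summed flux of resolved sources within 4 NIRCam pixels is at least equal to the flux of the candidate itself. The toy catalog and the arbitrary flux zero point are placeholders, not the actual CCHP catalogs.

```python
# Sketch of the neighbor-flux crowding cut (toy catalog, arbitrary flux zero point).
import numpy as np

def is_crowded(cand_xy, cand_mag, cat_xy, cat_mag, radius_pix=4.0):
    """True if resolved neighbors within `radius_pix` pixels contribute at least
    as much flux as the candidate itself."""
    flux = lambda m: 10.0 ** (-0.4 * m)
    dist = np.hypot(cat_xy[:, 0] - cand_xy[0], cat_xy[:, 1] - cand_xy[1])
    near = (dist > 0) & (dist <= radius_pix)        # exclude the candidate itself
    return flux(cat_mag[near]).sum() >= flux(cand_mag)

# Toy usage: one brighter neighbor two pixels away -> the candidate is rejected.
cat_xy = np.array([[100.0, 100.0], [102.0, 100.0], [140.0, 90.0]])
cat_mag = np.array([24.5, 24.0, 22.0])
print(is_crowded(cat_xy[0], cat_mag[0], cat_xy, cat_mag))   # True
```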
The new _JWST_ observations are allowing us to directly assess the degree to which crowding/blending effects have affected the (4\(\times\) lower-resolution) HST photometry, on a star-by-star basis. In Figure 10, we show multiband cutout images of eight Cepheids in NGC 7250 at a distance of 20 Mpc. From left to right are images at F350LP, F555W, and F814W (from HST) and F160W and F115W (from _JWST_). The cutouts are 2 \(\times\) 2 arcsec on a side, and have been scaled as described in the figure caption. These images illustrate
Figure 9: The relative disposition of the three stellar/astrophysical distance indicators, discussed in this review, seen plotted in a _JWST_ F115W versus (F115-F444W) CMD. Cepheids are the black dots between the two vertical dashed lines, where the latter represent the red and blue limits of the instability strip. JAGB/Carbon stars are further to the red. Their mean luminosity is marked by the horizontal dotted line. Finally, the TRGB maximum J-band luminosity, as a function of color, is shown by the upward slanting yellow line at the top of the red giant branch at (F115W-F444W) \(\approx\) 1.5 mag. See also Figure 13.
Figure 10: NGC 7250 Cepheids: A sample of cutout images for the light-curve-selected Cepheids in 5 photometric bands. Each cutout is 2 arcsec on a side. The red circles enclose the location of the Cepheid candidate and are 0.2 arcsec in radius. _JWST_ J-band images are in the far right column. All other images (all four columns to the left) are from HST. Adapted from [133].
the superb resolution and the power of _JWST_ to improve the measurement of extragalactic distances. The effects of crowding, even in a galaxy as close as 20 Mpc are evident in this comparison. In the HST data, many of the Cepheid candidates are fainter than their nearby neighbors, rendering background subtraction challenging. _JWST_ images for the complete sample of Cepheids in NGC 7250 are presented in [133].
In Figure 11, we compare the Leavitt law for Cepheids in NGC 7250 observed with HST (left panel) and _JWST_ (right panel). The _JWST_ data are plotted on an arbitrary magnitude scale, as the data are still blinded. The slope is determined from the LMC, and restricted to log P \(<\) 1.8, after which the period-luminosity relation shows evidence for non-linearity. The scatter in the _JWST_ F115W data for NGC 7250 is a factor of two smaller than the _SHoES_ F160W data, which is all the more remarkable since the F115W data are for a single epoch only. In addition, a two-sigma rejection of candidates in the PL relation has been applied to the _SHoES_ F160W data; no sigma cut has been applied to the _JWST_ Cepheid candidates based on position in the PL relation.
When the data are unblinded, and an absolute calibration is established, the _JWST_ data will allow us to also improve the accuracy of the reddening corrections to the individual galaxies and their Cepheids. A standard interstellar extinction curve [124, 125] can be fit to the multi-wavelength \(F350,V,I,H\) and \(J\)-band apparent distance moduli [20, 58]. Finally, the 4.4\(\mu\)m-band can provide a direct and quantitative measure of the metallicity of each of the Cepheids. \(Spitzer\) 4.5 \(\mu\)m observations of Cepheids in the Milky Way, the LMC and the SMC revealed a direct correlation between Cepheid metallicity and luminosity [129], a result of a CO bandhead that is present in the 4.4 \(\mu\)m filter. _JWST_ observations across the disks of M101 and NGC 4258 have been scheduled as part of our program. In particular, there is a steep metallicity gradient in M101 [135], which will allow a direct test of the metallicity sensitivity at long wavelengths. The uncertainty due to the effects of metallicity remains one of the largest sources of systematic error in the Cepheid distance scale [78].
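A minimal sketch of such a fit: with total-to-selective extinction ratios \(R_{\lambda}=A_{\lambda}/E(B-V)\), each apparent modulus is modeled as \(\mu_{\lambda}=\mu_{0}+R_{\lambda}E(B-V)\), and the true modulus and reddening are obtained by least squares. The \(R_{\lambda}\) coefficients and the input moduli below are representative placeholders, not measured values.

```python
# Fit a standard reddening law to multi-band apparent distance moduli (placeholders).
import numpy as np

R = np.array([3.1, 1.9, 0.9, 0.6])               # illustrative A_lambda/E(B-V): ~V, I, J, H
mu_app = np.array([29.62, 29.38, 29.18, 29.12])  # hypothetical apparent moduli (mag)

# Linear model: mu_app = mu_0 + R * E(B-V), solved by least squares.
A = np.column_stack([np.ones_like(R), R])
(mu0, ebv), *_ = np.linalg.lstsq(A, mu_app, rcond=None)

print(f"true modulus mu_0 = {mu0:.3f} mag, E(B-V) = {ebv:.3f} mag")
```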
With improved reddening measurements, a direct measure of the metallicity, and a robust estimate of crowding/blending effects on current samples, we can address three of the largest sources of systematic uncertainty in the local Cepheid distance scale. The selection criteria adopted for inclusion in our final sample of Cepheids are deliberately conservative, with the intention of avoiding systematic effects due to crowding/blending, aiming for quality over quantity. The JWST data are still blinded; once they are unblinded and calibrated, there will be a significant
Figure 11: NIR Period–luminosity relations for Cepheids in NGC 7250. The left panel is HST F160W (H-band) data from the _SHoES_ collaboration[8]; the right panel is JWST F115W (J-band) data from the CCHP [133]. The scatter about the period–luminosity fit in each filter is labeled in each plot.
improvement to the distance measurements. However, near-IR photometry obtained using HST in this galaxy results in a larger scatter due to the lower spatial resolution and the lower signal to noise of the data.
It is important to keep in mind that crowding effects will become more severe with increasing distance. We note that 60% of the Riess et al. (2022) sample of galaxies in which Cepheids have been discovered lie at greater distances than NGC 7250 at 20 Mpc, and that 25% of the sample lies beyond 40 Mpc. At a distance of 40 Mpc, four times the area will be contained within a given pixel. For the most distant _SHoES_ galaxy at 80 Mpc, 16 times the area will be covered. As the need for percent-level accuracy has grown in importance, and given the level of crowding that we have seen for Cepheids in a galaxy at a distance of 20 Mpc, it remains important to demonstrate that crowding effects do not produce a systematic bias in the photometry and hence, the distance measurements for these more distant galaxies observed with HST.
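The scaling quoted here is simply the conversion from a fixed angular pixel to physical area, which grows as the square of the distance; a two-line check:

```python
# Physical area subtended by a fixed pixel grows as distance squared.
for d_mpc in (20, 40, 80):
    print(f"{d_mpc} Mpc: {(d_mpc / 20) ** 2:.0f}x the area per pixel relative to 20 Mpc")
```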
### JWST Tip of the Red Giant Branch Program
As noted in Section 4, the TRGB provides one of the most precise and accurate means of measuring distances in the local universe [24]. The observed color-magnitude diagrams (CMDs) of the halos of nearby galaxies reveal a sharp discontinuity in the magnitude distribution of red giant branch stars at a well-determined luminosity, which corresponds to the location of the core helium-flash.
Measuring the TRGB in the near-IR has a number of advantages over the optical: 1) the extinction is significantly lower; 2) TRGB stars are brighter in the NIR (M\({}_{J}\) = -5.1 mag [37]) than in the optical (M\({}_{I}\) = -4.05 mag [9]), making them comparable in luminosity to Cepheids with periods of 10 days (M\({}_{J}\) (10-day Cepheid) = -5.3 mag [23]), and the slopes of the RGB as a function of wavelength are well-defined [26, 37, 38, 136]; and 3) the peak luminosity of the giants occurs at NIR wavelengths. The disadvantage of the near-IR is that, because the magnitude of the TRGB is no longer flat as it is in the I-band, it necessitates more accurate measurements in a second filter to measure the slope of the RGB.
As part of our _JWST_ CCHP program, the TRGB has been measured in NGC 4536, a galaxy located in the constellation Virgo, about 10 degrees south of the center of the Virgo Cluster. In Figure 12 we show an \(F814W\) versus [\(F606W\) - \(F814W\)] color magnitude diagram (CMD) [left panel] and \(F115W\) versus [\(F115W\) - \(F444W\)] CMD [right panel] [137] for NGC 4536. The downward-arching black curve in the middle of the left panel illustrates the shallow color dependence of the TRGB at optical magnitudes [56]. The theoretically predicted slope of the infrared TRGB is also shown in black in the right panel. Also plotted are stellar evolutionary curves, as described in the figure caption. All magnitudes shown are on an arbitrary scale, but the two panels are aligned, illustrating the brighter magnitudes of the TRGB in the near-infrared relative to the optical. Once again, NGC 4258 will ultimately provide a geometric zero-point calibration. See [137] for details of the analysis of these data.
### JWST Resolved Carbon-Rich AGB Stars Program
In Figure 13 we show an \(F115W\) versus [\(F115W-F444W\)] CMD for the outer disk of the galaxy NGC 7250, which illustrates immediately the feasibility of using _JWST_ and this method for distance determination. The carbon stars are located to the red of the TRGB, about one magnitude brighter than the tip, and exhibit a nearly-constant luminosity with a dispersion of only \(\pm 0.3\) mag. These single-phase observations have only a slightly larger
scatter than the intrinsic (time-averaged) scatter observed in the LMC [92]. Details of the analysis, as well as for the galaxies NGC 4536 and NGC 3972 are presented in [138].
## 10 Is There a Crisis in Cosmology?
Time will tell if cosmology is facing a crisis. It still remains at a crossroads [13]. The precision and accuracy with which extragalactic distances can be measured continue to improve, and many new facilities/programs are now either ongoing or will be online in the near future, which will lead to the continued refinement of the distance scale and to the measurement of \(H_{0}\). In Figure 14 we show a comparison of recently published values of \(H_{0}\) from [85]. To
Figure 12: Optical HST (left) and near-infrared _JWST_ (right) CMDs for stars located in the stellar halo of NGC 4536. The CMDs are aligned on their vertical axis to demonstrate the increasing brightness of RGB stars when observed in the infrared. The HST images represent a total of 14,000s in telescope exposure time, while the _JWST_ images represent just 2,800s of exposure time. The known, shallow color dependence of the optical TRGB is overplotted on the left, while the theoretically-predicted slope of the infrared TRGB is overplotted on the right, both as black curves. In both CMDs, 10 Gyr theoretical stellar evolutionary tracks are shown and colored from light yellow to dark purple for metallicities Z = 0.002, 0.004, 0.008, and Z\({}_{\odot}\). The isochrones are shifted to terminate at the observed level of the TRGB.
date, none of the local measurements of \(H_{0}\) reach the \(<\)1% precision of the Planck result that is inferred from CMB measurements.
At this juncture, and given the still outstanding issues that need to be unambiguously addressed in order to allow a 1% measurement (e.g., small numbers of anchors, crowding effects, consistency across observing wavebands, metallicity effects), it is reasonable to keep an open mind as to the ultimate resolution of this latest crisis.
The current outstanding question essentially now revolves around '_the uncertainty in the uncertainty_'; i.e., have we yet reached a level of precision and accuracy in the local distance scale that can test the CMB model, which itself is quoted to have a precision of better than 1%? 5\(\sigma\) in experimental physics is the gold standard. How robust is the currently claimed astronomical 5\(\sigma\) result? If the result is secure at the 5-6\(\sigma\) level, then in principle, the question is settled, and no more work need be done. It is perhaps illustrative, however, to consider that if the uncertainty in \(H_{0}\) were to have been underestimated by only a factor of 1.5 and \(H_{0}\) = 72.0 \(\pm\) 1.5 \(\,{\rm km\,s^{-1}\,Mpc^{-1}}\), then the tension with the Planck results drops from 5\(\sigma\) to less than 3\(\sigma\). Similarly, if \(H_{0}\) = 72.0 \(\pm\) 2.0 \(\,{\rm km\,s^{-1}\,Mpc^{-1}}\), the tension drops to 2\(\sigma\).
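The sensitivity of the quoted tension to the assumed error bar can be checked directly with a simple Gaussian combination against the Planck value of 67.4 \(\pm\) 0.5 \(\,{\rm km\,s^{-1}\,Mpc^{-1}}\); the first row uses the Pantheon+/_SHoES_ value quoted above, and the others the illustrative cases from the text.

```python
# Gaussian tension between a local H0 value and Planck (67.4 +/- 0.5).
import math

H0_planck, sig_planck = 67.4, 0.5
for H0_local, sig_local in [(73.04, 1.04), (72.0, 1.5), (72.0, 2.0)]:
    n_sigma = abs(H0_local - H0_planck) / math.hypot(sig_local, sig_planck)
    print(f"H0 = {H0_local} +/- {sig_local}:  {n_sigma:.1f} sigma")
```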
Figure 13: \(F115W\) versus [\(F115W\)- \(F444W\)] color-magnitude diagram for the outer region of the galaxy NGC 7250 (left panel) from [138]. The JAGB stars were measured to be within the light blue shaded region. In the right-hand panel, the GLOESS-smoothed luminosity function for the JAGB stars is shown in light blue, and the 0.01 mag binned luminosity function is shown in grey. Within a window 1.50 mag wide centered on the mode, the scatter of the JAGB stars is \(\sigma=0.32\) mag.
## Summary
The advancement in measuring the distances to galaxies over the past twenty-five years has been nothing short of remarkable. Just two decades ago, achieving accuracies within a few percent for the extragalactic distance scale was virtually unthinkable. This progress can be attributed to better detectors, increased wavelength coverage, innovative new, independent methods for measuring distances, and access to space, all of which have made it possible to address systematic effects including reddening/extinction from dust, metallicity, and crowding.
The launch of _JWST_ has opened a new chapter in the measurement of extragalactic distances and \(H_{0}\). The superb resolution and unequalled sensitivity at near-infrared wavelengths is already demonstrated in the first data from the nearby galaxies, NGC 7250, NGC 3972 and NGC 4536, at distances between \(\sim\)15-20 Mpc. These early data clearly demonstrate the promise of _JWST_ for improving the measurement of extragalactic distances and the local, directly measured value of \(H_{0}\). Our program has been optimized to observe Cepheids in the spiral arms of the inner disks of galaxies, JAGB stars in the extended disks, and TRGB stars in the outer halos of galaxies. All ten of the program galaxies are SN Ia hosts; an eleventh galaxy, NGC 4258, will provide an absolute distance calibration through the geometric measurement of its distance based on H\({}_{2}\)O megamasers.
For the first time, we have _JWST_ data for Cepheids discovered on HST frames in which stars located within one PSF radius of the Cepheid can be directly identified. Limiting the sample of Cepheids to exclude the variables with nearby neighbors results in a distance modulus that is +0.45 mag farther away (in the sense that its contribution would result in a lower value of \(H_{0}\)). Future data will reveal whether this is indicative of a systematic effect to be found in the larger sample.
While it has become a common refrain in the literature that systematic effects can no
Figure 14: Probability distributions for \(H_{0}\) for calibrations based on Cepheids [139], the TRGB [25], and SBF [85], compared to recent published values from the literature. The Planck Collaboration value from the CMB [11] is shown in grey.
longer be considered as relevant for the \(H_{0}\) tension, with differences this large in a galaxy at only 20 Mpc distance, there are reasons to remain open to the possibility that "unknown unknowns", or perhaps "known unknowns", could still be significant.7
Footnote 7: With acknowledgement to Donald Rumsfeld who said “There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.”
How then could we imagine that a difference, with a quoted significance of 5-6\(\sigma\), is still possible to reconcile? As we have seen, H-band (F160W) photometry obtained with HST in crowded fields appears to be highly challenging, with uncertainties far exceeding the currently quoted uncertainties on the Cepheid calibration of the local distance scale. Another example of the challenge comes from a comparison of the F160W photometry published by _SHoES_ between 2016 and 2022. We note that even internally within the _SHoES_ program, large differences for the same galaxies are seen between an earlier and a later analysis of the same data, even by the same group. Moreover, and of more concern as the goal is to achieve 1% level accuracies, these differences are systematic in nature. A simple comparison of the F160W distance moduli tabulated in R16 and R22 reveals an overall difference of \(-0.123\) mag with an \(rms\) dispersion of 0.085 mag8, with five galaxies having large differences ranging from -0.2 to -0.7 mag. (Using the median instead, the overall difference between R16 and R22 is \(-0.096\pm 0.068\) mag.) The above-quoted difference of -0.123 mag, on average, corresponds to a 6% shift in \(H_{0}\). Put another way, if the newer, more reliable F160W magnitude measurements (according to R22) had been available in 2016, it would have resulted in a value of \(H_{0}=68.97\)\(\,\mathrm{km\,s}^{-1}\,\mathrm{Mpc}^{-1}\) (and a corresponding tension of only 0.9\(\sigma\) compared to the Planck value of 67.4 \(\pm\) 0.5 \(\,\mathrm{km\,s}^{-1}\,\mathrm{Mpc}^{-1}\)). Given that the reduction of the HST near-IR photometry is still in a state of flux, as found in our independent reduction (details in [133]) and the _SHoES_ team's own independent analysis, it surely must signal that better near-IR data, at least, are essential to resolving both the 'local' and the CMB-versus-Cepheid \(H_{0}\) tensions.
Footnote 8: Determined by computing a mean difference for all Cepheid fluxes in each SN Ia host galaxy and taking the mean of those values. These statistics were determined for 874 Cepheids in common between R16 and R22 observed across 21 host galaxies; Hoyt, private communication.
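The quoted \(\sim\)6% figure follows directly from the magnitude offset: a mean shift \(\Delta\mu\) in the distance moduli rescales the distances by \(10^{\Delta\mu/5}\) and \(H_{0}\) by the inverse factor. The small residual difference from the quoted 68.97 presumably reflects the exact averaging used.

```python
# Convert the mean R16-vs-R22 modulus offset into a fractional shift in H0.
delta_mu = 0.123                       # mag, magnitude of the mean offset quoted above
dist_factor = 10 ** (delta_mu / 5.0)   # corresponding rescaling of the distances

print(f"distance shift: {100 * (dist_factor - 1):.1f}%")
print(f"H0: 73.04 -> {73.04 / dist_factor:.2f} km/s/Mpc")
```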
Over the next year, we will continue to obtain data and complete the analysis of our _JWST_ CCHP program sample of ten galaxies, all with distances \(\lesssim 20\) Mpc, chosen to be close enough to minimize potential crowding effects. Ultimately, we will calibrate SNe Ia based on three independent techniques - Cepheids, the TRGB and JAGB/carbon stars - and determine a value of \(H_{0}\) with significantly reduced systematic uncertainties (including reddening by dust, differences in chemical composition, and crowding/blending of images). These data will allow us to provide an answer to one of the most important problems in cosmology today - Is there new fundamental physics required beyond standard \(\Lambda\)CDM?
## 11 Appendix: Intercomparison of the Cepheid, TRGB and JAGB Methods and Sensitivities
### General Remarks
Having introduced each of the primary extragalactic stellar distance indicators, Cepheids in Section 3, TRGB stars in Section 4 and JAGB Stars in Section 7, we now inter-compare the pros and cons, strengths and weaknesses of each of the three methods to various known, as well as further potential sources of systematic and statistical errors. Some of these errors are a strong function of distance, others are functions of color/metallicity, surface brightness and
star formation history, with each of them changing in amplitude, and possibly in sign, as a function of wavelength/bandpass. Theory can act as a guide or even offer some understanding of the distance indicators and the sense of changes resulting from certain underlying physical parameters being varied; but none of the three types of stars being discussed here have calibrations that are derived strictly from "first principles".9
Footnote 9: The only one that comes close is the TRGB.
The impact of some of the errors on these distance indicators can be controlled by optimizing the observations ahead of time (e.g., moving to the infrared to minimize reddening, or only observing in the halo, for TRGB stars), others can be dealt with after the fact (e.g., using a priori knowledge of light curve shapes and amplitudes as a function of wavelength in deriving mean magnitudes); still others (such as the total number of Cepheids to be found in a given galaxy) are simply facts to be dealt with, with those limited numbers quantified. Accurate photometry is essential in all cases. Finally, quantifying the sky backgrounds, and undertaking corrections due to contaminating stars both remain challenging for stellar photometry being undertaken in crowded fields.
The discussion below is intended to illustrate current or potential additional areas of weakness in all of the methods that have not yet been addressed, but may be important in trying to reach a goal of 1% accuracy. However, a single path will not be sufficient. As was done for the HST Key Project, the goal must be to find a solution where many paths of high accuracy converge to a mutually consistent answer.
The three distance indicators under discussion here are drawn from three distinct and identifiable stellar populations, differing in their typical masses, their evolutionary stages, their variability (or not), their intrinsic spatial distributions within their host galaxies and, last but not least, their individual metallicities (in the interior and in their atmospheres, which may or may not be the same).
### Photometry
Photometric accuracy is a key ingredient in the ultimate determination of the local distance scale; however, obtaining accurate photometry in the target fields in galaxies sufficiently distant to host SNe Ia is non-trivial. Several crowded-field software packages are available (e.g., DAOPHOT/ALLFRAME [130, 140], DoPHOT [141, 142], DOLPHOT [143]). Bounds on the systematic uncertainties can be obtained by using more than a single analysis package, as was done for the Key Project [5] with DAOPHOT/ALLFRAME and DoPHOT. A careful study of the galaxy NGC 3370 using DAOPHOT and DOLPHOT [144] reveals that the statistical errors returned by the photometry codes are 25-50% smaller than the errors measured from artificial star tests. While statistical uncertainties can be overcome by having larger samples of stars, the same is not the case for systematic errors. The latter are magnitude dependent and become larger at the faint end, at the level of \(\sim\)0.1 mag (5% in flux, and ultimately, distance). These kinds of (rather significant) systematic uncertainties are often not included in the final error budgets for \(H_{0}\).
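The artificial-star idea itself is simple to sketch: stars of known brightness are injected into the real (or, here, a simulated) image, remeasured with the same photometry procedure, and the scatter and bias of the recovered fluxes are compared with the formal errors. The toy model below uses simple aperture photometry on a synthetic crowded field and is illustrative only; it is not DAOPHOT or DOLPHOT.

```python
# Toy artificial-star test: inject stars of known flux into a crowded, noisy image
# and compare the recovered scatter with the formal (sky photon noise) error.
import numpy as np

rng = np.random.default_rng(0)
size, sky, flux_in, sigma_psf, r_ap = 64, 100.0, 2000.0, 1.5, 5.0
yy, xx = np.mgrid[0:size, 0:size]

def add_star(img, x0, y0, flux):
    img += flux * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma_psf ** 2)) \
           / (2 * np.pi * sigma_psf ** 2)

def aperture_flux(img, x0, y0):
    r2 = (xx - x0) ** 2 + (yy - y0) ** 2
    ap, annulus = r2 <= r_ap ** 2, (r2 > r_ap ** 2) & (r2 <= (2 * r_ap) ** 2)
    return img[ap].sum() - np.median(img[annulus]) * ap.sum()   # local sky subtraction

recovered = []
for _ in range(300):
    img = rng.poisson(sky, (size, size)).astype(float)          # sky + photon noise
    for _ in range(150):                                        # faint crowding stars
        add_star(img, *rng.uniform(0, size, 2), rng.uniform(50, 500))
    x0, y0 = rng.uniform(20, 44, 2)
    add_star(img, x0, y0, flux_in)                              # the artificial star
    recovered.append(aperture_flux(img, x0, y0))

formal = np.sqrt(np.pi * r_ap ** 2 * sky)                       # sky-noise-only estimate
print(f"recovered {np.mean(recovered):.0f} +/- {np.std(recovered):.0f} counts "
      f"(input {flux_in:.0f}, formal error ~ {formal:.0f})")
```

In a crowded field the recovered scatter (and bias) substantially exceeds the formal sky-noise estimate, which is the kind of effect quantified by the tests cited above.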
For a broad-based simulation of the effects of photometric errors, crowding and smoothing of the data for measurements of the TRGB see [52]. As discussed in the literature [144, 145] these differences amongst different software packages likely result from different choices of input parameters, including sky annuli, fitting radii, and PSF models. Ascertaining the 'correct' solution and completely eliminating the systematic effects may not yet be feasible, but the differences should be reflected in the overall uncertainty.
### Crowding
With increasing distance even an entire galaxy of stars will dissolve into a single, spatially-unresolved point of light. Aided by larger and larger aperture telescopes many of the galaxies, diffuse to the unaided eye, can be resolved into individual "brightest" stars, while the more numerous and fainter stars are reduced to the status of "surface brightness". Crowding is inevitable. Even from space, crowding limits our ability to detect and/or measure stars projected onto the main bodies of individual galaxies.
To first order, at a fixed distance, crowding is directly correlated with the local surface brightness. Virtually all of the continuum radiation, making up the surface brightness immediately surrounding any given stellar distance indicator, is itself ultimately due to individual stars (resolved or not). Those stars that are bright enough to be detected individually, transition to being called "sources of crowding". It is their spatially stochastic appearance, around and under the point spread function (PSF) of the stellar distance indicators, that is particularly vexing. For any given stellar distance indicator we can be informed as to the likelihood of it being crowded, but what exactly is hidden under its PSF can only be determined with any certainty using higher resolution imaging data to actually "look underneath", at which point the issue of crowding becomes moot... for that wavelength.
When attempting to deal quantitatively with crowding, at least four ways forward suggest themselves: (a) running artificial star tests on the available imaging data, (b) obtaining higher resolution imaging with the same telescope, but at shorter wavelengths, (c) obtaining higher resolution imaging with a larger telescope operating at the same wavelength, or (d) moving the application to lower surface brightness regions further out in the galaxy (e.g., [146]). (In some cases, for Cepheids for instance, moving out of the star-forming region of the galaxy may not be an option.)
### Mass
Differences in the instantaneous masses of stars, indicative of the three methods being discussed here, manifest themselves differently for each type.
The masses of Cepheids (tentatively gauged by their main sequence progenitors) are tightly tied to the periods of these variable stars. After leaving the main sequence, the high-mass O and B-type stars that are en route to becoming Cepheids evolve in the first instance at approximately constant luminosity across the Hertzsprung-Russell (HR) diagram, and into the Cepheid instability strip. Higher-mass Cepheids have lower mean densities and therefore longer periods. But the mass mapping is not unique. First-crossing Cepheids quickly traverse the instability strip and their variability ceases. They then become red supergiants, increase their luminosities and loop back to the blue, re-entering the instability strip. At this point the unique tagging of mass to luminosity is broken, and then becomes triply ambiguous when the Cepheid pivots back to the red, once again increasing its luminosity as it does so. Each one of these crossings would be characterized by its own period-luminosity relation. While theory tells us that most of the time is spent in the second crossing, the presence of first and third-crossing Cepheids introduces irreducible scatter into the composite PL relation. If measuring metallicities for individual extragalactic Cepheids is unlikely, then measuring individual masses of those same stars at tens of megaparsecs is even less likely.
Possible mass loss in the red supergiant phases of a Cepheid's life cycle potentially makes the situation more complicated. If that mass loss is a function of metallicity (or, even worse, if it is stochastic) then standardization is problematic without individual metallicities being measured for individual stars. While these issues may not have been a problem for
measuring distances at the 10% level, it remains to be demonstrated that they are not a very real problem in an era where 1% is the desired goal.
For TRGB stars the situation with regard to masses of the stars populating the red giant branch is more straightforward. As noted in Section 4, the evolution of stars up the red giant branch is completely controlled by the (detached) evolution of the degenerate helium core as it grows in mass from the "ash" raining down upon it from hydrogen burning in the shell directly above it. As more helium falls onto the core, the core contracts, the temperature rises and the energy output of the shell accelerates. The amount of mass in the envelope above the core turns out to be largely irrelevant to the evolution and/or instantaneous properties of the core and its shell; the envelope is simply a vertically inflated source of fuel feeding the shell. At a well defined mass the core ignites the triple-alpha (helium burning) process. It does so at a fixed temperature, defined by well-established laboratory nuclear physics, at a fixed radius and at a fixed luminosity of the shell, all of which are independent of the _total_ mass of the red giant star (and, _inter alia_, independent of any mass loss that may or may not occur during that ascent). Bolometrically, TRGB stars are as close to being standard candles as one can hope for [147].
JAGB stars are excellent distance indicators because of the mass-sensitive processes that are integral in down-selecting the far more numerous (and much more widely-spread-in-luminosity) hot/"blue" oxygen-rich AGB stars, thereby producing the much cooler/redder carbon-rich AGB stars within which we find the JAGB population of distance indicators (see Section 7).
### Evolutionary Status
The advanced evolutionary phases of Cepheids, as they cross the instability strip multiple times, are a strong function of their metallicities. This is especially true of their second and third crossings that are the result of "blue loops" following their evolution through the red supergiant phase outside and to the red of the instability strip. As illustrated in Figure 15, moving from the bottom, low-metallicity, panel (Z = 0.001) to the top, high-metallicity, panel (Z = 0.02), the lowest mass Cepheids especially have their blue loops shortened systematically as a function of increasing metallicity. At the highest metallicities (top panel) the blue loops are so shortened that the instability strip is totally devoid of Cepheids. Incomplete filling of the strip as a function of period could give rise to changes in the apparent width and the measured slope of the resulting PL relation, as a function of metallicity. This would be in excess of any wavelength-dependent (atmospheric) metallicity effects on the colors and luminosities of Cepheids penetrating the strip.
As already discussed above, both the TRGB and JAGB stars are evolutionarily selected by physical processes that are strictly controlled by mass.
### Spatial Distribution
As already noted, Cepheids are evolving, high-mass stars whose progenitors are young, hot O and B type stars. As such, Cepheids are still physically close to sites of star formation which include their progenitors, as well as the residual gas and dust from which they are formed. These regions are also of higher-than-average surface brightness, both because of the general star formation activity at blue wavelengths, and also in the red, where density waves, when they are actively involved in spiral structure, will collectively concentrate the low-mass red stars as well. Owing simply to their young ages, Cepheids are only to be found in gas-rich, dusty, high-surface-density regions of spiral and irregular galaxies, prone to large
and variable amounts of total line-of-sight extinction and greater-than-average amounts of crowding and confusion.
TRGB stars are found at the opposite extremes of each of the above situations that Cepheids find themselves in. TRGB stars are old, Population II stars that are denizens of the halos as well as the inner bulges of galaxies. An advantage is that they can always be found far from the disks of their parent galaxies where there is little or no gas or dust to dim/redden them. With the exception of a low level contribution of AGB stars, the dominant population of resolved stars in the halo is the TRGB population itself. If crowding is to occur it will statistically be RGB stars crowding other RGB stars. An easy calculation or a quick look at any given frame will show just how densely packed the TRGB stars are, and whether a more sparsely populated region should be selected or not. In any case, there are good reasons not to use TRGB stars as distance indicators if there is evidence (in the color-magnitude diagrams themselves) for younger populations of blue or red supergiants, or large numbers of intermediate-age AGB stars in the field. Such fields will have dust, and they will be more susceptible to crowding by their interloper populations. Rather than trying to compensate for working in a compromised field in terms of potential systematics, it is best to address all of the associated problems by simply moving farther out into the halo.
Figure 15: Three evolutionary HR diagrams for stars having masses ranging from 12 to 2.5 solar masses. The Cepheid instability strips are shown by vertically sloping blue and red lines. Increasing the metallicity (bottom to top) from Z = 0.001 to 0.01 and finally 0.02 illustrates the progressive depopulation of the short-period region of the Cepheid PL relation, with higher metallicity stars pulling to the red and systematically failing to make it back into the instability strip in their blue loop and attempted second “crossing”. Adapted by Bono from [148].
JAGB stars are an interesting population of stars that are very numerous, old enough to be smoothly distributed in space (e.g., see Figure 9 [149]), but still associated with the disk and its gas and dust. However, they do extend well beyond the most dusty regions, well out into the extended disk (possibly even defining it). Moreover, since the luminosities of JAGB stars are specifically defined to be measured in the J band (at 1.2 microns), the effects of dust are diminished with respect to the optical.
### Metallicity
Each of our three stellar distance indicators independently spans a range of (largely non-overlapping) metallicities. Cepheids are high-mass, high-metallicity, young Population I stars. JAGB stars are intermediate-mass, intermediate-age, and intermediate (interior) metallicity stars whose polluted surface abundances bear no resemblance to their immediate progenitors, nor to their main sequence star progenitor metallicities. TRGB stars are old, low-mass, Population II stars that cover a range of low metallicities.
Empirical studies of the effect of atmospheric metallicity on the magnitudes and colors of Cepheids are still actively debated ([74, 150] and references therein), not only in the optical, but now also in the near and mid-infrared [151], where expectations were that line blanketing effects, at least, would be minimal.
Theoretical studies of the effects of interior metallicity on the evolutionary tracks of Cepheids criss-crossing the instability strip indicate that the second crossing, and especially the degree to which it penetrates the allowed region of Cepheid variability before the star turns and evolves back out of the strip to the red, is indeed a function of metallicity. As illustrated in Figure 15, moving from the bottom panel (Z = 0.001) to the top panel (Z = 0.02), the lowest mass Cepheids especially have their blue loops shortened systematically as a function of increasing metallicity. Incomplete filling of the instability strip as a function of mass (as predicted to be the case) would manifest itself as being a function of period, and would of course, change the slope and zero point of that population's PL relation.
JAGB stars, by their very nature, have atmospheres that are totally dominated by carbon that has recently been formed and convected to the surface. Whether that carbon gets to the surface or not is determined by envelope physics that is controlled by the mass of the star. Any direct knowledge of the interior metallicity of a JAGB/carbon star is completely masked by the overwhelming presence of recently introduced carbon in the atmosphere.
A fine introduction to the mapping of theory to observations of TRGB stars is found in [147]. Using the linking equations found therein it can be shown that from \(-2.0<\mathrm{[Fe/H]}<-1.0\) dex it follows that \(1.39<(V-I)_{o}<2.12\), yielding \(-4.04>M_{I}>-4.01\) mag. So to within \(\pm 0.015\) mag, \(M_{I}\) is a standard candle, independent of color or metallicity over the range cited above. With that in mind, [152] have made the case for applying the TRGB method using only I-band luminosity functions without a second filter (or a CMD) being required.
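As a quick sanity check on those numbers (this is arithmetic on the quoted endpoints only, not a re-derivation from the linking equations of [147]):

```python
# Arithmetic check of the quoted TRGB numbers; the color and magnitude
# endpoints below are copied from the text above, not recomputed from [147].
vi_lo, vi_hi = 1.39, 2.12        # (V-I)_0 over -2.0 < [Fe/H] < -1.0 dex
mi_lo, mi_hi = -4.04, -4.01      # corresponding M_I endpoints [mag]

mi_mean = 0.5 * (mi_lo + mi_hi)              # -4.025 mag
half_spread = 0.5 * abs(mi_hi - mi_lo)       # 0.015 mag
slope = (mi_hi - mi_lo) / (vi_hi - vi_lo)    # ~ +0.04 mag per mag of color

print(f"M_I = {mi_mean:.3f} +/- {half_spread:.3f} mag")
print(f"implied d(M_I)/d(V-I) = {slope:+.3f} mag/mag")
```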
### Binarity
The incidence of Cepheids having companions is thought to be at least 30% [153, 154] and because of evolutionary timescale differences most of the companions are probably lower mass, bluer and certainly of lower luminosity than the parent Cepheid. How that incidence of binarity changes from galaxy to galaxy or within galaxies (as a function of metallicity, say) is unknown. The influence of companions on observed PL relations have been recently discussed in [155, 156]. It is important to treat nearby anchor galaxies (for which the binaries
may be resolved) in a self-consistent manner to those of more distant galaxies (for which it is not possible to resolve any physical binaries).
For TRGB stars, close binaries might prevent either component from ever getting to the tip because of mass transfer, while wide binaries would have the same effect as self crowding in the general field, that is depleting stars at the tip and moving the pair into the AGB region. This will blur the tip but not bias its detection [52].
One can currently only speculate as to the incidence of companions to JAGB/carbon stars. But it is likely that any surviving (orbitally distant) companions would be fainter (and certainly bluer) than these relatively bright evolved stars. Companions would only compromise the JAGB distance scale if the light contributed by them varied dramatically from galaxy to galaxy, with star formation history differences, or again, with metallicity. However, the small dispersion in the cross-comparison of JAGB distances with TRGB distances to the same galaxies [95] puts an upper limit of 0.06 mag on any bias due to the effect of variable contributions from companions to the J-band luminosity function of JAGB stars.
### Mass Loss
The masses of Cepheids can be estimated in a number of ways: (1) Using the masses of their main sequence progenitors followed into the Cepheid instability strip, (2) Using stellar pulsation modeling where one of the theoretical input parameters is the mass of the Cepheid and (3) Direct measurement of masses using Cepheids in eclipsing binary systems. For a given period the main sequence masses come in high, the stellar pulsation masses come in low (by 20-30% [157]), and the very rare examples of eclipsing binary stars containing a Cepheid confirm the lower mass estimates ([158]). Apparently Cepheids lose mass somewhere between leaving the main sequence and entering the instability strip, at least for the longest-lived second crossing. The (convective) red supergiant phase is the prime suspect, but still unproven. Without knowing the systematics and sensitivities of Cepheid mass loss it can only be speculated as to how much random or systematic noise this one effect is injecting into the observed PL relation as a function of period, age, color and/or metallicity.
JAGB stars have extended convective envelopes that are unstable to pulsations and in the extreme their redder progeny can develop winds, produce dust and lose mass. A red color cut on the JAGB selection eliminates the reddest stars whose lifetimes are apparently very short, given their small numbers compared to the bluer JAGB color-selected population.
TRGB stars are known to lose mass between the tip and the horizontal branch after the helium flash; but that is after the fact. The TRGB progenitor stars climbing the RGB may or may not be losing mass, but as noted above, the mass of the envelope has little or no effect on the instantaneous or terminal luminosity of these stars.
### Boundary Conditions
For Cepheids, the question of what determines the population of the instability strip remains uncertain and is seldom addressed. Take, for example, the underlying (internal) structure of the Cepheid instability strip, and how the triggering of variability (on or off) is externally constrained by the strip's hot/blue and cool/red edges. What is the physics governing the required depth of the He II ionization layer that turns on the variability as stars enter from the blue? What is the other mechanism that shuts down that same variability as stars exit the instability strip going to the red? And, how are these independent physical mechanisms
controlled by metallicity, surface gravity and intrinsic temperature, say? (Not to mention helium abundance and semi-convection.) If these constraints vary say due to metallicity from galaxy to galaxy, or within galaxies, then the ridge line of the PL relation will be affected in slope and zero point, as will be the entire color/temperature width of the PL relation in which Cepheids might be discovered. See, for example [148].
Boundary conditions controlling the JAGB and/or TRGB phenomena as a whole have already been discussed under their sensitivity to mass, metallicity and evolutionary status.
### Correlated Variance
The simplest of mathematical considerations requires that for every parameter being solved for there needs to be an independent observation.
For Cepheids we are essentially trying to measure a standardizable intrinsic luminosity in some bandpass, by realizing that the dominant underlying physical parameters controlling the luminosity are temperature, radius and a wavelength-dependent bolometric correction. Interfacing with observations first requires correcting each Cepheid individually, at each wavelength being observed, for total line-of-sight interstellar extinction (in the Milky Way and in the host galaxy). If the atmospheric magnitudes and colors are differentially affected by line blanketing then reddening will be covariant with metallicity. Similarly, if metallicity calibrations are based on comparing the distance moduli of galaxies with different mean metallicities then any such calibration will have metallicity and distances being covariant [74].
Figure 16: Theoretical predictions for the sensitivity of the slopes and intercepts of both the hot (blue) and cool (red) limit of the Cepheid instability strip resulting from changes in the helium abundance and the metallicity (within a single panel) and between adopting canonical and non-canonical mass-luminosity models in the upper and lower panels, respectively. Adapted from [148].
For TRGB stars the main covariance is between luminosity and color (i.e., metallicity) of the tip stars. The correlation has a positive slope (increasing luminosity with increasing color) in the infrared, and a negative slope in the visual and blue. The crossing point of zero slope (i.e., no dependence of magnitude on color) occurs at or slightly redward of the I-band filter near 8100 Å.
Similarly the J band is where the slope of the JAGB/carbon star absolute magnitude is found to be independent of color (before winds develop and dust forms in the very reddest stars, which are easily excluded by very red color cuts).
### Mean Magnitudes
These final two categories are more on the technical side, and have been left for last because they are well known and their solutions are easily stated and quantified, if not all that easily achieved in practice.
Classical Cepheids can only be uniquely identified by their characteristic asymmetric saw-tooth light curves in the optical, combined with their periods, which can stretch from 2 to more than 100 days. Because their amplitudes monotonically decline with increasing wavelength, Cepheids [159] become increasingly harder to detect and characterize in the infrared alone. On the other hand, obtaining mean magnitudes good to a given statistical error requires fewer observations in the infrared (where random sampling is always closer to the mean than are the same phase samples in the optical) than in the blue, for example. Be that as it may, several high-precision mean magnitudes obtained for at least two different wavelengths are needed for the Cepheids when correcting their apparent magnitudes for wavelength-dependent interstellar extinction. Four bandpasses were found to be necessary for the SH0ES [8] program of discovery and measurement.
The TRGB magnitude used in this method's distance determination is the magnitude of the discontinuity of the RGB luminosity function corrected for metallicity in all bands except for one, the I band where the slope of the metallicity-color relation changes sign from sloping down to sloping up in moving from the blue to red, effectively crossing zero in the I band around 8100 Å. Sufficient numbers of TRGB stars are required to fill the luminosity function up to the discontinuity and high-precision data is a benefit (see [52]). "All that is required" is good areal coverage of the halo and sufficiently long exposures with any given telescope. In principle only a single-epoch exposure in the I band is required to produce the needed apparent luminosity function. In practice two bands are generally required.
The marginalized luminosity function of JAGB stars is optimally undertaken in the J band where the color sensitivity of these color-selected carbon stars is flat with color. While all of these stars are thought to be variables (of one sort or another) the JAGB population is not selected on light curve shapes or any type of variability. Variability is simply another form of random noise that can be averaged over or simply accepted without any systematic penalty.
### Optimal Bandpasses
Measurement of a fiducial TRGB magnitude is optimally undertaken in the I band where the run of tip magnitude with color/metallicity is flattest. Using the tip as a distance indicator at longer or shorter wavelengths requires high-precision colors in order to take out the slope of the tip with color (rectification) without introducing additional noise.
Cepheids require at least two (and optimally three) sets of light curves in order to have time-averaged magnitudes that span a number of different wavelengths. These data points
are then needed, in the first instance, to correct for total line-of-sight extinction, and to facilitate the application of metallicity corrections. Two bands in the optical provide good leverage on the reddening, and an additional band as far into the near infrared as possible, is thought to be less influenced by metallicity (but see [150]) and is certainly less impacted by extinction.
The JAGB luminosity function is found to be optimally measured in the J band where the color-selected JAGB candidates are found to show minimal correlation with color. To date no calibrations have been proposed at any other wavelengths, shorter or longer, given that J-band sensitive instruments are now available on the ground and in space (on both _JWST_ and _HST_).
### Comments
The various challenges facing the three astrophysical/stellar distance indicators at the foundation of the local distance scale, as discussed individually in some detail above, are summarized in Table 1 below. The three distance indicators are given across the top of the table and each of the topics discussed above are listed from top to bottom. Brief notes describe the challenges facing that particular distance indicator in that particular category, while the details can be found in the corresponding subsections above.
Of the three methods, Cepheids are the most complex and challenging. This is due to their complicated evolutionary status, pulsation properties, mass loss uncertainties, the interplay of interior and atmospheric metallicities across many wavelengths, and finally the vexing issue of crowding that they face in the (dusty) high-surface-brightness inner disks of galaxies where they are confined to be located. Because of their intrinsic simplicity, TRGB stars and JAGB stars are closer to being standard candles when observed in the I and J bands, and when purposefully measured in the halos and in the radially extended disks of their host galaxies, respectively. _JWST_ observations promise to improve the measurements, as well as to further constrain systematic effects, for all three methods.
We thank our many students, postdocs, and collaborators over the last 40 years, all of whom contributed to much of the work described here, with particular thanks to Taylor Hoyt, In Sung Jang, Abigail Lee, Andy Monson, Kayla Owens, and Eric Persson. We thank Taylor Hoyt for Figure 4, and for discussions of the \(H_{0}\) tension. We also thank the University of Chicago and the Carnegie Institution for Science for their support of this research.
This research is based in part on observations made with the NASA/ESA Hubble Space Telescope obtained from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. This work is also based in part on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127. These observations are associated with HST programs #12880 and #14149 and with JWST program #1995. Financial support for this work was provided in part by NASA through HST program #16126 and JWST program #1995.
Table 1: Comparison of Issues Affecting the Three Primary Distance Indicators
(see Section 11.2 for details)
\begin{tabular}{l l l}
 & TRGB Stars & Cepheid PL Relation & J-Branch AGB Stars \\ \hline
Photometry & Dolphot and Daophot disagree & Dolphot and Daophot disagree & Dolphot and Daophot disagree \\
Anchors & Small number of anchors & Small number of anchors & Small number of anchors \\
Crowding & Clear in Halo & Unavoidable in HSB Disk & Moderate in Extended Disk \\
Mass & Core mass dominant & "Cepheid Mass Discrepancy"; ambiguous estimates & Guided by theory \\
Evolutionary Status & Well understood from first principles & Multiple crossings; non-unique & 3rd Dredge-up and HBB \\
Spatial Distribution & Only observed in halo; low surface brightness & Co-spatial with dust; high surface brightness & Lower dust and surface brightness \\
Metallicity & Color calibrated & Still very controversial in sign, amplitude and wavelength dependence & Interior metallicity masked by surface carbon \\
Binarity & Unknown & 30\% lower limit & Unknown \\
Mass Loss & None detected; none suspected & Considerable mass loss after main sequence & Sample cut off to the red \\
Boundary Conditions & Red cut-off imposed & Convection and metallicity control blue and red edges & Narrow mass range for carbon star production \\
Covariance & Color tracks metallicity; flat with color in I band & Distance and metallicity & Flat with color in J band \\
Mean Magnitudes & Single epoch; two bands & 12+ epochs; three bands & Single epoch; two bands \\
Optimal Wavelengths & I band & VI + Near-IR (I or H) & J band \\
\end{tabular} |
2309.04260 | High-resolution APEX/LAsMA $^{12}$CO and $^{13}$CO (3-2) observation of
the G333 giant molecular cloud complex : II. Survival and gravitational
collapse of dense gas structures under feedback | We investigate the physical properties of gas structures under feedback in
the G333 complex using data of the 13CO (3-2) line in the LAsMA observation. We
used the Dendrogram algorithm to identify molecular gas structures based on the
integrated intensity map of the 13CO (3-2) emission, and extracted the average
spectra of all structures to investigate their velocity components and gas
kinematics. We derive the column density ratios between different transitions
of the 13CO emission pixel-by-pixel, and find the peak values N(2-1)/N(1-0) ~
0.5, N(3-2)/N(1-0) ~ 0.3, N(3-2)/N(2-1) ~ 0.5. These ratios can also be roughly
predicted by RADEX for an average H$_2$ volume density of ~ 4.2 * 10$^3$
cm$^{-3}$. A classical virial analysis does not reflect the true physical state
of the identified structures, and we find that external pressure from the
ambient cloud plays an important role in confining the observed gas structures.
For high column density structures, velocity dispersion and density show a
clear correlation, while for low column density structures they do not,
indicating the contribution of gravitational collapse to the velocity
dispersion. For both leaf and branch structures, $\sigma-N*R$ always has a
stronger correlation compared to $\sigma-N$ and $\sigma-R$. The scaling
relations are stronger, and have steeper slopes when considering only
self-gravitating structures, which are the structures most closely associated
with the Heyer-relation. Although the feedback disrupting the molecular clouds
will break up the original cloud complex, the substructures of the original
complex can be reorganized into new gravitationally governed configurations
around new gravitational centers. This process is accompanied by structural
destruction and generation, and changes in gravitational centers, but
gravitational collapse is always ongoing. | J. W. Zhou, F. Wyrowski, S. Neupane, I. Barlach Christensen, K. M. Menten, S. H. Li, T. Liu | 2023-09-08T11:02:07Z | http://arxiv.org/abs/2309.04260v1 | High-resolution APEX/LAsMA \({}^{12}\)CO and \({}^{13}\)CO (3-2) observation of the G333 giant molecular cloud complex : II. Survival and gravitational collapse of dense gas structures under feedback
###### Abstract
Context:Feedback from young massive stars has an important impact on the star formation potential of their parental molecular clouds.
Aims:We investigate the physical properties of gas structures under feedback in the G333 complex using data of the \({}^{13}\)CO \(J=3-2\) line observed with the LAsMA heterodyne camera on the APEX telescope.
Methods:We used the Dendrogram algorithm to identify molecular gas structures based on the integrated intensity map of the \({}^{13}\)CO (3\(-\)2) emission, and extracted the average spectra of all structures to investigate their velocity components and gas kinematics.
Results:We derive the column density ratios between different transitions of the \({}^{13}\)CO emission pixel-by-pixel, and find the peak values \(N_{2-1}/N_{1-0}\approx 0.5\), \(N_{3-2}/N_{1-0}\approx 0.3\), \(N_{3-2}/N_{2-1}\approx 0.5\). These ratios can also be roughly predicted by the non-LTE molecular radiative transfer code RADEX for an average H\({}_{2}\) volume density of \(\sim 4.2\times 10^{3}\) cm\({}^{-3}\). A classical virial analysis does not reflect the true physical state of the identified structures, and we find that external pressure from the ambient cloud plays an important role in confining the observed gas structures. For high column density structures, velocity dispersion and density show a clear correlation, while for low column density structures they do not, indicating the contribution of gravitational collapse to the velocity dispersion. Branch structures show a more significant correlation between 8 um surface brightness and velocity dispersion than leaf structures, implying that feedback has a greater impact on large-scale structures. For both leaf and branch structures, \(\sigma-N*R\) always has a stronger correlation compared to \(\sigma-N\) and \(\sigma-R\). The scaling relations are stronger, and have steeper slopes when considering only self-gravitating structures, which are the structures most closely associated with the Heyer-relation.
Conclusions:Although the feedback disrupting the molecular clouds will break up the original cloud complex, the substructures of the original complex can be reorganized into new gravitationally governed configurations around new gravitational centers. This process is accompanied by structural destruction and generation, and changes in gravitational centers, but gravitational collapse is always ongoing.
## 1 Introduction
High-mass stars (M\(\gtrsim\)8 M\({}_{\odot}\)) have a profound impact on the evolution of the interstellar medium (ISM). Throughout their short lifetimes (\(\sim\)10\({}^{6}\) yr), radiation-driven stellar winds from high-mass stars create HII regions in the surrounding giant molecular clouds (GMCs) (Zinnecker & Yorke, 2007; Molinari et al., 2014; Motte et al., 2018). High-mass stars end their lives in the form of supernovae (SNe) whose explosions can release \(\sim\)10\({}^{51}\) erg of energy near-instantaneously. Shocks from expanding HII regions and supernova remnants (SNRs) can accelerate and heat their surrounding gas, and add turbulence to the gas. In simulations, massive stellar feedback, including ionizing radiation, stellar winds and supernovae (Matzner, 2002; Dale et al., 2012; Rogers & Pittard, 2013; Dale et al., 2014; Rahner et al., 2017; Smith et al., 2018; Lewis et al., 2023), can suppress star formation and destroy the natal cloud. However, whether the stellar feedback promotes or suppresses star formation remains controversial.
W49A is one of the most massive and luminous young star-forming regions in the Galaxy. As presented in Rugel et al. (2019), it is more likely that only limited parts of W49A were affected by feedback from the central stellar cluster, while stars in the outer parts of W49A formed independently. Moreover, all feedback models used in Rugel et al. (2019) predict re-collapse of the shell after the first star formation event, which means that feedback of the first formed cluster is not strong enough to disperse the cloud. Previous work on the G305 region observed with the Large APEX sub-Millimeter Array (LAsMA) 7 beam receiver on the Atacama Pathfinder Experiment 12 meter submillimeter telescope (APEX) found that strong stellar winds drive turbulence in the G305 GMC in which feedback has triggered star formation by the collect and collapse mechanism (Mazumdar et al., 2021a, 2021b). The dense molecular gas structures inside the cloud serve as star-forming sites and their physical
states directly determine the star formation capability of the molecular cloud under feedback. A basic question is how the dense gas structures survive and maintain star formation activity in a strong feedback environment, which depends on the relative strength between their gravity and turbulence.
The relative importance of turbulence and gravity in massive star-forming regions is a long and widely debated topic; distinct views lead to different physical pictures of massive star formation, such as the turbulent-core model (McKee & Tan, 2003; Krumholz et al., 2007), the competitive-accretion model (Bonnell et al., 1997, 2001), the inertial-inflow model (Padoan et al., 2020), and the global hierarchical collapse model (Vazquez-Semadeni et al., 2009; Ballesteros-Paredes et al., 2011a; Hartmann et al., 2012; Vazquez-Semadeni et al., 2017, 2019). Larson's laws claim that in molecular clouds the velocity dispersion \(\sigma\) scales with the scale \(R\), and that molecular clouds are approximately in virial equilibrium, with a mostly uniform column density. The Larson-relation (\(\sigma_{e}\propto R^{0.5}\)) is generally used to emphasize the importance of turbulence in molecular clouds, where turbulence acts to sustain the clouds against gravitational collapse (Larson, 1981; Solomon et al., 1987; Heyer & Brunt, 2004; Mac Low & Klessen, 2004; McKee & Ostriker, 2007; Hennebelle & Falgarone, 2012). Heyer et al. (2009) generalized the Larson-relation by extending the Larson-ratio \(L\equiv\sigma_{e}/R^{0.5}\) with the surface density \(\Sigma\) of Galactic GMCs, i.e. \(\sigma_{e}/R^{0.5}\propto\Sigma^{0.5}\). Subsequently, the Heyer-relation has been used to emphasize the importance of gravity in molecular clouds (Ballesteros-Paredes et al., 2011b; Traficante et al., 2018a, 2018b; Ballesteros-Paredes et al., 2018; Vazquez-Semadeni et al., 2019; Ballesteros-Paredes et al., 2020). Especially in the high-column density portions of star-forming regions, such as clumps or cores, the Heyer-relation always performs better than the Larson-relation (Ballesteros-Paredes et al., 2011b; Camacho et al., 2016; Traficante et al., 2018), suggesting strong gravity at relatively small scales in molecular clouds. Ibanez-Mejia et al. (2016) have shown that the Heyer-relation cannot be reproduced without self-gravity in simulations of the ISM with supernova-driven turbulence. In contrast, purely supernova-driven turbulence in the ISM generates the Larson-relation.
Regarding the explanation of the Heyer-relation, Heyer et al. (2009) claimed that it is consistent with the clouds being in virial equilibrium, as it follows directly from the condition \(2E_{\rm k}=E_{\rm g}\), where \(E_{\rm k}=M\sigma^{2}/2\) and \(E_{\rm g}=3GM^{2}/5R\). Ballesteros-Paredes et al. (2011b) further pointed out that the scaling is also consistent with the clouds undergoing free-fall, in which case \(E_{\rm k}=|E_{\rm g}|\). However, the differences between the effects of free-fall and virial equilibrium in the \(\sigma_{e}/R^{0.5}\) vs. \(\Sigma\) diagram are smaller than the typical uncertainty of the observational data (Ballesteros-Paredes et al., 2011b), and thus difficult to distinguish. Both explanations involve only gravitational and kinetic energy, which may be a workable approximation for relatively isolated molecular clouds, but is often too simplistic for substructures inside a molecular cloud, especially for a cloud affected by feedback, such as our target G333. When the substructures can self-gravitationally collapse, they may decouple from the surrounding environment (Peretto et al., 2023). If not, the exchange of energy with the surrounding environment will break the conversion between gravitational potential energy, \({\rm E}_{\rm g}\), and kinetic energy, \({\rm E}_{\rm k}\), of the structures, and thus violate the Heyer-relation.
In Zhou et al. (2023), we found in the G333 complex that the larger scale inflow is driven by the larger scale cloud structure, indicating hierarchical structure in the GMC and gas inflow from large to small scales. The large-scale gas inflow is driven by gravity, implying that the molecular clouds in the G333 complex may be in a state of global gravitational collapse. However, the broken morphology of some very infrared bright structures in the G333 complex also indicates that feedback is disrupting star-forming regions. Here we are going to address the question of how the dense molecular structures survive and maintain the gravitational collapse state in a strong feedback environment.
## 2 Data
### LAsMA data
The observations and data reduction have been described in detail in Zhou et al. (2023). We mapped a 3.4\({}^{\circ}\times 1.2^{\circ}\) area centered at \((l,b)=(332.33^{\circ},-0.29^{\circ})\) using the APEX telescope (Gusten et al., 2006). 1 The 7 pixel Large APEX sub-Millimeter Array (LAsMA) receiver was used to observe the \(J=3-2\) transitions of \({}^{12}\)CO (\(\nu_{\rm rest}\sim 345.796\) GHz) and \({}^{13}\)CO (\(\nu_{\rm rest}\sim 330.588\) GHz) simultaneously. The local oscillator frequency was set at 338.190 GHz in order to avoid contamination of the \({}^{13}\)CO (3\(-2\)) spectra due to bright \({}^{12}\)CO (3\(-2\)) emission from the image side band. Observations were performed in a position switching on-the-fly (OTF) mode. The data were calibrated using a three load chopper wheel method, which is an extension of the "standard" method used for millimeter observations (Ulich & Haas, 1976) to calibrate the data in the corrected antenna temperature \(T_{A}^{*}\) scale. The data were reduced using the GILDAS package2. The final data cubes have a velocity resolution of 0.25 km s\({}^{-1}\), an angular resolution of 19.5\({}^{\prime\prime}\) and a pixel size of 6\({}^{\prime\prime}\). A beam efficiency value \(\eta_{mb}=0.71\)(Mazumdar et al., 2021) was used to convert intensities from the \(T_{A}^{*}\) scale to main beam brightness temperatures, \(T_{mb}\).
Footnote 1: This publication is based on data acquired with the Atacama Pathfinder EXperiment (APEX). APEX is a collaboration between the Max-Planck-Institut für Radioastronomie, the European Southern Observatory and the Onsala Space Observatory.
Footnote 2: [http://www.iram.fr/IRAMFR/GILDAS](http://www.iram.fr/IRAMFR/GILDAS)
### Archival data
To allow for column density estimates using different \({}^{13}\)CO transitions, we also collect \({}^{13}\)CO (1\(-\)0) data from the Mopra-CO survey (Burton et al., 2013) and \({}^{13}\)CO (2\(-\)1) from the SEDIGISM survey (Schuller et al., 2021). However, the two surveys only cover Galactic latitudes within \(\pm 0.5^{\circ}\), while our LAsMA observations cover the latitude range of \(\sim-0.29\pm 0.6^{\circ}\). Thus we only consider the overlap region of the three surveys. The data were smoothed to a common angular resolution of \(\sim\)35\({}^{\prime\prime}\) and a velocity resolution of \(\sim\)0.25 km s\({}^{-1}\).
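As an illustration, bringing one of the archival cubes onto this common grid could look roughly as follows with the spectral_cube and radio_beam packages; the file names are placeholders, a rest frequency may need to be supplied if it is missing from the header, and this is a sketch rather than the reduction script actually used.

```python
import numpy as np
import astropy.units as u
from radio_beam import Beam
from spectral_cube import SpectralCube

# Placeholder file name for an archival 13CO (2-1) cube
cube = SpectralCube.read("G333_13CO21.fits")
# Convert the spectral axis to LSR velocity (add rest_value=... if the header
# does not already carry a rest frequency)
cube = cube.with_spectral_unit(u.km / u.s, velocity_convention="radio")

# Common angular resolution of ~35"
cube_35 = cube.convolve_to(Beam(35 * u.arcsec))

# Common velocity grid: 0.25 km/s channels over the G333 velocity range
new_axis = np.arange(-60.0, -35.0 + 0.25, 0.25) * u.km / u.s
cube_common = cube_35.spectral_interpolate(new_axis)
cube_common.write("G333_13CO21_35as_0p25kms.fits", overwrite=True)
```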
The observed region was covered in the infrared range by the Galactic Legacy Infrared Midplane Survey (GLIMPSE, Benjamin et al., 2003). The GLIMPSE images, obtained with the Spitzer Infrared Array Camera (IRAC) at 4.5 and 8.0 \(\mu\)m, were retrieved from the Spitzer archive. The angular resolution of the images in the IRAC bands is \(\sim 2^{\prime\prime}\). We also used 870 \(\mu\)m continuum data from the APEX Telescope Survey of the Galaxy (ATLASGAL, Schuller et al., 2009) combined with lower resolution data from the Planck spacecraft, which are sensitive to a wide range of spatial scales at a resolution of \(\sim 21^{\prime\prime}\) (Csengeri et al., 2016). Furthermore, we use Hi-GAL data (Molinari et al., 2010) processed using the PPMAP procedure (Marsh et al., 2015), which provides column density and dust temperature maps with a resolution of \(\sim 12^{\prime\prime}\) (the maps are available online3, Marsh et al., 2017).
## 3 Results
### Dendrogram structures
We identify dense gas structures using the Dendrogram algorithm. As described in Rosolowsky et al. (2008), the Dendrogram algorithm decomposes intensity data (a position-position map or a position-position-velocity cube) into hierarchical structures called leaves, branches and trunks. The relationship between those structures is shown in Fig.1. Trunks are the largest continuous structures at the bottom of hierarchical structures ("bases"), but, by definition, they can also be isolated leaves ("i-leaves") without any parent structure. Thus there are two kinds of "trunks", they are called "bases" and "i-leaves" in this work. Clustered leaves ("c-leaves") are defined as small-scale, bright structures at the top of the tree that do not decompose into further substructures, they are the smallest structures inside "branches". Branches are the relatively large scale structures in the tree, and they can be broken down into substructures. Between "bases" and "c-leaves", all hierarchical substructures are "branches", thus branches can span a wide range of scales. When we treat "bases" as the largest branches, and combine c-leaves and i-leaves, then there are only two kinds of structures (i.e. leaves and branches). However, in some cases, it is necessary to differentiate between i-leaves and c-leaves. In general, c-leaves are concentrated in regions of relatively high column density, i-leaves are low column density structures distributed at the periphery, as shown in Fig. 2. There are no definite limits to the size of the structures at different levels. The size of a leaf structure in a low column density region may be larger than a branch structure in a high column density region. As shown in Fig.3(a), there is considerable overlap of the scales between leaf and branch structures. In general, branches are larger scale structures than leaves. The physical properties of the overlapping parts of leaf and branch structures may be similar, we should remember the mixing in the discussion of the scaling relations below.
Using the _astrodendro_ package 4, there are three major input parameters for the Dendrogram algorithm: _min_value_ for the minimum value to be considered in the dataset, _min_delta_ for the minimum height required for a leaf to be considered as an independent entity, and _min_npix_ for the minimum area of a structure. From these parameters, we can see that the algorithm does not consider the velocity component and velocity range of the identified structure carefully. The structure is mainly identified according to the intensity, thus the velocity division of a structure is only a result of its intensity division. There is no criterion for a continuous velocity range across a dense structure. However, the velocity range of a structure is crucial for the estimation of its fundamental physical quantities, such as velocity dispersion and mass. Moreover, strict differentiation of the velocity components should be based on the spectral line profiles rather than the intensity thresholds in the algorithm. In this work, instead of identifying structures in the PPV cube, we first identify the intensity peaks on the integrated intensity (Moment 0) map of \({}^{13}\)CO (3\(-\)2) emission, and then extract the average spectrum of each structure to investigate their velocity components and gas kinematics.
Footnote 4: [https://dendrograms.readthedocs.io/en/stable/index.html](https://dendrograms.readthedocs.io/en/stable/index.html)
For the Moment 0 map, a 5\(\sigma\) threshold has been set, so we therefore only require the smallest area of a structure to be larger than one beam and do not set any other parameters, thereby reducing the dependence of structure identification on the algorithm parameters. In Fig. 4, the structures identified by the Dendrogram algorithm correspond well to the peaks on the integrated intensity maps. In order to retain as many structures as possible, the parameter _min_npix_ was set to one beam, because the hierarchical structures in Dendrogram mean that a leaf structure under strict parameter settings can be a branch structure under loose parameter settings. Moreover, the average spectra fitting described below will also further screen the structures, allowing to exclude structures with poorly defined line profiles.
The algorithm approximates the morphology of each structure as an ellipse, which is used in this work. We do not use the mask output by the algorithm because different parameter settings around the intensity peak will give different masks. In the Dendrogram algorithm, the long and short axes of an ellipse \(a\) and \(b\) are the rms sizes (second moments) of the intensity distribution along the two spatial dimensions. However, as shown in Fig.5, \(a\) and \(b\) will give a smaller ellipse, compared to the size of the identified structure. Thus we tried to enlarge the ellipse by 2 and 3 times, and found that multiplying a factor of 2 is appropriate, similar to a factor of 1.91 suggested in Solomon et al. (1987); Rosolowsky and Leroy (2006). Then the effective physical radius of an ellipse is \(R_{\mathrm{eff}}=\sqrt{2a\times 2b}*d\), here \(d=3.6\) kpc for the distance to the G333 complex (Lockman, 1979; Bains et al., 2006).
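For reference, the structure extraction described above could be sketched with astrodendro roughly as follows; the input file name is a placeholder, the beam area assumes a Gaussian beam, and the metadata keywords and catalogue column names should be checked against the astrodendro documentation rather than read as the script actually used here.

```python
import numpy as np
import astropy.units as u
from astrodendro import Dendrogram, pp_catalog

# Moment-0 map of 13CO (3-2) over [-60, -35] km/s (placeholder file name);
# a 5-sigma clip has already been applied when producing this map.
mom0 = np.load("G333_13CO32_mom0.npy")

pix = 6.0 * u.arcsec                   # pixel size
beam_fwhm = 19.5 * u.arcsec            # LAsMA beam
pix_per_beam = 1.133 * float(beam_fwhm / pix) ** 2   # Gaussian beam area in pixels

# Only the minimum area (one beam) is constrained, as described above
d = Dendrogram.compute(mom0, min_value=0.0, min_npix=int(round(pix_per_beam)))

meta = {"data_unit": u.K * u.km / u.s, "spatial_scale": pix,
        "beam_major": beam_fwhm, "beam_minor": beam_fwhm}
cat = pp_catalog(d, meta)

# Effective radius R_eff = sqrt(2a x 2b) * d, with a, b the rms sizes of the
# intensity distribution and d = 3.6 kpc
dist = 3.6e3 * u.pc
a = cat["major_sigma"].quantity.to(u.rad).value * dist
b = cat["minor_sigma"].quantity.to(u.rad).value * dist
r_eff = np.sqrt((2 * a) * (2 * b))
```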
### Velocity components
Based on the Moment 0 map of the \({}^{13}\)CO (3\(-\)2) emission, 3608 structures are extracted by the Dendrogram algorithm, consisting of 1626 clustered leaves, 1367 branches and 615 trunks (486 isolated leaves and 129 bases). In the discussion below, we put bases into branches. We extract and fit the averaged spectra of 3608 structures individually using the fully automated Gaussian decomposer GAUSSPY+ (Lindner et al., 2015; Riener et al., 2019). The parameter settings of the decomposition are the same as in Zhou et al. (2023). According to the line profiles, all averaged spectra are divided into three categories:
1. Structures with single velocity components regarded as independent structures (type1, single, 65%, Fig.6(a));
2. Structures with more than one peak, which are separated (type2, separated, 19%, Fig.6(b));
3. Structures with more than one peak, blended together (type3, blended, 16%, Fig.6(c) and (d)).
Spectra averaged across regions which show a single peak in their line profiles probably represent independent structures. From the line profile, we can also determine the complete velocity range of a structure.
Figure 1: Hierarchical structures identified by the Dendrogram algorithm. A segment of the Dendrogram tree of sub-region S3 in Fig.2(c) is used to illustrate the structure types output by the Dendrogram algorithm.
In order to ensure that other physical quantities (such as column density, temperature) match with the fitted line-width, as shown in Fig.5, we take the velocity range of each structure or each velocity component as [v\({}_{c}\)-FWHM, v\({}_{c}\)+FWHM].
Figure 2: Different kinds of structures traced by \({}^{13}\)CO (3\(-\)2) emission classified in Sec.3.2. (a) Type1 (single velocity component) leaf structures: (b) Type2 (separated velocity components) leaf structures; (c) Type3 (blended velocity components) leaf structures. Orange boxes mark the subregions divided in Zhou et al. (2023); (d) Type1 (orange) and type2 (magenta) branch structures. In panels (a), (b) and (c), orange and red ellipses represent i-leaves and c-leaves, respectively.
This velocity range is necessary to calculate the physical quantities for structures with more than one velocity component.
For type3 structures with significant overlapping velocity components, complete decomposition cannot easily be obtained, thus the decomposition uncertainties directly affect the reliability of the subsequent analysis. In this work, we focus on the structures with independent line profiles (type1 and type2). As shown in Fig.2, a high-column density structure does not necessarily imply a complex line profile, thus discarding type3 structures will not produce significant sample bias. It is also important to emphasize that for any analysis involving continuum emission without velocity information, only type1 structures will be considered, and sub-regions 5 and 7 marked in Fig.2(c) will also be excluded due to the heavy blending of velocity components described in Zhou et al. (2023).
For a type2 structure, we determine the physical size scales of different velocity components based on their velocity ranges. In Fig.6(b), the total velocity range for deriving the Moment 0 map of the structure is [-60, -35] km s\({}^{-1}\), the area of a type2 structure on Moment 0 map is \(s\) and includes \(n\) pixels. For two velocity components in a type2 structure, we can also obtain their Moment 0 maps \(m01\) and \(m02\) in their velocity ranges [v\({}_{c1}\)-FWHM\({}_{1}\),v\({}_{c1}\)+FWHM\({}_{1}\)] and [v\({}_{c2}\)-FWHM\({}_{2}\),v\({}_{c2}\)+FWHM\({}_{2}\)], respectively, \(m01\) and \(m02\) contain \(n1\) and \(n2\) pixels, then their area are \((n1/n)*s\) and \((n2/n)*s\), which can be used to estimate the physical size scales of the two velocity components.
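A minimal numpy sketch of this bookkeeping is given below; the array and variable names are placeholders for the quantities described above, and a simple positive-emission criterion stands in for whatever masking is applied to the component Moment 0 maps.

```python
import numpy as np

def component_area(cube, vel, footprint, v_c, fwhm, pix_area):
    """Approximate area of one velocity component of a type2 structure.

    cube      : brightness temperatures with axes (velocity, y, x)
    vel       : velocity axis [km/s]
    footprint : boolean mask of the structure on the full Moment-0 map (n pixels)
    pix_area  : physical area of one pixel
    """
    dv = vel[1] - vel[0]
    in_win = (vel >= v_c - fwhm) & (vel <= v_c + fwhm)
    mom0_comp = cube[in_win].sum(axis=0) * dv            # Moment 0 of this component
    n = np.count_nonzero(footprint)                      # pixels of the full structure
    n_i = np.count_nonzero(footprint & (mom0_comp > 0))  # pixels of the component
    s = n * pix_area                                     # area of the full structure
    return (n_i / n) * s                                 # (n_i/n) * s, as in the text
```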
Generally, the elliptic approximation for the identified structures is good for small-scale leaf structures, but for some large-scale branch structures, due to their complex morphology, it cannot be satisfactory. We therefore exclude branch structures with complex morphology, if the proportion of empty pixels within the effective ellipse of each structure on the Moment 0 map is larger than 1/3. Another reason to exclude these morphologically complex structures is that they may not give good effective radius, velocity dispersion and density estimates. In Fig.2(d), the remaining branch structures correspond well to the background integrated intensity. For each structure, its velocity range and effective ellipse are used to extract the basic physical quantities based on the column density, temperature, optical depth cubes derived from the LTE analysis in Sec.3.3.3.
Branch structures are often contained within other branch structures. Some branch structures have similar central coordinates, scales, and morphology, thus they should be regarded as the same structure to avoid being repeatedly counted. Two branch structures with areas \(s1\) and \(s2\) (\(s1>s2\)) are considered repetitive if they meet the conditions: 1. The distance between their central coordinates is less than 1 beam size; 2. \((s1-s2)/s2<1/3\). The two clustering conditions group similar branch structures together. Each cluster may contain multiple structures, and we keep only one of them in the subsequent analysis. This step excludes nearly half of the branch structures. Thus the duplication of branch structures identified by the Dendrogram algorithm is a significant issue that must be considered before analyzing the identified structures.
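The two conditions translate into a short deduplication routine; the catalogue field names and the choice to keep the largest member of each group are illustrative assumptions (the text does not specify which member is retained).

```python
import numpy as np

def deduplicate_branches(branches, beam=19.5):
    """Keep one representative of each group of near-identical branch structures.

    Each entry of `branches` is a dict with centre coordinates 'x', 'y' (arcsec)
    and an 'area'; these field names are placeholders for the catalogue above.
    """
    kept = []
    for b in sorted(branches, key=lambda s: s["area"], reverse=True):
        is_duplicate = False
        for k in kept:
            s1, s2 = max(k["area"], b["area"]), min(k["area"], b["area"])
            sep = np.hypot(k["x"] - b["x"], k["y"] - b["y"])
            # condition 1: centres closer than one beam
            # condition 2: fractional area difference (s1 - s2)/s2 < 1/3
            if sep < beam and (s1 - s2) / s2 < 1.0 / 3.0:
                is_duplicate = True
                break
        if not is_duplicate:
            kept.append(b)
    return kept
```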
### Column density
In this section, we derive the column density of the entire observed field using different methods to find the best estimates for the masses of the identified structures.
#### 3.3.1 Continuum emission
Fig.7(a) and (b) present the dust temperature and column density maps derived from the Hi-GAL data using the PPMAP procedure (Marsh et al. 2015). Since there are some missing values on the PPMAP column density map, we also produced the H\({}_{2}\) column density map using ATLASGAL+Planck 870 \(\mu\)m data following the formalism of Kauffmann et al. (2008):
\[N_{\rm H_{2}} = 2.02\cdot 10^{20}\ {\rm cm}^{-2}\left({\rm e}^{1.439(\lambda/{\rm mm})^{-1}(T/10\ {\rm K})^{-1}}-1\right)\left(\frac{\lambda}{{\rm mm}}\right)^{3}\cdot\left(\frac{\kappa_{\nu}}{0.01\ {\rm cm}^{2}\ {\rm g}^{-1}}\right)^{-1}\left(\frac{F_{\nu}}{{\rm mJy\ beam}^{-1}}\right)\left(\frac{\theta_{\rm HPBW}}{10\ {\rm arcsec}}\right)^{-2}, \tag{1}\]
where \(F_{\nu}\) is the flux density, \(\theta_{\rm HPBW}\) is the beam FWHM, \(\kappa_{\nu}=0.0185\ {\rm cm}^{2}\ {\rm g}^{-1}\)(Csengeri et al. 2016). Assuming a single dust temperature is a crude simplification, therefore we calculate N\({}_{\rm H_{2}}\) pixel-by-pixel by combining the ATLASGAL+Planck 870 \(\mu\)m flux map with Herschel dust temperatures derived with the PPMAP procedure. We only use pixels that are above the \(\sim\)5\(\sigma\) noise level, \(\sim\) 0.3 Jy/beam (Urquhart et al. 2018). From Fig.7(b) and (c), we can see the column density derived from ATLASGAL+Planck 870 \(\mu\)m data and Herschel multi-wavelength data agree with each other, both in their spatial distribution and magnitude.
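Eq. (1) translates directly into a few lines of Python; the \(\sim\)21\({}^{\prime\prime}\) beam of the combined ATLASGAL+Planck map and the example inputs below are illustrative assumptions, not values taken from the actual maps.

```python
import numpy as np

def n_h2_870um(flux_mjy_beam, t_dust, lam_mm=0.87, kappa=0.0185, theta_hpbw=21.0):
    """H2 column density [cm^-2] from Eq. (1), following Kauffmann et al. (2008).

    flux_mjy_beam : 870 um flux density [mJy/beam]
    t_dust        : dust temperature [K] (e.g. the PPMAP map, pixel by pixel)
    """
    planck_term = np.exp(1.439 / (lam_mm * (t_dust / 10.0))) - 1.0
    return (2.02e20 * planck_term * lam_mm**3 * (kappa / 0.01) ** -1
            * flux_mjy_beam * (theta_hpbw / 10.0) ** -2)

# Example: a 1 Jy/beam (= 1000 mJy/beam) pixel at T_dust = 25 K; pixels below
# the ~5 sigma level of ~0.3 Jy/beam would be masked before this step.
print(f"N(H2) ~ {n_h2_870um(1000.0, 25.0):.1e} cm^-2")
```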
#### 3.3.2 Molecular line
In this work, we focus on the G333 complex, and limit the velocity range of the \({}^{13}\)CO emission to [-60,-35] km s\({}^{-1}\)(Zhou et al. 2023). To derive the column densities from the \({}^{13}\)CO emission, we assume local thermodynamic equilibrium (LTE) and a beam filling factor of unity. Following the procedures described in Garden et al. (1991) and Mangum & Shirley (2015), for a rotational transition from upper level \(J+1\) to lower level \(J\), we can derive
Figure 3: (a) Effective radius (scale) and (b) Column density distribution of Dendrogram structures. Only the type1 and type2 structures (see Fig.2) are included in the distributions. The probability density is estimated by the kernel density estimation (KDE) method.
the total column density by:
\[N_{tot}=\frac{3k}{8\pi^{3}\mu^{2}B(J+1)}\frac{(T_{\rm ex}+hB/3k)\exp[hBJ(J+1)/kT_{\rm ex}]}{1-\exp(-h\nu/kT_{\rm ex})}\int\tau{\rm dv}, \tag{2}\]
Figure 4: Masks of leaf structures identified by the Dendrogram algorithm toward the sub-regions marked in Fig.2(c). Only leaf structures are shown in here.
Figure 5: A piece of sub-region S3 in Fig.2(c) is used to illustrate the structures identified by the Dendrogram algorithm. (a) The black contours show the masks of Dendrogram leaves. The long and short axes of the smallest ellipse \(a\) and \(b\) are the rms sizes (second moments) of the intensity distribution. The ellipses in the second and third layers are enlarged by factors of 2 and 3 in size compared to the smallest one. The middle ellipse visibly corresponds best to the mask; (b) Typical line profile of a leaf structure. The velocity range of the structure is [v\({}_{c}\)-FWHM,v\({}_{c}\)+FWHM].
\[\tau=-\ln[1-\frac{T_{\rm mb}}{J(T_{\rm ex})-J(T_{\rm bg})}], \tag{3}\]
\[\int\tau{\rm dv}=\frac{1}{J(T_{ex})-J(T_{\rm bg})}\frac{\tau}{1-e^{-\tau}}\int T _{\rm mb}{\rm dv}, \tag{4}\]
\[J(T)=\frac{h\nu/k}{e^{h\nu/kT}-1}, \tag{5}\]
where \(B=\nu/[2(J+1)]\) is the rotational constant of the molecule, and \(\mu\) is the permanent dipole moment (\(\mu=0.112\) Debye for \({}^{13}\)CO). \(T_{\rm bg}=2.73\) K is the background temperature, and \(\int T_{\rm mb}{\rm dv}\) represents the integrated intensity. In the above formulas, the correction for high optical depth was applied (Frerking et al. 1982; Goldsmith et al. 2008; Areal et al. 2019). Assuming optically thick \({}^{12}\)CO emission, we can estimate the excitation temperature \(T_{\rm ex}\) following the formula (Garden et al. 1991; Pineda et al. 2008)
\[T_{\rm ex,3-2}=\frac{16.6{\rm K}}{\ln[1+16.6/(^{12}T_{\rm peak,3-2}+0.038)]}, \tag{6}\]
\[T_{\rm ex,1-0}=\frac{5.53{\rm K}}{\ln[1+5.53/(^{12}T_{\rm peak,1-0}+0.818)]}, \tag{7}\]
where \({}^{12}T_{\rm peak,3-2}\) and \({}^{12}T_{\rm peak,1-0}\) are the observed \({}^{12}\)CO (3-2) and \({}^{12}\)CO (1-0) peak brightness temperature. For the \({}^{13}\)CO (2-1) transition, we do not have \({}^{12}\)CO (2-1) data and we assume \(T_{\rm ex,2-1}=T_{\rm ex,3-2}\).
The distribution of the excitation temperature derived from \({}^{12}\)CO (3-2) in Fig.7(d) is somewhat similar to the distribution of the dust temperature derived from Herschel data shown in Fig.7(a), especially in high-column density regions. We transfer the column densities of \({}^{13}\)CO to H\({}_{2}\) column densities by taking the abundance ratio X\({}_{\rm i,CO}\) of H\({}_{2}\) compared with \({}^{13}\)CO as \(\sim 7.1\times 10^{5}\) (Frerking et al. 1982).
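For concreteness, Eqs. (2)-(6) for the 3-2 transition can be collected into a short Python function (cgs units, beam filling factor of unity, and the abundance ratio quoted above); the example numbers at the end are arbitrary and only illustrate the call.

```python
import numpy as np

# Physical constants (cgs) and 13CO (3-2) parameters
h = 6.626e-27          # erg s
k = 1.381e-16          # erg K^-1
t_bg = 2.73            # K
nu = 330.588e9         # 13CO (3-2) rest frequency [Hz]
j = 2                  # lower level of the 3-2 transition
b_rot = nu / (2.0 * (j + 1.0))    # rotational constant, B = nu/[2(J+1)]
mu = 0.112 * 1e-18                # permanent dipole moment [esu cm]

def j_t(t):
    """Radiation temperature J(T), Eq. (5)."""
    return (h * nu / k) / (np.exp(h * nu / (k * t)) - 1.0)

def n13co_32(t12_peak, t13_mb, w13_kkms):
    """13CO column density [cm^-2] from Eqs. (2)-(6).

    t12_peak : 12CO (3-2) peak brightness temperature [K]  -> T_ex, Eq. (6)
    t13_mb   : 13CO (3-2) peak brightness temperature [K]  -> tau, Eq. (3)
    w13_kkms : integrated 13CO (3-2) intensity [K km/s]
    """
    t_ex = 16.6 / np.log(1.0 + 16.6 / (t12_peak + 0.038))          # Eq. (6)
    tau = -np.log(1.0 - t13_mb / (j_t(t_ex) - j_t(t_bg)))          # Eq. (3)
    int_tau = (tau / (1.0 - np.exp(-tau))
               * w13_kkms * 1e5 / (j_t(t_ex) - j_t(t_bg)))         # Eq. (4), in cm/s
    return (3.0 * k / (8.0 * np.pi**3 * mu**2 * b_rot * (j + 1))
            * (t_ex + h * b_rot / (3.0 * k))
            * np.exp(h * b_rot * j * (j + 1) / (k * t_ex))
            / (1.0 - np.exp(-h * nu / (k * t_ex)))
            * int_tau)                                              # Eq. (2)

n13 = n13co_32(20.0, 8.0, 20.0)   # arbitrary example values
print(f"N(13CO) = {n13:.2e} cm^-2, N(H2) = {7.1e5 * n13:.2e} cm^-2")
```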
#### 3.3.3 Column density cube
A similar procedure as presented in Sec.3.3.2 can be performed for each velocity channel in the \({}^{13}\)CO (3-2) cube to obtain a column density cube, which allows to eliminate the effect of potential overlap of different velocity components on the mass estimation.
#### 3.3.4 Column densities from different \({}^{13}\)CO transitions
There are several factors that affect the mass estimate: 1.) the overlap of different velocity components; 2.) the observed molecular line transition; 3.) the choice between using molecular line or continuum emission. For the first factor, we have decomposed the velocity components in Zhou et al. (2023) and here we only focus on the peak3 component defined in Zhou et al. (2023) with the velocity range [-60,-35] km s\({}^{-1}\). For the second factor, Leroy et al. (2022) measured the low-\(J\)\({}^{12}\)CO line ratio R\({}_{21}\equiv\)\({}^{12}\)CO (2-1)/\({}^{12}\)CO (1-0), R\({}_{32}\equiv\)\({}^{12}\)CO (3-2)/\({}^{12}\)CO (2-1), R\({}_{31}\equiv\)\({}^{12}\)CO (3-2)/\({}^{12}\)CO (1-0), using whole-disk CO maps of nearby galaxies, and found galaxy-integrated mean values in 16%\(-\)84% of the emission of \(R_{21}=0.65\) (0.50\(-\)0.83), \(R_{32}=0.50\) (0.23\(-\)0.59), and \(R_{31}=0.31\) (0.20\(-\)0.42). Hence, the 3-2 transition of \({}^{12}\)CO resulted in significantly smaller column density estimates compared to the 1\(-\)0 transition. To check whether different transitions of \({}^{13}\)CO show a similar behavior in a Galactic giant molecular cloud, we collected \({}^{13}\)CO (2\(-\)1) and \({}^{13}\)CO (1\(-\)0) emission of the G333 complex, as described in Sec.2.2.
In Sec.3.3.2, we have derived the column density of different transitions by a LTE analysis. As shown in Fig. 8, the quality of the \({}^{13}\)CO \(J\)=1\(-\)0 data is not as good as for \({}^{13}\)CO \(J\)=2\(-\)1 and \(J\)=3\(-\)2, thus we set a column density threshold (\(>10^{21}\) cm\({}^{-2}\)) to exclude the unreliable low-column density emission from the \(J\)=1\(-\)0 transition before the comparison. Fig. 9 shows the distribution of pixel-by-pixel column density ratios between different \({}^{13}\)CO transitions. The peak values in the distributions are
\[\frac{N_{2-1}}{N_{1-0}}\approx 0.5 \tag{8}\]
Figure 6: Typical \({}^{13}\)CO (3–2) line profiles of Dendrogram structures.
\[\frac{N_{3-2}}{N_{1-0}}\approx 0.3 \tag{9}\]
\[\frac{N_{3-2}}{N_{2-1}}\approx 0.5. \tag{10}\]
Except for the slightly lower ratio between 2\(-\)1 and 1\(-\)0 transitions, the ratios calculated from different \({}^{13}\)CO transitions are comparable with the results derived from \({}^{12}\)CO emission in Leroy et al. (2022), although the ratios from \({}^{12}\)CO emission are derived from integrated intensity, rather than column density in the case of \({}^{13}\)CO.
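The peak values quoted above are read off from kernel density estimates of the pixel-by-pixel ratio distributions. A minimal sketch of this measurement (the map names and the grid used for the KDE evaluation are placeholders) is:

```python
import numpy as np
from scipy.stats import gaussian_kde

def ratio_peak(n_a, n_b, n_min=1e21):
    """Peak of the pixel-by-pixel column density ratio n_a / n_b.

    n_a, n_b : 2D column density maps on the same grid [cm^-2]
    n_min    : threshold excluding unreliable low-column density pixels
    """
    good = np.isfinite(n_a) & np.isfinite(n_b) & (n_a > n_min) & (n_b > n_min)
    ratio = n_a[good] / n_b[good]
    kde = gaussian_kde(ratio)
    grid = np.linspace(0.0, 2.0, 400)
    return grid[np.argmax(kde(grid))]

# e.g. ratio_peak(N_32, N_10) gives ~0.3 and ratio_peak(N_21, N_10) gives ~0.5 here
```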
Figure 7: Temperature and column density maps of the entire field. (a) and (b) Dust temperature and column density distributions in the G333 complex and the G331 GMC derived from Hi-GAL data processed by PPMAP; (c) Column density distribution in the G333 complex and the G331 GMC derived from ATLASGAL+Planck 870 \(\mu\)m data; (d) Excitation temperature distribution in the G333 complex derived from \({}^{12}\)CO (3-2) emission by a LTE analysis in the velocity range [-60,-35] km s\({}^{-1}\).
#### 3.3.5 Non-LTE estimates
The Non-LTE molecular radiative transfer algorithm RADEX was used to further test the above results: 1. the finding that the column density derived from the \({}^{13}\)CO \(J=3-2\) transition is significantly lower than that from the \(J=1-0\) transition; 2. the ratios of the column densities derived from different transitions of \({}^{13}\)CO emission. We use the following input parameters for RADEX: we take \(T_{\rm kin}\)=25 K as the kinetic temperature from Fig.7(a). It is a mean value of the temperature in Fig.7(a) for the relatively high-column density regions in Fig.7(b), which cover the main emission of \({}^{13}\)CO J=3-2. As background temperature we use again \(T_{\rm bg}\)=2.73 K. A line-width of \(\sim\)2.5 km s\({}^{-1}\) is taken from the peak value of all type1 c-leaves, as shown in Fig.10. For the H\({}_{2}\) volume density log\({}_{10}\) (n\({}_{\rm H_{2}}\)) and \({}^{13}\)CO column density log\({}_{10}\) (N\({}_{\rm CO}\)), we compute grids in the volume and column density ranges of [2,6] and [13,17], respectively. Then we obtain the intensity \(T_{\rm R}\) output from RADEX. Assuming \(T_{\rm ex}\)= 25 K and using the equations listed in Sec.3.3.2, for the \(J+1\) to \(J\) transition, the column density in the energy level \(J\) can be calculated as
\[N_{\rm J}=\frac{2J+1}{Q}N_{\rm tot,CO}\exp[-\frac{hBJ(J+1)}{kT_{\rm ex}}], \tag{11}\]
where the partition function \(Q\) is given by \(kT_{\rm ex}/hB+1/3\), \(B\) is the rotational constant of the molecule. The rotation temperature \(T_{\rm rot}\) can be estimated by the equation
\[\frac{N_{\rm j}}{N_{\rm i}}=\frac{g_{\rm j}}{g_{\rm i}}\exp[-\frac{E_{\rm j}-E_ {\rm i}}{kT_{\rm rot}}], \tag{12}\]
where N\({}_{\rm j}\) and N\({}_{\rm i}\) are the column densities of any two levels i and j with statistical weights g\({}_{\rm j}\) and g\({}_{\rm i}\) and energies E\({}_{\rm j}\) and E\({}_{\rm i}\). Using the equations listed in Sec.3.3.2 again, the column density N\({}_{\rm CO,rot}\) can now be derived from T\({}_{\rm rot}\) and T\({}_{\rm R}\). We also derived the column density N\({}_{\rm CO,ex}\) by assuming T\({}_{\rm ex}\)= T\({}_{\rm kin}\) = 25 K. Finally, N\({}_{\rm CO,rot}\) and N\({}_{\rm CO,ex}\) are compared with the \({}^{13}\)CO column density N\({}_{\rm CO,radex}\) input in RADEX. As shown in Fig.11, for N\({}_{\rm CO,ex}\), using the \({}^{13}\)CO (3\(-\)2) emission together with the LTE equations indeed gives lower column density estimates than using the 2\(-\)1 and 1\(-\)0 emission, consistent with the results in Sec.3.3.4. \({\rm N_{CO,J=1-0,ex}}\) provides upper limits on the column density derived from the different \({}^{13}\)CO transitions, and is thus used to calibrate the column density derived from \({}^{13}\)CO (3-2) emission in this work. Generally, for each transition, the column density N\({}_{\rm CO,rot}\) is higher than N\({}_{\rm CO,ex}\). The differences of N\({}_{\rm CO,rot}\) derived from different transitions are also smaller than those of N\({}_{\rm CO,ex}\). Moreover, N\({}_{\rm CO,rot}\) is closer to the fiducial N\({}_{\rm CO,J=1-0,ex}\) than N\({}_{\rm CO,ex}\). Therefore, using T\({}_{\rm rot}\) to derive the column density is better than using T\({}_{\rm ex}\). However, both the \(J=2-1\) and \(J=1-0\) data only cover part of the entire observed field, so that we cannot obtain the rotational temperature over the full region and do not use it here.
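For reference, the rotation temperature used in this comparison follows from inverting Eq. (12) for a pair of level column densities. A schematic implementation is given below; the example level energies and statistical weights for the \({}^{13}\)CO \(J=3\) and \(J=1\) levels are quoted for illustration only, and the input level populations would in practice be derived from the RADEX output intensities via the equations of Sec. 3.3.2 together with Eq. (11).

```python
import numpy as np

def t_rot(N_j, N_i, g_j, g_i, E_j_K, E_i_K):
    """Rotation temperature from two level column densities (inverse of Eq. 12).

    E_j_K, E_i_K : level energies expressed in Kelvin (E/k)
    g_j, g_i     : statistical weights, g = 2J + 1
    """
    return -(E_j_K - E_i_K) / np.log((N_j / N_i) * (g_i / g_j))

# Illustration with the 13CO J=3 (E/k ~ 31.7 K, g=7) and J=1 (E/k ~ 5.3 K, g=3) levels:
print(t_rot(N_j=1.0e15, N_i=2.0e15, g_j=7, g_i=3, E_j_K=31.7, E_i_K=5.3))
```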
In Fig.11, we also investigate changes of the column density ratios N\({}_{\rm CO,ex}\)\(J\)=3\(-\)2/\(J\)=1\(-\)0, \(J\)=2\(-\)1/\(J\)=1\(-\)0 and \(J\)=3\(-\)2/\(J\)=2\(-\)1 with the RADEX input \({}^{13}\)CO column density N\({}_{\rm CO,radex}\) for different volume densities n\({}_{\rm H_{2}}\). Interestingly, when n\({}_{\rm H_{2}}\) is around \(4.2\times 10^{3}\) cm\({}^{-3}\) (the third column of Fig.11), the predicted ratios are close to the values in Sec.3.3.4 for all three ratios. With 4.2 \(\times\) 10\({}^{3}\) cm\({}^{-3}\) lying in the range of the typical volume densities of H\({}_{2}\) traced by \({}^{13}\)CO (Shirley, 2015; Liu et al., 2016; Schuller et al., 2017; Finn et al., 2021), the column density ratios predicted by RADEX are consistent with the ratios derived using LTE from observations of different \({}^{13}\)CO transitions.
Figure 8: H\({}_{2}\) column density distribution in the G333 complex derived from \({}^{13}\)CO emission. (a) 1-0 transition; (b) 2-1 transition; (c) 3-2 transition.
In summary, we divide the column density derived from \({}^{13}\)CO J=3-2 emission by a correction factor 0.3 before converting to H\({}_{2}\) column density.
### Mass estimation
#### 3.4.1 Mass
The mass of each identified structure is calculated by
\[M=\mu_{\rm H_{2}}\rm m_{H}\sum{}N(\rm H_{2})(R_{\rm pixel})^{2}, \tag{13}\]
where \(\mu_{\rm H_{2}}=2.8\) is the molecular weight per hydrogen molecule, \(m_{\rm H}\) is the hydrogen atom mass, and \(R_{\rm pixel}\) is the physical size of a pixel. The sum is performed within the elliptical cylinder in the column density cube. As described in Sec. 3.1, the elliptical cylinder has a bottom area \(A\)=\(\pi\times 2a\times 2b\times d^{2}\) and a height (velocity) range [v\({}_{c}\)\(-\)FWHM, v\({}_{c}\)+FWHM], where \(a\) and \(b\) are the long and short axes of the ellipse, and \(d=3.6\) kpc is the distance to the G333 complex (Lockman, 1979; Bains et al., 2006). Then the average surface density of each structure is calculated by \(\Sigma=M/A\), and the average column density as \(N=\Sigma/(\mu_{\rm H_{2}}\,m_{\rm H})\).
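Eq. (13) and the derived average surface and column densities can be evaluated per structure as sketched below (cgs constants; the mask, pixel size and function names are illustrative, and for brevity the projected footprint is used for the area, whereas the text adopts the ellipse area \(A=\pi\times 2a\times 2b\times d^{2}\)):

```python
import numpy as np

M_H = 1.6735575e-24      # hydrogen atom mass [g]
MU_H2 = 2.8              # molecular weight per hydrogen molecule
PC = 3.0857e18           # parsec [cm]
MSUN = 1.989e33          # solar mass [g]

def structure_mass(ncube_h2, mask, pix_arcsec, dist_pc=3600.0):
    """Mass (Msun), surface density (g cm^-2) and mean column density (cm^-2)
    of one structure, following Eq. (13).

    ncube_h2   : H2 column density cube [cm^-2], axes (velocity, y, x)
    mask       : boolean cube selecting the elliptical cylinder of the structure
    pix_arcsec : pixel size [arcsec]
    dist_pc    : distance, 3.6 kpc for the G333 complex
    """
    pix_cm = pix_arcsec / 206265.0 * dist_pc * PC           # physical pixel size
    n_sum = np.nansum(ncube_h2[mask])                       # sum of N(H2) in the cylinder
    mass = MU_H2 * M_H * n_sum * pix_cm**2                  # Eq. (13)
    area = np.count_nonzero(mask.any(axis=0)) * pix_cm**2   # projected footprint
    sigma = mass / area                                     # mean surface density
    n_mean = sigma / (MU_H2 * M_H)                          # mean column density
    return mass / MSUN, sigma, n_mean
```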
#### 3.4.2 Molecular line vs. continuum emission mass estimates
As described in Zhou et al. (2023), sub-regions 5 and 7 have significant overlap of different velocity components, thus they should be excluded for column density estimates based on continuum emission. In Fig.9(d), the column density derived from ATLASGAL+Planck 870 \(\mu\)m data is comparable with that estimated from Hi-GAL data processed by the PPMAP procedure. As shown in Fig. 2 and Fig. 12(a), i-leaves are relatively low-column density structures distributed at the periphery, thus we expect that they will be less massive on average than c-leaves, considering that i-leaves and c-leaves structures have similar scales in Fig.3(a). However, in Fig.12(a), i-leaves and c-leaves show similar masses based on the continuum emission. In addition, the continuum mass distribution is relatively narrow, indicating that the contrast between high-column density and low-column density structures is not as clear as that derived from molecular line emission, due to line-of-sight contamination. In Fig.12(b), where we only consider the structures with mean column density greater than 10\({}^{22}\) cm\({}^{-2}\), the distribution of the masses derived from molecular line emission is similar to that derived from continuum emission, after applying the mass correction factor of 0.3. Therefore, in the subsequent analysis, we only adopt the masses estimated from molecular line emission.
### Virial analysis
Having measured the basic physical quantities of the identified structures, we can now investigate their physical properties.
#### 3.5.1 Virial parameter
Figure 10: Line-width distribution of all typical c-leaves. The probability density is estimated by the kernel density estimation (KDE) method.
Figure 9: Distribution of column density ratios: (a) derived from the \({}^{13}\)CO (3-2) and (2-1) emission; (b) derived from \({}^{13}\)CO (3-2) and (1-0); (c) derived from \({}^{13}\)CO (2-1) and (1-0); (d) derived from ATLASGAL+Planck 870 \(\mu\)m and Hi-GAL data. The probability density is estimated by the kernel density estimation (KDE) method.
To investigate the energy balance within the extracted structures, we determine the gravitational potential energy and internal kinetic energy to compute the virial parameter (McKee, 1989; Bertoldi & McKee 1992):
\[E_{g}=-\frac{3}{5}a_{1}a_{2}\frac{GM^{2}}{R} \tag{14}\]
\[E_{k}=\frac{3}{2}M\sigma_{\rm tot}^{2}. \tag{15}\]
The factor \(a_{1}\) measures the effects of a nonuniform density distribution and the factor \(a_{2}\) the effect of the clump's ellipticity. The virial parameter of each decomposed structure is calculated by:
\[\alpha_{\rm vir}=2E_{k}/\left|E_{g}\right|=\frac{5}{a_{1}a_{2}}\frac{\sigma_{ \rm tot}^{2}R}{GM}, \tag{16}\]
with \(\sigma_{\rm tot}=\sqrt{\sigma_{\rm nt}^{2}+c_{s}^{2}}\) as the total velocity dispersion, where \(\sigma_{\rm nt}\) is the non-thermal velocity dispersion and \(c_{s}\) the sound speed, \(R\) the effective radius, \(G\) the gravitational constant, the parameter \(a_{1}\) equals \((1-k/3)/(1-2k/5)\) for a power-law density profile \(\rho\propto r^{-k}\), and \(a_{2}=(\arcsin e)/e\) is the geometry factor. Here, we assume a typical density profile of \(k=1.6\) for all decomposed structures (Butler & Tan 2012; Palau et al. 2014; Li et al. 2019). The eccentricity \(e\) is determined by the axis ratio of the dense structure, \(e=\sqrt{1-(b/a)^{2}}\), where \(a\) and \(b\) are the long and short axes of the ellipse. Non-magnetized cores with \(\alpha_{\rm vir}<2\), \(\alpha_{\rm vir}\sim 1\) and \(\alpha_{\rm vir}<1\) are considered to be gravitationally bound, in hydrostatic equilibrium and gravitationally unstable, respectively (Bertoldi & McKee 1992; Kauffmann et al. 2013). Those with \(\alpha_{\rm vir}>2\) could be gravitationally unbound, and are therefore either pressure-confined, or in the process of dispersal.
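Eqs. (14)\(-\)(16) translate directly into code. The sketch below evaluates the virial parameter with the density-profile and geometry corrections used here (\(k=1.6\)), taking masses in \(M_{\odot}\), radii in pc and velocity dispersions in km s\({}^{-1}\); the numbers in the usage line are purely illustrative.

```python
import numpy as np

G_ASTRO = 1.0 / 232.0   # G in (km/s)^2 pc / Msun, approximately

def virial_parameter(mass_msun, radius_pc, sigma_tot_kms, axis_ratio=1.0, k=1.6):
    """Virial parameter alpha_vir = 2 E_k / |E_g| (Eq. 16)."""
    a1 = (1.0 - k / 3.0) / (1.0 - 2.0 * k / 5.0)        # non-uniform density correction
    e = np.sqrt(max(1.0 - axis_ratio**2, 0.0))          # eccentricity from b/a
    a2 = np.arcsin(e) / e if e > 0 else 1.0             # ellipticity correction
    return 5.0 * sigma_tot_kms**2 * radius_pc / (a1 * a2 * G_ASTRO * mass_msun)

# e.g. a 100 Msun structure of 0.5 pc with sigma_tot = 0.8 km/s and b/a = 0.7:
print(virial_parameter(100.0, 0.5, 0.8, axis_ratio=0.7))
```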
Fig.13(a) shows the distribution of virial parameters for all identified structures. We can see that more than half of the leaf structures are gravitationally unbound and only a small fraction are in gravitational collapse. However, in Zhou et al. (2023), we argue that molecular clouds in the G333 complex are in a state of global gravitational collapse, since the ubiquitous density and velocity fluctuations towards hubs imply the widespread presence of local gravitational collapse. Our previous work provides a more comprehensive approach to study the gas kinematics in the clouds. The dense structures in the clouds are connected to the surrounding environment through filaments; the gravitational state of the structures can be reflected by the velocity gradients along the filaments, which indicate the converging motions toward gravitational centers (hubs). In Fig. 14, except for the low-column density structures, most structures show a clear correlation between velocity dispersion and column density, which indicates a gravitational origin of the velocity dispersion, as discussed later in Sec.4.1. Thus most of the structures must be gravitationally bound, or even in a state of gravitational collapse.
Figure 11: The calculation results of RADEX. First row: correlation between RADEX input column densities and the column densities of different \({}^{13}\)CO transitions derived by LTE equations with T\({}_{\rm rot}\) and T\({}_{\rm ex}\), using T\({}_{\rm R}\) computed by RADEX for different volume densities; Second, third and fourth rows: Column density ratios of different \({}^{13}\)CO transitions derived by LTE equations with T\({}_{\rm rot}\) and T\({}_{\rm ex}\) as a function of the RADEX column density input for different volume densities. Vertical lines mark the peak values of the \({}^{13}\)CO column density derived from \({}^{13}\)CO J=3-2, J=2-1, J=1-0 and ATLASGAL+Planck 870 \(\mu\)m emission (from left to right) shown in Fig.7, Fig.8 and Fig.9; here the abundance ratio X\({}_{{}^{13}CO}\) of H\({}_{2}\) compared with \({}^{13}\)CO \(\sim 7.1\times 10^{5}\) is used. Cyan circles mark the ratios predicted by RADEX when the input \({}^{13}\)CO column density takes the peak values of the column density derived by different methods in Sec.3.3.
The fact that more than half of the structures have virial parameters larger than 2 in Fig.13 likely results from not considering other forces that can help to bind the structures. In the interstellar medium, each of the structures is embedded in larger scale structures and one can therefore assume that they are confined by various external pressures (Keto and Myers, 1986; Lada et al., 2008; Field et al., 2011; Leroy et al., 2015; Kirk et al., 2017; Chen et al., 2019; Li et al., 2020). To reconcile the gravitational collapse evidence from Zhou et al. (2023) with the apparently small fraction of gravitationally unstable gas structures based on the classical virial analysis, we discuss in the following the additional effect of external pressure from ambient cloud structures.
#### 3.5.2 Pressure-confined hydrostatic equilibrium
Previous studies have suggested that the external pressure provided by the larger scale molecular cloud gas might help to confine dense structures in molecular clouds (Spitzer, 1978; McKee, 1989; Elmegreen, 1989; Ballesteros-Paredes, 2006; Kirk et al., 2006; Lada et al., 2008; Pattle et al., 2015; Kirk et al., 2017; Li et al., 2020). The external pressure energy can be calculated by
\[E_{p}=-4\pi P_{\rm cl}R^{3}, \tag{17}\]
and then the new estimation of the virial parameter is
\[\alpha_{\rm vir}=2E_{k}/(\left|E_{g}\right|+\left|E_{p}\right|). \tag{18}\]
External pressure can have various origins, such as the turbulent pressure from the HI halo of molecular clouds (Elmegreen, 1989), the recoil pressure from photodissociation regions (PDRs) (Field et al., 2011), the infall ram pressure, or other intercloud pressures (Bertoldi and McKee, 1992; Lada et al., 2008; Belloche et al., 2011; Camacho et al., 2016). Here we mainly consider the external pressure from the ambient cloud for each decomposed structure using
\[P_{\rm cl}=\pi G\bar{\Sigma}\Sigma_{r}, \tag{19}\]
where \(P_{\rm cl}\) is the gas pressure, \(\bar{\Sigma}\) is the mean surface density of the cloud, \(\Sigma_{r}\) is the surface density at the location of each structure (McKee, 1989; Kirk et al., 2017). We assumed that \(\Sigma_{r}\) is equal to half of the observed column density at the footprint of each decomposed structure (Kirk et al., 2017; Li et al., 2020).
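The pressure-modified virial parameter of Eqs. (17)\(-\)(19) can then be evaluated as in the following sketch, where the mean cloud surface density \(\bar{\Sigma}\) and the local surface density \(\Sigma_{r}\) (half the observed column at the structure footprint) enter in g cm\({}^{-2}\); the default \(a_{1}\) corresponds to \(k=1.6\), and the numbers in the usage line are illustrative only.

```python
import numpy as np

G_CGS = 6.674e-8        # cm^3 g^-1 s^-2
PC = 3.0857e18          # cm
MSUN = 1.989e33         # g
KMS = 1.0e5             # cm/s

def alpha_vir_pressure(mass_msun, radius_pc, sigma_tot_kms,
                       sigma_bar_cgs, sigma_r_cgs, a1=1.296, a2=1.0):
    """alpha_vir = 2 E_k / (|E_g| + |E_p|), combining Eqs. (14), (15), (17)-(19)."""
    m = mass_msun * MSUN
    r = radius_pc * PC
    e_k = 1.5 * m * (sigma_tot_kms * KMS) ** 2                  # Eq. (15)
    e_g = 0.6 * a1 * a2 * G_CGS * m**2 / r                      # |E_g|, Eq. (14)
    p_cl = np.pi * G_CGS * sigma_bar_cgs * sigma_r_cgs          # Eq. (19)
    e_p = 4.0 * np.pi * p_cl * r**3                             # |E_p|, Eq. (17)
    return 2.0 * e_k / (e_g + e_p)

# e.g. with a mean surface density of ~0.042 g cm^-2 (~200 Msun/pc^2):
print(alpha_vir_pressure(100.0, 0.5, 0.8, 0.042, 0.05))
```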
Figure 12: \(\rm M_{CO}\) and \(\rm M_{r}\) represent the mass derived from \({}^{13}\)CO (3–2) emission and ATLASGAL+Planck 870 \(\mu\)m data, respectively. (a) Mass distribution of all type1 leaf structures; (b) Mass distribution of type1 c-leaves satisfying the density condition \(>\)10\({}^{22}\) cm\({}^{-2}\). The probability density is estimated by the kernel density estimation (KDE) method.
Figure 13: Virial parameters of dense gas structures. (a) Virial parameters of all type1 and type2 structures. Dashed lines mark the positions \(\alpha_{\rm vir}=1\) and \(\alpha_{\rm vir}=2\). The probability density is estimated by the kernel density estimation (KDE) method; (b) Correlation between virial parameter \(\alpha_{\rm vir}=2E_{k}/\left|E_{g}\right|\) and average column density of structures; (c) Correlation between virial parameter \(\alpha_{\rm vir}=2E_{k}/(\left|E_{g}\right|+\left|E_{p}\right|)\) and average column density above the threshold \(\sim\)3.2 \(\times\) 10\({}^{21}\) cm\({}^{-2}\).
The total mass of the G333 complex is calculated by Eq. 13 as \(\sim\)1.03 \(\times\) 10\({}^{6}\) M\({}_{\odot}\), comparable with the mass of \(\sim\) 1.7 \(\times\) 10\({}^{6}\) M\({}_{\odot}\) calculated in Miville-Deschenes et al. (2017) using CO (1\(-\)0) emission. Using the sum of all non-empty pixels as the total area, the mean surface density of the G333 complex is \(\sim\) 0.071 g cm\({}^{-2}\) (or \(\sim\)340 M\({}_{\odot}\) pc\({}^{-2}\)), corresponding to a column density of \(\sim\)1.5 \(\times\) 10\({}^{22}\) cm\({}^{-2}\). The G333 complex is located in the molecular ring of the Milky Way, where the mean surface density is \(\sim\)200 M\({}_{\odot}\) pc\({}^{-2}\) (Heyer & Dame, 2015). Given that the G333 complex is the giant molecular cloud complex richest in ATLASGAL clumps in the southern Milky Way, it should have a surface density higher than this mean value. The mean surface densities of the G333 GMC in Miville-Deschenes et al. (2017) and Nguyen et al. (2015) are \(\sim\) 120 M\({}_{\odot}\) pc\({}^{-2}\) and \(\sim\) 220 M\({}_{\odot}\) pc\({}^{-2}\), respectively (see Sec.4.5 of Zhou et al. (2023) for more details). Adopting a conservative estimate, we take the value of \(\bar{\Sigma}\)\(\sim\) 200 M\({}_{\odot}\) pc\({}^{-2}\) as a lower limit, corresponding to a column density of \(\sim\)9 \(\times\)10\({}^{21}\) cm\({}^{-2}\).
There are also many leaf structures with lower column densities, usually distributed in the periphery of the clouds. Eq. 19 may not be valid for them, because \(\Sigma_{r}\) in the equation is integrated from the cloud surface to the depth \(r\) of each structure in the cloud (Kirk et al., 2017). We therefore need to set a density threshold for the structures to determine whether they are eligible to be bound by the external pressure from the ambient cloud. In Fig. 14, high-column density and low-column density structures show different behaviors; the turning point corresponds to a column density value of \(\sim\)3.2 \(\times\) 10\({}^{21}\) cm\({}^{-2}\), which will be used as the threshold.
We treat gravitationally unbound branch structures in the same way as c-leaves. Here we ignore i-leaves structures; they are isolated structures at the cloud periphery and are unlikely to be confined by external pressure from the ambient cloud. For c-leaves and branches, the proportion of the structures above the density threshold (\(\sim\)3.2 \(\times\) 10\({}^{21}\) cm\({}^{-2}\)) is 82.5%. The proportions of structures with \(\alpha_{vir}=2E_{k}/|E_{g}|<2\) and \(\alpha_{vir}=2E_{k}/|E_{g}|\geq 2\) are 45.3% and 54.7%, respectively. After considering the external pressure (\(\alpha_{vir}=2E_{k}/(|E_{g}|+|E_{p}|)\)), their proportions become 93% and 7%. When accounting for external pressure, the majority of the structures are thus gravitationally bound, and susceptible to gravitational collapse, as shown in Fig. 13(c). The peripheral structures, however, are at low column densities and less bound by external pressure, and are therefore likely to be dispersed by feedback.
Here we do not consider the HI halo of molecular clouds and also ignore the external pressure exerted by HII regions. The latter should also be important due to the strong feedback in the G333 complex. But the energy injected into the clouds from HII regions might also destroy the clouds, therefore we only consider the external pressure from the ambient cloud in binding the structures. The rough estimate in this section shows the important role of the external pressure in confining the observed gas structures.
### Scaling relation
The physical states of the structures can also be reflected by the scaling relations. Fig.14(a) shows the velocity dispersion-scale relations of i-leaves, c-leaves and branch structures. It appears that only branch structures show a clear correlation between velocity dispersion and scale. i-leaves roughly inherit the trend of the branch structures' \(\sigma-R\) relation extending from large to small scales, and can barely be linked behind the branch structures in Fig.14(a), although their velocity dispersion shows no significant correlation with scale, similar to c-leaves, which deviate more strongly from the Larson-relation. The red dashed line, which is fitted to branch and i-leaves structures, has a gradient of 0.33\(\pm\)0.01.
Fig.14(b) shows the velocity dispersion-column density relation. For c-leaves, there is a moderate correlation between velocity dispersion and column density; the Pearson coefficient is \(\sim\)0.45. Interestingly, we can see different behaviors of high-column density and low-column density structures. For high-column density structures, the velocity dispersion and column density show a clear correlation, while low-column density structures do not. In recent simulations (Ganguly et al., 2022; Weis et al., 2022), dense structures roughly follow the Heyer-relation, while less dense structures show no trend with the column density and thus populate a low-density tail in the Heyer-relation, as shown in Fig.14(c). For a more convenient comparison with the \(\sigma-R\) and \(\sigma-N\) relations, we convert the Heyer-relation \(\sigma/R^{0.5}\propto N^{0.5}\) to the form \(\sigma\propto(R*N)^{0.5}\) (Eq. 3 in Ballesteros-Paredes et al., 2011). Both of them should have a slope of 0.5.
From Fig.14(a) and (b), we conclude that the velocity dispersion of branch structures correlates with both scale and column density, that the velocity dispersion of c-leaves is only sensitive to column density, while the velocity dispersion of i-leaves has no significant dependence on either scale or column density.
The structures with column density \(>\)3.2 \(\times\) 10\({}^{21}\) cm\({}^{-2}\) can be divided into two types: those that can collapse after adding external pressure (\(\alpha_{vir}=2E_{k}/(|E_{g}|+|E_{p}|)<1\), pressure-assisted collapse), and those that can collapse by self-gravity alone (\(\alpha_{vir}=2E_{k}/|E_{g}|<1\), self-gravitating collapse). Now we have three structure sets: all identified structures, the structures in pressure-assisted collapse and the structures in self-gravitating collapse, where the latter is a subset of the former. The scaling relations of the three structure sets are shown in Fig. 14, Fig. 15 and Fig. 16, respectively. For both leaf and branch structures, \(\sigma-N*R\) always has a stronger correlation compared to \(\sigma-N\) and \(\sigma-R\). Moreover, the scaling relations show a stronger correlation and steeper slope when applied to self-gravitating structures, hence these best follow the Heyer-relation.
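The slopes and Pearson coefficients quoted for the \(\sigma-R\), \(\sigma-N\) and \(\sigma-N*R\) relations are obtained from simple least-squares fits in log\(-\)log space; a sketch of this measurement (the structure-property arrays are placeholders) is:

```python
import numpy as np
from scipy.stats import pearsonr

def scaling_fit(x, y):
    """Power-law slope and Pearson coefficient of y vs x in log-log space."""
    lx, ly = np.log10(x), np.log10(y)
    good = np.isfinite(lx) & np.isfinite(ly)
    slope, intercept = np.polyfit(lx[good], ly[good], 1)
    p, _ = pearsonr(lx[good], ly[good])
    return slope, p

# sigma, radius, ncol are arrays of structure properties; the Heyer-type relation
# sigma ~ (N * R)^0.5 is probed by:
# slope, p = scaling_fit(ncol * radius, sigma)
```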
### Feedback
In the study of the G305 molecular cloud complex, Mazumdar et al. (2021) argued that the 8 \(\upmu\)m emission can be a good indicator of feedback strength. We calculated the average 8 \(\upmu\)m surface brightness over each structure to measure the strength of feedback on each structure. In Fig. 17, the 8 \(\upmu\)m surface brightness shows a strong positive correlation with column density for both c-leaves and branch structures, which might be an indication for triggering in the G333 complex. However, branch structures show a more obvious correlation between 8 \(\upmu\)m surface brightness and velocity dispersion than c-leaves, consistent with the results in Mazumdar et al. (2021), implying that feedback has a greater impact on large-scale structures. The small-scale structures are embedded in large-scale structures, and are thus less affected by feedback. For large-scale structures, as shown in Fig.2(c), the more evolved sub-region 1 and sub-region s2b are fragmenting into several pieces, potentially torn apart by the expanding HII regions. These results may explain why the velocity dispersion of branch structures shows a clear correlation with scale while that of leaf structures does not, as shown in Fig.14. However, in Fig.15, after filtering out low-column density and gravitationally unbound structures, the velocity dispersion and scale of c-leaves appear to show a better correlation than in Fig.14, and the correlation between the 8 \(\upmu\)m surface brightness and velocity dispersion of c-leaves is also improved in Fig.17, which means that leaf structures are also affected by feedback. Here we should remember that
there is a considerable overlap of the scales between leaf and branch structures, as described in Sec.3.1.
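The feedback proxy used here is simply the mean 8 \(\upmu\)m surface brightness within each structure footprint; a minimal sketch (assuming an 8 \(\upmu\)m map regridded onto the \({}^{13}\)CO grid and a list of boolean footprint masks) is:

```python
import numpy as np

def mean_8um(map_8um, masks):
    """Mean 8 micron surface brightness over each structure footprint.

    map_8um : 2D 8 micron map regridded onto the 13CO grid
    masks   : list of boolean 2D footprints of the Dendrogram structures
    """
    return np.array([np.nanmean(map_8um[m]) for m in masks])

# the correlations in Fig. 17 then follow from, e.g.,
# pearsonr(np.log10(mean_8um(map_8um, masks)), np.log10(ncol))
```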
Our analysis is based on structural identification; when we say that feedback increases the density and velocity dispersion of the structures, we implicitly assume that these structures can exist stably. Structures with \(\alpha_{\rm vir}=2E_{k}/(|E_{g}|+|E_{p}|)<1\) and \(\alpha_{\rm vir}=2E_{k}/|E_{g}|<1\) can be more resilient to feedback than other structures, and thus exhibit better scaling relations in Fig. 15, Fig. 16 and Fig.17. Dale et al. (2014) examined the effects, in simulations, of photoionization and momentum-driven winds from O-stars on molecular clouds, and found that feedback is highly destructive to clouds with lower mass and density, but has little effect on more massive and denser clouds.
## 4 Discussion
### The origin of velocity dispersion
As discussed in Ballesteros-Paredes et al. (2011); Traficante et al. (2018); Ballesteros-Paredes et al. (2018); Li et al. (2023), high-column density clumps or cores exhibit larger velocity dispersion than low-column density ones due to gas motions in gravitational collapse, as shown in Fig. 14(b) and Fig. 15(b), where the positive correlation between velocity dispersion and column density of c-leaves and branch structures indicates the gravitational origin of velocity dispersion. Combined with the discussions in Sec.3.6 and Sec.3.7, we conclude that both gravitational collapse and feedback contribute significantly to the velocity dispersion of large-scale structures. For small-scale structures, gravitational collapse is an important source of velocity dispersion, while the contribution of feedback needs more discussion in future work.
### The Heyer-relation in feedback
As shown in Sec.3.6, self-gravitating structures better fit the Heyer-relation. Considering that global collapse may lag behind the local collapse in the cloud (Heitsch et al., 2008), structures collapsing under self-gravity can be relatively independent of (or "decoupled from") the surrounding environment. Then the explanations of the Heyer-relation in Heyer et al. (2009) and Ballesteros-Paredes et al. (2011) can hold. Conversely, for non-self-gravitating structures, the exchange of energy with the surrounding environment will break the conversion between E\({}_{g}\) and E\({}_{k}\), thus breaking the Heyer-relation.
Sun et al. (2018) measured cloud-scale molecular gas properties in 15 nearby galaxies, and observed an excess in the velocity dispersion \(\sigma\) at low surface density \(\Sigma\) relative to the expected relation for self-gravity-dominated gas. This behavior leads to a shallower \(\sigma-\Sigma\) relation in several galaxies, clearly deviating from a \(\sigma-\Sigma^{0.5}\) relation extrapolated from the high surface density regime. One of their explanations for the deviation is that gas structures in the low surface density regime may be more susceptible to external pressure originating from the ambient medium or to motions due to the galaxy potential, similar to our explanation above.
Figure 14: Scaling relations of leaf and branch structures. (a) \(\sigma-R\); (b) \(\sigma-N\); (c) \(\sigma-N*R\). \(\sigma\), \(R\) and \(N\) are velocity dispersion, effective radius and column density of each structure, respectively. ’P’ represents the Pearson coefficient.
### Cloud disruption and collapse under feedback
Peretto et al. (2023) performed an analysis of 27 infrared dark clouds (IRDCs) embedded within 24 molecular clouds. They found that the clumps are decoupling from their surrounding cloud and concluded that the observations are best explained by a universal global collapse of dense clumps embedded within stable molecular clouds, thus discovering direct evidence of a transition regime in the dynamical properties of the gas within individual molecular clouds. As discussed in Heitsch et al. (2008); Ballesteros-Paredes et al. (2011), the decoupling may be due to the global collapse of the molecular cloud lagging behind the local collapse of dense clumps in the cloud. Invoking the notion of the "funnel" structure in PPV space (Zhou et al., 2023), a similar statement is that relatively small-scale hub-filament structures will have a more recognizable "funnel" morphology than large-scale ones due to their strong local gravitational field. Substructure s3a in the G333 complex is a vivid example of the decoupling highlighted in Fig.1 of Zhou et al. (2023), which is collapsing into a hub-filament structure and separating from its surrounding environment.
One implication of the work of Peretto et al. (2023) is that star formation is likely to be mostly confined to parsec-scale collapsing clumps, also consistent with the results in Zhou et al. (2022, 2023). In our previous works, for both ALMA and LAsMA data, most of the fitted velocity gradients concentrate on scales \(\sim 1\,\)pc, a scale that is considered to be the characteristic scale of massive clumps (Urquhart et al., 2018). Velocity gradients measured around 1 pc show that the most frequent velocity gradient is \(\sim 1.6\,\)km\(\,\)s\({}^{-1}\,\)pc\({}^{-1}\). Assuming free-fall, we estimate the kinematic mass corresponding to 1 pc to be \(\sim 1190\,\)M\({}_{\odot}\), which is also comparable with the typical mass of clumps in the ATLASGAL survey (Urquhart et al., 2018). Thus parsec-scale clumps are probably gravity-dominated collapsing objects.
The results in Zhou et al. (2022, 2023); Peretto et al. (2023) show that the physical properties of parsec-scale clumps in two very different physical environments (infrared dark and infrared bright) are comparable. Thus feedback in infrared bright star-forming regions, such as the G333 complex, will not significantly change the physical properties of parsec-scale clumps, also consistent with the survey results that most Galactic parsec-scale massive clumps seem to be gravitationally bound no matter how evolved they are (Liu et al., 2016; Urquhart et al., 2018; Evans et al., 2021). Although the clumps are exposed to feedback and part of their velocity dispersion is due to feedback, as shown in Sec.3.7, the clumps are still self-gravitating sufficiently to continue their collapse, even after the lower density material has been disrupted and is being dispersed. Watkins et al. (2019) found that stellar feedback from O stars does not have much of an impact on the dynamical properties of the dense gas that has already been assembled, but does clearly modify the structure of the larger scale clouds. The broken morphology of some very infrared bright structures in the G333 complex also indicates that the feedback is disrupting the molecular clouds.
The effects of feedback in star-forming regions can redistribute, disperse and enhance preexisting gas structures, and create new structures (Elmegreen and Lada, 1977; Dale et al., 2007; Lee and Chen, 2007; Nagakura et al., 2009; Krumholz et al., 2014; Fukui et al., 2021). According to the physical picture described in Zhou et al. (2023), the hub-filament structures at different scales may be organized into a hierarchical system, extending up to the largest scales probed, through the coupling of gravitational centers at different scales. Large-scale velocity gradients always involve many intensity peaks, and the larger scale inflow is driven by the larger scale structure, implying that the clustering of local small-scale gravitational structures can act as the gravitational center on larger scales. Given the hierarchical hub-filament structures (the coupling of local gravitational centers) in molecular clouds, and that feedback does not strongly affect the dynamical properties of the dense gas, although the feedback that disrupts the molecular clouds will break up the original cloud complex, the substructures of the original complex can be reorganized into new gravitationally governed configurations around new gravitational centers. This process is accompanied
by structural destruction and generation, and changes in gravitational centers, but gravitational collapse is always ongoing.
## 5 Summary
We investigated the kinematics and dynamics of gas structures under feedback in the G333 complex. The main conclusions are as follows:
1. The dense gas structures were identified by the Dendrogram algorithm based on the integrated intensity map of \({}^{13}\)CO (3\(-\)2). We obtained 3608 structures, their averaged spectra were extracted and fitted one by one. According to the line profiles, all averaged spectra were divided into three categories. Physical quantities of each structure were calculated based on their line profiles.
2. The column density of the entire observed field was derived from ATLASGAL+Planck 870 \(\mu\)m data, Hi-GAL data, and different transitions of \({}^{13}\)CO (\(J\)=1-0, 2-1 and 3-2). We investigated the column density ratios between them pixel-by-pixel, and found that the column density derived from ATLASGAL+Planck 870 \(\mu\)m data is comparable with that estimated from Hi-GAL data. Molecular line emission gives significantly lower column density estimates than those derived from the continuum emission. The peak values of the column density ratios between different transitions of \({}^{13}\)CO emission are \(N_{2-1}/N_{1-0}\approx 0.5\), \(N_{3-2}/N_{1-0}\approx 0.3\), \(N_{3-2}/N_{2-1}\approx 0.5\). These ratios can be roughly reproduced by the Non-LTE molecular radiative transfer algorithm RADEX for typical volume densities of \(\sim 4.2\times 10^{3}\) cm\({}^{-3}\). Thus we adopted a correction factor of 0.3 to calibrate the column density derived from \({}^{13}\)CO \(J=3-2\) to make it more representative of the total column density.
3. Classical virial analysis, suggesting many structures to be unbound, does not reflect the true physical state of the identified structures. After considering external pressure from the ambient cloud, almost all the structures with column density more than the threshold \(\sim\)3.2 \(\times\) 10\({}^{21}\) cm\({}^{-2}\) are gravitationally bound, even undergoing gravitational collapse.
4. The positive correlation between velocity dispersion and column density of c-leaves and branch structures reveals the gravitational origin of velocity dispersion.
5. We use the average 8 \(\mu\)m surface brightness as an indicator of feedback strength, which shows a strong positive correlation with the column density of both c-leaves and branch structures. However, branch structures show a more significant correlation between 8 \(\mu\)m surface brightness and velocity dispersion than c-leaves, implying that feedback has a greater impact on large-scale structures. We concluded that both gravitational collapse and feedback contribute significantly to the velocity dispersion of large-scale structures. For small-scale structures, gravitational collapse is an important source of velocity dispersion, while the contribution of feedback needs more discussion in future work.
6. For both leaf and branch structures, \(\sigma-N*R\) always has a stronger correlation compared to \(\sigma-N\) and \(\sigma-R\). The scaling relations are stronger, and have steeper slopes when considering only self-gravitating structures, which are the structures most closely associated with the Heyer relation. However, due to the strong feedback in the G333 complex, only a small fraction of the structures are in a state of self-gravitational collapse.
7. Although the feedback disrupting the molecular clouds will break up the original cloud complex, the substructures of the original complex can be reorganized into new gravitationally governed configurations around new gravitational centers. This process is accompanied by structural destruction and generation, and changes in gravitational centers, but gravitational collapse is always ongoing.
###### Acknowledgements.
We would like to thank the referee for the detailed comments and suggestions that significantly improve and clarify this work. This publication is based on data acquired with the Atacama Pathfinder Experiment (APEX) under programme ID M-0109.F-9514A-2022. APEX has been a collaboration between the Max-Planck-Institut fur Radioastronomie, the European Southern Observatory, and the Onsala Space Observatory. This research made use of astrodendro, a Python package to compute dendrograms of Astronomical data ([http://www.dendrograms.org/](http://www.dendrograms.org/)). J. W. Zhou thanks E. Vazquez-Semadeni for the helpful discussions. This work has been supported by the National Key R&D Program of China (No. 2022YFA1603101). Tie Liu acknowledges the supports by National Natural Science Foundation of China (NSFC) through grants No. 102703601 and No.1122307, the international partnership program of Chinese Academy of Sciences through grant No.114231KYSB20200009, and Shanghai Pujiang Program 20P14145500.
|
2309.07769 | Cosmological Correlators Through the Looking Glass: Reality, Parity, and
Factorisation | We consider the evolution of quantum fields during inflation, and show that
the total-energy singularities appearing in the perturbative expansion of the
late-time Wavefunction of the Universe are purely real when the external states
are massless scalars and massless gravitons. Our proof relies on the tree-level
approximation, Bunch-Davies initial conditions, and exact scale invariance
(IR-convergence), but without any assumptions on invariance under de Sitter
boosts. We consider all $n$-point functions and allow for the exchange of
additional states of any mass and integer spin. Our proof makes use of a
decomposition of the inflationary bulk-bulk propagator of massive spinning
fields which preserves UV-convergence and ensures that the time-ordered
contributions are purely real after we rotate to Euclidean time. We use this
reality property to show that the maximally-connected parts of wavefunction
coefficients, from which total-energy singularities originate, are purely real.
In a theory where all states are in the complementary series, this reality
extends to the full wavefunction coefficient. We then use our reality theorem
to show that parity-odd correlators (correlators that are mirror asymmetric)
are factorised and do not diverge when the total-energy is conserved. We pay
special attention to the parity-odd four-point function (trispectrum) of
inflationary curvature perturbations and use our reality/factorisation theorems
to show that this observable is factorised into a product of cubic diagrams
thereby enabling us to derive exact shapes. We present examples of couplings
between the inflaton and massive spin-1 and spin-2 fields, with the
parity-violation in the trispectrum driven by Chern-Simons corrections to the
spinning field two-point function, or from parity-violating cubic interactions
which we build within the Effective Field Theory of Inflation. | David Stefanyszyn, Xi Tong, Yuhang Zhu | 2023-09-14T14:59:55Z | http://arxiv.org/abs/2309.07769v2 | # Cosmological Correlators Through the Looking Glass: Reality, Parity, and Factorisation
###### Abstract
We consider the evolution of quantum fields during inflation, and show that the total-energy singularities appearing in the perturbative expansion of the late-time Wavefunction of the Universe are purely real when the external states are massless scalars and massless gravitons. Our proof relies on the tree-level approximation, Bunch-Davies initial conditions, and exact scale invariance (IR-convergence), but without any assumptions on invariance under de Sitter boosts. We consider all \(n\)-point functions and allow for the exchange of additional states of any mass and integer spin. Our proof makes use of a decomposition of the inflationary bulk-bulk propagator of massive spinning fields which preserves UV-convergence and ensures that the time-ordered contributions are purely real after we rotate to Euclidean time. We use this reality property to show that the _maximally-connected_ parts of wavefunction coefficients, from which total-energy singularities come from, are purely real. In a theory where all states are in the complementary series, this reality extends to the full wavefunction coefficient. We then use our reality theorem to show that parity-odd correlators (correlators that are mirror asymmetric) are _factorised_ and do not diverge when the total-energy is conserved. We pay special attention to the parity-odd four-point function (trispectrum) of inflationary curvature perturbations and use our reality/factorisation theorems to show that this observable is factorised into a product of cubic diagrams thereby enabling us to derive exact shapes. We present examples of couplings between the inflaton and massive spin-1 and spin-2 fields, with the parity-violation in the trispectrum driven by Chern-Simons corrections to the spinning field two-point function, or from parity-violating cubic interactions which we build within the Effective Field Theory of Inflation.
###### Contents
* 1 Introduction
* 2 Massive spinning fields during inflation
* 2.1 Cosmological condensed matter physics
* 2.2 Cosmological collider physics
* 2.3 Mass parameter comparison
* 3 General properties of Wick-rotated propagators
* 3.1 Cosmological condensed matter physics
* 3.2 Cosmological collider physics
* 4 Reality and factorisation
* 4.1 Light fields: the wavefunction reality
* 4.2 Adding heavy fields: the total-energy reality
* 4.3 Factorising parity-odd correlators
* 5 Exact parity-odd trispectra
* 5.1 Example 1: spin-1 exchange in CC with chemical potential
* 5.2 Example 2: spin-2 exchange in CCM
* 5.3 Example 3: spin-2 exchange in CC
* 6 Conclusions and outlook
* A General solution of \(\Delta G_{\sigma}^{(h)}\)
* B Proofs via the in-in/Schwinger-Keldysh formalism
* C Reality from Hermitian analyticity
* D Beyond scale invariance: reality in other FLRW spacetimes
## 1 Introduction
The fundamental observables of inflationary cosmology are late-time cosmological correlators, namely expectation values of quantum fields evaluated at the end of inflation. In the simplest models of inflation we are interested in correlations between the two massless fields that survive the rapid expansion of the background spacetime: the Goldstone boson of broken time translations, and the graviton. These correlators provide the initial conditions for Hot Big Bang cosmology, and observations of the Cosmic Microwave Background and Large Scale Structure then allow us to in principle distinguish between different models of the very early universe by measuring these spatial correlations. Given the very high energies that could characterise inflation, additional massive particles can be produced from the vacuum and decay into the light states that make it to the end of inflation. Such heavy states leave distinctive imprints on late-time correlators that encode their masses (through time evolution) and spins (through kinematics), thereby leading to the tantalising prospect of using the early universe as a very energetic _cosmological collider_[1, 2, 3, 4, 5].
Traditionally, such correlators are computed in de Sitter space perturbation theory using the in-in/Schwinger-Keldysh formalism (see e.g. [6, 7, 8] for reviews), or the Wavefunction of the Universe. Such computations quickly become very complicated due to the background time dependence which, in contrast to flat-space, means that the time integrals one must compute are very non-trivial. Further complications arise from the fact that the mode functions of massive fields in de Sitter space are usually characterised by Hankel functions (or similar functions) which are harder to integrate compared to the plane wave solutions which are familiar in flat-space. Such complications provide a stumbling block in one's quest to understand the late-time effects of massive fields during inflation, and more generally to understand the fundamental properties of cosmological correlators.
This has motivated the _cosmological bootstrap programme_ which aims to develop new computational techniques from which one can compute cosmological correlators while avoiding the unobservable time evolution of quantum fluctuations and the associated complicated nested time integrals. The idea is to use general physical principles such as symmetries, locality and unitarity to directly fix the structure of boundary correlators. This programme is motivated by the highly successful S-matrix bootstrap programme where scattering amplitudes are computed while avoiding the complications of Feynman diagrams [9, 10, 11, 12, 13, 14]. Much progress on understanding the structure of cosmological correlators has been made in recent years focusing on correlators in exact de Sitter space [15, 16, 17, 18, 19], correlators arising from interactions with broken de Sitter boosts [20, 21], the role of analyticity and unitarity in these late-time observables [22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32], constraints from bulk locality [33], the effects of additional degrees of freedom during inflation [34, 35, 36, 37, 38, 39, 40, 41], loop effects [42, 43, 44, 45, 46, 47, 48, 49, 50], graviton correlation functions [51, 52, 53, 54], double-copy structures [55, 56, 57], non-linear realisations [58, 59, 60, 61], scattering equations [62, 63], the connections between scattering amplitudes and correlators [64, 65, 66, 67, 68], Mellin space representations [49, 50, 69, 70, 71, 72, 73], differential representations [74, 75], non-perturbative effects [76, 77], relations to geometry [78, 79, 80, 81], and with much more fun to be had in the coming years. For reviews see [82, 83, 84].
In this work we focus on the Wavefunction of the Universe and the associated wavefunction coefficients which contain the dynamical information about the evolution of quantum fields in de Sitter space. It is now well-known that for fields that satisfy Bunch-Davies initial conditions, i.e. for fields that have Minkowski-like behaviour in the far past, wavefunction coefficients have a restricted set of singularities. Indeed, wavefunction coefficients are only singular when the total-energy of the external states vanishes, or when the total-energy entering a sub-graph vanishes.4 For physical configurations i.e. for real momenta, such singularities cannot be reached, however much of our understanding of wavefunction coefficients and cosmological correlators stems from analytically continuing away from real momenta in which case
energies can become negative and singularities can be probed. For example, the leading total-energy singularity of \(n\)-point functions allows us to probe the corresponding flat-space (boost-breaking) scattering amplitude [64, 65, 68, 22] which provides a non-trivial link between the cosmological bootstrap and the S-matrix bootstrap, while at four-points the additional singularities are partial-energy ones on which wavefunction coefficients factorise into a product of a lower-point wavefunction coefficient and a scattering amplitude, see e.g. [18]. When wavefunction coefficients are rational, which is often the case for massless scalars and gravitons [85], the residues of these partial-energy poles can be fixed by unitarity and energy shifts [26, 33].
In general, wavefunction coefficients in momentum space are complex functions of the external kinematics and cosmological correlators correspond to taking the real or imaginary parts, at least at tree-level. For parity-even interactions it is the real part that contributes to correlators, while for parity-odd interactions it is the imaginary part [37]. While it is known that the tree-level wavefunction coefficients in simple massless theories are purely real [86] (thereby making parity-odd correlators a neat probe of exotic inflationary physics), in this paper we greatly extend this concept of reality. More precisely, we show that tree-level wavefunction coefficients with massless scalar external states have purely real total-energy poles (both leading and sub-leading) as long as the bulk interactions are IR-convergent. Our proof is valid for all \(n\)-point functions at tree-level and we allow for Feynman diagrams corresponding to the exchange of massive spinning fields of any mass and integer spin. More concretely, we show that contributions to wavefunction coefficients from the maximally-connected parts of Feynman diagrams are purely real, where "maximally-connected" corresponds to the contributions to the integrands with the maximal number of \(\theta\)-functions (and this is where total-energy singularities come from). Our proof makes use of Wick rotations of the time integrals that compute these wavefunction coefficients and a simple property of the bulk-bulk propagator of massive fields in de Sitter space: the time-ordered part is purely real after the time variables are both Wick rotated by \(90^{\circ}\) in the complex plane. This property follows as a simple consequence of the differential equation that the bulk-bulk propagator must satisfy given that it is a Green's function. We provide more explicit and detailed proofs within two different set-ups for describing massive spinning fields during inflation: _cosmological condensed matter physics_ (CCM) and _cosmological collider physics_ (CC).
The former was introduced in [87] and requires a sizeable coupling between the new massive degrees of freedom and the time-dependent inflaton. In this set-up fields are classified with respect to how they transform under the unbroken group of symmetries which for cosmology is spatial rotations. The theory should in addition have all of the symmetries of the Effective Field Theory of Inflation (EFToI) [88] and this can be guaranteed thanks to particular couplings with the inflaton which can be built straightforwardly using the building blocks of the EFToI [87]. From the point of view of spontaneous symmetry breaking and the coset construction, such new degrees of freedom are classified as matter fields which can couple to the Goldstone boson of broken time translations. The masses of these fields are not restricted by the Higuchi bound [89], which illustrates that they cannot exist in an exactly de Sitter invariant theory. This set-up is somewhat similar to condensed matter systems where linearly realised Lorentz boosts are not used to construct the effective theory (and hence the name). The latter is perhaps more familiar to the reader and corresponds to describing massive degrees of freedom as representations of the de Sitter group. The masses of these fields must adhere to a lower bound in order for the theory to remain unitary, which is the Higuchi bound [89]. The Lagrangians for such fields are known [90, 91, 92], but quickly become very complicated due to the need to include auxiliary fields (which ultimately enforce the transverse and traceless conditions on the fields as required by the degrees of freedom counting). One can instead work directly with the equations of motion which take a simpler form [93]. Although the free theories for these new degrees of freedom are de Sitter invariant, de Sitter boosts can be broken when we come to couple these fields to the inflaton. This can again be done within the language of the EFToI as done in [5]. We will review both set-ups in Section 2.
In each case we consider light and heavy fields, i.e. those in the complementary and principal series,
respectively. We concentrate on the reality properties of the Wick-rotated bulk-bulk propagators which, in cosmology, are composed of the usual time-ordered Feynman propagator and a factorised term necessitated by the boundary conditions. For light fields, we show that the full bulk-bulk propagator is purely real after Wick rotation. For heavy fields, we show that although the full bulk-bulk propagator is complex in general, we are able to add and subtract factorised contributions to the full bulk-bulk propagator in such a way that we cancel the imaginary parts of the Wick-rotated Feynman propagator. The new time-ordered part, which enjoys reality after rotation, and which is now a sum of the Feynman propagator and a factorised contribution, is referred to as the _connected propagator_. This decomposition is diagrammatically represented in (1.1). In addition, this connected propagator enjoys the crucial property that it vanishes in the far past, which ensures that Feynman diagrams constructed using this propagator maintain UV convergence of the associated time integrals. The detailed treatment of the propagator realities differs from one set-up to another. In the CCM set-up, we allow for parity violation in the free theory of the massive spinning field coming from a chemical potential term with a single spatial derivative in the action [94, 95, 96, 38]. This splits the helicities and changes the mode functions from Hankel functions to Whittaker functions. The propagator realities then require cancellations once we sum over the helicities. In the limit of a vanishing chemical potential with a parity-conserving propagator, reality holds for each helicity mode separately with the proof for light fields already appearing in [37]. In the CC set-up we maintain parity in the free theory of the massive spinning field (except for spin-1 which is essentially identical to the CCM case) and again show that for light fields the Wick-rotated propagator is purely real, for each helicity mode, while for heavy fields it is again only the connected part that is purely real. In order to arrive at this conclusion we use the fact that the transverse and traceless conditions for these fields relate modes with the same helicity via differential operators that have simple properties under Wick rotation. These proofs make up Section 3.
In Section 4 we then use these general properties of Wick-rotated bulk-bulk propagators to prove that total-energy poles of wavefunction coefficients with external massless scalars are purely real under the assumptions of IR-convergence, scale invariance and the tree-level approximation. Our proof does not rely on de Sitter boosts and is therefore directly applicable to inflationary correlators. We also point out that our proof applies to external gravitons, and to wavefunction coefficients with an even number of conformally coupled scalars. Our approach is to extract the connected part of the full bulk-bulk propagator and show that diagrams that only involve connected propagators (maximally-connected diagrams) are purely real, and indeed total-energy singularities come from such diagrams. This is a general result, but if we restrict ourselves to the exchange of light fields only, then the full wavefunction coefficient is real. In Appendix C, we offer a complementary proof of the reality of total-energy poles using the _Hermitian analyticity_ properties of the external bulk-boundary propagators and the internal bulk-bulk ones. This property combined with exact scale invariance allows us to draw the same conclusions we arrived at using Wick rotations. Hermitian analyticity of bulk-bulk propagators in the context of the CCM scenario has been established in [24], and in Appendix C we extend the analysis to the CC scenario.
As a proof of the usefulness of this observation, we consider the parity-odd scalar trispectrum in Section 5, which has recently gained some attention [97, 98, 99, 100, 101, 102, 103, 104]. This correlator is fixed by the imaginary parts of wavefunction coefficients and given that total-energy poles are real, they do not contribute to the parity-odd trispectrum (unless there is some IR divergence as in [101], or if they come from loop diagrams as in [42]). This implies that this observable is in fact factorised at tree-level, and can only have partial-energy poles. In computing cosmological correlators the primary difficulties arise when one computes the nested time integrals coming from the time-ordered propagators. Since here these contributions are purely real, computing the parity-odd trispectrum due to the exchange of massive spinning fields reduces to computing lower-order, factorised time integrals which have known closed-form solutions. We present a number of _exact_ parity-odd trispectra for both the CCM and CC descriptions of massive spinning fields focusing primarily on spin-1 and spin-2. We consider different sources of parity-violation: parity-violating bulk
interactions and parity violation in the free theory describing the massive field. The latter case is usually studied in the context of cosmological chemical potentials, where it is known that the chemical potential can assist particle production [96, 105] and boost the cosmological collider signal [106, 107, 108, 109, 94, 95, 110, 111]. Schematically, the resulting parity-odd trispectra take the form
\[B_{4}^{\text{PO}}=\sum\text{constants}\times\text{kinematics}\times(\text{ hypergeometric function})_{L}\times(\text{hypergeometric function})_{R} \tag{1.2}\]
where \((L,R)\) correspond to kinematic structures with partial-energy (\(E_{L,R}\)) singularities. There are no total-energy singularities. In all cases the time evolution is characterised by a product of hypergeometric functions.
**Notations and conventions.** Throughout this paper, we adopt natural units \(c=\hbar=1\) and work with the \((-+++)\) metric sign convention. Fourier transforms are defined by
\[f(\mathbf{x})=\int\frac{d^{3}k}{(2\pi)^{3}}e^{i\mathbf{k}\cdot\mathbf{x}}f( \mathbf{k})\equiv\int_{\mathbf{k}}e^{i\mathbf{k}\cdot\mathbf{x}}f(\mathbf{k}). \tag{1.3}\]
We will always adhere to exact translational and rotational invariance. In fact, we shall work in the Poincaré patch of de Sitter space with the metric
\[ds^{2}=a^{2}(\eta)(-d\eta^{2}+d\mathbf{x}^{2})\,\quad a(\eta)=-\frac{1}{H \eta}\, \tag{1.4}\]
as an approximation for an inflationary spacetime. We will mainly work with the wavefunction formalism and define the wavefunction coefficients by the expansion
\[\Psi[\varphi]=\exp\Bigg{[}+\sum_{n=2}^{\infty}\frac{1}{n!}\int_{\mathbf{k}_{1} \cdots\mathbf{k}_{n}}\psi_{n}(\{k\},\{\mathbf{k}\})(2\pi)^{3}\delta^{3}\bigg{(} \sum_{\mathbf{a}=1}^{n}\mathbf{k}_{\mathbf{a}}\bigg{)}\varphi(\mathbf{k}_{1}) \cdots\varphi(\mathbf{k}_{n})\Bigg{]}\, \tag{1.5}\]
Figure 1: Scale invariance plus a Bunch-Davies vacuum implies that the maximally-connected parts of wavefunction coefficients are purely real where the maximally-connected part is given by replacing all bulk-bulk propagators \(G\) with the connected propagators \(C\). Such a maximally-connected part is depicted as a hatched grey blob in this figure. We refer to this result as the \(k_{T}\)-reality since all total-energy singularities come from the maximally-connected wavefunction coefficients. This is a general result that applies for the exchange of fields of any mass and integer spin. If the exchanged fields are light i.e. they are in the complementary series, then the full wavefunction coefficient is real (not just the maximally-connected part) since in this case \(C\) coincides with \(G\). If we further consider parity-odd correlation functions, then the wavefunction reality implies that such \(n\)-point correlators are factorised since only the imaginary part of maximally-connected wavefunction coefficients contribute to these observables and the imaginary parts vanish.
here \(\varphi\) denotes a general field with indices suppressed. The bulk-boundary and bulk-bulk propagators in the wavefunction formalism are given by
\[K(\eta,k) =\frac{\varphi^{*}(k,\eta)}{\varphi^{*}(k,\eta_{0})}\, \tag{1.6}\] \[G(\eta_{1},\eta_{2},k) =P(k)\left[\left(K^{*}(\eta_{1},k)K(\eta_{2},k)\theta(\eta_{1}- \eta_{2})+(\eta_{1}\leftrightarrow\eta_{2})\right)-K(\eta_{1},k)K(\eta_{2},k) \right]\, \tag{1.7}\]
with \(P(k)=\varphi(\eta_{0},k)\varphi^{*}(\eta_{0},k)\) denoting the power spectrum at \(\eta=\eta_{0}\). When computing the wavefunction coefficients, we adopt the amplitude Feynman rules by including a factor of \(i\) for every vertex (our bulk-bulk propagator therefore differs from that in [22] by a factor of \(i\)). Wick rotation turns out to be crucial in this paper. Hence, for clarity, we adopt \(\chi>0\) to denote the Wick-rotated conformal time, defined by
\[\eta=ie^{i\epsilon}\chi\,\quad\epsilon\to 0^{+}. \tag{1.8}\]
Under this transformation, the propagators are dressed with tildes to indicate Wick rotation,
\[\tilde{K}(\chi,k) =K(\eta,k)\, \tag{1.9}\] \[\tilde{G}(\chi_{1},\chi_{2},k) =G(\eta_{1},\eta_{2},k). \tag{1.10}\]
We will pay much attention to the total energy of a diagram with external momenta \(\{\mathbf{k}_{1},\cdots,\mathbf{k}_{n}\}\) given by
\[k_{T}=k_{1}+\cdots+k_{n}\, \tag{1.11}\]
where \(k_{\mathtt{a}}=|\mathbf{k}_{\mathtt{a}}|\), \(\mathtt{a}=1,\cdots,n\), are the external energy variables. Note that correlators with a prime denote the removal of an overall momentum-conserving \(\delta\)-function, e.g.
\[\left\langle\varphi(\mathbf{k}_{1})\cdots\varphi(\mathbf{k}_{n}) \right\rangle=(2\pi)^{3}\delta^{3}\left(\sum_{\mathtt{a}=1}^{n}\mathbf{k}_{ \mathtt{a}}\right)\left\langle\varphi(\mathbf{k}_{1})\cdots\varphi(\mathbf{k} _{n})\right\rangle^{\prime}=(2\pi)^{3}\delta^{3}\left(\sum_{\mathtt{a}=1}^{n} \mathbf{k}_{\mathtt{a}}\right)B_{n}^{\varphi}(\mathbf{k}_{1},\cdots,\mathbf{k} _{n}). \tag{1.12}\]
In the case of 4-point correlation functions, we invoke the Mandelstam-like variables
\[\mathbf{s} =\mathbf{k}_{1}+\mathbf{k}_{2}\, \mathbf{t} =\mathbf{k}_{1}+\mathbf{k}_{3}\, \mathbf{u} =\mathbf{k}_{1}+\mathbf{k}_{4}\,\] \[s =|\mathbf{k}_{1}+\mathbf{k}_{2}|\, t =|\mathbf{k}_{1}+\mathbf{k}_{3}|\, u =|\mathbf{k}_{1}+\mathbf{k}_{4}|\, \tag{1.13}\]
Figure 2: The first line is the usual relationship between (\(s\)-channel) four-point correlators and wavefunction coefficients. In the second line we have decomposed the full bulk-bulk propagator into the connected and factorised parts. The \(k_{T}\)-reality tells us that the maximally-connected part is purely real which in turn implies that the parity-odd part of the four-point function is factorised (as above).
which satisfy the non-linear relation
\[k_{1}^{2}+k_{2}^{2}+k_{3}^{2}+k_{4}^{2}=s^{2}+t^{2}+u^{2}, \tag{1.14}\]
by momentum conservation. We define a dimensionless curvature trispectrum \(\mathcal{T}\) by
\[B_{4}^{\zeta}(\mathbf{k}_{1},\mathbf{k}_{2},\mathbf{k}_{3},\mathbf{k}_{4})=(2 \pi)^{6}\Delta_{\zeta}^{6}\frac{(k_{T}/4)^{3}}{(k_{1}k_{2}k_{3}k_{4})^{3}} \mathcal{T}(\mathbf{k}_{1},\mathbf{k}_{2},\mathbf{k}_{3},\mathbf{k}_{4})\,. \tag{1.15}\]
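Since (1.14) is used repeatedly below, a minimal numerical illustration may be useful; the following sketch (with randomly chosen momenta, not tied to any configuration in the text) checks that the relation holds once \(\mathbf{k}_{4}\) is fixed by momentum conservation.

```python
import numpy as np

rng = np.random.default_rng(1)

# three independent external momenta; the fourth is fixed by momentum conservation
k1, k2, k3 = rng.normal(size=(3, 3))
k4 = -(k1 + k2 + k3)

s = np.linalg.norm(k1 + k2)
t = np.linalg.norm(k1 + k3)
u = np.linalg.norm(k1 + k4)

lhs = sum(np.dot(k, k) for k in (k1, k2, k3, k4))
print(np.isclose(lhs, s**2 + t**2 + u**2))   # True, eq. (1.14)
```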
For spinning fields, the CCM and CC set-ups historically chose different conventions for polarisation tensors. We choose to respect these distinct conventions and use different fonts to avoid confusion:
\[\text{Cosmological Condensed Matter Scenario: }\mathrm{e}^{(h)}_{i_{1}\cdots i_{S}}\,\qquad\text{Cosmological Collider Scenario: }\mathfrak{e}^{(h)}_{i_{1}\cdots i_{S}}\. \tag{1.16}\]
The different properties of these tensors will be explained in Section 2. Throughout this paper, we make use of a number of mathematical formulae for Hankel and Whittaker functions, which can be found in Chapter 10 and Chapter 13 of NIST [111].
## 2 Massive spinning fields during inflation
As we explained in the introduction, in this work we are interested in \(n\)-point functions of massless scalar fields (and gravitons) that can be generated at tree-level due to interactions with massive spinning fields. In this section we introduce these massive spinning fields and derive their mode functions. We consider the two different cases of interest, cosmological condensed matter physics and cosmological collider physics, separately. This section does not contain any new results, so readers familiar with these descriptions of massive spinning fields (including parity-violating corrections from the chemical potential) can skip to Section 3.
### Cosmological condensed matter physics
We begin with the description of massive spinning fields during inflation advocated in [87]. The idea is to classify states with respect to the unbroken rotational symmetries, rather than as representations of the full de Sitter group. Fields therefore only have spatial indices, and we denote a field of spin \(S\) by \(\Sigma^{i_{1}\cdots i_{S}}\). For this field to carry \(2S+1\) degrees of freedom, it should be traceless but not transverse. To ensure that the symmetries of the EFToI are respected, this field is promoted to \(\Sigma^{\mu_{1}\cdots\mu_{S}}\) where the new temporal components depend on the Goldstone of broken time translation \(\pi\). For example, for \(S=1\) we have [87]
\[\Sigma^{0}(\pi,\Sigma^{i})=-\frac{\partial_{i}\pi\Sigma^{i}}{1+\dot{\pi}}\, \tag{2.1}\]
while for \(S=2\) we have
\[\Sigma^{00}(\pi,\Sigma^{ij})=\frac{\partial_{i}\pi\partial_{j}\pi\Sigma^{ij}} {(1+\dot{\pi})^{2}}\,\qquad\Sigma^{0j}(\pi,\Sigma^{ij})=-\frac{\partial_{i}\pi\Sigma^{ij}}{1+ \dot{\pi}}. \tag{2.2}\]
We then write down a quadratic action for \(\Sigma^{\mu_{1}\cdots\mu_{S}}\) which will introduce quadratic terms for the spatial components, and interactions between the spatial components and the inflaton with the same coefficients. This illustrates the fact that such a theory cannot exist in the absence of the inflaton. The general action
with at most two derivatives is
\[S_{2}=\frac{1}{2S!}\int d^{4}x\sqrt{-g}\Bigg{[} \left(1-c^{2}\right)n^{\mu}n^{\lambda}\nabla_{\mu}\Sigma^{\nu_{1} \cdots\nu_{S}}\nabla_{\lambda}\Sigma_{\nu_{1}\cdots\nu_{S}}-c^{2}\nabla_{\mu} \Sigma^{\nu_{1}\cdots\nu_{S}}\nabla^{\mu}\Sigma_{\nu_{1}\cdots\nu_{S}}\] \[-\delta c^{2}\nabla_{\mu}\Sigma^{\mu\nu_{2}\cdots\nu_{S}}\nabla_{ \lambda}\Sigma^{\lambda}_{\ \nu_{2}\cdots\nu_{S}}-\left(m^{2}+Sc^{2}H^{2}\right) \Sigma^{\nu_{1}\cdots\nu_{S}}\Sigma_{\nu_{1}\cdots\nu_{S}}\] \[-2S\kappa\,n^{\mu}\epsilon_{\mu\rho\gamma\lambda}\Sigma^{\rho\nu _{2}\cdots\nu_{S}}\,\nabla^{\gamma}\Sigma^{\lambda}_{\ \nu_{2}\cdots\nu_{S}}\Bigg{]}\, \tag{2.3}\]
where \(n^{\mu}n_{\mu}=-1\) is a timelike unit vector that defines a preferred frame in which the spatial rotations remain intact. Notice that in addition to the first four terms which appear in [87], we have also included a fifth term which has a single derivative and is parity-odd. We in principle have five free parameters but we can fix one using our freedom to normalise \(\Sigma^{\mu_{1}\cdots\mu_{S}}\) which leaves us with four: \(c,\delta c,m^{2}\) and \(\kappa\) which respectively correspond to two speed-of-sound parameters, the mass, and the _chemical potential_. The free theory for \(\Sigma^{i_{1}\cdots i_{S}}\) is then
\[S_{2}=\frac{1}{2S!}\int d\eta d^{3}xa(\eta)^{2}\Bigg{[} \sigma_{i_{1}\cdots i_{S}}^{\prime 2}-c^{2}(\partial_{j}\sigma_{i_{1} \cdots i_{S}})^{2}-\delta c^{2}(\partial_{j}\sigma_{ji_{2}\cdots i_{S}})^{2}\] \[-a(\eta)^{2}m^{2}\sigma_{i_{1}\cdots i_{S}}^{2}-2Sa(\eta)\kappa \epsilon_{ijk}\sigma_{il_{2}\cdots l_{S}}\partial_{j}\sigma_{kl_{2}\cdots l_ {S}}\Bigg{]}\, \tag{2.4}\]
where we have defined \(\sigma_{i_{1}\cdots i_{S}}=a^{-S}\Sigma_{i_{1}\cdots i_{S}}\) and have converted to conformal time. Here all scale factors are manifest and indices are raised and lowered with the Kronecker symbol \(\delta_{ij}\). We see that the kinetic term for this field is the same as that of a canonical scalar in de Sitter. If we had instead tried to directly construct the most general action with at most two derivatives for \(\sigma_{i_{1}\cdots i_{S}}\) that respects rotational invariance and scale invariance, we would also have arrived at (2.4).5 For \(\Sigma^{i_{1}\cdots i_{S}}\) to be traceless, the trace has to be taken with respect to the induced metric on constant inflaton slices [87]. This implies that in (2.4) we can take \(\sigma_{i_{1}\cdots i_{s}}\) to be traceless with respect to \(\delta_{ij}\) up to terms that are quadratic in \(\sigma_{i_{1}\cdots i_{S}}\) and at least quadratic in \(\pi\). When discussing the free theory for this massive spinning field we therefore take it to satisfy \(\delta_{ij}\sigma_{ijl_{3}\cdots l_{S}}=0\).
Footnote 5: A parity-odd term with one spatial derivative and one time derivative is degenerate with the terms in (2.4) up to a total derivative.
We now convert the action to momentum space and decompose the field in terms of its helicities via
\[\sigma_{i_{1}\cdots i_{S}}(\eta,\mathbf{x})=\sum_{h=-S}^{S}\int_{\mathbf{k}} \sigma_{h}(\eta,k)\mathrm{e}^{(h)}_{i_{1}\cdots i_{S}}(\mathbf{k})e^{i\mathbf{ k}\cdot\mathbf{x}}\, \tag{2.5}\]
where \(\sigma_{h}(\eta,k)\) are the mode functions and \(\mathrm{e}^{(h)}_{i_{1}\cdots i_{S}}(\mathbf{k})\) are the traceless polarisation tensors, satisfying
\[\left[\mathrm{e}^{(h)}_{i_{1}\cdots i_{S}}(\mathbf{k})\right]^{*} =\mathrm{e}^{(h)}_{i_{1}\cdots i_{S}}(-\mathbf{k})\, \tag{2.6}\] \[\mathrm{e}^{(h)}_{i_{1}\cdots i_{S}}(\mathbf{k})\mathrm{e}^{(h^{ \prime})}_{i_{1}\cdots i_{S}}(-\mathbf{k}) =S!\,\delta_{hh^{\prime}}\, \tag{2.7}\]
with the first condition following from the reality of the fields in position space, and the second a normalisation choice. For a given helicity the polarisation tensor is a function of \(\hat{\mathbf{k}}\) and two polarisation directions \(\hat{\mathbf{e}}^{\pm}\) which are orthogonal to \(\hat{\mathbf{k}}\), and satisfy (2.6) and (2.7). More explicitly, we can construct the polarisation directions as
\[\hat{\mathbf{e}}^{\pm}(\hat{\mathbf{k}})=\frac{\hat{\mathbf{n}}-(\hat{ \mathbf{n}}\cdot\hat{\mathbf{k}})\hat{\mathbf{k}}\pm i\,\hat{\mathbf{k}} \times\hat{\mathbf{n}}}{\sqrt{2[1-(\hat{\mathbf{n}}\cdot\hat{\mathbf{k}})^{2 }]}}\, \tag{2.8}\]
where \(\hat{\bf n}\) is an arbitrary unit vector not parallel to \(\hat{\bf k}\). Modes with \(h=0\) are functions of \(\hat{\bf k}\) only, modes with \(|h|=S\) are functions of \(\hat{\bf e}^{\pm}\) only, while intermediate modes are functions of both \(\hat{\bf k}\) and \(\hat{\bf e}^{\pm}\), with \(|h|\) powers of the latter. This structure ensures that modes of different helicity decouple in the quadratic action, while the normalisation choice ensures that each mode has a canonical kinetic term. For example, for \(S=1\) we have
\[{\rm e}^{(0)}_{i}=i\hat{k}_{i}\,\qquad{\rm e}^{(\pm 1)}_{i}=\hat{e}^{\pm}_{i }\, \tag{2.9}\]
while for \(S=2\) we have
\[{\rm e}^{(0)}_{ij}=\sqrt{3}\left(\hat{k}_{i}\hat{k}_{j}-\frac{1}{3}\delta_{ij} \right)\,\qquad{\rm e}^{(\pm 1)}_{ij}=i(\hat{k}_{i}\hat{e}^{\pm}_{j}+\hat{k}_{j} \hat{e}^{\pm}_{i})\,\qquad{\rm e}^{(\pm 2)}_{ij}=\sqrt{2}\hat{e}^{\pm}_{i} \hat{e}^{\pm}_{j}. \tag{2.10}\]
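A minimal numerical check of these conventions (with \(\hat{\mathbf{n}}=\hat{\mathbf{z}}\) and a randomly chosen \(\hat{\mathbf{k}}\), both arbitrary choices) builds \(\hat{\mathbf{e}}^{\pm}\) from (2.8) and verifies the spin-1 conditions (2.6) and (2.7), together with the identity \(i\epsilon_{ijk}k_{j}\hat{e}^{\pm}_{k}=\pm k\hat{e}^{\pm}_{i}\) used shortly below.

```python
import numpy as np

rng = np.random.default_rng(0)
nhat = np.array([0.0, 0.0, 1.0])          # reference vector n-hat of eq. (2.8)

def e_pm(khat, sign):
    """Polarisation directions of eq. (2.8)."""
    a = nhat - np.dot(nhat, khat) * khat
    b = np.cross(khat, nhat)
    return (a + sign * 1j * b) / np.sqrt(2.0 * (1.0 - np.dot(nhat, khat) ** 2))

kvec = rng.normal(size=3)
knorm = np.linalg.norm(kvec)
khat = kvec / knorm

# spin-1 tensors of eq. (2.9), evaluated at k and at -k
e1 = {0: 1j * khat, +1: e_pm(khat, +1), -1: e_pm(khat, -1)}
e1_mk = {0: -1j * khat, +1: e_pm(-khat, +1), -1: e_pm(-khat, -1)}

for h in (-1, 0, 1):
    assert np.allclose(np.conj(e1[h]), e1_mk[h])                     # reality, eq. (2.6)
    for hp in (-1, 0, 1):
        assert np.isclose(np.dot(e1[h], e1_mk[hp]), float(h == hp))  # normalisation, eq. (2.7) with S! = 1

# identity used below eq. (2.12): i eps_{ijk} k_j e^pm_k = +/- k e^pm_i
for sign in (+1, -1):
    assert np.allclose(1j * np.cross(kvec, e_pm(khat, sign)), sign * knorm * e_pm(khat, sign))

print("spin-1 polarisation checks passed")
```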
Further details on these polarisation structures can be found in e.g. [5, 24]. Using these properties of the polarisation tensors, (2.4) becomes a decoupled action for each mode function:
\[S_{2}=\frac{1}{2}\sum_{h=-S}^{S}\int_{\bf k}d\eta a^{2}(\eta) \left[\sigma_{h}^{\prime 2}-c_{h,S}^{2}k^{2}\sigma_{h}^{2}-m^{2}a(\eta)^{2} \sigma_{h}^{2}-2h\kappa ka(\eta)\sigma_{h}^{2}\right]\, \tag{2.11}\]
where
\[c_{h,S}^{2}=c^{2}+\frac{S^{2}-h^{2}}{S(2S-1)}\delta c^{2}. \tag{2.12}\]
In deriving this expression we have used \(i\epsilon_{ijk}k_{j}\hat{e}^{\pm}_{k}=\pm k\hat{e}^{\pm}_{i}\). The fact that the parity-violating term splits the helicities is now clear given the helicity factor \(h\) in the final term. In addition to this term, we see that each mode has the same mass but different speed of sound which depends on the two original speed of sound parameters, the spin and the helicity. The equation of motion for each helicity mode is then
\[\left(\eta^{2}\frac{\partial^{2}}{\partial\eta^{2}}-2\eta\frac{ \partial}{\partial\eta}+c_{h,S}^{2}k^{2}\eta^{2}+\frac{m^{2}}{H^{2}}-\frac{2h \kappa}{H}k\eta\right)\sigma_{h}(\eta,k)=0. \tag{2.13}\]
The solution to this equation with Bunch-Davies initial conditions, i.e. the solution that has Minkowski-like behaviour in the far past and satisfies the de Sitter Wronskian condition, is
\[\sigma_{h}(\eta,k)=e^{-\pi\tilde{\kappa}/2}\frac{-H\eta}{\sqrt{2c_{h,S}k}}W_{i\tilde{\kappa},\nu}(2ic_{h,S}k\eta)\,\quad\nu=\sqrt{\frac{9}{4}-\frac{m^{2}}{H^{2}}}\,\quad\tilde{\kappa}=\frac{h\kappa}{c_{h,S}H}\, \tag{2.14}\]
where \(W_{a,b}(z)\) is the Whittaker \(W\)-function. This solution is valid for both light (Im \(\nu=0\)) and heavy (Re \(\nu=0\)) fields. In the limit that the chemical potential vanishes we would expect to recover the solution of [87] corresponding to the solution of a massive scalar field in de Sitter space. This can be verified using the relation
\[W_{0,\nu}(2ic_{h,S}k\eta)=\sqrt{\frac{\pi}{2}}\sqrt{-c_{h,S}k \eta}e^{i\pi(\nu+1/2)/2}H^{(1)}_{\nu}(-c_{h,S}k\eta)\, \tag{2.15}\]
where \(H^{(1)}_{\nu}(z)\) is the Hankel function of the first kind. We will discuss the corresponding power spectra and propagators in Section 3. Notice that in this CCM scenario, the introduction of chemical potential \(\kappa\) is quite natural since it serves as a next-to-leading order correction to the dispersion relation of massive spinning fields in the gradient expansion, and is consistent with spatial rotations and scale invariance. More precisely, it appears as a linear term in momentum in the dispersion relation of a massive field,
\[\omega^{2}({\bf k}_{p}^{2},{\bf S}\cdot{\bf k}_{p})=m^{2}+2\kappa\,{\bf S}\cdot{\bf k}_{p}+\left[\left(c^{2}+\frac{S}{2S-1}\delta c^{2}\right){\bf k}_{p}^{2}-\frac{\delta c^{2}}{S(2S-1)}({\bf S}\cdot{\bf k}_{p})^{2}\right]+{\cal O}\left(|{\bf k}_{p}|^{3}\right)\, \tag{2.16}\]
where \(\mathbf{k}_{p}\equiv\mathbf{k}/a(t)\) and \(\mathbf{S}\) are the physical momentum and the spin angular momentum of the field mode, respectively. Such a linear correction to the massive field's dispersion relation is not sign-definite, and alters the analytic structure of its equation of motion (2.13), leading to enhanced particle production when \(m\gtrsim\kappa\) [96]. In the case where \(m\lesssim\kappa\), however, modes with a negative linear term may experience a transient tachyonic phase where \(\omega^{2}<0\) and grow exponentially, i.e. \(\sigma\sim e^{-i\omega t}\sim e^{|\omega|t}\). Such tachyonic growth is eventually halted by the finite mass, leading to \(\omega^{2}\approx m^{2}>0\) in the IR limit \(\mathbf{k}_{p}\to 0\). Nevertheless, the exponential growth during the tachyonic period may overproduce particles and threaten perturbativity, or even destabilise the inflationary background. Therefore, to retain theoretical control, we will require \(m-\kappa\gtrsim-H\) throughout this paper.
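Both the Whittaker–Hankel relation (2.15) and the matching between the bracket of (2.16) and \(c^{2}_{h,S}\) of (2.12) are easy to confirm numerically. A minimal mpmath sketch, with arbitrarily chosen parameter values (the identification \(\mathbf{S}\cdot\mathbf{k}_{p}=h|\mathbf{k}_{p}|\) for a helicity-\(h\) mode is assumed):

```python
import mpmath as mp
mp.mp.dps = 30

# (i) Whittaker-Hankel relation (2.15); ck stands for c_{h,S} k, with eta < 0
nu, ck, eta = mp.mpf('0.7'), mp.mpf('1.3'), mp.mpf('-1.7')
lhs = mp.whitw(0, nu, 2j*ck*eta)
rhs = mp.sqrt(mp.pi/2) * mp.sqrt(-ck*eta) * mp.exp(1j*mp.pi*(nu + mp.mpf('0.5'))/2) * mp.hankel1(nu, -ck*eta)
print(mp.chop(lhs - rhs))   # 0

# (ii) bracket of (2.16) versus c_{h,S}^2 of (2.12), using S.k_p = h |k_p| for a helicity-h mode
c2, dc2, S, h = mp.mpf('0.9'), mp.mpf('0.3'), 2, 1
c2_hS   = c2 + mp.mpf(S**2 - h**2)/(S*(2*S - 1)) * dc2
bracket = c2 + mp.mpf(S)/(2*S - 1)*dc2 - dc2/(S*(2*S - 1))*h**2
print(mp.chop(c2_hS - bracket))   # 0
```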
### Cosmological collider physics
We now turn to the more familiar description of massive spinning fields in de Sitter/inflation where they are representations of the de Sitter group. In this section we primarily follow [5] and refer the reader there for further details. Such fields can certainly exist in the absence of the inflaton, in contrast to CCM. A spin-\(S\) bosonic field in this CC set-up is described by a symmetric rank-\(S\) tensor that satisfies:
\[(\Box-m_{S}^{2})\Phi_{\mu_{1}\cdots\mu_{S}}=0\,\qquad\nabla^{\nu}\Phi_{\nu\mu_{ 2}\cdots\mu_{S}}=0\,\qquad\Phi^{\nu}_{\phantom{\nu}\nu\mu_{2}\cdots\mu_{S}}=0\, \tag{2.17}\]
where the mass parameter is \(m_{S}^{2}=m^{2}-(S^{2}-2S-2)H^{2}\). On-shell we therefore have a transverse and traceless rank-\(S\) tensor that satisfies a wave equation. To solve this system we work in a \(3+1\) decomposition where the components are of the form \(\Phi_{\eta\cdots\eta i_{1}\cdots i_{n}}\) with \(0\leqslant n\leqslant S\). We further convert to momentum space and decompose each of these components in terms of helicities via
\[\Phi_{\eta\cdots\eta i_{1}\cdots i_{n}}(\eta,\mathbf{k})=\sum_{h=-n}^{n}\Phi_{ n,S}^{h}(\eta,k)\mathfrak{e}^{(h)}_{i_{1}\cdots i_{n}}(\hat{\mathbf{k}}). \tag{2.18}\]
Each mode function is therefore labelled by three numbers corresponding to the spacetime spin (\(S\)), spatial spin (\(n\)), and helicity component of the spatial spin (\(h\)). We have made a distinction between the polarisation tensors used here (\(\mathfrak{e}\)) compared to above in the CCM case (\(\mathrm{e}\)). This is because a different normalisation is employed in [5] compared to what we used above, and while discussing the CC scenario we will follow the conventions of [5] to (hopefully) avoid confusion for the reader. The polarisation tensors here are still functions of \(\hat{\mathbf{k}}\) and two polarisation directions \(\hat{\mathbf{e}}^{\pm}\), however rather than satisfying (2.6) and (2.7), they are chosen to satisfy
\[\left[\mathfrak{e}^{(h)}_{i_{1}\cdots i_{n}}(\hat{\mathbf{k}}) \right]^{*} =\mathfrak{e}^{(-h)}_{i_{1}\cdots i_{n}}(\hat{\mathbf{k}})\, \tag{2.19}\] \[\mathfrak{e}^{(h)}_{i_{1}\cdots i_{S}}(\mathbf{k})[\mathfrak{e}^ {(h)}_{i_{1}\cdots i_{S}}(\mathbf{k})]^{*} =\frac{(2S-1)!!(S+|h|)!}{2^{|h|}[(2|h|-1)!!]^{2}S!(S-|h|)!} \mathfrak{e}^{(h)}_{i_{1}\cdots i_{|h|}}(\mathbf{k})[\mathfrak{e}^{(h)}_{i_{1} \cdots i_{|h|}}(\mathbf{k})]^{*}\, \tag{2.20}\]
where the polarisation tensor with lowest index is defined as
\[\mathfrak{e}^{(h)}_{i_{1}\cdots i_{|h|}}(\mathbf{k})=2^{|h|/2}\hat{e}^{h}_{i_{ 1}}\cdots\hat{e}^{h}_{i_{|h|}}. \tag{2.21}\]
In both cases (CCM and CC) the numerical factors that multiply the \(\hat{\mathbf{k}}\) and \(\hat{\mathbf{e}}^{\pm}\) factors can be made purely real or purely imaginary. The magnitudes of these factors are fixed by (2.7) and (2.20) in the two cases, while in the CCM case the condition (2.6) fixes the factors to be imaginary when there are an odd number of \(\hat{\mathbf{k}}\)'s, and real when there is an even number, while (2.19) fixes the factors to be always real. This phase difference will ultimately be inconsequential since it can be absorbed into the mode functions which are always only fixed up to a phase.
As an illustration let us spell out the \(S=1\) case. If we decompose \(\Phi_{\mu}\) into its time and space components \(\Phi_{\eta}\) and \(\Phi_{i}\), then \((\Box-m_{1}^{2})\Phi_{\mu}=0\) becomes
\[\Phi^{\prime\prime}_{\eta}-\left(\partial_{i}^{2}-\frac{m^{2}}{H^{ 2}\eta^{2}}+\frac{2}{\eta^{2}}\right)\Phi_{\eta}=\frac{2}{\eta}\partial_{i}\Phi _{i}\, \tag{2.22}\] \[\Phi^{\prime\prime}_{i}-\left(\partial_{j}^{2}-\frac{m^{2}}{H^{2 }\eta^{2}}\right)\Phi_{i}=\frac{2}{\eta}\partial_{i}\Phi_{\eta}\, \tag{2.23}\]
while the transverse constraint is
\[\Phi^{\prime}_{\eta}-\frac{2}{\eta}\Phi_{\eta}=\partial_{i}\Phi_{i}. \tag{2.24}\]
The \(\Phi_{\eta}\) field carries only a \(h=0\) mode, while the \(\Phi_{i}\) components carry both \(h=0\) and \(h=\pm 1\) modes. We then write
\[\Phi_{\eta}=\Phi^{0}_{0,1}\,\qquad\Phi^{(0)}_{i}=\Phi^{0}_{1,1}\mathfrak{e}^{(0)}_{i}\,\qquad\Phi^{(\pm 1)}_{i}=\Phi^{\pm 1}_{1,1}\mathfrak{e}^{(\pm 1)}_{i}. \tag{2.25}\]
The polarisation vectors are chosen to be
\[\mathfrak{e}^{(0)}_{i}=\hat{k}_{i}\,\qquad\mathfrak{e}^{(\pm 1)}_{i}=\hat{e}^{\pm}_{i}. \tag{2.26}\]
The equations of motion then decouple for each mode function and are given by
\[\Phi^{0}_{0,1}{}^{{}^{\prime\prime}}-\frac{2}{\eta}\Phi^{0}_{0,1}{ }^{{}^{\prime}}+\left(k^{2}+\frac{m^{2}}{H^{2}\eta^{2}}+\frac{2}{\eta^{2}} \right)\Phi^{0}_{0,1}=0\, \tag{2.27}\] \[\Phi^{0}_{1,1}{}^{{}^{\prime\prime}}-\frac{k^{2}\eta^{2}}{k^{2} \eta^{2}+m^{2}/H^{2}}\frac{2}{\eta}\Phi^{0}_{1,1}{}^{{}^{\prime}}+\left(k^{2}+ \frac{m^{2}}{H^{2}\eta^{2}}\right)\Phi^{0}_{1,1}=0\,\] (2.28) \[\Phi^{\pm 1}_{1,1}{}^{{}^{\prime\prime}}+\left(k^{2}+\frac{m^{2}}{H^{ 2}\eta^{2}}\right)\Phi^{\pm 1}_{1,1}=0\, \tag{2.29}\]
subject to the transverse constraint
\[\Phi^{0}_{1,1}=-\frac{i}{k}\left(\Phi^{0}_{0,1}{}^{{}^{\prime}}-\frac{2}{\eta }\Phi^{0}_{0,1}\right). \tag{2.30}\]
The equation of motion for the \(h=\pm 1\) modes does not contain the Hubble friction term since the Maxwell kinetic term is conformally invariant. The solutions to these equations with Bunch-Davies initial conditions are
\[\Phi^{0}_{0,1} = \frac{\sqrt{\pi}}{2}\frac{Hk}{m}e^{i\pi(\nu_{1}+1/2)/2}(-\eta)^{3 /2}H^{(1)}_{\nu_{1}}(-k\eta)\, \tag{2.31}\] \[\Phi^{0}_{1,1} = i\frac{\sqrt{\pi}}{4}\frac{H}{m}e^{i\pi(\nu_{1}+1/2)/2}(-\eta)^{ 1/2}[k\eta(H^{(1)}_{\nu_{1}+1}(-k\eta)-H^{(1)}_{\nu_{1}-1}(-k\eta))-H^{(1)}_{ \nu_{1}}(-k\eta)]\,\] (2.32) \[\Phi^{\pm 1}_{1,1} = \frac{\sqrt{\pi}}{2}e^{i\pi(\nu_{1}+1/2)/2}(-\eta)^{1/2}H^{(1)}_{ \nu_{1}}(-k\eta)\, \tag{2.33}\]
where
\[\nu_{1}=\sqrt{\frac{1}{4}-\frac{m^{2}}{H^{2}}}\, \tag{2.34}\]
and the normalisation constants have been fixed by demanding that the commutation relations are the usual ones [5]. We see here a feature that immediately distinguishes this set-up from the CCM one: some of the mode functions in this case are given by a sum of Hankel functions with degenerate order parameters. The dynamics in the two set-ups are therefore different.
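As a small consistency check of these solutions, the transverse constraint (2.30) can be verified numerically; a minimal mpmath sketch, with arbitrarily chosen light-field values so that \(\nu_{1}\) is real:

```python
import mpmath as mp
mp.mp.dps = 30

H, m, k = mp.mpf('1.0'), mp.mpf('0.3'), mp.mpf('1.0')     # m < H/2 so nu_1 is real
nu1 = mp.sqrt(mp.mpf('0.25') - (m/H)**2)
phase = mp.exp(1j*mp.pi*(nu1 + mp.mpf('0.5'))/2)

def Phi_00(eta):   # eq. (2.31)
    return mp.sqrt(mp.pi)/2 * H*k/m * phase * (-eta)**mp.mpf('1.5') * mp.hankel1(nu1, -k*eta)

def Phi_01(eta):   # eq. (2.32)
    return 1j*mp.sqrt(mp.pi)/4 * H/m * phase * mp.sqrt(-eta) * (
        k*eta*(mp.hankel1(nu1 + 1, -k*eta) - mp.hankel1(nu1 - 1, -k*eta)) - mp.hankel1(nu1, -k*eta))

eta = mp.mpf('-2.3')
constraint = -1j/k * (mp.diff(Phi_00, eta) - 2/eta*Phi_00(eta))   # RHS of eq. (2.30)
print(abs(Phi_01(eta) - constraint))   # ~ 0, up to the accuracy of the numerical derivative
```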
The story for spin-\(S\) is similar. Modes with helicity \(h\) can come from all components with \(n\geqslant|h|\), and those with \(n=|h|\) satisfy [5]
\[\Phi^{h}_{|h|,S}{}^{{}^{\prime\prime}}-\frac{2(1-|h|)}{\eta}\Phi^{h}_{|h|,S}{}^{ {}^{\prime}}+\left(k^{2}+\frac{m^{2}}{H^{2}\eta^{2}}-\frac{(S+|h|-2)(S-|h|+1)}{ \eta^{2}}\right)\Phi^{h}_{|h|,S}=0. \tag{2.35}\]
The other mode functions with the same helicity but with \(n>|h|\) are then obtained iteratively from the transverse and traceless conditions which fix6
Footnote 6: As noticed in [35], there is a typo in the corresponding formula in [5], (A.70): the coefficient of the \(B_{m,n+1}\) terms should be \(+1\) rather than \(-1\).
\[\Phi^{h}_{n+1,S}=-\frac{i}{k}\left(\Phi^{h}_{n,S}{}^{{}^{\prime}}-\frac{2}{ \eta}\Phi^{h}_{n,S}\right)+\sum_{m=|h|}^{n}B_{m,n+1}\Phi^{h}_{m,S}\, \tag{2.36}\]
where
\[B_{m,n}=\frac{2^{n}n!}{m!(n-m)!(2n-1)!!}\frac{\Gamma[\frac{1}{2}(1+m+n)]}{ \Gamma[\frac{1}{2}(1+m-n)]}. \tag{2.37}\]
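The fact that \(B_{m,n}\) vanishes whenever \(n\) and \(m\) differ by an odd number (noted in the text just below) is manifest numerically once the Gamma function in the denominator is handled through its reciprocal; a minimal sketch:

```python
import mpmath as mp

def B(m, n):
    """Eq. (2.37); the reciprocal gamma takes care of the poles of the denominator."""
    return (2**n * mp.factorial(n)) / (mp.factorial(m) * mp.factorial(n - m) * mp.fac2(2*n - 1)) \
           * mp.gamma(mp.mpf(1 + m + n)/2) * mp.rgamma(mp.mpf(1 + m - n)/2)

for n in range(1, 6):
    print(n, [mp.nstr(B(m, n), 4) for m in range(n)])
# entries with n - m odd vanish identically
```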
One can also solve the recursion relation above and write the mode functions with higher spatial spin as a linear differential operator \(\hat{\mathcal{D}}\) acting on the lowest-spatial-spin one,
\[\Phi^{h}_{n,S}(\eta,k)\equiv\hat{\mathcal{D}}_{h,n}(i\eta,k)\Phi^{h}_{|h|,S}( \eta,k)\,\qquad n>|h|. \tag{2.38}\]
This form of the general mode function will become useful later in Section 3.2. Note that for a given \(n\), \(B_{m,n}\) is only non-zero if \(n\) and \(m\) differ by an even number. This is because the terms proportional to \(B_{m,n}\) come from subtracting traces from \(\Phi_{\eta\cdots\eta i_{1}\cdots i_{n}}\). The factor of \(i\) in this expression comes from converting to momentum space, and no additional factors of \(i\) appear since here the coefficients of all polarisation tensors are taken to be real. The solution to (2.35) with Bunch-Davies initial conditions is
\[\Phi^{h}_{|h|,S}=e^{i\pi(\nu_{S}+1/2)/2}Z^{h}_{S}(-k\eta)^{3/2-|h|}H^{(1)}_{ \nu_{S}}(-k\eta)\, \tag{2.39}\]
with
\[\left(Z^{h}_{S}\right)^{2}=\frac{\pi}{4}\frac{1}{k}\left(\frac{k}{H}\right)^ {2S-2}\frac{[(2|h|-1)!!]^{2}S!(S-|h|)!}{(2S-1)!!(S+|h|)!}\frac{\Gamma(\frac{1} {2}+|h|+\nu_{S})\Gamma(\frac{1}{2}+|h|-\nu_{S})}{\Gamma(\frac{1}{2}+S+\nu_{S} )\Gamma(\frac{1}{2}+S-\nu_{S})}\, \tag{2.40}\]
which is again fixed by demanding that we have the usual commutation relations [5], and
\[\nu_{S}=\sqrt{\left(S-\frac{1}{2}\right)^{2}-\frac{m^{2}}{H^{2}}}. \tag{2.41}\]
As we saw explicitly for \(S=1\), the mode functions with \(n>|h|\) will be given by a sum of Hankel functions.
The different solutions fall into different classes depending on the mass of the field. The _principal series_ corresponds to heavy masses with Re \(\nu_{S}=0\), i.e. \(m^{2}\geqslant H^{2}(S-1/2)^{2}\). The _complementary series_ corresponds to light masses where Im \(\nu_{S}=0\), however, the Higuchi bound sets a lower bound on the mass such that the theory remains unitary.7 The complementary series is then defined by \(S(S-1)<m^{2}/H^{2}<(S-1/2)^{2}\). Finally, we have the discrete series for which \(m^{2}=H^{2}[S(S-1)-T(T-1)]\) for \(S,T=0,1,2,\cdots\), with \(T\leqslant S\). In these cases there is an additional gauge symmetry that reduces the number of propagating degrees of freedom and corresponds to partial masslessness [112, 113]. We will discuss the corresponding power spectra and propagators in the following section.
Footnote 7: For \(S\geqslant 2\) the necessity of the Higuchi bound can be seen at the level of the equations of motion once the mode functions have been decoupled where it ensures that the mass term contributions do not become tachyonic.
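For orientation, the mass ranges just described can be summarised in a small helper (a sketch only, not tied to any code accompanying the paper):

```python
def cc_mass_series(m2_over_H2: float, S: int) -> str:
    """Classify a spin-S field in the CC description by its mass, following the text above."""
    higuchi = S * (S - 1)
    principal_edge = (S - 0.5) ** 2
    # discrete series: m^2/H^2 = S(S-1) - T(T-1) for integer 0 <= T <= S
    discrete = {S * (S - 1) - T * (T - 1) for T in range(S + 1)}
    if any(abs(m2_over_H2 - d) < 1e-12 for d in discrete):
        return "discrete series"
    if m2_over_H2 >= principal_edge:
        return "principal series (heavy)"
    if higuchi < m2_over_H2 < principal_edge:
        return "complementary series (light)"
    return "below the Higuchi bound (non-unitary)"

print(cc_mass_series(5.0, 2), "|", cc_mass_series(2.1, 2), "|", cc_mass_series(2.0, 2))
```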
### Mass parameter comparison
As emphasised in [87], in the CCM case the masses of the fields have a wider range as they are not constrained by the Higuchi bound. For comparison, in Figure 3 we show the distribution of the dimensionless mass parameter on the complex plane for light/heavy fields in both the CCM and the CC scenarios. From (2.14) we see that for light fields in the CCM scenario \(\nu\) cannot exceed the massless boundary of \(\nu=3/2\), while for heavy fields we can keep increasing the mass to keep increasing the value of \(\text{Im}\ \nu\). Similarly, in the CC scenario we see from (2.41) that \(\text{Im}\ \nu_{S}\) continues to increase as we increase the mass while in the principal series. For the complementary series the mass parameter can take values \(0<\nu_{S}<1/2\), as dictated by the Higuchi bound. The discrete series, which also lies along the real line in the complex plane of \(\nu_{S}\), corresponds to isolated points, with masses that we will also refer to as "light".
## 3 General properties of Wick-rotated propagators
We now move on to the propagators that are required to perturbatively compute wavefunction coefficients and cosmological correlators. For each field we have a bulk-boundary propagator for external lines in Feynman diagrams, and a bulk-bulk propagator for internal lines. In this section we will discuss some important properties of the bulk-bulk propagators of massive spinning fields: we will show that after Wick rotation, the time-ordered parts are always purely real regardless of the mass and spin, while for light fields i.e. those in the complementary series, the full bulk-bulk propagator is purely real. To derive these properties we consider the CCM and CC cases separately since the proofs are slightly different in the two cases. For details on the Feynman rules for wavefunction calculations we refer the reader to [22].
### Cosmological condensed matter physics
We begin with the power spectrum of massive spinning fields introduced in Section 2.1. The late-time two-point function is
\[\langle\sigma_{i_{1}\cdots i_{S}}(\eta_{0},\mathbf{k})\sigma_{j_{1}\cdots j_{ S}}(\eta_{0},-\mathbf{k})\rangle^{\prime}=\sum_{h=-S}^{S}P_{\sigma}^{(h)}(\eta_{0},k) \mathrm{e}_{i_{1}\cdots i_{S}}^{(h)}(\mathbf{k})\mathrm{e}_{j_{1}\cdots j_{S} }^{(h)}(-\mathbf{k})\, \tag{3.1}\]
where
\[P_{\sigma}^{(h)}(\eta_{0},k) \equiv\sigma_{h}(\eta_{0},k)\sigma_{h}^{*}(\eta_{0},k) \tag{3.2}\] \[=e^{-\pi\tilde{\kappa}}\frac{H^{2}\eta_{0}^{2}}{2c_{h,S}k}W_{i \tilde{\kappa},\nu}(2ic_{h,S}k\eta_{0})W_{-i\tilde{\kappa},\nu}(-2ic_{h,S}k \eta_{0}). \tag{3.3}\]
Figure 3: The dimensionless mass parameters \(\nu/\nu_{S}\) for light fields and heavy fields in CCM (left) and CC (right) scenarios.
This expression is valid for both light and heavy fields (i.e. for both purely real and purely imaginary \(\nu\)), and we have used \([W_{a,b}(z)]^{*}=W_{a^{*},b^{*}}(z^{*})\) and the symmetry property \(W_{a,-b}(z)=W_{a,b}(z)\). The power spectrum of each helicity mode is different due to the \(c_{h,S}\) and \(\tilde{\kappa}\) dependence. Parity violation is then encoded in the asymmetry between opposite helicities. It can be further shown that the IR expansion of the power spectrum contains enhanced oscillations in \(\eta_{0}\) due to particle production assisted by the chemical potential, which leads to lifted cosmological collider signals (see, e.g. [38]). The bulk-boundary propagator of a given helicity mode, which we will denote as \(K_{\sigma}^{(h)}(\eta,k)\), should satisfy the equation of motion (2.13) subject to the boundary conditions:
\[\lim_{\eta\to\eta_{0}}K_{\sigma}^{(h)}(\eta,k)=1,\qquad\lim_{\eta\to-\infty(1-i\epsilon)}K_{\sigma}^{(h)}(\eta,k)=0. \tag{3.4}\]
The first condition simply requires us to add an appropriate normalisation factor, while the second condition requires the bulk-boundary propagator to be fixed by \(\sigma_{h}^{*}(\eta,k)\) since this is the negative frequency solution that vanishes in the far past and projects onto the Bunch-Davies vacuum. We can therefore write the _indexed_ bulk-boundary propagator
\[K_{i_{1}\cdots i_{S}\,j_{1}\cdots j_{S}}(\eta,{\bf k})=\frac{1}{S!}\sum_{h=-S }^{S}K_{\sigma}^{(h)}(\eta,k){\rm e}^{(h)}_{i_{1}\cdots i_{S}}({\bf k}){\rm e} ^{(h)}_{j_{1}\cdots j_{S}}(-{\bf k})\, \tag{3.5}\]
as a linear combination of the _helical_ bulk-boundary propagator
\[K_{\sigma}^{(h)}(\eta,k)=\frac{\sigma_{h}^{*}(\eta,k)}{\sigma_{h}^{*}(\eta_{0 },k)}. \tag{3.6}\]
The factor of \({\rm e}^{(h)}_{i_{1}\cdots i_{S}}({\bf k}){\rm e}^{(h)}_{j_{1}\cdots j_{S}}(- {\bf k})\) in (3.5) ensures that the different helicity modes remain decoupled during propagation, and the factor of \(1/S!\) follows from the normalisation (2.7). The bulk-bulk propagator for each helicity mode, which we denote as the helical bulk-bulk propagator \(G_{\sigma}^{(h)}(\eta_{1},\eta_{2},k)\), satisfies
\[\left(\eta_{1}^{2}\frac{\partial^{2}}{\partial\eta_{1}^{2}}-2\eta_{1}\frac{ \partial}{\partial\eta_{1}}+c_{h,S}^{2}k^{2}\eta_{1}^{2}+\frac{m^{2}}{H^{2}}- \frac{2h\kappa}{H}k\eta_{1}\right)G_{\sigma}^{(h)}(\eta_{1},\eta_{2},k)=-iH^{2 }\eta_{1}^{2}\eta_{2}^{2}\delta(\eta_{1}-\eta_{2})\, \tag{3.7}\]
and is subjected to the boundary conditions:
\[\lim_{\eta_{1},\eta_{2}\to\eta_{0}}G_{\sigma}^{(h)}(\eta_{1},\eta_{2},k)=0\, \qquad\lim_{\eta_{1},\eta_{2}\to-\infty(1-i\epsilon)}G_{\sigma}^{(h)}(\eta_{1},\eta_{2},k)=0. \tag{3.8}\]
The second condition again ensures that we project onto the vacuum in the far past while the first ensures that this propagator takes us between two bulk points rather than from a bulk point to the boundary. The solution to this equation is then
\[G_{\sigma}^{(h)}(\eta_{1},\eta_{2},k) =[\sigma_{h}(\eta_{1},k)\sigma_{h}^{*}(\eta_{2},k)\theta(\eta_{1} -\eta_{2})+(\eta_{1}\leftrightarrow\eta_{2})]-\frac{\sigma_{h}(\eta_{0},k)}{ \sigma_{h}^{*}(\eta_{0},k)}\sigma_{h}^{*}(\eta_{1},k)\sigma_{h}^{*}(\eta_{2},k )\, \tag{3.9}\] \[=-2iP_{\sigma}^{(h)}(\eta_{0},k)K_{\sigma}^{(h)}(\eta_{2},k)\ {\rm Im}K_{\sigma}^{(h)}(\eta_{1},k)\theta(\eta_{1}-\eta_{2})+(\eta_{1} \leftrightarrow\eta_{2}). \tag{3.10}\]
The manifestly time-ordered parts of this expression correspond to the usual Feynman propagator (time-ordered two-point function), while the \(\eta_{0}\)-dependent terms are required to satisfy the future boundary condition. We can again dress the helical propagators with polarisation tensors and write the indexed propagator
\[G_{i_{1}\cdots i_{S}\,j_{1}\cdots j_{S}}(\eta_{1},\eta_{2},{\bf k})=\sum_{h=-S }^{S}G_{\sigma}^{(h)}(\eta_{1},\eta_{2},k){\rm e}^{(h)}_{i_{1}\cdots i_{S}}({ \bf k}){\rm e}^{(h)}_{j_{1}\cdots j_{S}}(-{\bf k})\, \tag{3.11}\]
which satisfies
\[\Bigg{[}\left(\eta_{1}^{2}\frac{\partial^{2}}{\partial\eta_{1}^{2}}-2 \eta_{1}\frac{\partial}{\partial\eta_{1}}+c^{2}\eta_{1}^{2}k^{2}+\frac{m^{2}}{H^{ 2}}\right)\delta_{l(i_{1}}\delta_{|m|i_{2}}-\frac{2iS\kappa\eta_{1}}{H}k_{n} \epsilon_{nl(i_{1}}\delta_{i_{2}|m|}\] \[+\delta c^{2}\eta_{1}^{2}\left(k_{l}k_{(i_{1}}\delta_{|m|i_{2}}- \frac{S-1}{2S-1}k_{l}k_{m}\delta_{(i_{1}i_{2}}\right)\Bigg{]}G_{i_{3}\cdots i_{ S})lmj_{1}\cdots j_{S}}(\eta_{1},\eta_{2},\mathbf{k})\] \[=-iH^{2}\eta_{1}^{2}\eta_{2}^{2}\delta(\eta_{1}-\eta_{2})\sum_{h=- S}^{S}\mathrm{e}^{(h)}_{i_{1}\cdots i_{S}}(\mathbf{k})\mathrm{e}^{(h)}_{j_{1} \cdots j_{S}}(-\mathbf{k})\, \tag{3.12}\]
where we symmetrise according to e.g. \(S_{(ab)}=(S_{ab}+S_{ba})/2\). We would now like to consider the properties of this indexed bulk-bulk propagator after we Wick rotate both time variables by \(90^{\circ}\) in the complex plane. We are therefore interested in the properties of \(\tilde{G}^{(h)}_{\sigma}(\chi_{1},\chi_{2},k)\), c.f. (1.8).
First consider the LHS of (3.12) where the differential operator that acts on the bulk-bulk propagator is purely real after Wick rotation: the only term that is odd in \(\eta_{1}\) comes with a factor of \(i\) which itself comes from converting to momentum space. It is scale invariance of the quadratic action that ensures that this differential operator is real after Wick rotation. Indeed, time derivatives are forced to appear as \(\eta\frac{\partial}{\partial\eta}\), while spatial derivatives yield factors of \(i\eta\mathbf{k}\). We therefore have a real differential operator acting on the indexed propagator. Moving to the RHS, we first note that the polarisation sum over all helicity modes is purely real which follows from the reality of the fields in position space.8 After Wick rotation we also do not have the factor of \(i\) on the RHS since in this "Euclidean" picture we are computing \(\psi\sim e^{-S}\) rather than \(\psi\sim e^{iS}\). The RHS is therefore manifestly real after Wick rotation. _We therefore conclude that the time-ordered parts of the Wick-rotated bulk-bulk propagator, which are the parts that are fixed by (3.12), are purely real_. This does not imply that the full bulk-bulk propagator is purely real after Wick rotation as this discussion still allows for the possibility of adding complex contributions that satisfy the homogeneous equation. This also does not imply that the Feynman propagator i.e. the manifestly time-ordered parts in (3.9) are purely real after Wick rotation; we may have to add factorised terms that satisfy the homogeneous equation to make this reality property manifest (this is precisely what happens as we will see below).9 As long as we are considering fields that are real in position space with scale invariant free theories, this property of the bulk-bulk propagator will hold. Let's see this explicitly by working with (3.9).
Footnote 8: This can be easily checked using our basis of polarisation tensors by noting that \(\hat{\mathbf{e}}^{\pm}(\hat{\mathbf{k}})=[\hat{\mathbf{e}}^{\mp}(\hat{\mathbf{ k}})]^{*}\), c.f. (2.8).
**Light fields.** First consider light fields (Im \(\nu=0\)), where it was shown in [37] that when \(\tilde{\kappa}=0\) the manifestly time-ordered parts are not real, but once we add the factorised term as required by the future boundary condition, the _full_ bulk-bulk propagator is real. We can check if this remains true when \(\tilde{\kappa}\neq 0\). We first note that for light fields the bulk-bulk propagator is independent of \(\eta_{0}\). Indeed, we have
\[\frac{\sigma_{h}(\eta_{0},k)}{\sigma_{h}^{*}(\eta_{0},k)}=\frac{W_{i\tilde{ \kappa},\nu}(2ic_{h,S}k\eta_{0})}{W_{-i\tilde{\kappa},\nu}(-2ic_{h,S}k\eta_{0} )}\xrightarrow{\eta_{0}\to 0}-e^{(\nu+\frac{1}{2})i\pi}\frac{\Gamma\left( \frac{1}{2}+\nu+i\tilde{\kappa}\right)}{\Gamma\left(\frac{1}{2}+\nu-i\tilde{ \kappa}\right)}. \tag{3.13}\]
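This limit is straightforward to check numerically; a minimal mpmath sketch with arbitrarily chosen light-field values:

```python
import mpmath as mp
mp.mp.dps = 40

nu, kap, ck = mp.mpf('0.8'), mp.mpf('0.5'), mp.mpf('1.0')   # light field; kap = kappa-tilde, ck = c_{h,S} k
eta0 = mp.mpf('-1e-6')                                       # late time

ratio = mp.whitw(1j*kap, nu, 2j*ck*eta0) / mp.whitw(-1j*kap, nu, -2j*ck*eta0)
limit = -mp.exp(1j*mp.pi*(nu + mp.mpf('0.5'))) \
        * mp.gamma(mp.mpf('0.5') + nu + 1j*kap) / mp.gamma(mp.mpf('0.5') + nu - 1j*kap)
print(abs(ratio - limit))   # small (here roughly of order |ck*eta0|), vanishing as eta0 -> 0^-
```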
The bulk-bulk propagator for a given helicity mode can then be written as
\[G^{(h)}_{\sigma}(\eta_{1},\eta_{2},k)=\frac{H^{2}\eta_{1}\eta_{2} }{2c_{h,S}k}\frac{\Gamma\left(\frac{1}{2}+\nu+i\tilde{\kappa}\right)}{\Gamma( 1+2\nu)}\\ \times\left[M_{-i\tilde{\kappa},\nu}(-2ic_{h,S}k\eta_{1})W_{-i \tilde{\kappa},\nu}(-2ic_{h,S}k\eta_{2})\theta(\eta_{1}-\eta_{2})+(\eta_{1} \leftrightarrow\eta_{2})\right]\, \tag{3.14}\]
where we have introduced the Whittaker-\(M\) function \(M_{a,b}(z)\) which is related to \(W_{a,b}(z)\) by
\[\frac{1}{\Gamma(1+2\nu)}M_{i\bar{\kappa},\nu}(z)=\frac{e^{\pm(i\bar{\kappa}-\nu- \frac{1}{2})i\pi}}{\Gamma\left(\frac{1}{2}+\nu+i\bar{\kappa}\right)}W_{i\bar{ \kappa},\nu}(z)+\frac{e^{\mp\pi\bar{\kappa}}}{\Gamma\left(\frac{1}{2}+\nu-i \bar{\kappa}\right)}W_{-i\bar{\kappa},\nu}(e^{\pm i\pi}z). \tag{3.15}\]
In order to arrive at (3.14) we have used the rotation that does not cross the branch cut of \(W_{a,b}(z)\) which lies on the negative real axis. We have also used
\[M_{i\bar{\kappa},\nu}(ze^{\pm i\pi})=\pm ie^{\pm i\pi\nu}M_{-i\bar{\kappa},\nu} (z)\, \tag{3.16}\]
to make sure that the two arguments in (3.14) have the same sign. Note that (3.13) follows from (3.15) since for small arguments \(W_{a,b}(z)\) dominates over \(M_{a,b}(z)\) (for light fields). The function \(M_{a,b}(z)\) is another solution to the Whittaker differential equation, and also satisfies \([M_{a,b}(z)]^{*}=M_{a^{*},b^{*}}(z^{*})\). We can now consider Wick rotating (3.14), and to do so we rotate both time variables clockwise to avoid the branch cuts on the negative real axis (recall that \(\eta_{1},\eta_{2}\leqslant 0\)). We then have
\[\tilde{G}^{(h)}_{\sigma}(\chi_{1},\chi_{2},k)=-\frac{H^{2}\chi_{1 }\chi_{2}}{2c_{h,S}k}\frac{\Gamma\left(\frac{1}{2}+\nu+i\bar{\kappa}\right)}{ \Gamma(1+2\nu)}\\ \times[M_{-i\bar{\kappa},\nu}(2c_{h,S}k\chi_{1})W_{-i\bar{\kappa},\nu}(2c_{h,S}k\chi_{2})\theta(\chi_{2}-\chi_{1})]+(\chi_{1}\leftrightarrow \chi_{2})\, \tag{3.17}\]
with \(\chi_{1},\chi_{2}\geqslant 0\). The structure of the \(\theta\)-functions can be understood with a simple flat-space toy example. Consider the bulk-bulk propagator [78]
\[G_{\text{flat}}(\eta_{1},\eta_{2},k) =\frac{1}{2k}\left(e^{ik(\eta_{2}-\eta_{1})}\theta(\eta_{1}-\eta_ {2})+e^{ik(\eta_{1}-\eta_{2})}\theta(\eta_{2}-\eta_{1})-e^{ik(\eta_{1}+\eta_{ 2})}\right)\] \[=\frac{1}{k}\sinh(-ik\eta_{1})e^{ik\eta_{2}}\theta(\eta_{1}-\eta_ {2})+(\eta_{1}\leftrightarrow\eta_{2})\, \tag{3.18}\]
where the \(\theta\)-functions ensure convergence in the far past when \(\eta=-\infty(1-i\epsilon)\). Now consider the clockwise rotation we used above. Since we now integrate from \(0\) to \(+\infty\), we need the exponential damping to come from the larger of the two variables. We therefore have
\[\tilde{G}_{\text{flat}}(\chi_{1},\chi_{2},k)=\frac{1}{k}\sinh(k\chi_{1})e^{-k \chi_{2}}\theta(\chi_{2}-\chi_{1})+(\chi_{1}\leftrightarrow\chi_{2})\, \tag{3.19}\]
as in [15]. This example captures the important features for our bulk-bulk propagator given that the Whittaker functions are also exponentially damped \((W)\)/growing \((M)\) for large argument.
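As a cross-check, the ordered branch of (3.18) follows from the general definitions (1.6) and (1.7) with the flat-space mode function \(\varphi(k,\eta)=e^{-ik\eta}/\sqrt{2k}\) (a standard choice, not spelled out in the text) and \(\eta_{0}=0\); a minimal sympy sketch:

```python
import sympy as sp

eta1, eta2 = sp.symbols('eta1 eta2', real=True)
k = sp.symbols('k', positive=True)

phi  = lambda eta: sp.exp(-sp.I*k*eta)/sp.sqrt(2*k)   # flat-space mode function, eta0 = 0
phic = lambda eta: sp.conjugate(phi(eta))

K = lambda eta: phic(eta)/phic(0)                      # eq. (1.6)
P = sp.simplify(phi(0)*phic(0))                        # equal-time power spectrum, = 1/(2k)

# branch eta1 > eta2 of eq. (1.7) versus the same branch of eq. (3.18)
G_branch = P*(sp.conjugate(K(eta1))*K(eta2) - K(eta1)*K(eta2))
target   = sp.sinh(-sp.I*k*eta1)*sp.exp(sp.I*k*eta2)/k
print(sp.simplify((G_branch - target).rewrite(sp.exp)))   # 0
```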
We can now use the properties of the Whittaker functions and the \(\Gamma\)-functions to conclude that taking the complex conjugate of (3.17) can be compensated for by sending \(\tilde{\kappa}\to-\tilde{\kappa}\) which is equivalent to \(h\to-h\). We therefore conclude that the bulk-bulk propagator is helically real i.e.
\[\left[\tilde{G}^{(h)}_{\sigma}(\chi_{1},\chi_{2},k)\right]^{*}=\tilde{G}^{(-h )}_{\sigma}(\chi_{1},\chi_{2},k)\qquad\text{(light fields)}. \tag{3.20}\]
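The helical reality (3.20) can also be seen numerically from the ordered branch of (3.17); a minimal mpmath sketch with arbitrarily chosen light-field values:

```python
import mpmath as mp
mp.mp.dps = 30

nu, kap  = mp.mpf('0.6'), mp.mpf('0.8')      # light field; kap = kappa-tilde for helicity +h
ck, H    = mp.mpf('1.3'), mp.mpf('1.0')      # ck = c_{h,S} k (even in h, so the same for -h)
x1, x2   = mp.mpf('0.4'), mp.mpf('1.1')      # Wick-rotated times with x1 < x2

def G_wick(kt):
    """Ordered branch (theta(chi2 - chi1) = 1) of eq. (3.17)."""
    pref = -(H**2 * x1 * x2)/(2*ck) * mp.gamma(mp.mpf('0.5') + nu + 1j*kt)/mp.gamma(1 + 2*nu)
    return pref * mp.whitm(-1j*kt, nu, 2*ck*x1) * mp.whitw(-1j*kt, nu, 2*ck*x2)

# helical reality (3.20): complex conjugation is compensated by kappa-tilde -> -kappa-tilde (h -> -h)
print(abs(mp.conj(G_wick(kap)) - G_wick(-kap)))   # ~ 0
```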
This very nice relationship between the bulk-bulk propagator of different helicity modes is not quite enough for us to conclude that the full propagator in (3.11) is real; we also need a relationship between the product of polarisation tensors of different helicities. The reality of this full polarisation sum implies that
\[[\text{e}^{(h)}_{i_{1}\cdots i_{S}}(\mathbf{k})\text{e}^{(h)}_{j_{1}\cdots j_{S }}(-\mathbf{k})]^{*}=\text{e}^{(-h)}_{i_{1}\cdots i_{S}}(\mathbf{k})\text{e}^ {(-h)}_{j_{1}\cdots j_{S}}(-\mathbf{k})\, \tag{3.21}\]
since contributions with different \(|h|\) have a different number of \(\mathbf{k}\) factors so cannot be related by complex conjugation, while a single contribution from a single helicity has both real and imaginary components. These two relationships therefore allow us to conclude that the full indexed bulk-bulk propagator (3.11), when the field is light, is real after Wick rotation:
\[\boxed{\left[\tilde{G}_{i_{1}\cdots i_{S}\,j_{1}\cdots j_{S}}(\chi_{1},\chi_{2},\mathbf{k})\right]^{*}=\tilde{G}_{i_{1}\cdots i_{S}\,j_{1}\cdots j_{S}}(\chi_{1},\chi_{2},\mathbf{k})\,\qquad\text{(light fields, CCM)}}\,. \tag{3.22}\]
**Heavy fields.** Let's now consider heavy fields where \(\text{Re}\ \nu=0\). For convenience let us therefore write \(\nu=i\mu\) with \(\mu>0\). The primary difference here compared to the light field case is that the \(\eta_{0}\) dependence in the bulk-bulk propagator no longer cancels out. This is perhaps easiest to see in the \(\tilde{\kappa}=0\) limit where the mode functions are Hankel functions. As we send \(\eta_{0}\to 0^{-}\), each Hankel function has two comparable oscillating contributions, in contrast to the light field case where they both have one decaying contribution and one growing one, and this ensures that the ratio is time-dependent. For \(\tilde{\kappa}\neq 0\) the story is the same. Furthermore, one can easily check that the Wick-rotated time-ordered parts of \(G_{\sigma}^{(h)}\) do not satisfy a relation of the form of (3.20), and therefore the Wick-rotated Feynman propagator once we sum over the helicities is not manifestly real. In contrast to the light field case, the factorised terms we add to satisfy the future boundary condition cannot cancel the imaginary parts of the rotated Feynman propagator since they depend on \(\eta_{0}\) while the Feynman part does not. However, as we have already alluded to, it is possible to add and subtract factorised contributions in such a way that we can make the time-ordered parts of the bulk-bulk propagator manifestly real after Wick rotation.
To see how this can work let us decompose the full bulk-bulk propagator into two parts which we will refer to as the _connected part_ (\(C\)) and the _factorised part_ (\(F\)):
\[G_{i_{1}\cdots i_{S}\,j_{1}\cdots j_{S}}(\eta_{1},\eta_{2},\mathbf{k})=C_{i_{ 1}\cdots i_{S}\,j_{1}\cdots j_{S}}(\eta_{1},\eta_{2},\mathbf{k})+F_{i_{1} \cdots i_{S}\,j_{1}\cdots j_{S}}(\eta_{1},\eta_{2},\mathbf{k})\, \tag{3.23}\]
where for a given helicity mode these parts of the propagator are
\[C_{\sigma}^{(h)}(\eta_{1},\eta_{2},k) =[\sigma_{h}(\eta_{1},k)\sigma_{h}^{*}(\eta_{2},k)\theta(\eta_{1} -\eta_{2})+(\eta_{1}\leftrightarrow\eta_{2})]+\Delta G_{\sigma}^{(h)}(\eta_{1},\eta_{2},k)\, \tag{3.24}\] \[F_{\sigma}^{(h)}(\eta_{1},\eta_{2},k) =-\frac{\sigma_{h}(\eta_{0},k)}{\sigma_{h}^{*}(\eta_{0},k)}\sigma_ {h}^{*}(\eta_{1},k)\sigma_{h}^{*}(\eta_{2},k)-\Delta G_{\sigma}^{(h)}(\eta_{1},\eta_{2},k)\, \tag{3.25}\]
where we have added a new contribution to each, \(\Delta G_{\sigma}^{(h)}\), in such a way that the full propagator is unchanged. Diagrammatically, we represent this propagator decomposition as
\[G\,=\,C\,+\,F\.\]
To summarise, we define the helical connected bulk-bulk propagator to satisfy the following conditions:
1. \(C^{(h)}_{\sigma}\) satisfies the same equation as the bulk-bulk propagator, i.e. it satisfies (3.7).
2. \(C^{(h)}_{\sigma}\) vanishes exponentially fast in the far past under the \(i\epsilon\)-prescription.
3. \(C^{(h)}_{\sigma}\) is helically real after Wick rotation i.e. it satisfies (3.28).
In order to fix \(\mathcal{A}_{h}\) it is wise to first write the connected propagator in terms of \(M_{a,b}(z)\) only since its analytic continuation satisfies a simple relation c.f. (3.16), compared to that of \(W_{a,b}(z)\) c.f. (3.15). We can eliminate all copies of \(W_{a,b}(z)\) using
\[W_{i\tilde{\kappa},i\mu}(z)=\frac{\Gamma(-2i\mu)}{\Gamma\left(\frac{1}{2}-i \tilde{\kappa}-i\mu\right)}M_{i\tilde{\kappa},i\mu}(z)+\frac{\Gamma(2i\mu)}{ \Gamma\left(\frac{1}{2}-i\tilde{\kappa}+i\mu\right)}M_{i\tilde{\kappa},-i\mu}( z)\, \tag{3.30}\]
and use (3.16) to make sure that all arguments lie on the positive imaginary axis such that we can rotate each time variable by \(90^{\circ}\) clockwise to make all arguments lie on the positive real axis. It is then a simple task to demand (3.28) and fix \(\mathcal{A}_{h}\). The most general solution of \(\mathcal{A}_{h}\) is derived in detail in Appendix A. In short, we find
\[\mathcal{A}_{h}(\kappa,\mu)=\frac{i\pi\operatorname{sech}(\pi\tilde{\kappa})} {\Gamma\left(\frac{1}{2}-i\tilde{\kappa}-i\mu\right)\Gamma\left(\frac{1}{2}-i \tilde{\kappa}+i\mu\right)}. \tag{3.31}\]
This now gives us a connected bulk-bulk propagator that is manifestly real after we Wick rotate, which we will use in Section 4 to prove the reality of total-energy poles, and a factorised bulk-bulk propagator which we will use in Section 5 to compute exact parity-odd trispectra.10
Footnote 10: In [30] and a recent paper [46], a different decomposition of the bulk-bulk propagator is employed where it is split into a retarded propagator and a factorised part. The retarded part enjoys the property that it is purely imaginary (in Lorentzian time), however it does not vanish in the far past. Our connected propagator is not imaginary in Lorentzian time (it is complex), rather it is real in Euclidean time and vanishes in the far past. It would be interesting to understand the results we will derive in this paper using the retarded propagator rather than the connected one. We expect the fact that the contribution to wavefunction coefficients from the retarded propagator is an even function in the exchanged energy, as shown in [46], to play a crucial role in such an analysis. We thank Scott Melville for discussions on these points.
### Cosmological collider physics
We now turn our attention back to the cosmological collider physics set-up and derive properties of the bulk-bulk propagator. We do not find it necessary to discuss the power spectra and bulk-boundary propagators here since they are not required for our needs and details can be found in [5]. The full propagator with covariant indices would naturally be written as \(G_{\mu_{1}\cdots\mu_{S}\,\nu_{1}\cdots\nu_{S}}(\eta_{1},\eta_{2},\mathbf{k})\) but as we did when we discussed the mode functions, we will consider a \(3+1\) decomposition and consider the properties of propagators with spatial indices:
\[G_{i_{1}\cdots i_{n}\,j_{1}\cdots j_{n}}(\eta_{1},\eta_{2},\mathbf{k})\equiv G _{\eta\cdots\eta i_{1}\cdots i_{n}\,\eta\cdots\eta j_{1}\cdots j_{n}}(\eta_{1},\eta_{2},\mathbf{k})\, \tag{3.32}\]
where as before \(0\leqslant n\leqslant S\). We can further decompose into helicities and write
\[G_{i_{1}\cdots i_{n}\,j_{1}\cdots j_{n}}(\eta_{1},\eta_{2},\mathbf{k})=\sum_{ h=-n}^{n}G^{h}_{n,S}(\eta_{1},\eta_{2},k)\mathfrak{c}^{(h)}_{i_{1}\cdots i_{n}}( \mathbf{k})\left[\mathfrak{c}^{(h)}_{j_{1}\cdots j_{n}}(\mathbf{k})\right]^{*}. \tag{3.33}\]
For a given \(n\), the mode functions with the same \(|h|\) are equivalent since here we do not consider parity-violation. Then, given that the combination
\[\mathfrak{c}^{(h)}_{i_{1}\cdots i_{n}}(\mathbf{k})\left[\mathfrak{c}^{(h)}_{j _{1}\cdots j_{n}}(\mathbf{k})\right]^{*}+\mathfrak{c}^{(-h)}_{i_{1}\cdots i_{ n}}(\mathbf{k})\left[\mathfrak{c}^{(-h)}_{j_{1}\cdots j_{n}}(\mathbf{k}) \right]^{*}\, \tag{3.34}\]
is real due to (2.19), we only need to consider the reality properties of \(G^{h}_{n,S}(\eta_{1},\eta_{2},k)\). As we did above, we will consider light and heavy fields separately.
**Light fields.** First consider light fields, i.e. those in the complementary series, and the modes with \(n=|h|\) where the solution to the homogeneous equation of motion is given by (2.39). Given that this solution involves a single Hankel function, the properties of \(G^{h}_{|h|,S}(\eta_{1},\eta_{2},k)\) will be very similar to what we encountered in the CCM case but with \(\tilde{\kappa}=0\). Indeed, for light fields we therefore already know that this propagator is purely real after Wick rotation. In any case let us show this explicitly, following [37], since this will allow us to easily see how to extend the proof to the other modes with the same helicity, but with \(n>|h|\). We have
\[G^{h}_{|h|,S}(\eta_{1},\eta_{2},k)=\left(Z^{|h|}_{S}\right)^{2}k ^{3-2|h|} (\eta_{1}\eta_{2})^{3/2-|h|}H^{(1)}_{\nu_{S}}(-k\eta_{1})H^{(2)}_{ \nu_{S}}(-k\eta_{2})\theta(\eta_{1}-\eta_{2})+(\eta_{1}\leftrightarrow\eta_{2})\] \[+\left(Z^{|h|}_{S}\right)^{2}k^{3-2|h|}(\eta_{1}\eta_{2})^{3/2-|h |}H^{(2)}_{\nu_{S}}(-k\eta_{1})H^{(2)}_{\nu_{S}}(-k\eta_{2})\, \tag{3.35}\]
where as usual the term on the second line is there to ensure that we satisfy the future boundary condition of the bulk-bulk propagator, and the \(\eta_{0}\) dependence drops out of this term since we are considering light fields. It is simple to see that the second term ensures that we satisfy Dirichlet boundary conditions since for light fields \(H^{(1)}_{\nu_{S}}(-k\eta_{0})\to-H^{(2)}_{\nu_{S}}(-k\eta_{0})\). We can write this expression more compactly as
\[G^{h}_{|h|,S}(\eta_{1},\eta_{2},k)=2\left(Z^{|h|}_{S}\right)^{2}k ^{3-2|h|}(\eta_{1}\eta_{2})^{3/2-|h|}J_{\nu_{S}}(-k\eta_{1})H^{(2)}_{\nu_{S}}(- k\eta_{2})\theta(\eta_{1}-\eta_{2})+(\eta_{1}\leftrightarrow\eta_{2})\, \tag{3.36}\]
where \(J_{\nu_{S}}(z)\) is the Bessel function of the first kind. We can now make use of the following integral representations:
\[H^{(2)}_{\nu_{S}}(z)=-\frac{e^{\frac{1}{2}\nu_{S}\pi i}}{\pi i} \int_{-\infty}^{\infty}dte^{-iz\cosh t-\nu_{S}t}\,\qquad J_{\nu_{S}}(z)=\frac{2^{1-\nu_{S}}z^{\nu_{S}}}{\sqrt{\pi} \Gamma(\frac{1}{2}+\nu_{S})}\int_{0}^{1}dt(1-t^{2})^{\nu_{S}-\frac{1}{2}}\cos zt\, \tag{3.37}\]
which are respectively valid for \(-\pi<\text{ph}\ z<0\) and \(\text{Re}\ \nu_{S}>-\frac{1}{2}\), to conclude that
\[J_{\nu_{S}}(-ik\chi_{1})H^{(2)}_{\nu_{S}}(-ik\chi_{2})=e^{-\frac{ 1}{2}\nu_{S}\pi i}\times(\text{Real})\times i\times e^{\frac{1}{4}\nu_{S}\pi i} \times(\text{Real})=i\times(\text{Real})\, \tag{3.38}\]
where we have used \(\text{Im}\ \nu_{S}=0\). We also have
\[(-\chi_{1}\chi_{2})^{3/2-|h|}=i\times(\text{Real}). \tag{3.39}\]
We therefore conclude that the Wick-rotated propagator is purely real i.e.
\[\left[\bar{G}^{h}_{|h|,S}(\chi_{1},\chi_{2},k)\right]^{*}=\bar{G} ^{h}_{|h|,S}(\chi_{1},\chi_{2},k). \tag{3.40}\]
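Numerically, (3.38), and hence the reality statement above, is easy to confirm; a minimal mpmath sketch with arbitrarily chosen light-field values:

```python
import mpmath as mp
mp.mp.dps = 30

nuS, k = mp.mpf('0.35'), mp.mpf('1.2')     # light field: real nu_S
x1, x2 = mp.mpf('0.7'), mp.mpf('1.9')      # Wick-rotated times

# eq. (3.38): the product should be i times a real number, i.e. purely imaginary
prod = mp.besselj(nuS, -1j*k*x1) * mp.hankel2(nuS, -1j*k*x2)
print(mp.chop(prod.real), prod.imag)       # real part 0, imaginary part non-zero
```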
Given this result we would immediately expect this property to hold for the other modes too since ultimately they form a single multiplet. Let's see this explicitly by considering modes with the same helicity but with \(n>|h|\). The general form of the bulk-bulk propagator is
\[G^{h}_{n,S}(\eta_{1},\eta_{2},k)=\,\Phi^{h}_{n,S}(\eta_{1},k)\Phi^{h*}_{n,S}(\eta_{2},k)\theta(\eta_{1}-\eta_{2})+(\eta_{1}\leftrightarrow\eta_{2})\] \[-\frac{\Phi^{h}_{n,S}(\eta_{0},k)}{\Phi^{h*}_{n,S}(\eta_{0},k)}\Phi^{h*}_{n,S}(\eta_{1},k)\Phi^{h*}_{n,S}(\eta_{2},k)\, \tag{3.41}\]
where the mode functions \(\Phi^{h}_{n,S}\) are related to \(\Phi^{h}_{|h|,S}\) by (2.36). Indeed, we can use this relationship iteratively to write
\[\Phi^{h}_{n,S}(\eta,k)=\hat{\mathcal{D}}_{h,n}(i\eta,k)\Phi^{h}_{|h|,S}(\eta,k )\,\qquad n>|h|\, \tag{3.42}\]
where \(\hat{\mathcal{D}}_{h,n}(x,k)\) are real differential operators in \(x\) with \(k\)-dependent coefficients. It follows that \(\hat{\mathcal{D}}_{h,n}(i\eta,k)\) are purely real after Wick rotation. Furthermore, they are either purely even in \(x\) and therefore purely
real when written in terms of \(\eta\) (if \(n-|h|\) is even) or purely odd in \(x\) and therefore purely imaginary when written in terms of \(\eta\) (if \(n-|h|\) is odd). This final property follows from the fact that the \(B_{m,n}\) in (2.36) are real and only non-zero when \(n\) and \(m\) differ by an even number. We can then write
\[G^{h}_{n,S}(\eta_{1},\eta_{2},k)=(-1)^{n-|h|}2\left(Z_{S}^{|h|} \right)^{2}k^{3-2|h|}\hat{\mathcal{D}}_{h,n}(i\eta_{1},k)[(-\eta_{1})^{3/2-|h|} J_{\nu_{S}}(-k\eta_{1})]\] \[\times\hat{\mathcal{D}}_{h,n}(i\eta_{2},k)[(-\eta_{2})^{3/2-|h|}H _{\nu_{S}}^{(2)}(-k\eta_{2})]\theta(\eta_{1}-\eta_{2})+(\eta_{1}\leftrightarrow \eta_{2})\, \tag{3.43}\]
where we have used
\[\frac{\hat{\mathcal{D}}_{h,n}(i\eta,k)[(-\eta)^{3/2-|h|}H_{\nu_{S }}^{(1)}(-k\eta)]}{\hat{\mathcal{D}}_{h,n}^{*}(i\eta,k)[(-\eta)^{3/2-|h|}H_{ \nu_{S}}^{(2)}(-k\eta)]}\xrightarrow{\eta\to 0}(-1)^{n+1-|h|}. \tag{3.44}\]
We can then use the integral representations above and the fact that \(\hat{\mathcal{D}}_{h,n}(i\eta,k)\) is always real after Wick rotation to conclude that this bulk-bulk propagator is real after Wick rotation. We have therefore shown that for light fields
\[\left[\tilde{G}_{i_{1}\cdots i_{n}\,j_{1}\cdots j_{n}}(\chi_{1}, \chi_{2},\mathbf{k})\right]^{*}=\tilde{G}_{i_{1}\cdots i_{n}\,j_{1}\cdots j_{n }}(\chi_{1},\chi_{2},\mathbf{k})\, \tag{3.45}\]
and since this holds for all \(0\leqslant n\leqslant S\) we conclude that
\[\left[\tilde{G}_{\mu_{1}\cdots\mu_{S}\,\nu_{1}\cdots\nu_{S}}( \chi_{1},\chi_{2},\mathbf{k})\right]^{*}=\tilde{G}_{\mu_{1}\cdots\mu_{S}\,\nu _{1}\cdots\nu_{S}}(\chi_{1},\chi_{2},\mathbf{k})\,\qquad\text{(light fields, CC)}. \tag{3.46}\]
It is easy to see that the same reality condition holds for partially-massless fields in the discrete series, since the proof above is valid for arbitrary \(\nu_{S}>0\) as long as the pure gauge modes are excluded.
**Heavy fields.** Now consider heavy fields, i.e. those in the principal series, with \(\nu_{S}=i\mu_{S}\). As we have seen in the CCM scenario, the full bulk-bulk propagator will not be real after Wick rotation, so instead we add and subtract factorised terms such that the connected part of the bulk-bulk propagator is real after rotation. As we did above, we work with \(G_{i_{1}\cdots i_{n}j_{1}\cdots j_{n}}(\eta_{1},\eta_{2},\mathbf{k})\) and work helicity-by-helicity. For each mode we define the following decomposition of the bulk-bulk propagator:
\[G^{h}_{n,S}(\eta_{1},\eta_{2},k)=C^{h}_{n,S}(\eta_{1},\eta_{2},k )+F^{h}_{n,S}(\eta_{1},\eta_{2},k)\, \tag{3.47}\]
where
\[C^{h}_{n,S}(\eta_{1},\eta_{2},k) =[\Phi^{h}_{n,S}(\eta_{1},k)\Phi^{h*}_{n,S}(\eta_{2},k)\theta( \eta_{1}-\eta_{2})+(\eta_{1}\leftrightarrow\eta_{2})]+\Delta G^{h}_{n,S}(\eta _{1},\eta_{2},k)\, \tag{3.48}\] \[F^{h}_{n,S}(\eta_{1},\eta_{2},k) =-\frac{\Phi^{h}_{n,S}(\eta_{0},k)}{\Phi^{h*}_{n,S}(\eta_{0},k)} \Phi^{h*}_{n,S}(\eta_{1},k)\Phi^{h*}_{n,S}(\eta_{2},k)-\Delta G^{h}_{n,S}(\eta _{1},\eta_{2},k). \tag{3.49}\]
As before we take \(\Delta G^{h}_{n,S}(\eta_{1},\eta_{2},k)\) to be factorised, to solve the homogeneous equation of motion, and to vanish in the far past. We therefore write
\[\Delta G^{h}_{n,S}(\eta_{1},\eta_{2},k)=\mathcal{A}_{h,n}\Phi^{h* }_{n,S}(\eta_{1},k)\Phi^{h*}_{n,S}(\eta_{2},k)\, \tag{3.50}\]
where \(\mathcal{A}_{h,n}=\mathcal{A}_{h,n}(\mu_{S})\) is independent of \(\eta_{0}\). We now want to fix \(\mathcal{A}_{h,n}\) such that
\[\left[\tilde{C}^{h}_{n,S}(\chi_{1},\chi_{2},k)\right]^{*}=\tilde{C}^{h}_{n,S}( \chi_{1},\chi_{2},k). \tag{3.51}\]
For the \(n=|h|\) cases, where the mode function is given by a single Hankel function, we can easily read off the necessary form of \(\mathcal{A}_{h,|h|}\) since it is a special case of what we did in the CCM scenario with \(\tilde{\kappa}=0\). We derived the constraint that \(\mathcal{A}_{h,|h|}\) must satisfy, and the solution, in Appendix A. The result is
\[\mathcal{A}_{h,|h|}(\mu_{S})=i\cosh\pi\mu_{S}. \tag{3.52}\]
Note that we could also add any purely real correction to \(\mathcal{A}_{h,|h|}\) and still satisfy the necessary condition, so this choice is the minimal one required to make \(\tilde{C}^{h}_{|h|,S}(\chi_{1},\chi_{2},k)\) real. With this result in hand, it is simple to deduce what we need to add for modes with \(n>|h|\). We have
\[C^{h}_{n,S}(\eta_{1},\eta_{2},k)=\{\hat{\mathcal{D}}_{h,n}(i\eta _{1},k)[\Phi^{h}_{|h|,S}(\eta_{1},k)]\hat{\mathcal{D}}^{*}_{h,n}(i\eta_{2},k)[ \Phi^{h*}_{|h|,S}(\eta_{2},k)]\theta(\eta_{1}-\eta_{2})+(\eta_{1}\leftrightarrow \eta_{2})\}\] \[+\mathcal{A}_{h,n}\hat{\mathcal{D}}^{*}_{h,n}(i\eta_{1},k)[\Phi^{ h*}_{|h|,S}(\eta_{1},k)]\hat{\mathcal{D}}^{*}_{h,n}(i\eta_{2},k)[\Phi^{h*}_{|h|,S}( \eta_{2},k)]\, \tag{3.53}\]
and by using the fact that \(\hat{\mathcal{D}}_{h,n}(i\eta,k)\) is purely real for even \(n-|h|\), and purely imaginary for odd \(n-|h|\), we can write
\[C^{h}_{n,S}(\eta_{1},\eta_{2},k)=(-1)^{n-|h|}\hat{\mathcal{D}}_{ h,n}(i\eta_{1},k)\hat{\mathcal{D}}_{h,n}(i\eta_{2},k)[\Phi^{h}_{|h|,S}(\eta_{1},k) \Phi^{h*}_{|h|,S}(\eta_{2},k)\] \[+(-1)^{n-|h|}\mathcal{A}_{h,n}\Phi^{h*}_{|h|,S}(\eta_{1},k)\Phi^{ h*}_{|h|,S}(\eta_{2},k)]\theta(\eta_{1}-\eta_{2})+(\eta_{1}\leftrightarrow \eta_{2}). \tag{3.54}\]
Given that \(\hat{\mathcal{D}}_{h,n}(i\eta,k)\) is always real after Wick rotation, we therefore conclude that the choice
\[\mathcal{A}_{h,n}(\mu_{S})=(-1)^{n-|h|}i\cosh\pi\mu_{S}\, \tag{3.55}\]
ensures that (3.51) is satisfied. Again we can add any purely real correction to \(\mathcal{A}_{h,n}(\mu_{S})\), but this is the minimal solution that is sufficient for our purposes. We therefore conclude that
\[\left[\tilde{C}_{i_{1}\cdots i_{n}j_{1}\cdots j_{n}}(\chi_{1},\chi_{2},{\bf k })\right]^{*}=\tilde{C}_{i_{1}\cdots i_{n}j_{1}\cdots j_{n}}(\chi_{1},\chi_{2},{\bf k})\, \tag{3.56}\]
and since this holds for all \(0\leqslant n\leqslant S\) we conclude that
\[\boxed{\left[\tilde{C}_{\mu_{1}\cdots\mu_{S}\,\nu_{1}\cdots\nu_{S}}(\chi_{1},\chi_{2},{\bf k})\right]^{*}=\tilde{C}_{\mu_{1}\cdots\mu_{S}\,\nu_{1}\cdots\nu_{S}}(\chi_{1},\chi_{2},{\bf k})}\,\qquad\text{(heavy fields, CC)}. \tag{3.57}\]
We have therefore shown that for the cosmological collider physics set-up, the full bulk-bulk propagator \(\tilde{G}\) is real for light fields, while for heavy fields this is a property of the connected part \(\tilde{C}\) only. We will use these properties in the next section to prove the reality of total-energy poles in wavefunction coefficients with massless external states, and we use \(\tilde{F}\) in Section 5 to compute exact parity-odd trispectra.
## 4 Reality and factorisation
In this section we unleash the full power of the reality properties of the indexed propagators for general massive fields that we derived in the previous section, and prove a theorem revealing the universal reality of the total-energy singularities in any tree diagram with external massless scalars (and massless gravitons), for both the CCM and the CC scenarios. More precisely, we will see that in theories involving light fields of arbitrary mass, spin, couplings and chemical potential, the wavefunction coefficient of external massless scalars is always real. This property also holds for an even number of external conformally coupled scalars, and for external massless gravitons as long as we sum over the two helicities. In more general theories involving heavy fields, despite the fact that the full wavefunction coefficient can be complex, the total-energy singularities remain real. Indeed, these singularities come from the _maximally-connected_ parts of the wavefunction coefficients (which we define below), and we prove that these parts are purely real. Based on the universal reality of total-energy singularities, we then prove that all parity-odd correlators are necessarily factorised at tree-level, and are free of any total-energy singularities, under the assumption of IR-convergence.
### Light fields: the wavefunction reality
**Cosmological condensed matter.** We start by considering the most general theory of interacting light fields in the CCM scenario. The particle spectrum \(\mathscr{L}\) consists of a set of spinning fields \(\sigma^{f}_{i_{1}\cdots i_{S_{f}}}\) labelled
by their flavour \(f\in\mathscr{L}\), and each field \(\sigma^{f}_{i_{1}\cdots i_{S_{f}}}\) is equipped with an integer spin \(S_{f}=0,1,2,\cdots\), a dimensionless mass parameter \(0<\nu_{f}\leqslant 3/2\), a sound speed \(c_{f}\), and a chemical potential \(\kappa_{f}\). Among these fields, we will mainly pay attention to the statistics of a "visible" massless scalar field \(\phi\equiv\sigma^{f=\phi}\) with \(S_{\phi}=0\) and \(\nu_{\phi}=3/2\), while allowing for exchanges of any of the other fields. For instance, in inflationary cosmology, \(\phi\) is usually associated with the curvature perturbation \(\zeta\) (or equivalently, the Goldstone \(\pi\) of broken time translation), while \(\sigma^{f}_{i_{1}\cdots i_{S_{f}}}\), \(f\neq\phi\) are associated with various isocurvature perturbations. In position space, the most general interaction Lagrangian at a vertex \(v\) schematically reads
\[\mathcal{L}_{v}=\lambda_{v}\,a^{4-k_{v}-l_{v}}(\eta)\left[\left( \delta_{ij}\right)^{p_{v}}\left(\epsilon_{ijk}\right)^{q_{v}}\left(\partial_{ \eta}\right)^{k_{v}}\left(\partial_{i}\right)^{l_{v}}\prod_{f\in\mathscr{L}} \left\{\sigma^{f}_{i_{1}\cdots i_{S_{f}}}(\eta,\mathbf{x})\right\}^{N_{v,f} }\right]_{\text{contract}}\equiv\lambda_{v}D_{v}\sigma^{N_{v}}\, \tag{4.1}\]
where \(D_{v}\) collectively denotes the scale factors and derivatives at the vertex \(v\), and \(N_{v}=\sum_{f\in\mathscr{L}}N_{v,f}\). Here \(k_{v},l_{v}\geqslant 0\) are integers counting the number of derivatives and \(p_{v},q_{v},N_{v,f}\geqslant 0\) are integers counting the spatial indices and number of fields involved in the interaction vertex \(v\).11 The number of scale factors is fixed by diffeomorphism invariance (or its global subgroup of de Sitter scale invariance). Note also that the coupling coefficients are real constants in time, i.e.

\[\lambda_{v}^{*}=\lambda_{v}\,\quad\partial_{\eta}\lambda_{v}=0\, \tag{4.2}\]

as a consequence of unitarity and de Sitter scale invariance.

Footnote 11: As an aside, rotational symmetry dictates that all the spatial indices must be contracted, leading to a constraint \(2p_{v}=3q_{v}+l_{v}+\sum_{f\in\mathscr{L}}S_{f}N_{v,f}\).
Let us now try to compute the wavefunction coefficient \(\psi_{n}\) for \(n\) external massless scalars \(\phi\). A general tree diagram that contributes to the wavefunction coefficient \(\psi_{n}\) is obtained via contracting the fields in adjacent vertices to form bulk-bulk propagators \(G^{f}_{i_{1}\cdots i_{S_{f}}\,j_{1}\cdots j_{S_{f}}}\) and massless bulk-boundary propagators \(K_{\phi}\), before finally integrating the interaction time \(\eta_{v}\) at the vertices. The bulk-boundary propagator for the massless scalar is
\[K_{\phi}(\eta,k)=(1-ic_{\phi}k\eta)e^{ic_{\phi}k\eta}, \tag{4.3}\]
which is crucially purely real after Wick rotation. Schematically, for a tree diagram with \(n\) external lines, \(V\) interaction vertices and \(I\) internal lines, we have
\[\psi_{n}=\int_{-\infty(1-i\epsilon)}^{0}\left[\prod_{v=1}^{V}d \eta_{v}\,i\lambda_{v}\,D_{v}\right]\left[\prod_{e=1}^{n}K_{e}\right]\left[ \prod_{e^{\prime}=1}^{I}G_{e^{\prime}}\right]. \tag{4.4}\]
As dictated by the Feynman rules, we have included a factor of \(i\) for each vertex. Now notice that the whole integrand of (4.4) is analytic in the second quadrant of complex \(\eta\)-plane,12 which allows us to deform the integration contour by performing a Wick rotation
Footnote 12: Analyticity in the second quadrant of \(\eta\)-plane is crucial here, since otherwise singularities passed through when deforming the contour would contribute non-trivially. Such analyticity is satisfied by _local_ interactions but can be violated by _non-local_ interactions, in which case the contribution from these singularities becomes important [104].
\[\eta=ie^{i\epsilon}\chi\,\quad\epsilon\to 0^{+}\, \tag{4.5}\]
under which the propagators become
\[K_{\phi}(\eta,k) =\tilde{K}_{\phi}(\chi,k)\, \tag{4.6}\] \[G^{f}_{i_{1}\cdots i_{S_{f}}\,j_{1}\cdots j_{S_{f}}}(\eta_{1}, \eta_{2},\mathbf{k}) =\tilde{G}^{f}_{i_{1}\cdots i_{S_{f}}\,j_{1}\cdots j_{S_{f}}}( \chi_{1},\chi_{2},\mathbf{k}). \tag{4.7}\]
The original integration contour along the negative real axis is thus deformed to that along the positive imaginary axis, together with an arc at infinity. The \(i\epsilon\)-prescription and the Bunch-Davies initial condition
for the propagators guarantee that as \(\left|\eta\right|\to\infty\), the integrand of (4.4) decays exponentially, leaving a vanishing contribution from the arc at infinity. The only non-vanishing contribution now comes from the integral along the positive imaginary axis where \(\chi>0\). In momentum space, the vertex derivative operators transform as
\[D_{v} =a^{4-k_{v}-l_{v}}(\eta)\left[\left(\delta_{ij}\right)^{p_{v}} \left(\epsilon_{ijk}\right)^{q_{v}}\left(\partial_{\eta}\right)^{k_{v}}\left(i \,k_{i}\right)^{l_{v}}\right]_{\text{partially contract}}\] \[=i^{-4+k_{v}+l_{v}}\ i^{-k_{v}}\ i^{l_{v}}\times a^{4-k_{v}-l_{v} }(\chi)\left[\left(\delta_{ij}\right)^{p_{v}}\left(\epsilon_{ijk}\right)^{q_{v }}\left(\partial_{\chi}\right)^{k_{v}}\left(k_{i}\right)^{l_{v}}\right]_{ \text{partially contract}}\] \[\equiv\tilde{D}_{v}. \tag{4.8}\]
Thus the wavefunction coefficient becomes
\[\psi_{n}=(-1)^{V}\int_{0}^{\infty}\left[\,\prod_{v=1}^{V}\,d\chi_{v}\lambda_{ v}\tilde{D}_{v}\right]\left[\,\prod_{e=1}^{n}\tilde{K}_{e}\right]\left[\,\prod_{e^{ \prime}=1}^{I}\tilde{G}_{e^{\prime}}\right]. \tag{4.9}\]
Now we invoke the reality property (3.22) of the Wick-rotated bulk-bulk propagator for light fields, namely
\[\tilde{K}_{e}^{*} =\tilde{K}_{e}\, \tag{4.10}\] \[\tilde{G}_{e^{\prime}}^{*} =\tilde{G}_{e^{\prime}}\, \tag{4.11}\]
together with the reality of the Wick-rotated derivative operator,
\[\tilde{D}_{v}^{*}=\tilde{D}_{v}\, \tag{4.12}\]
to see that each individual factor in (4.9) is purely real. Therefore, combining (4.2), (4.12), (4.10) and (4.11), we finally arrive at
\[\psi_{n}^{*}=\psi_{n}. \tag{4.13}\]
More precisely speaking, however, what we have managed to prove is only the reality of the _integrand_ in the perturbative expression of \(\psi_{n}\). In order to make the final logical leap to the reality of the full-fledged \(\psi_{n}\), we need to also ensure the _convergence_ of the integral. Since the propagators and the vertices are well-behaved for any finite Euclidean conformal time \(0<\chi<\infty\), we only need to check the convergence at the endpoints. In particular, the UV convergence at \(\chi\to\infty\) is guaranteed by the time-ordering and the Bunch-Davies initial condition, as mentioned above. On the other hand, the IR convergence at \(\chi\to 0\) is not automatic and is generally model-dependent. For instance, a \(\lambda\sigma^{4}\) self-interaction for a massless \(\sigma\) field can bring a logarithmic divergence \(\ln(-\eta)\) in the IR limit, which after Wick rotation (4.5) brings an imaginary factor of \(\ln e^{-i(\pi/2)}=-i\pi/2\). An odd number of such \(\lambda\sigma^{4}\) insertions would lead to a complex \(\psi_{n}\) in general. One can also view this as a consequence of scale invariance spontaneously broken by the IR cutoff [37]. Therefore, we will restrict ourselves to the case of IR-convergent interactions and perfect scale invariance. This establishes the reality of \(\psi_{n}\) for a general tree diagram expressed by (4.9). Since the total \(\psi_{n}\) is a sum of all possible diagrams with \(n\) external \(\phi\) lines, the reality property extends to the full \(\psi_{n}\) at tree-level for the CCM scenario.
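As a simple illustration of this reality (our own sketch, not part of the original argument), consider a contact \(\lambda\,a(\eta)\,\phi^{\prime 3}\) vertex for the massless scalar with \(c_{\phi}=1\). Dropping \(\lambda\) and real combinatorial factors, the rotated-contour integral below evaluates \(\psi_{3}\) numerically with mpmath and confirms that it is purely real, in agreement with the closed form \(2k_{1}^{2}k_{2}^{2}k_{3}^{2}/(Hk_{T}^{3})\) obtained by doing the \(\chi\)-integral by hand.

```python
# Minimal sketch (not from the paper): reality of the contact psi_3 from a
# lambda * a(eta) * (phi')^3 vertex, with real coupling and c_phi = 1.
# psi_3 ~ i * \int_{-inf}^{0} d(eta) a(eta) K'(k1) K'(k2) K'(k3); after eta = i*chi
# the vertex i and the -i from the reoriented measure cancel.
from mpmath import mp, mpc, exp, quad, fabs, im, re

mp.dps = 25
H = mp.mpf(1)
k1, k2, k3 = mp.mpf("0.7"), mp.mpf("1.1"), mp.mpf("1.9")    # arbitrary external energies
kT = k1 + k2 + k3

def Kprime(k, eta):
    # d/d(eta) of K_phi = (1 - i k eta) e^{i k eta}, which equals k^2 eta e^{i k eta}
    return k**2 * eta * exp(mpc(0, 1) * k * eta)

def F(eta):
    a = -1 / (H * eta)                                       # de Sitter scale factor
    return a * Kprime(k1, eta) * Kprime(k2, eta) * Kprime(k3, eta)

psi3 = quad(lambda chi: F(mpc(0, 1) * chi), [0, mp.inf])     # Wick-rotated time integral
print(psi3, 2 * k1**2 * k2**2 * k3**2 / (H * kT**3))         # the two numbers agree
assert fabs(im(psi3)) < mp.mpf("1e-15") * fabs(re(psi3))     # Im(psi_3) = 0 numerically
```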
**Cosmological collider.** This proof easily generalises to the CC scenario, where the dimensionless mass of the spinning fields (2.41) takes \(0<\nu_{f}<1/2\) for the complementary series and \(\nu_{f}=1/2+T,T\in\mathbb{N}\) for the discrete series. The bulk-bulk propagators are labelled with covariant indices (e.g. \(G^{f}_{\mu_{1}\cdots\mu_{S_{f}},\nu_{1}\cdots\nu_{S_{f}}}\)), and the vertices include contractions of all the covariant indices, including that of a time-like unit vector \(n_{\mu}\), which allows us to break de Sitter boosts at the level of the interactions as in [5]. The Lagrangian
vertices are then
\[\mathcal{L}_{v} =\sqrt{-g}\,\lambda_{v}\left[\left(g^{\mu\nu}\right)^{p_{v}}\left( \varepsilon_{\mu\nu\rho\sigma}\right)^{q_{v}}\left(n_{\mu}\right)^{k_{v}}\left( \nabla_{\nu}\right)^{l_{v}}\prod_{f\in\mathscr{L}}\left\{\Phi^{f}_{\mu_{1}\cdots \mu_{S_{f}}}(\eta,\mathbf{x})\right\}^{N_{v,f}}\right]_{\text{contract}}\] \[=\lambda_{v}\,a^{4-l_{v}}(\eta)\,\left[\left(\eta^{\mu\nu}\right)^ {p_{v}}\left(\epsilon_{\mu\nu\rho\sigma}\right)^{q_{v}}\left(\bar{n}_{\mu} \right)^{k_{v}}\left(\partial_{\nu}\right)^{l_{v}}\prod_{f\in\mathscr{L}}\left\{ \bar{\Phi}^{f}_{\mu_{1}\cdots\mu_{S_{f}}}(\eta,\mathbf{x})\right\}^{N_{v,f}} \right]_{\text{contract}}+\mathcal{O}(\partial^{l_{v}-1})\] \[\equiv\lambda_{v}D_{v}\bar{\Phi}^{N_{v}}+\mathcal{O}(\partial^{l_ {v}-1}). \tag{4.14}\]
Here, \(\varepsilon_{\mu\nu\rho\sigma}=\sqrt{-g}\epsilon_{\mu\nu\rho\sigma}\) is the covariant Levi-Civita tensor density. For simplicity, we have introduced rescaled quantities that are related to the original covariant ones by \(\bar{n}_{\mu}=a^{-1}n_{\mu}\) and \(\bar{\Phi}^{f}_{\mu_{1}\cdots\mu_{S_{f}}}=a^{-S_{f}}\Phi^{f}_{\mu_{1}\cdots\mu _{S_{f}}}\). Note that these rescaled quantities are contracted with the flat metric tensor \(\eta^{\mu\nu}\), and \(\mathcal{O}(\partial^{l_{v}-1})\) are terms with fewer derivatives that take an analogous form to \(D_{v}\).13 Henceforth, it suffices to consider the leading operator \(\lambda_{v}D_{v}\bar{\Phi}^{N_{v}}\) as a general parametrisation of the interactions. In other words, we have expanded all the covariant interaction operators and re-organised the expansion in powers of ordinary derivatives on fields.
Footnote 13: Expanding the covariant derivative in terms of ordinary derivatives generates new terms proportional to de Sitter connections and curvature tensors, which enjoy simple forms due to the conformal flatness of de Sitter. By spatial diffeomorphism invariance, these terms must also take the form of \(\lambda_{v}D_{v}\bar{\Phi}^{N_{v}}\) with a different \(D_{v}\) with fewer derivatives.
The computation of \(\psi_{n}\) at tree-level is completely analogous to (4.4), with \(D_{v}\) and \(G_{e^{\prime}}\) replaced by their covariant cousins. We have shown in (3.46) that the indexed propagator of \(\Phi^{f}\) satisfies the reality condition
\[\left[\tilde{G}^{f}_{\mu_{1}\cdots\mu_{S_{f}}\,\nu_{1}\cdots\nu_{S_{f}}}( \chi_{1},\chi_{2},\mathbf{k})\right]^{*}=\tilde{G}^{f}_{\mu_{1}\cdots\mu_{S_{ f}}\,\nu_{1}\cdots\nu_{S_{f}}}(\chi_{1},\chi_{2},\mathbf{k}). \tag{4.15}\]
Thus the propagator of the rescaled field \(\bar{\Phi}^{f}\) also satisfies the reality condition after Wick rotation
\[\left[a^{-S_{f}}(i\chi_{1})a^{-S_{f}}(i\chi_{2})\,\tilde{G}^{f}_{\mu_{1} \cdots\mu_{S_{f}}\,\nu_{1}\cdots\nu_{S_{f}}}(\chi_{1},\chi_{2},\mathbf{k}) \right]^{*}=a^{-S_{f}}(i\chi_{1})a^{-S_{f}}(i\chi_{2})\,\tilde{G}^{f}_{\mu_{1} \cdots\mu_{S_{f}}\,\nu_{1}\cdots\nu_{S_{f}}}(\chi_{1},\chi_{2},\mathbf{k}). \tag{4.16}\]
In addition, the vertex derivative operators transform as
\[D_{v} =a^{4-l_{v}}(\eta)\left[\left(\eta^{\mu\nu}\right)^{p_{v}}\left( \epsilon_{\mu\nu\rho\sigma}\right)^{q_{v}}\left(\bar{n}_{\mu}\right)^{k_{v}} \left(\partial_{\eta},i\,k_{i}\right)^{l_{v}}\right]_{\text{partially contract}}\] \[=i^{-4+l_{v}}\ i^{-l_{v}}\times a^{4-l_{v}}(\chi)\left[\left(\eta ^{\mu\nu}\right)^{p_{v}}\left(\epsilon_{\mu\nu\rho\sigma}\right)^{q_{v}} \left(\bar{n}_{\mu}\right)^{k_{v}}\left(\partial_{\chi},-k_{i}\right)^{l_{v}} \right]_{\text{partially contract}}\] \[\equiv\tilde{D}_{v}\, \tag{4.17}\]
with the same reality property as in (4.12) i.e.
\[\tilde{D}_{v}^{*}=\tilde{D}_{v}. \tag{4.18}\]
Thus all the ingredients in the perturbative computation of \(\psi_{n}\) transform identically as in the CCM scenario, leading to the same conclusion:
**Theorem 4.1**.: **(\(\psi_{n}\)-reality)** _The tree-level wavefunction coefficient of massless scalar fields is purely real, i.e. \(\operatorname{Im}\psi_{n}=0\), in theories containing an arbitrary number of fields of any light mass, spin, coupling, sound speed and chemical potential, under the assumption of locality, unitarity, scale invariance, IR convergence and a Bunch-Davies vacuum._
**Discussion.** Before we move on to the inclusion of heavy fields, let us make a few remarks:
* Although we have chosen a specific massless scalar field \(f=\phi\) as the visible sector, and focused on its wavefunction coefficient \(\psi_{n}\), the same proof straightforwardly generalises to multiple massless scalar fields with different flavours, \[\text{Im}\,\psi_{f_{1}\cdots f_{n}}=0\,\text{ with }\quad\nu_{f_{1}}=\cdots=\nu_{f_{n}}=3/2\,\] (4.19) since their bulk-boundary propagators are all real after Wick rotation, and we did not use any Bose symmetry properties in the proof.
* Apart from massless scalar external lines, the particle spectrum may include massless spinning fields such as the graviton. Since the external polarisation tensors are complex, the _helical_ wavefunction coefficients are in general complex. Thus it is more convenient to work with the _indexed_ wavefunction coefficients where the helicities are added together. The indexed bulk-boundary propagator of the massless graviton is \[K^{\gamma}_{i_{1}i_{2}\,j_{1}j_{2}}(\eta,\mathbf{k})=\frac{1}{2}\sum_{h=\pm 2 }K_{\gamma}(\eta,k)\text{e}^{(h)}_{i_{1}i_{2}}(\mathbf{k})\text{e}^{(h)}_{j_{1 }j_{2}}(-\mathbf{k})\,\] (4.20) where \[K_{\gamma}(\eta,k)=(1-ik\eta)e^{ik\eta}\.\] (4.21) This indexed bulk-boundary propagator is purely real after Wick-rotating \(\eta\). Thus after going through the same argument as above, we can further extend the \(\psi_{n}\)-reality theorem to the indexed wavefunction coefficients with \(m\) massless scalars and \(n-m\) massless gravitons: \[\text{Im}\,\psi_{m;(i_{1}i_{2})\cdots(j_{1}j_{2})}=0\,\quad 0\leqslant m \leqslant n\.\] (4.22)
* We can also consider conformally-coupled scalars \(\varphi\) on the external lines where the bulk-boundary propagators are \[K_{\varphi}(\eta,k)=\frac{\eta}{\eta_{0}}e^{ik(\eta-\eta_{0})}\,\] (4.23) where the \(\eta_{0}\)-dependence ensures that we satisfy the future boundary condition. Clearly this propagator is not real after Wick rotation, instead it is purely imaginary. Our reality theorem then extends to these fields if we have an even number of them on external lines.
* The \(\psi_{n}\)-reality automatically implies the reality of the total-energy \(k_{T}\)-singularities within. As we shall see in the next subsection, after including heavy fields, the wavefunction coefficient can become complex, but the \(k_{T}\)-reality remains true.
### Adding heavy fields: the total-energy reality
Now let us move on to the case with heavy fields included. We again label the heavy fields by their flavor \(f\in\mathscr{H}\), with a dimensionless mass \(\mu_{f}=-i\nu_{f}>0\). In the CCM scenario, the most general interaction vertex straightforwardly generalises to
\[\mathcal{L}_{v}=\lambda_{v}\,a^{4-k_{v}-l_{v}}(\eta)\left[\left( \delta_{ij}\right)^{p_{v}}\left(\epsilon_{ijk}\right)^{q_{v}}\left(\partial_{ \eta}\right)^{k_{v}}\left(\partial_{i}\right)^{l_{v}}\prod_{f\in\mathscr{L} \cup\mathscr{H}}\left\{\sigma^{f}_{i_{1}\cdots i_{S_{f}}}(\eta,\mathbf{x}) \right\}^{N_{v,f}}\right]_{\text{contract}}\equiv\lambda_{v}D_{v}\sigma^{N_{v }}. \tag{4.24}\]
In contrast to light fields, the Wick-rotated bulk-bulk propagator of heavy fields does not enjoy the reality property by itself. However, as we showed in Section 3, we can always achieve reality for the connected part of the propagator by adding appropriate solutions of the homogeneous equation of motion, regardless of the mass of the propagating field. This leads us to a decomposition of the indexed bulk-bulk propagator of any mass,
\[G^{f}_{i_{1}\cdots i_{S_{f}}\,j_{1}\cdots j_{S_{f}}}(\eta_{1},\eta_{2},{\bf k}) =C^{f}_{i_{1}\cdots i_{S_{f}}\,j_{1}\cdots j_{S_{f}}}(\eta_{1},\eta_{2},{\bf k })+F^{f}_{i_{1}\cdots i_{S_{f}}\,j_{1}\cdots j_{S_{f}}}(\eta_{1},\eta_{2},{\bf k })\, \tag{4.25}\]
and as shown in (3.29), after Wick rotation, the connected part
\[C^{f}_{i_{1}\cdots i_{S_{f}}\,j_{1}\cdots j_{S_{f}}}(\eta_{1},\eta_{2},{\bf k })=\tilde{C}^{f}_{i_{1}\cdots i_{S_{f}}\,j_{1}\cdots j_{S_{f}}}(\chi_{1},\chi_ {2},{\bf k}) \tag{4.26}\]
enjoys the reality property
\[\left[\tilde{C}^{f}_{i_{1}\cdots i_{S_{f}}\,j_{1}\cdots j_{S_{f}}}(\chi_{1}, \chi_{2},{\bf k})\right]^{*}=\tilde{C}^{f}_{i_{1}\cdots i_{S_{f}}\,j_{1}\cdots j _{S_{f}}}(\chi_{1},\chi_{2},{\bf k}). \tag{4.27}\]
Now the key insight is that the time-ordering \(\theta\)-functions only appear in the connected part \(C\), and the factorised part \(F\) is a sum of products of functions of the vertex times. Therefore, in a general tree diagram, one can extract the maximally-connected contribution by isolating the all-\(C\) piece after the decomposition (4.25),
\[\psi_{n} =\int_{-\infty(1-i\epsilon)}^{0}\left[\,\prod_{v=1}^{V}d\eta_{v}\,i\lambda_{v}\,D_{v}\right]\left[\,\prod_{e=1}^{n}K_{e}\right]\left[\,\prod_{e^{\prime}=1}^{I}G_{e^{\prime}}\right]\] \[=\int_{-\infty(1-i\epsilon)}^{0}\left[\,\prod_{v=1}^{V}d\eta_{v}\,i\lambda_{v}\,D_{v}\right]\left[\,\prod_{e=1}^{n}K_{e}\right]\left[\,\prod_{e^{\prime}=1}^{I}(C_{e^{\prime}}+F_{e^{\prime}})\right]\] \[=\int_{-\infty(1-i\epsilon)}^{0}\left[\,\prod_{v=1}^{V}d\eta_{v}\,i\lambda_{v}\,D_{v}\right]\left[\,\prod_{e=1}^{n}K_{e}\right]\left[\,\prod_{e^{\prime}=1}^{I}C_{e^{\prime}}\right]+\text{factorised}\] \[\equiv\psi^{C}_{n}(k_{T},\cdots)+\text{factorised}\, \tag{4.28}\]
where we have explicitly spelled out the \(k_{T}\) dependence in \(\psi^{C}_{n}\), with \((\cdots)\) denoting other kinematic variables. For \(n=4\) (where we only have a single bulk-bulk propagator) the factorised part here is completely factorised in the sense that there are no \(\theta\)-functions, while for higher-point coefficients the factorised part can still contain \(\theta\)-functions but crucially fewer than those contained in the all-\(C\) (maximally-connected) piece. The total-energy singularities only arise from this maximally-connected part, whereas the factorised piece can only have functional dependence on \(k_{T}\) that is analytic at \(k_{T}\to 0\). In particular, all the total-energy poles are contained in \(\psi^{C}_{n}\),
\[\underset{k_{T}\to 0}{\text{Res}}\left(k_{T}^{m}\,\psi_{n}\right)=\underset{k_{T} \to 0}{\text{Res}}\left(k_{T}^{m}\,\psi_{n}^{C}\right)\,\quad m,n\in\mathbb{N}. \tag{4.29}\]
Now given that the connected part of the propagator \(C\) enjoys the reality property for both light and heavy fields regardless of their mass, spin, sound speed and chemical potential, we can go through the same proof in the previous subsection, and conclude that the maximally-connected piece must be real:
\[\text{Im}\,\psi^{C}_{n}(k_{T},\cdots)=0\, \tag{4.30}\]
which immediately implies the reality of all the total-energy poles:
\[\text{Im}\,\underset{k_{T}\to 0}{\text{Res}}\left(k_{T}^{m}\,\psi_{n}\right)=0\,\quad m,n\in\mathbb{N}. \tag{4.31}\]
The proof in the CC scenario is analogous to that above after doing the same decomposition and utilizing the reality property of the covariant indexed connected propagator of \(\tilde{\Phi}^{f}\) based on (3.57),
\[\left[a^{-S_{f}}(i\chi_{1})a^{-S_{f}}(i\chi_{2})\,\tilde{C}^{f}_{\mu_ {1}\cdots\mu_{S_{f}}\,\,\nu_{1}\cdots\nu_{S_{f}}}(\chi_{1},\chi_{2},\mathbf{k}) \right]^{*}=a^{-S_{f}}(i\chi_{1})a^{-S_{f}}(i\chi_{2})\,\tilde{C}^{f}_{\mu_{1} \cdots\mu_{S_{f}}\,\,\nu_{1}\cdots\nu_{S_{f}}}(\chi_{1},\chi_{2},\mathbf{k}). \tag{4.32}\]
Therefore, we conclude with a reality theorem on the \(k_{T}\)-poles of tree-level wavefunction coefficients:
**Theorem 4.2**.: **(\(k_{T}\)-reality)** _The maximally-connected piece of a tree-level wavefunction coefficient for massless scalar fields, along with all the total-energy poles therein, is purely real, i.e. \(\operatorname{Im}\psi^{C}_{n}(k_{T},\cdots)=\operatorname{Im}\,\underset{k_{T}\to 0}{\operatorname{Res}}\left(k_{T}^{m}\,\psi_{n}\right)=0\,\ m,n\in\mathbb{N}\), in theories containing an arbitrary number of fields of any mass, spin, coupling, sound speed and chemical potential, under the assumption of locality, unitarity, scale invariance, IR convergence and a Bunch-Davies vacuum._
**Discussion.** Notice that the \(k_{T}\)-reality is concretely established only for \(k_{T}>0\). For \(k_{T}\)-poles inside \(\psi_{n}\), the reality of their residues automatically follows from the reality along the positive real axis. However, in a general tree diagram, the \(k_{T}\to 0\) limit may possess singularities other than just poles. For instance, a 4-point exchange diagram with a massive field could contain a logarithmic singularity [16, 24],
\[\lim_{k_{T}\to 0}\psi_{4}\sim c_{p}\times k_{T}^{p}\ln k_{T}\,\quad p\in \mathbb{N}\, \tag{4.33}\]
which comes with a branch cut emanating from the branch point \(k_{T}=0\). More generally, one may expect other types of singularities to occur at \(k_{T}=0\), but none of them can lie along the positive real axis, which is physically accessible. What we can then say is that if such singularities are part of a function analytic in the region \(k_{T}>\epsilon>0\), then the coefficient of such a function must be real. To illustrate the idea, suppose we have a branch-cut singularity and an essential singularity at \(k_{T}=0\),
\[\lim_{k_{T}\to 0^{+}}\psi_{n}\sim c_{\alpha}\times k_{T}^{\alpha}\ln k_{T}+c_{ \beta\gamma}\times\exp\left(-\frac{\gamma}{k_{T}^{\beta}}\right)+\cdots\, \tag{4.34}\]
then the \(k_{T}\)-reality states that \(\alpha,\beta,\gamma\in\mathbb{R}\) and \(\operatorname{Im}c_{\alpha}=\operatorname{Im}c_{\beta\gamma}=0\). Notice that the full analytic structure at \(k_{T}\to 0\) should be completely fixed by the perturbative structure at tree-level and the analytic property of the mode functions, and there could be a constraint on the type of allowed total-energy singularities. Hence some of the singularities (for instance, an essential singularity) may not exist at least at tree-level. However, the study of the analytic structure of singularities in the cosmological wavefunction is still in its infancy, and we leave a more detailed analysis to future work.
The reality of the \(k_{T}\)-singularities (or equivalently, the maximally-connected component \(\psi^{C}_{n}\)) can be physically understood in a few different ways:
* First, from the large-mass EFT perspective, one can choose to integrate out the heavy degrees of freedom in the theory and be left with an EFT of light fields only. This procedure is equivalent to performing a large-mass expansion of the heavy field propagators in a given Feynman diagram. After such an expansion, the heavy field propagators are contracted to contact interactions with a tower of derivative couplings that respect scale invariance. Then by the \(\psi_{n}\)-reality theorem (4.1), each contracted diagram is purely real since they only involve light fields. On the other hand, the large-mass expansion preserves part of the maximally-connected wavefunction \(\psi^{C}_{n}\). This preserved part is free of any partial-energy singularities involving momenta of the expanded heavy field propagators, but includes part of the original total-energy singularities. Hence the reality of these preserved total-energy singularities can be derived from the \(\psi_{n}\)-reality after the large-mass expansion, and is consistent with the full \(k_{T}\)-reality.
* Second, from the amplitude perspective, the \(k_{T}\)-singularities are generated by the time integral of the connected diagram in the past infinity where \(\eta\to-\infty\). Namely, we have \[\lim_{k_{T}\to 0}\psi_{n}=\lim_{k_{T}\to 0}\psi_{n}^{C}\sim\int_{-\infty}d\eta\,\eta^{p}e^{ik_{T}\eta}\sim\frac{\mathcal{A}_{n}}{k_{T}^{p+1}}\,.\] (4.35) In the infinite past, the physical momenta of different modes are much larger than their mass scale as well as the Hubble scale, thus all the fields are effectively massless and interacting in a flat spacetime. The limit \(k_{T}\to 0\) therefore probes the energy-conserving scattering processes in the analytically continued sense [78]. In a de Sitter-invariant theory, the residue of the leading \(k_{T}\)-pole is expected to recover the on-shell (Lorentz-invariant) scattering amplitude of massless particles \(\mathcal{A}_{n}\) in flat spacetime [78, 79, 80]. However, it is known that a Lorentz-invariant tree amplitude of scalars in flat spacetime is manifestly real unless one of the internal lines hits the mass shell and the diagram becomes disconnected.14 This implies that the leading total-energy pole is real. Here we have shown that the sub-leading ones are also real, which can also be argued for using the Manifestly Local Test (MLT) of [33], which states that wavefunction coefficients of massless scalars satisfy \[\frac{\partial\psi_{n}}{\partial k_{a}}\Big{|}_{k_{a}=0}=0,\] (4.36) where the derivative with respect to an external energy is taken while holding all other variables fixed. This equation should be satisfied by both contact and exchange diagrams, and is oblivious to the type of state that is being exchanged. The MLT follows from the simple observation that the bulk-boundary propagator of a massless scalar in de Sitter does not contain a term linear in \(k\): \[K_{\phi}(\eta,k)=(1-ik\eta)e^{ik\eta}=1+\mathcal{O}(k^{2})\,\] (4.37) (a quick symbolic check of this expansion is sketched at the end of this subsection). This constraint relates sub-leading total-energy poles to leading ones, and given that it is a real constraint, it implies that sub-leading poles are real once the leading ones are. It can be the case that there are sub-leading total-energy poles that are not tied to the leading ones by this constraint. However, in those cases we expect that the leading pole of such terms also has an amplitude interpretation, coming from an interaction with fewer derivatives, since it has a lower-order pole, and then the argument can be run again.15 Footnote 14: To see this fact, one simply counts the factors of \(i\) in a tree diagram: \(i^{V}\) from vertices, \((-i)^{I}\) from off-shell internal lines, \(i^{2n}\) from \(2n\) vertex derivatives, and an overall \(i\) from convention. This leads to \(i^{I-V+1}=1\) for a tree topology with \(0=I-V+1\). The imaginary part can only come from \(\operatorname{Im}\left(p^{2}+m^{2}-i\epsilon\right)^{-1}=\pi\delta(p^{2}+m^{2})\), where the diagram is factorised on-shell.
Footnote 15: We thank Austin Joyce for discussions on these points.
We end this subsection by pointing out that the \(k_{T}\)-reality also applies to external massless gravitons and to an even number of external conformally coupled scalars, with the argument mirroring the one we gave above for light fields.
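For completeness, here is the quick symbolic check of the expansion (4.37) promised above; it is a trivial sketch (our own, not from the original text) using sympy, and simply confirms the absence of a term linear in \(k\) that underlies the MLT (4.36).

```python
# Quick series check (illustrative sketch): K_phi = (1 - i k eta) e^{i k eta}
# has no O(k) term, the input behind the Manifestly Local Test (4.36).
import sympy as sp

k, eta = sp.symbols("k eta")
K_phi = (1 - sp.I * k * eta) * sp.exp(sp.I * k * eta)
expansion = sp.series(K_phi, k, 0, 4).removeO().expand()
print(expansion)                       # 1 + eta**2*k**2/2 + I*eta**3*k**3/3
assert expansion.coeff(k, 1) == 0      # no linear-in-k term
```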
### Factorising parity-odd correlators
The universal reality of \(k_{T}\)-singularities is not only of theoretical interest, but also provides a powerful tool for the computation of phenomenologically interesting parity-odd correlators. We will show in this subsection that all parity-odd correlators of massless fields must be factorised at tree-level and cannot contain \(k_{T}\)-singularities as a consequence of the reality theorems we have just derived. The reason why parity is relevant is simple: for any boundary Hermitian operator \(\phi^{\dagger}(\mathbf{x})=\phi(\mathbf{x})\), its Hermitian conjugate in momentum space is equivalent to a spatial reversal,
\[\phi^{\dagger}(\mathbf{k})=\left(\int d^{3}xe^{-i\mathbf{k}\cdot\mathbf{x}} \phi(\mathbf{x})\right)^{\dagger}=\int d^{3}xe^{i\mathbf{k}\cdot\mathbf{x}} \phi(\mathbf{x})=\phi(-\mathbf{k}). \tag{4.38}\]
If, in addition, \(\phi({\bf x})\) is a parity-even scalar (such as the CMB temperature fluctuations), a spatial reversal is equivalent to a parity transformation. Therefore, \(n\)-point correlation functions in momentum space can always be decomposed into a parity-even part and a parity-odd part:
\[\langle\phi({\bf k}_{1})\cdots\phi({\bf k}_{n})\rangle=\langle\phi({\bf k}_{1}) \cdots\phi({\bf k}_{n})\rangle^{\rm PE}+\langle\phi({\bf k}_{1})\cdots\phi({ \bf k}_{n})\rangle^{\rm PO}\ \, \tag{4.39}\]
with
\[\langle\phi({\bf k}_{1})\cdots\phi({\bf k}_{n})\rangle^{\rm PE} =\frac{1}{2}\left[\langle\phi({\bf k}_{1})\cdots\phi({\bf k}_{n}) \rangle+\langle\phi(-{\bf k}_{1})\cdots\phi(-{\bf k}_{n})\rangle\right]\, \tag{4.40}\] \[\langle\phi({\bf k}_{1})\cdots\phi({\bf k}_{n})\rangle^{\rm PO} =\frac{1}{2}\left[\langle\phi({\bf k}_{1})\cdots\phi({\bf k}_{n}) \rangle-\langle\phi(-{\bf k}_{1})\cdots\phi(-{\bf k}_{n})\rangle\right]. \tag{4.41}\]
The Hermiticity of \(\phi({\bf x})\) implies the parity-even part is always real while the parity-odd part is always imaginary,
\[\langle\phi({\bf k}_{1})\cdots\phi({\bf k}_{n})\rangle^{\rm PE} ={\rm Re}\ \langle\phi({\bf k}_{1})\cdots\phi({\bf k}_{n})\rangle^{\rm PE}\, \tag{4.42}\] \[\langle\phi({\bf k}_{1})\cdots\phi({\bf k}_{n})\rangle^{\rm PO} =i\,{\rm Im}\ \langle\phi({\bf k}_{1})\cdots\phi({\bf k}_{n})\rangle^{\rm PO }. \tag{4.43}\]
These boundary correlators are computed by a functional integral of the modulus square of the wavefunction,
\[\langle\phi({\bf k}_{1})\cdots\phi({\bf k}_{n})\rangle=\int{\cal D}\sigma \left|\Psi[\sigma,\eta_{0}]\right|^{2}\!\phi({\bf k}_{1})\cdots\phi({\bf k}_{n })\, \tag{4.44}\]
where \(\sigma=\{\sigma^{f}_{i_{1},\cdots i_{S_{f}}}|f\in{\cal L}\cup{\cal H}\}|_{\eta =\eta_{0}}\) collectively denotes all the bulk fields evaluated at the future boundary. The wavefunction exponent is organised as a sum over field products16
Footnote 16: We will adopt the notation that \(\psi_{n-m;f_{1}\cdots f_{m}}\) represents the wavefunction coefficient before the term with at least \(n-m\) factors of \(\phi\) and \(m\) factors of other fields (including \(\phi\)) labelled by the flavor indices. Note that we also abbreviate \(\psi_{n}=\psi_{n;}\) and \(\psi_{f_{1}\cdots f_{n}}=\psi_{0;f_{1}\cdots f_{n}}\). The same notation will be used for the density matrix diagonals below.
\[\Psi[\sigma,\eta_{0}] =\exp\Bigg{[}\sum_{n=2}^{\infty}\frac{1}{n!}\int_{{\bf k}_{1} \cdots{\bf k}_{n}}\psi_{f_{1}\cdots f_{n}}(\{k\},\{{\bf k}\})(2\pi)^{3}\delta ^{3}\bigg{(}\sum_{{\rm a}=1}^{n}{\bf k}_{a}\bigg{)}\sigma^{f_{1}}({\bf k}_{1}) \cdots\sigma^{f_{n}}({\bf k}_{n})\Bigg{]}\] \[=\exp\Bigg{[}\sum_{n=2}^{\infty}\frac{1}{n!}\int_{{\bf k}_{1} \cdots{\bf k}_{n}}\psi_{n}(\{k\},\{{\bf k}\})(2\pi)^{3}\delta^{3}\bigg{(}\sum_ {{\rm a}=1}^{n}{\bf k}_{a}\bigg{)}\phi({\bf k}_{1})\cdots\phi({\bf k}_{n})+( \cdots)\Bigg{]}\, \tag{4.45}\]
where \((\cdots)\) in the second row denotes terms including fields other than the massless scalar \(\phi\). Due to the modulus square, the phase information of the wavefunction \(\Psi[\sigma,\eta_{0}]\) is washed out in correlators, which solely depend on the combination
\[\rho_{f_{1}\cdots f_{n}}(\{k\},\{{\bf k}\})\equiv\psi_{f_{1}\cdots f_{n}}(\{k \},\{{\bf k}\})+\psi^{*}_{f_{1}\cdots f_{n}}(\{k\},\{-{\bf k}\}), \tag{4.46}\]
in the probability distribution functional
\[\left|\Psi[\sigma,\eta_{0}]\right|^{2}=\exp\Bigg{[}\sum_{n=2}^{\infty}\frac{1 }{n!}\int_{{\bf k}_{1}\cdots{\bf k}_{n}}\rho_{f_{1}\cdots f_{n}}(\{k\},\{{\bf k }\})(2\pi)^{3}\delta^{3}\bigg{(}\sum_{{\rm a}=1}^{n}{\bf k}_{a}\bigg{)}\sigma ^{f_{1}}({\bf k}_{1})\cdots\sigma^{f_{n}}({\bf k}_{n})\Bigg{]}. \tag{4.47}\]
The final correlator receives contributions from various partitions of all possible diagrams at tree-level,
\[\langle\phi({\bf k}_{1})\cdots\phi({\bf k}_{n})\rangle^{\prime}= \frac{1}{\rho_{2}^{n}}\Bigg{(}\rho_{n}+\sum_{m;f}q_{nm}\,\rho_{n-m;f}\,\frac{ 1}{\rho_{f}}\,\rho_{m;f}\] \[\qquad\qquad\qquad\qquad\qquad\qquad+\sum_{m,l;f_{1},f_{2}}q_{ nml}\,\rho_{n-m-l;f_{1}}\,\frac{1}{\rho_{f_{1}}}\,\rho_{l;f_{1},f_{2}}\,\frac{1}{ \rho_{f_{2}}}\,\rho_{m;f_{2}}+\cdots\Bigg{)}\, \tag{4.48}\]
where \(q_{nm},q_{nml},\cdots\) are real rational numbers counting the combinatorics of partitions, and momentum conservation is implicit in the individual \(\rho\)'s. Notice that except for the first term, all the other contributions in (4.48) are _factorised_ in the sense that they do not contain any \(k_{T}\)-singularities. The only non-factorisable contribution that contains \(k_{T}\)-singularities comes from the maximally-connected part of the first term,
\[\rho_{n}=\rho_{n}^{C}+\text{factorised}\, \tag{4.49}\]
with
\[\rho_{n}^{C}(\{k\},\{\mathbf{k}\})=\psi_{n}^{C}(\{k\},\{\mathbf{k}\})+\psi_{n} ^{C*}(\{k\},\{-\mathbf{k}\}). \tag{4.50}\]
The \(k_{T}\)-reality theorem (4.2) tells us that the maximally-connected massless wavefunction is always real, i.e. \(\psi_{n}^{C*}=\psi_{n}^{C}\). This implies
\[\rho_{n}^{C}(\{k\},\{\mathbf{k}\})=\psi_{n}^{C}(\{k\},\{\mathbf{k}\})+\psi_{n} ^{C}(\{k\},\{-\mathbf{k}\})\, \tag{4.51}\]
i.e. the maximally-connected density matrix is parity-even. We can then check what the consequence of this is for parity-odd \(n\)-point correlators. We have
\[\left\langle\phi(\mathbf{k}_{1})\cdots\phi(\mathbf{k}_{n})\right\rangle^{ \prime\text{PO}} \equiv\frac{\left\langle\phi(\mathbf{k}_{1})\cdots\phi(\mathbf{k} _{n})\right\rangle^{\prime}-\left\langle\phi(-\mathbf{k}_{1})\cdots\phi(- \mathbf{k}_{n})\right\rangle^{\prime}}{2}\] \[=0+\text{factorised}. \tag{4.52}\]
We thus see that parity-odd correlators of massless scalars are factorised and therefore cannot have \(k_{T}\)-singularities,
\[\lim_{k_{T}\to 0^{+}}\frac{d^{m}}{dk_{T}^{m}}\left\langle\phi(\mathbf{k}_{1}) \cdots\phi(\mathbf{k}_{n})\right\rangle^{\prime\text{PO}}=\text{finite}\,\quad m\in\mathbb{N}. \tag{4.53}\]
This also ensures that parity-odd correlators admit a well-defined Taylor expansion around \(k_{T}=0\). The proof straightforwardly generalises to the CC scenario, and provides an understanding of why the final parity-odd trispectrum of [104], computed in a non-local EFT with the massive spinning field integrated out, is manifestly factorised (we will discuss this further in Section 5.1). Thus we conclude with the following theorem,
**Theorem 4.3**.: **(Parity-odd factorisation)** _The parity-odd part of any tree-level correlator of massless scalar fields is factorised and admits a Taylor expansion around \(k_{T}=0\), in theories containing an arbitrary number of fields of any mass, spin, coupling, sound speed and chemical potential, under the assumptions of locality, unitarity, scale invariance, IR convergence and a Bunch-Davies vacuum._
**Discussion.** Interestingly, although the parity-odd correlator factorises for the exchange of both light and heavy fields, the two cases factorise via different routes. For light fields, the whole wavefunction coefficient \(\psi_{n}\) is itself real, leading to
\[\rho_{n}^{\text{PO}} =\frac{1}{2}\left[\rho_{n}(\{k\},\{\mathbf{k}\})-\rho_{n}(\{k\}, \{-\mathbf{k}\})\right]\] \[=\frac{1}{2}\left[\psi_{n}(\{k\},\{\mathbf{k}\})+\psi_{n}(\{k\}, \{-\mathbf{k}\})-\psi_{n}(\{k\},\{-\mathbf{k}\})-\psi_{n}(\{k\},\{\mathbf{k} \})\right]\] \[=0. \tag{4.54}\]
Thus the parity-odd \(n\)-point correlator sourced by light fields solely receives contributions from lower-point wavefunction coefficients \(\psi_{f_{1}\cdots f_{m}}\) with \(m\leqslant n-1\). In contrast, when heavy fields are involved,
the full \(\psi_{n}\) is no longer real by itself, and one has to perform the C-F decomposition to isolate the factorised parts of \(\psi_{n}\), which will also contribute to the final parity-odd \(n\)-point correlator. Alternatively, one can say that for light fields, \(F=0\) and there is nothing to isolate away from \(\psi_{n}\). However, such a computational distinction is an artefact of the wavefunction formalism. Namely, the Dirichlet boundary condition at \(\eta=\eta_{0}\) is introduced as an intermediate tool to organise the perturbative expansion. In the absence of IR divergences, the final correlator does not depend on \(\eta_{0}\), and the \(\eta_{0}\) dependence in each contributing piece must cancel out. Yet the Dirichlet boundary condition does not respect the continuity of the mass parameter at \(\nu=i\mu=0\), since the IR behaviour of light and heavy fields is different: light fields split into two scaling modes, with one dominating over the other in the limit \(\eta_{0}\to 0\):
\[\sigma_{h}(\eta_{0},k)\sim A_{h}(k,\nu,\tilde{\kappa})(-k\eta_{0})^{\frac{3}{2 }-\nu}+B_{h}(k,\nu,\tilde{\kappa})(-k\eta_{0})^{\frac{3}{2}+\nu}. \tag{4.55}\]
Heavy fields split into two oscillatory modes with the same damping power, and are equally important as \(\eta_{0}\to 0\):
\[\sigma_{h}(\eta_{0},k)\sim A_{h}(k,-i\mu,\tilde{\kappa})(-k\eta_{0})^{\frac{3} {2}-i\mu}+B_{h}(k,i\mu,\tilde{\kappa})(-k\eta_{0})^{\frac{3}{2}+i\mu}. \tag{4.56}\]
Thus implementing a Dirichlet boundary condition at \(\eta_{0}\) is sensitive to the mass of the bulk fields, but this sensitivity should be an artefact: there is nothing physically problematic17 at \(\nu=i\mu=0\), and all physical observables such as the boundary correlator should be continuous across this point. As we shall see in Section 5, this is indeed the case for parity-odd 4-point correlators. Namely, the correlator for exchanging a heavy field can be directly obtained via the analytic continuation \(\nu\to i\mu\) in the final result of light field exchange. To complement this discussion and the proofs we have outlined in this section, in Appendix B we show how to understand our results in the in-in/Schwinger-Keldysh formalism, where the subtlety of the role of \(\eta_{0}\) does not appear.
Footnote 17: In contrast, the Higuchi bound at \(\nu_{H}=1/2\)_is_ problematic in de Sitter-invariant theories due to the loss of unitarity beyond \(\nu_{H}\).
## 5 Exact parity-odd trispectra
In this section we present three examples of parity-odd trispectra, which we are able to compute exactly given our theorem that parity-odd correlators are factorised. Indeed, to arrive at these exact shapes we only need to compute time integrals associated with cubic diagrams, without having to worry about the complicated nested integrals that one usually encounters when computing trispectra. Given these examples, it would be straightforward to extend our methods to more general examples corresponding to other interactions, other spins, etc.
Before diving into technical details, we first outline the overall algorithm for computing such parity-odd trispectra. Based on the C-F decomposition that isolates the connected propagator \(C\) which satisfies helical-reality,
[Diagrammatic equation: the exchanged bulk-bulk propagator \(G\) is split into its connected part \(C\) and its factorised part \(F\), \(G=C+F\), as in (4.25).]
we can compute the \(s\)-channel parity-odd trispectrum as a sum of three terms,
[Diagrammatic equation: via the C-F split of the exchanged propagator, the \(s\)-channel parity-odd trispectrum is organised into three terms: the maximally-connected piece built from \(C\), the piece built from the factorised part \(F\), and the product of two cubic wavefunction coefficients.]
parameter regimes. In contrast, in this work, we will set out to obtain the _exact_ result of the parity-odd trispectrum using our factorisation theorem.18
Footnote 18: Notice that for a unit sound speed \(c_{s}=1\), the complete trispectrum has been solved in [73]. However, the case with a non-unit sound speed (more specifically, \(c_{s}<1\)) has not yet been fully understood in the whole kinematic domain. The main difficulty lies in the analytic continuation beyond the spurious collinear singularities [34]. Our work serves as a first complete result in the parity-odd sector for non-unit sound speeds.
The action of a massive vector field with a chemical potential is
\[S=\int d^{4}x\sqrt{-g}\left[-\frac{1}{4}F_{\mu\nu}^{2}-\frac{m^{2}}{2}\Phi_{ \mu}^{2}+\frac{\phi}{4\Lambda_{c}}F_{\mu\nu}\tilde{F}^{\mu\nu}\right]\, \tag{5.5}\]
where \(F_{\mu\nu}=\nabla_{\mu}\Phi_{\nu}-\nabla_{\nu}\Phi_{\mu}\), \(\tilde{F}^{\mu\nu}=\varepsilon^{\mu\nu\alpha\beta}F_{\alpha\beta}\) and \(\varepsilon^{\mu\nu\alpha\beta}=\frac{\epsilon^{\mu\nu\alpha\beta}}{\sqrt{-g}}\) is the contravariant Levi-Civita tensor density. The mass term breaks the \(U(1)\) gauge symmetry of the spin-1 field. As we will explain in more detail below, all of our results also apply in the massless limit corresponding to axion-\(U(1)\) gauge field inflation (see e.g. [114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124]); however, throughout this subsection we will be more general and keep the mass term.
The inflaton background can be expanded as \(\phi=\text{const}+\dot{\phi}_{0}t\), where higher order terms are suppressed by slow-roll parameters. The constant term has no dynamical effects due to the shift symmetry of this dimension-5 operator, and the chemical potential \(\kappa\) is equal to \(\dot{\phi}_{0}/\Lambda_{c}\). The equation of motion of this spin-1 field is then
\[\Box\Phi^{\nu}-\nabla_{\mu}\nabla^{\nu}\Phi^{\mu}-m^{2}\Phi^{\nu}-2\kappa\, \varepsilon^{0\nu\alpha\beta}\nabla_{\alpha}\Phi_{\beta}=0. \tag{5.6}\]
The second term can be eliminated using
\[\nabla_{\mu}\nabla^{\nu}\Phi^{\mu}=\nabla^{\nu}\nabla_{\mu}\Phi^{\mu}+3H^{2} \Phi^{\nu}\, \tag{5.7}\]
and taking the divergence of both sides yields the transverse constraint \(\nabla_{\nu}\Phi^{\nu}=0\). The final equation of motion is then
\[\left[\Box-(m^{2}+3H^{2})\right]\Phi^{\nu}=2\kappa\varepsilon^{0\nu\alpha \beta}\nabla_{\alpha}\Phi_{\beta}. \tag{5.8}\]
We now convert to momentum space and decompose into the different helicities. We write
\[\Phi_{\mu}(\eta,\mathbf{x})=\sum_{h=-1}^{1}\int_{\mathbf{k}}\Phi_{\mu}^{h}( \eta,\mathbf{k})e^{i\mathbf{k}\cdot\mathbf{x}}\, \tag{5.9}\]
with
\[\Phi_{\eta}(\eta,\mathbf{k}) =\Phi_{0,1}^{0}(\eta,k)\, \tag{5.10}\] \[\Phi_{i}^{0}(\eta,\mathbf{k}) =\Phi_{1,1}^{0}(\eta,k)\epsilon_{i}^{0}(\mathbf{k})\,\qquad\Phi_{i}^{\pm 1}(\eta,\mathbf{k})=\Phi_{1,1}^{\pm 1}(\eta,k)\epsilon_{i}^{\pm 1}(\mathbf{k}). \tag{5.11}\]
The equations of motion then decouple for each mode, and only the transverse modes are affected by the addition of the chemical potential, while the temporal and longitudinal modes remain the same as in Section 2. Since we ultimately care about the parity-odd contributions, which cannot come from the exchange of \(h=0\) modes, let us focus only on the transverse modes, which are subject to
\[\Phi_{1,1}^{\pm 1}{}''+(k^{2}\pm 2a\kappa k+a^{2}m^{2})\Phi_{1,1}^{\pm 1}=0\, \tag{5.12}\]
and the solution to this equation with Bunch-Davies vacuum conditions is given by the Whittaker-\(W\) function:
\[\Phi_{1,1}^{h}(\eta,k)=\frac{e^{-\pi\tilde{\kappa}/2}}{\sqrt{2k}}W_{i\tilde{ \kappa},\nu}(2ik\eta)\, \tag{5.13}\]
with \(\tilde{\kappa}\equiv h\kappa/H\). This is familiar from our discussion of the cosmological condensed matter scenario but now the mass parameter is different, as is the scaling dimension of the field. Here we have
\[\nu=\sqrt{1/4-m^{2}/H^{2}}. \tag{5.14}\]
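As a cross-check of (5.12)-(5.14) (a minimal numerical sketch of our own, not from the original text), one can verify with mpmath that the Whittaker-\(W\) mode function solves the transverse equation at an arbitrary conformal time, using \(a(\eta)=-1/(H\eta)\) and arbitrarily chosen values of \(k\), \(m\) and \(\kappa\).

```python
# Minimal sketch (not the paper's code): the h = +1 transverse mode (5.13),
# Phi ~ W_{i*kt, nu}(2 i k eta) with kt = kappa/H and nu = sqrt(1/4 - m^2/H^2),
# solves Phi'' + (k^2 + 2 a kappa k + a^2 m^2) Phi = 0 with a(eta) = -1/(H eta).
from mpmath import mp, mpc, whitw, diff, sqrt, exp, pi, fabs

mp.dps = 30
H, k, m, kappa = mp.mpf(1), mp.mpf("1.3"), mp.mpf("0.3"), mp.mpf("0.8")  # arbitrary choices
kt = kappa / H                                     # kappa-tilde for helicity h = +1
nu = sqrt(mp.mpf(1) / 4 - m**2 / H**2)

def Phi(eta):
    return exp(-pi * kt / 2) / sqrt(2 * k) * whitw(mpc(0, 1) * kt, nu, 2 * mpc(0, 1) * k * eta)

eta0 = mp.mpf("-2.7")                              # an arbitrary conformal time < 0
a = -1 / (H * eta0)
residual = diff(Phi, eta0, 2) + (k**2 + 2 * a * kappa * k + a**2 * m**2) * Phi(eta0)
print(residual)                                    # compatible with zero
assert fabs(residual) < mp.mpf("1e-12")
```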
Given that only helicity states with \(\pm 1\) are relevant here (and they have the same speed), we have set the speed of sound of the internal massive field to unity, and incorporated a dependence on the speed of sound of the external Goldstone boson \(c_{s}\). This convention has been adopted in [104] and will make the comparison between the two sets of results more transparent.
We now need to choose interaction vertices of the form \(\pi\pi\Phi\), and in order to make use of our factorisation theorem the interactions need to be IR-finite. EFToI operators that are quadratic or cubic in building blocks can both yield the desired interactions, where we define a building block as an object that starts at linear order in fluctuations. By using only these operators the tadpole cancellation is guaranteed. For operators that are quadratic in building blocks, the presence of \(\pi\pi\Phi\) couplings also induces \(\pi\Phi\) couplings. Such couplings have two primary effects: they can contribute to the bispectrum of curvature perturbations through single-exchange diagrams, and they yield new trispectrum diagrams which perturbatively capture the corrections to the linear theory. See [34, 35] for recent works bootstrapping such single-exchange contributions to the bispectrum. Furthermore, for these interactions the time integrals that we need to compute are not IR-convergent, and so we cannot conclude that the parity-odd correlator is factorised.19 In our quest to write down exact shapes, we will therefore concentrate on EFToI operators for which the leading vertices are \(\pi\pi\Phi\), i.e. those that do not necessarily come with \(\pi\Phi\) couplings by symmetry. This essentially tells us that \(\pi\) must appear as \(\pi^{\prime}\) or \(\partial_{i}\partial_{j}\pi\) (which come from the EFToI operators \(\delta g^{00}\) and \(\delta K_{\mu\nu}\)), and we can add extra derivatives to these objects. By requiring that \(\pi\) always appears in this way we ensure convergence of all time integrals at the conformal boundary. Indeed, the two spatial derivatives come with two factors of \(\eta\) by scale invariance, while \(\pi^{\prime}\) yields one power of \(\eta\) by scale invariance but then we have
Footnote 19: In general IR-convergence depends on the mass of the exchanged field since its late-time behaviour in time depends on the mass. Here we will assume convergence in the limit that the exchanged field is massless which then guarantees convergence when it is massive.
\[K^{\prime}_{\pi}(k,\eta)=k^{2}\eta e^{ik\eta}\, \tag{5.15}\]
which yields an additional factor. The net contribution from the two factors of \(\pi\) in each vertex is then at least four powers of \(\eta\), which cancel the four inverse powers coming from the integration measure. Adding additional derivatives can only improve convergence thanks to the additional powers of \(\eta\) dictated by scale invariance. We can also restrict to at most one time derivative on each of the external bulk-boundary propagators, given that higher-order ones can be eliminated by the scalar field's equation of motion. The lowest-dimension operators which satisfy these properties are dimension-7 [37], and for concreteness we will use20
Footnote 20: There is a dimension-6 operator, \(\pi^{\prime}\partial_{i}\pi^{\prime}\Phi_{i}\), that satisfies our requirements but since only the \(h=\pm 1\) modes contribute we can take \(\Phi_{i}\) to be transverse and then this operator is a total spatial derivative. The resulting correlator will then vanish once we impose momentum conservation. The other dimension-7 operator that we could use is \(\partial^{2}\pi\partial_{i}\pi^{\prime}\Phi_{i}\). The correlator arising from this vertex will only differ from the one we are going to compute in the kinematic factors since the time integrals will be the same.
\[S_{\rm int}=\int d^{3}xd\eta\left(\frac{a^{-1}}{\Lambda^{3}}\partial_{j}\pi^ {\prime}\partial_{i}\partial_{j}\pi_{c}\Phi_{i}\right)\, \tag{5.16}\]
which originates from the EFT operator \(\nabla_{\mu}\delta g^{00}\delta K^{\mu}{}_{\nu}\Phi^{\nu}\). The number of scale factors can be understood from scale invariance given that under a scale transformation the vector transforms in the same way as \(a(\eta)\pi_{c}\) (since it is spin-1). Here \(\pi_{c}\) is the canonically normalized Goldstone boson \(\pi_{c}=c_{s}^{-3/2}f_{\pi}^{2}\,\pi\) with \(f_{\pi}^{4}=H^{4}/(2\pi\Delta_{\zeta})^{2}\), and \(\Delta_{\zeta}^{2}\approx 2\times 10^{-9}\) is the observed dimensionless power spectrum. In the following we compute the parity-odd trispectrum arising from the exchange of this massive vector via the interaction (5.16). We consider light and heavy fields separately.
**Light mass case.** Let us first consider the light mass case where \(m<H/2\). We remind the reader that the \(s\)-channel contribution to the trispectrum is, cf. (4.48),
\[\langle\pi_{c}^{4}\rangle_{s}^{\prime}=\prod_{a=1}^{4}P_{\pi_{c}}(k_{a})\left( \rho_{4}(\{k\};s;\{\mathbf{k}\})+\sum_{h=-1}^{1}P_{h}(s)\rho_{3}^{(h)}(\mathbf{ k}_{1},\mathbf{k}_{2},-\mathbf{s})\rho_{3}^{(h)}(\mathbf{k}_{3},\mathbf{k}_{4}, \mathbf{s})\right). \tag{5.17}\]
According to the proof in Section 4, the full \(\psi_{4}\) is real and \(\rho_{4}\) does not contribute to the parity-odd trispectrum (see (5.3)). Here we therefore directly compute the factorised contributions i.e. the cubic wavefunction coefficients. We have
\[\psi_{3}^{(h)}(\mathbf{k}_{1},\mathbf{k}_{2},-\mathbf{s})= -\frac{H}{\Lambda^{3}}\left(\mathbf{k}_{1}\cdot\mathbf{k}_{2}\right)\left(\mathbf{k}_{1}\cdot\boldsymbol{\epsilon}^{(h)}(\hat{\mathbf{s}})\right)^{*}\times\int d\eta\,\eta\,K_{\pi_{c}}(k_{1},\eta)\,\partial_{\eta}K_{\pi_{c}}(k_{2},\eta)\,K_{h}(s,\eta)\] \[\qquad\qquad\qquad-\frac{H}{\Lambda^{3}}\left(\mathbf{k}_{1}\cdot\mathbf{k}_{2}\right)\left(\mathbf{k}_{2}\cdot\boldsymbol{\epsilon}^{(h)}(\hat{\mathbf{s}})\right)^{*}\times\int d\eta\,\eta\,K_{\pi_{c}}(k_{2},\eta)\,\partial_{\eta}K_{\pi_{c}}(k_{1},\eta)\,K_{h}(s,\eta)\, \tag{5.18}\]
with
\[K_{\pi_{c}}=(1-ic_{s}k\eta)e^{ic_{s}k\eta},\qquad K_{h}=\frac{W_{-i\tilde{\kappa },\nu}(-2ik\eta)}{W_{-i\tilde{\kappa},\nu}(-2ik\eta_{0})}. \tag{5.19}\]
Here the superscript denotes the helicity of the external vector field, and we have used \([\boldsymbol{\epsilon}^{(h)}(\hat{\mathbf{s}})]^{*}=\boldsymbol{\epsilon}^{(-h)}(\hat{\mathbf{s}})=\boldsymbol{\epsilon}^{(h)}(-\hat{\mathbf{s}})\) for \(h=\pm 1\). The dynamical integral can be evaluated exactly using the Laplace transformation of the Whittaker function,
\[\mathcal{I}_{n}^{h}(a,b,\nu) \equiv a^{n+1}\int_{0}^{\infty}x^{n}W_{-i\tilde{\kappa},\nu}(2ax) e^{-bx}dx\] \[=2^{-1-n}\Gamma\left(\frac{3}{2}+n-\nu\right)\Gamma\left(\frac{3 }{2}+n+\nu\right){}_{2}\tilde{\mathrm{F}}_{1}\Bigg{[}\begin{array}{c}\frac{ 3}{2}+n-\nu,\frac{3}{2}+n+\nu\\ 2+n+i\tilde{\kappa}\end{array}\Bigg{|}\,\frac{1}{2}-\frac{b}{2a}\Bigg{]}\, \tag{5.20}\]
where \({}_{2}\tilde{\mathrm{F}}_{1}\) is the regularized hypergeometric function:
\[{}_{2}\tilde{\mathrm{F}}_{1}\Bigg{[}\begin{array}{c}a,b\\ c\end{array}\Bigg{|}z\Bigg{]}\equiv{}_{2}\mathrm{F}_{1}\Bigg{[}\begin{array}[] {c}a,b\\ c\end{array}\Bigg{|}z\Bigg{]}/\Gamma(c). \tag{5.21}\]
This integral enjoys the helical reality property we have seen many times above,
\[\mathcal{I}_{n}^{h*}(a,b,\nu)=\mathcal{I}_{n}^{-h}(a,b,\nu). \tag{5.22}\]
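Both the closed form (5.20) and the reality property (5.22) are straightforward to check numerically. Below is a minimal Python sketch using mpmath that compares the closed form against a direct numerical evaluation of the Laplace transform; the values of \(n\), \(a\), \(b\) and \(\tilde{\kappa}\) are illustrative rather than tied to any particular kinematics, and evaluating at an imaginary order \(\nu=i\mu\) also illustrates the light/heavy analytic continuation used below.

```python
# Numerical cross-check of the Laplace-transform identity (5.20) using mpmath.
# The values of n, a, b and kappa_t are illustrative. Taking nu real corresponds
# to a light field; taking nu = i*mu illustrates the continuation to heavy fields.
from mpmath import mp, gamma, hyp2f1, whitw, quad, inf, exp, conj

mp.dps = 30

def I_closed(n, a, b, nu, kappa_t):
    # 2^(-1-n) Gamma(3/2+n-nu) Gamma(3/2+n+nu) 2F1~(3/2+n-nu, 3/2+n+nu; 2+n+i kappa_t; 1/2 - b/(2a))
    c = 2 + n + 1j * kappa_t
    z = mp.mpf(1) / 2 - b / (2 * a)
    return (2 ** (-1 - n) * gamma(mp.mpf(3) / 2 + n - nu) * gamma(mp.mpf(3) / 2 + n + nu)
            * hyp2f1(mp.mpf(3) / 2 + n - nu, mp.mpf(3) / 2 + n + nu, c, z) / gamma(c))

def I_direct(n, a, b, nu, kappa_t):
    # a^(n+1) * int_0^inf x^n W_{-i kappa_t, nu}(2 a x) exp(-b x) dx
    return a ** (n + 1) * quad(lambda x: x ** n * whitw(-1j * kappa_t, nu, 2 * a * x) * exp(-b * x),
                               [0, inf])

n, a, b, kappa_t = 2, mp.mpf(1), mp.mpf('1.3'), mp.mpf('0.4')
for nu in [mp.mpf(1) / 3, 0.6j]:               # light field (real nu) and heavy field (nu = i mu)
    print(nu, I_closed(n, a, b, nu, kappa_t), I_direct(n, a, b, nu, kappa_t))

# Helical reality (5.22): complex conjugation is equivalent to flipping kappa_t (h -> -h)
print(conj(I_closed(n, a, b, mp.mpf(1) / 3, kappa_t)), I_closed(n, a, b, mp.mpf(1) / 3, -kappa_t))
```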
The cubic wavefunction coefficient is therefore given by
\[\psi_{3}^{(h)}(\mathbf{k}_{1},\mathbf{k}_{2},-\mathbf{s})=-\frac{ic_{s}^{2}}{s^{3}W_{-i\tilde{\kappa},\nu}(-2is\eta_{0})}\frac{H}{\Lambda^{3}}\left(\mathbf{k}_{1}\cdot\mathbf{k}_{2}\right)\left(\mathbf{k}_{1}\cdot\boldsymbol{\epsilon}^{(h)}(\hat{\mathbf{s}})\right)^{*}k_{2}^{2}\left(1-k_{1}\frac{\partial}{\partial k_{1}}\right)\mathcal{I}_{2}^{h}(s,c_{s}k_{12},\nu)\\ -\frac{ic_{s}^{2}}{s^{3}W_{-i\tilde{\kappa},\nu}(-2is\eta_{0})}\frac{H}{\Lambda^{3}}\left(\mathbf{k}_{1}\cdot\mathbf{k}_{2}\right)\left(\mathbf{k}_{2}\cdot\boldsymbol{\epsilon}^{(h)}(\hat{\mathbf{s}})\right)^{*}k_{1}^{2}\left(1-k_{2}\frac{\partial}{\partial k_{2}}\right)\mathcal{I}_{2}^{h}(s,c_{s}k_{12},\nu)\, \tag{5.23}\]
and the cubic density matrix coefficient reads
\[\rho_{3}^{(h)}\left(\mathbf{k}_{1},\mathbf{k}_{2},-\mathbf{s}\right)= \psi_{3}\left(\mathbf{k}_{1},\mathbf{k}_{2},-\mathbf{s}\right)+\psi_{3}^{*}\left(-\mathbf{k}_{1},-\mathbf{k}_{2},\mathbf{s}\right)\] \[= -\frac{2ic_{s}^{2}H}{\Lambda^{3}}\left(\mathbf{k}_{1}\cdot\mathbf{k}_{2}\right)\left(\mathbf{k}_{1}\cdot\boldsymbol{\epsilon}^{(h)}(\hat{\mathbf{s}})\right)^{*}\frac{k_{2}^{2}}{s^{3}}\left(1-k_{1}\frac{\partial}{\partial k_{1}}\right)\mathrm{Re}\,\bigg[\frac{\mathcal{I}_{2}^{h}(s,c_{s}k_{12},\nu)}{W_{-i\tilde{\kappa},\nu}(-2is\eta_{0})}\bigg]\] \[\quad-\frac{2ic_{s}^{2}H}{\Lambda^{3}}\left(\mathbf{k}_{1}\cdot\mathbf{k}_{2}\right)\left(\mathbf{k}_{2}\cdot\boldsymbol{\epsilon}^{(h)}(\hat{\mathbf{s}})\right)^{*}\frac{k_{1}^{2}}{s^{3}}\left(1-k_{2}\frac{\partial}{\partial k_{2}}\right)\mathrm{Re}\,\bigg[\frac{\mathcal{I}_{2}^{h}(s,c_{s}k_{12},\nu)}{W_{-i\tilde{\kappa},\nu}(-2is\eta_{0})}\bigg]\,. \tag{5.24}\]
The product of polarisation factors is given by
\[\left(\mathbf{k}_{1}\cdot\boldsymbol{\epsilon}^{(h)}(\hat{\mathbf{s}})\right)^{*}\left(\mathbf{k}_{3}\cdot\boldsymbol{\epsilon}^{(h)}(\hat{\mathbf{s}})\right)=\begin{cases}\frac{1}{2}\left[\mathbf{k}_{1}\cdot\mathbf{k}_{3}-(\mathbf{k}_{1}\cdot\hat{\mathbf{s}})(\mathbf{k}_{3}\cdot\hat{\mathbf{s}})+ih\,\hat{\mathbf{s}}\cdot(\mathbf{k}_{1}\times\mathbf{k}_{3})\right],&h=\pm 1\,,\\ (\mathbf{k}_{1}\cdot\hat{\mathbf{s}})(\mathbf{k}_{3}\cdot\hat{\mathbf{s}})\,,&h=0\,.\end{cases} \tag{5.25}\]
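The product (5.25) can be verified explicitly for a random kinematic configuration. The sketch below assumes the standard transverse polarisation vectors \(\boldsymbol{\epsilon}^{(\pm 1)}(\hat{\mathbf{s}})=(\hat{\mathbf{e}}_{1}\pm i\hat{\mathbf{e}}_{2})/\sqrt{2}\) and \(\boldsymbol{\epsilon}^{(0)}(\hat{\mathbf{s}})=\hat{\mathbf{s}}\), with \((\hat{\mathbf{e}}_{1},\hat{\mathbf{e}}_{2},\hat{\mathbf{s}})\) a right-handed orthonormal triad; this is a conventional choice consistent with (5.25) rather than one fixed in the text above.

```python
# Explicit check of the polarisation product (5.25) for random momenta, assuming
# e^{+-}(s_hat) = (e1 +- i e2)/sqrt(2), e^{0}(s_hat) = s_hat, with (e1, e2, s_hat)
# a right-handed orthonormal triad (an assumed, conventional choice).
import numpy as np

rng = np.random.default_rng(2)
k1, k3, s = rng.normal(size=(3, 3))
shat = s / np.linalg.norm(s)
e1 = np.cross(shat, rng.normal(size=3)); e1 /= np.linalg.norm(e1)
e2 = np.cross(shat, e1)                       # so that e1 x e2 = shat

pol = {+1: (e1 + 1j * e2) / np.sqrt(2),
       -1: (e1 - 1j * e2) / np.sqrt(2),
        0: shat}

for h in (+1, -1, 0):
    lhs = np.conj(np.dot(k1, pol[h])) * np.dot(k3, pol[h])
    if h == 0:
        rhs = np.dot(k1, shat) * np.dot(k3, shat)
    else:
        rhs = 0.5 * (np.dot(k1, k3) - np.dot(k1, shat) * np.dot(k3, shat)
                     + 1j * h * np.dot(shat, np.cross(k1, k3)))
    print(h, np.isclose(lhs, rhs))            # True for all three helicities
```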
with
\[\mathcal{A}_{h,1}=\frac{i\pi\operatorname{sech}(\pi\tilde{\kappa})}{ \Gamma\left(\frac{1}{2}-i\tilde{\kappa}-i\mu\right)\Gamma\left(\frac{1}{2}-i \tilde{\kappa}+i\mu\right)}. \tag{5.28}\]
This factorised contribution from \(F\) is then given by
\[\psi_{4}^{h,\text{PO}}=\] \[=\] \[\times e^{-\pi\tilde{\kappa}}\Bigg{[}\mathcal{A}_{h,1}+\frac{W_{i \tilde{\kappa},i\mu}(2is\eta_{0})}{W_{-i\tilde{\kappa},i\mu}(-2is\eta_{0})} \Bigg{]}\mathcal{I}_{2}^{h}(s,c_{s}k_{12},i\mu)\mathcal{I}_{2}^{h}(s,c_{s}k_{3 4},i\mu)+3\text{ perms }, \tag{5.29}\]
and the corresponding parity-odd density matrix coefficient is
\[\rho_{4}^{\text{PO}}= ic_{s}^{4}\left(\frac{H}{\Lambda^{3}}\right)^{2}\frac{k_{2}^{2}k_ {4}^{2}}{4s^{8}}\left(\mathbf{k}_{1}\cdot\mathbf{k}_{2}\right)\left(\mathbf{ k}_{3}\cdot\mathbf{k}_{4}\right)\left[\mathbf{s}\cdot\left(\mathbf{k}_{1} \times\mathbf{k}_{3}\right)\right]\left(1-k_{1}\frac{\partial}{\partial k_{1} }\right)\left(1-k_{3}\frac{\partial}{\partial k_{3}}\right)\] \[\times\Bigg{\{}\left[2\cosh(\pi\tilde{\kappa})\mathcal{A}_{+1,1} +\frac{e^{-\pi\tilde{\kappa}}W_{i\tilde{\kappa},i\mu}(2is\eta_{0})}{W_{-i \tilde{\kappa},i\mu}(-2is\eta_{0})}-\frac{e^{\pi\tilde{\kappa}}W_{i\tilde{ \kappa},i\mu}(-2is\eta_{0})}{W_{-i\tilde{\kappa},i\mu}(2is\eta_{0})}\right]\] \[\times\mathcal{I}_{2}^{(+1)}(s,c_{s}k_{12},i\mu)\mathcal{I}_{2}^ {(+1)}(s,c_{s}k_{34},i\mu)-(\tilde{\kappa}\rightarrow-\tilde{\kappa})\Bigg{\}} +3\text{ perms }. \tag{5.30}\]
Figure 4: The \(s\)-channel dimensionless parity-odd trispectrum \(\operatorname{Im}\mathcal{T}_{s,\text{PO}}\) as a function of the momentum ratio \(k_{1}/s\). The kinematics is chosen as \(k_{1}=k_{3}\), \(k_{2}=k_{4}=\sqrt{s^{2}+k_{1}^{2}}\) and \(\psi=\pi/3\) being the dihedral angle from the \((\mathbf{k}_{1},\mathbf{k}_{2})\)-plane to the \((\mathbf{k}_{3},\mathbf{k}_{4})\)-plane. The parameters are chosen as \(c_{s}=0.1,\Lambda=3H\) (left panel) and \(c_{s}=1,\Lambda=20H\) (right panel), together with a common chemical potential \(\kappa=H\). The blue and magenta curves show the exact solution (5.26) for vector field mass \(m=H/3\) and \(m=H/5\), respectively. The dots represent the numerical result computed in the UV theory, which perfectly match our exact solution.

For the light mass case the entire bracket in (5.30) vanishes, which is consistent with \(\psi_{4}\) being purely real. For the heavy mass case, \(\rho_{4}^{\text{PO}}\) depends on \(\eta_{0}\); however, this dependence cancels against the contribution to the correlator coming from the cubic wavefunction coefficients, rendering the final correlator \(\eta_{0}\)-independent (as it should be for IR-finite interactions). The computation of the cubic wavefunction coefficients is identical to the one above, and is omitted here for brevity. Putting everything together and projecting onto the parity-odd sector, we arrive at
\[B_{4}^{\zeta,\text{PO}}= \,i\left(\frac{H}{\Lambda}\right)^{6}\frac{\pi^{4}\Delta_{\zeta}^{4}}{2c_{s}^{2}}\frac{(\mathbf{k}_{1}\cdot\mathbf{k}_{2})\left(\mathbf{k}_{3}\cdot\mathbf{k}_{4}\right)}{k_{1}k_{2}k_{3}k_{4}}\frac{\mathbf{s}\cdot(\mathbf{k}_{1}\times\mathbf{k}_{3})}{k_{1}^{2}k_{3}^{2}s^{8}}\left(1-k_{1}\frac{\partial}{\partial k_{1}}\right)\left(1-k_{3}\frac{\partial}{\partial k_{3}}\right)\] \[\times\Bigg\{\cosh(\pi\tilde{\kappa})\mathcal{A}_{+1,1}\mathcal{I}_{2}^{+1}(s,c_{s}k_{12},i\mu)\mathcal{I}_{2}^{+1}(s,c_{s}k_{34},i\mu)+e^{\pi\tilde{\kappa}}\text{Re}\,\left[\mathcal{I}_{2}^{+1}(s,c_{s}k_{12},i\mu)\mathcal{I}_{2}^{-1}(s,c_{s}k_{34},i\mu)\right]\] \[\qquad-\left(\tilde{\kappa}\rightarrow-\tilde{\kappa}\right)\Bigg\}+3\text{ perms}\] \[+(\text{$t$-channel})+(\text{$u$-channel}). \tag{5.31}\]
Again this result has the correct momentum scaling, and is purely imaginary. If we compare this result with that of light fields (5.26), we see that they can be converted into each other by replacing \(i\mu\leftrightarrow\nu\). Hence as we expected, there is no discontinuity in the mass parameters. This property extends to other examples too: the heavy field result is always given by an analytic continuation of the light field result. Given that the calculation for light fields is less involved, for the other examples we will concentrate on light fields and then extract the heavy field result via this simple replacement rule.
Interestingly, for a small inflaton sound speed (i.e. \(c_{s}\ll 1\)), this model of a heavy vector field with chemical potential admits a non-local single-field EFT description in the IR, which well approximates the behaviour of the parity-odd trispectrum in the regime \(c_{s}\kappa<c_{s}m<1\) [104]. After partially integrating out the heavy vector, parity violation resurges through _emergent non-locality_ in the effective vertex, and the resulting parity-odd trispectrum is neatly computed as a residue of the non-local pole in the effective vertex. Such a miraculous behaviour as seen from the non-local EFT now becomes understandable from the parity-factorisation perspective. The fact that the parity-odd trispectrum is necessarily factorised in the UV theory is precisely the reason why we only acquire a non-vanishing contribution from the non-local pole in the IR EFT.

Figure 5: The \(s\)-channel dimensionless parity-odd trispectrum \(\text{Im}\,\mathcal{T}_{s,\text{PO}}\) as a function of the momentum ratio \(k_{1}/s\). The kinematics is chosen as \(k_{1}=k_{3}\), \(k_{2}=k_{4}=\sqrt{s^{2}+k_{1}^{2}}\) and \(\psi=\pi/3\) being the dihedral angle from the \((\mathbf{k}_{1},\mathbf{k}_{2})\)-plane to the \((\mathbf{k}_{3},\mathbf{k}_{4})\)-plane. The parameters are chosen as \(m=6H,\kappa=H\) (left panel) and \(m=6H,\kappa=4H\) (right panel), together with a common sound speed \(c_{s}=0.1\) and \(\Lambda=30H\). The blue and green curves show the leading-order (LO) and the next-to-leading-order (NLO) non-local EFT results. The red curve denotes the exact result (5.31) and the gray dots represent the numerics in the UV theory. We see that the non-local EFT predictions agree with numerics in the small-\(\kappa\) case, yet deviations start to appear at large \(\kappa\). In principle, such deviations may be cured by adding higher order contributions which are in practice tedious to compute. In contrast, the exact result we have computed in this paper matches the numerics very well for all parameter choices.
To compare our exact result for the parity-odd trispectrum in this model with the non-local EFT prediction, and to check them against numerics, we plot the corresponding dimensionless trispectra in Figure 5. As we can see from the plot, the exact result agrees with numerics very well, while the non-local EFT predictions start to deviate from the exact result when the chemical potential \(\kappa\) is large.
Before moving to some other examples, let us first comment on the massless case with a \(U(1)\) gauge symmetry, as promised. In this case we set \(m=0\) to preserve the gauge symmetry in the free theory of the vector field. We therefore have \(\nu=1/4\). Without adding any additional interactions beyond those in (5.5), a parity-odd trispectrum can be generated at 1-loop due to the \(\pi\Phi\Phi\) coupling. At tree-level we would again need to add interactions of the form \(\pi\pi\Phi\) that preserve the \(U(1)\) gauge symmetry. Since the field strength is anti-symmetric, this requires more derivatives than what we have studied so far. Indeed, the first non-zero operator is dimension-8. It would be interesting to study this class of trispectra in more detail.
### Example 2: spin-2 exchange in CCM
We now move to a second example where we consider the exchange of a spin-2 field with its dynamics described by the cosmological condensed matter physics scenario. In this case we take the bulk-bulk propagator to be parity-even (\(\tilde{\kappa}=0\)), and source the parity-violation via the interaction vertices. We therefore need one vertex to have an even number of spatial derivatives, and the other to have an odd number. As before, we need to ensure IR-convergence. We will therefore work with the following interactions:
\[S_{\rm int}=\int d^{3}xd\eta\left(\frac{a}{\Lambda_{1}^{2}}\pi^{ \prime}_{c}\partial_{i}\partial_{j}\pi_{c}\sigma_{ij}+\frac{1}{\Lambda_{2}^{ 3}}\epsilon_{ijk}\partial_{i}\pi^{\prime}_{c}\partial_{j}\partial_{l}\pi_{c} \sigma_{kl}\right)\, \tag{5.32}\]
where the first term is dimension-6 while the second is dimension-7. These are the only operators with those mass dimensions and are the leading ones which are IR-finite. The corresponding EFToI operators are \(\delta g^{00}\delta K_{\mu\nu}\Sigma^{\mu\nu}\) and \(n_{\mu}\varepsilon^{\mu\nu\alpha\beta}\nabla_{\nu}\delta g^{00}\delta K_{\alpha\gamma}\Sigma^{\gamma}{}_{\beta}\). In the CCM scenario the conformal weight of the massive field is the same as that of a massless scalar, so the counting of the scale factors is simply \(4-(\text{total number of derivatives})\). As in Section 2 we write
\[\sigma_{ij}(\eta,{\bf k})=\sum_{h=-2}^{2}\sigma_{h}(\eta,k){\rm e }^{(h)}_{ij}({\bf k})\, \tag{5.33}\]
and the polarisation tensors are chosen to satisfy conditions (2.6) and (2.7), and are given by
\[{\rm e}^{(0)}_{ij}=\sqrt{3}\left(\hat{k}_{i}\hat{k}_{j}-\frac{1} {3}\delta_{ij}\right),\qquad{\rm e}^{(\pm 1)}_{ij}=i(\hat{k}_{i}\hat{e}^{\pm}_{j} +\hat{k}_{j}\hat{e}^{\pm}_{i}),\qquad{\rm e}^{(\pm 2)}_{ij}=\sqrt{2}\hat{e}^{\pm}_{i }\hat{e}^{\pm}_{j}. \tag{5.34}\]
The mode functions for each helicity are given by
\[\sigma_{h}(\eta,k)=-\frac{H\eta}{\sqrt{2c_{h,2}k}}W_{0,\nu}(2ic_{h,2}k\eta)\, \tag{5.35}\]
where, as we mentioned before, we take the chemical potential to vanish (\(\tilde{\kappa}=0\)). In this limit the Whittaker function reduces to the Hankel function of the first kind, cf. (2.15). When the mass of the spin-2 field is light, the only contributions to the parity-odd trispectrum come from the cubic wavefunction coefficients
which are given by
\[\psi_{3}^{(h)}(\mathbf{k}_{1},\mathbf{k}_{2},-\mathbf{s}) =\frac{ik_{1}^{2}}{\eta_{0}W_{0,\nu}(-2ic_{h,2}s\eta_{0})}\times\left[\frac{1}{s^{2}H\Lambda_{1}^{2}}\left(k_{2}^{i}k_{2}^{j}\mathrm{e}_{ij}^{(h)}(\mathbf{s})\right)^{*}\left(1-k_{2}\frac{\partial}{\partial k_{2}}\right)\mathcal{I}_{1}^{0}(c_{h,2}s,k_{12},\nu)\right.\] \[\qquad\qquad+\left.\frac{1}{s^{3}\Lambda_{2}^{3}}\left(\epsilon_{ijk}k_{1}^{i}k_{2}^{j}k_{2}^{l}\mathrm{e}_{kl}^{(h)}(\mathbf{s})\right)^{*}\left(1-k_{2}\frac{\partial}{\partial k_{2}}\right)\mathcal{I}_{2}^{0}(c_{h,2}s,k_{12},\nu)\right]+\left(1\leftrightarrow 2\right). \tag{5.36}\]
In the absence of the chemical potential, the \(\mathcal{I}_{n}^{h}\) integral is identical for each helicity (except for the sound speeds) and purely real. The helicity dependence then resides in the kinematic factors rather than the dynamical ones. The density matrix then reads
\[\rho_{2,1}^{(h)}(\mathbf{k}_{1},\mathbf{k}_{2},-\mathbf{s}) =\,\frac{k_{1}^{2}}{s^{2}H\Lambda_{1}^{2}}\mathrm{Re}\,\left( \frac{2i}{\eta_{0}W_{0,\nu}(-2ic_{h,2}s\eta_{0})}\right)\left(k_{2i}k_{2j} \cdot\mathrm{e}_{ij}^{(h)}(\mathbf{s})\right)^{*}\left(1-k_{2}\frac{\partial}{ \partial k_{2}}\right)\mathcal{I}_{1}^{0}(c_{h,2}s,k_{12},\nu)\] \[+i\frac{k_{1}^{2}}{s^{3}\Lambda_{2}^{3}}\mathrm{Re}\,\left(\frac{ 2}{\eta_{0}W_{0,\nu}(-2ic_{h,2}s\eta_{0})}\right)\left(\epsilon_{ijk}k_{1i}k_{ 2j}k_{2l}\mathrm{e}_{kl}^{(h)}(\mathbf{s})\right)^{*}\left(1-k_{2}\frac{ \partial}{\partial k_{2}}\right)\mathcal{I}_{2}^{0}(c_{h,2}s,k_{12},\nu)\] \[+\left(1\leftrightarrow 2\right). \tag{5.37}\]
To make the angular dependence transparent, let us decompose the \(s\)-channel trispectrum into two separate parts arising from the exchange of different helicity modes:
\[B_{4}^{\zeta}=B_{4,h=\pm 1}^{\zeta}+B_{4,h=\pm 2}^{\zeta}\, \tag{5.38}\]
where we have dropped the contribution from \(h=0\) since scalar exchanges cannot yield a parity-odd contribution. For the higher helicity modes, we need to fix the polarisation sums. For spin-1 the form of \(\sum_{h=\pm 1}\hat{e}_{i}(\mathbf{k})\hat{e}_{j}(-\mathbf{k})\) can be easily fixed without choosing any particular basis. The result should be parity-even and real given the properties of the polarisation vectors. Scale invariance further constrains it to only depend on \(\delta_{ij}\) and \(\hat{k}_{i}=k_{i}/k\). The free parameters can then be fixed by requiring the result to be transverse, and appropriately normalised: \(\hat{e}_{i}^{\pm}(\mathbf{k})\hat{e}_{i}^{\pm}(-\mathbf{k})=1\). We then have
\[\pi_{ij}(\mathbf{k})\equiv\hat{e}_{i}^{+}(\mathbf{k})\hat{e}_{j}^{+}(- \mathbf{k})+\hat{e}_{i}^{-}(\mathbf{k})\hat{e}_{j}^{-}(-\mathbf{k})=\delta_{ ij}-\hat{k}_{i}\hat{k}_{j}. \tag{5.39}\]
For spin-2, where polarisation tensors are combinations of \(\hat{k}_{i}\) and \(\hat{e}_{i}^{\pm}\), we can proceed in a similar way using (5.33) and (5.39). We have
\[\sum_{h=\pm 1}\mathrm{e}_{ij}^{h}(\mathbf{k})\mathrm{e}_{mn}^{h}(- \mathbf{k})=\hat{k}_{i}\hat{k}_{m}\pi_{jn}+\hat{k}_{j}\hat{k}_{m}\pi_{in}+\hat {k}_{i}\hat{k}_{n}\pi_{jm}+\hat{k}_{j}\hat{k}_{n}\pi_{im}\, \tag{5.40}\]
and
\[\sum_{h=\pm 2}\mathrm{e}_{ij}^{h}(\mathbf{k})\mathrm{e}_{mn}^{h}(- \mathbf{k})=\pi_{im}\pi_{jn}+\pi_{in}\pi_{jm}-\pi_{ij}\pi_{mn}. \tag{5.41}\]
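These polarisation sums are easy to verify numerically in an explicit basis. The sketch below assumes \(\hat{e}^{\pm}=(\hat{e}_{1}\pm i\hat{e}_{2})/\sqrt{2}\) together with the relation \(\hat{e}^{\pm}(-\hat{\mathbf{k}})=\hat{e}^{\mp}(\hat{\mathbf{k}})\), a conventional choice consistent with (5.39) but not spelled out above.

```python
# Numerical check of the polarisation sums (5.39)-(5.41) in an explicit basis,
# assuming e^{+-} = (e1 +- i e2)/sqrt(2) and e^{+-}(-khat) = e^{-+}(khat)
# (an assumed convention consistent with (5.39)).
import numpy as np

rng = np.random.default_rng(1)
k = rng.normal(size=3); khat = k / np.linalg.norm(k)
e1 = np.cross(khat, rng.normal(size=3)); e1 /= np.linalg.norm(e1)
e2 = np.cross(khat, e1)

def evec(h, flip=False):
    # helicity vector e^{h}(khat); at -khat we use e^{h}(-khat) = e^{-h}(khat)
    s = -h if flip else h
    return (e1 + 1j * s * e2) / np.sqrt(2)

pi = np.eye(3) - np.outer(khat, khat)

# (5.39)
sum_v = sum(np.outer(evec(h), evec(h, flip=True)) for h in (+1, -1))
print(np.allclose(sum_v, pi))

def E1(kh, ep):               # spin-2 helicity +-1 tensor, eq. (5.34)
    return 1j * (np.outer(kh, ep) + np.outer(ep, kh))

def E2(ep):                   # spin-2 helicity +-2 tensor, eq. (5.34)
    return np.sqrt(2) * np.outer(ep, ep)

# (5.40)
lhs1 = sum(np.einsum('ij,mn->ijmn', E1(khat, evec(h)), E1(-khat, evec(h, True))) for h in (+1, -1))
rhs1 = (np.einsum('i,m,jn->ijmn', khat, khat, pi) + np.einsum('j,m,in->ijmn', khat, khat, pi)
        + np.einsum('i,n,jm->ijmn', khat, khat, pi) + np.einsum('j,n,im->ijmn', khat, khat, pi))
print(np.allclose(lhs1, rhs1))

# (5.41)
lhs2 = sum(np.einsum('ij,mn->ijmn', E2(evec(h)), E2(evec(h, True))) for h in (+1, -1))
rhs2 = (np.einsum('im,jn->ijmn', pi, pi) + np.einsum('in,jm->ijmn', pi, pi)
        - np.einsum('ij,mn->ijmn', pi, pi))
print(np.allclose(lhs2, rhs2))
```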
By combining these polarisation sums evaluated at \(\mathbf{k}=-\mathbf{s}\) and the density matrix coefficients we can extract the contribution from individual helicity exchanges to the final parity-odd trispectrum,
\[B_{4,h=\pm 1}^{\zeta,\text{PO}}= \,2i\pi^{4}\Delta_{\zeta}^{4}\cos(\pi\nu)\left(\frac{H}{\Lambda_{1 }}\right)^{2}\left(\frac{H}{\Lambda_{2}}\right)^{3}\frac{\left(\mathbf{k}_{2} \cdot\mathbf{s}\right)\left(\mathbf{k}_{4}\cdot\mathbf{s}\right)}{k_{1}k_{2}k_{3 }k_{4}}\frac{\left[\mathbf{s}\cdot\left(\mathbf{k}_{1}\times\mathbf{k}_{3} \right)\right]}{c_{1,2}k_{2}^{2}k_{4}^{2}s^{8}}\] \[\times\left(1-k_{2}\frac{\partial}{\partial k_{2}}\right)\left(1- k_{4}\frac{\partial}{\partial k_{4}}\right)\mathcal{I}_{1}^{0}(c_{1,2}s,k_{12},\nu) \mathcal{I}_{2}^{0}(c_{1,2}s,k_{34},\nu)+7\text{ perms}\] \[+(t\text{-channel})+(u\text{-channel}). \tag{5.42}\]
\[B_{4,h=\pm 2}^{\zeta,\text{PO}}= \,2i\pi^{4}\Delta_{\zeta}^{4}\cos(\pi\nu)\left(\frac{H}{\Lambda_{ 1}}\right)^{2}\left(\frac{H}{\Lambda_{2}}\right)^{3}\frac{\left[s^{2}\left( \mathbf{k}_{2}\cdot\mathbf{k}_{4}\right)-\left(\mathbf{k}_{2}\cdot\mathbf{s} \right)\left(\mathbf{k}_{4}\cdot\mathbf{s}\right)\right]}{k_{1}k_{2}k_{3}k_{4 }}\frac{\left[\mathbf{s}\cdot\left(\mathbf{k}_{1}\times\mathbf{k}_{3}\right) \right]}{c_{2,2}k_{2}^{2}k_{4}^{2}s^{8}}\] \[\times\left(1-k_{2}\frac{\partial}{\partial k_{2}}\right)\left(1 -k_{4}\frac{\partial}{\partial k_{4}}\right)\mathcal{I}_{1}^{0}(c_{2,2}s,k_{ 12},\nu)\mathcal{I}_{2}^{0}(c_{2,2}s,k_{34},\nu)+7\text{ perms}\] \[+(t\text{-channel})+(u\text{-channel}). \tag{5.43}\]
As a consistency check, we see that the terms proportional to \(\left(\mathbf{k}_{2}\cdot\mathbf{s}\right)\left(\mathbf{k}_{4}\cdot\mathbf{s}\right)\) in (5.42) and (5.43) differ only by a sign and the sound speeds of the different modes. These two contributions would therefore cancel once added together if the sound speeds were identical (\(c_{1,2}=c_{2,2}\)). In that case the total trispectrum would be independent of \(s_{i}\), which is to be expected since in that case the three polarisation sums add up to an object that is independent of \(s_{i}\). We also see that the result is purely imaginary, and has the correct momentum scaling. For the special cases of \(\nu=3/2\) and \(\nu=1/2\), where the mode functions simplify to exponentials, the trispectrum vanishes, which is consistent with the no-go theorem of [37]. Note that here we add \(+7\) perms rather than the \(+3\) perms we had in example \(1\), since here the two vertices on either side of a diagram are different.
It is noteworthy that in de-Sitter/inflationary four-point functions, spin-1 exchange is typically characterised by linear factors of \(t^{2}-u^{2}\) in \(s\)-channel diagrams, which originate from contractions between momenta and the polarisation sum. Exchanges of higher spin are then non-linear in \(t^{2}-u^{2}\). However, here things are slightly different due to the Levi-Civita \(\epsilon\)-tensor, and it is easy to check that no such factor arises for spin-1 exchange, as indicated by (5.42). For the exchange of \(h=\pm 2\) modes, this dependence arises from the \(\mathbf{k}_{2}\cdot\mathbf{k}_{4}\) factor and its corresponding permutation. Indeed we have
\[\mathbf{k}_{2}\cdot\mathbf{k}_{4}=\frac{1}{4}\left(t^{2}-u^{2}\right)+\frac{1 }{4}\left(k_{1}^{2}+k_{3}^{2}-k_{2}^{2}-k_{4}^{2}-s^{2}\right). \tag{5.44}\]
For general spin-\(S\), it is simple to see that helicity \(\pm h\) exchange will introduce contributions to the parity-odd trispectrum with a factor of \(\left(t^{2}-u^{2}\right)^{|h|-1}\), from which we can read off the spin of the exchanged field.
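The identity (5.44) is straightforward to check numerically. Since the variables \(t\) and \(u\) are not defined in this excerpt, the sketch below assumes the common convention \(s=|\mathbf{k}_{1}+\mathbf{k}_{2}|\), \(t=|\mathbf{k}_{1}+\mathbf{k}_{3}|\) and \(u=|\mathbf{k}_{1}+\mathbf{k}_{4}|\), under which (5.44) holds.

```python
# Check of the kinematic identity (5.44). The variables t and u are not defined
# in this excerpt; we assume the convention s = |k1+k2|, t = |k1+k3|, u = |k1+k4|.
import numpy as np

rng = np.random.default_rng(0)
k1, k2, k3 = rng.normal(size=(3, 3))
k4 = -(k1 + k2 + k3)                              # momentum conservation

s2 = np.dot(k1 + k2, k1 + k2)
t2 = np.dot(k1 + k3, k1 + k3)
u2 = np.dot(k1 + k4, k1 + k4)

lhs = np.dot(k2, k4)
rhs = 0.25 * (t2 - u2) + 0.25 * (np.dot(k1, k1) + np.dot(k3, k3)
                                 - np.dot(k2, k2) - np.dot(k4, k4) - s2)
print(np.isclose(lhs, rhs))                        # True
```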
Here we have computed the trispectrum for light field exchange, and as we discussed above, from this result we can extract that of heavy field exchange by sending \(\nu\to i\mu\). We have also checked by explicit calculation that this analytic continuation yields the correct result.
### Example 3: spin-2 exchange in CC
In this final example we again consider spin-2 exchange with parity-violation arising from the interaction vertices; however, now we describe the dynamics of the spin-2 field in the cosmological collider physics set-up. As we discussed in Section 2, the mode functions in the CC and CCM scenarios can differ significantly, leading to a distinct parity-odd trispectrum compared to what we have just computed in the previous subsection. We will denote the massive spin-2 field by \(\Phi_{\mu\nu}\). Given that scalar modes do not contribute to the final trispectrum, our focus will be on the components \(\Phi_{0j}\) and \(\Phi_{ij}\). Again we need IR-finite interaction
vertices and we choose
\[S_{\rm int}=\int d^{3}xd\eta\bigg{(} \frac{a^{-1}}{\Lambda_{1}^{3}}\pi_{c}^{\prime}\partial_{i}\partial_ {j}\pi_{c}\Phi_{ij}+\frac{a^{-2}}{\Lambda_{2}^{3}}\epsilon_{ijk}\partial_{i}\pi_ {c}^{\prime}\partial_{j}\partial_{l}\pi_{c}\Phi_{kl}\] \[+\frac{a^{-2}}{\Lambda_{3}^{3}}\partial_{i}\pi_{c}^{\prime} \partial_{i}\partial_{j}\pi_{c}\Phi_{0j}+\frac{a^{-3}}{\Lambda_{4}^{4}} \epsilon_{ijk}\partial_{i}\partial_{l}\pi_{c}\partial_{j}\partial_{l}\pi_{c}^{ \prime}\Phi_{0k}\bigg{)}\, \tag{5.45}\]
which arise from the following EFToI operators:
\[\delta g^{00}\delta K_{\mu\nu}\Phi^{\mu\nu} \longrightarrow \pi_{c}^{\prime}\partial_{i}\partial_{j}\pi_{c}\Phi_{ij}, \tag{5.46}\] \[n_{\mu}\varepsilon^{\mu\nu\alpha\beta}\nabla_{\nu}\delta g^{00 }\delta K_{\alpha\gamma}\Phi^{\gamma}{}_{\beta} \longrightarrow \epsilon_{ijk}\partial_{i}\pi_{c}^{\prime}\partial_{j}\partial_ {l}\pi_{c}\Phi_{kl},\] (5.47) \[\nabla^{\mu}\delta g^{00}\delta K_{\mu\nu}n_{\alpha}\Phi^{\alpha\nu} \longrightarrow \partial_{i}\pi_{c}^{\prime}\partial_{i}\partial_{j}\pi_{c}\Phi_ {0j},\] (5.48) \[n_{\mu}\varepsilon^{\mu\nu\alpha\beta}\delta K_{\nu\rho}n_{ \gamma}\nabla^{\gamma}\delta K^{\rho}{}_{\alpha}n_{\delta}\Phi^{\delta}{}_{\beta} \longrightarrow \epsilon_{ijk}\partial_{i}\partial_{l}\pi_{c}\partial_{j}\partial_ {l}\pi_{c}^{\prime}\Phi_{0k}. \tag{5.49}\]
Again the scale factors are fixed by scale invariance (note that here \(\Phi\) scales in the same way as \(a^{2}(\eta)\pi_{c}\)). We now decompose the field into the helicity basis:
\[\Phi_{0j}(\eta,{\bf k})=\sum_{h}\Phi_{1,2}^{h}(\eta,k)\mathfrak{ e}_{j}^{(h)}(\hat{\bf k})\, \tag{5.50}\] \[\Phi_{ij}(\eta,{\bf k})=\sum_{h}\Phi_{2,2}^{h}(\eta,k)\mathfrak{ e}_{ij}^{(h)}(\hat{\bf k})\, \tag{5.51}\]
and from now on we will ignore the longitudinal modes \(\Phi_{1,2}^{0}\) and \(\Phi_{2,2}^{0}\) since they will not contribute to the parity-odd trispectrum. This leaves us with three modes: \(\Phi_{1,2}^{\pm 1}\), \(\Phi_{2,2}^{\pm 1}\), and \(\Phi_{2,2}^{\pm 2}\). The polarization tensors for these modes, which satisfy (2.19) and (2.20), are
\[\mathfrak{e}_{i}^{(\pm 1)}=\sqrt{2}\,\hat{e}_{i}^{\pm},\qquad\mathfrak{e}_{ ij}^{(\pm 1)}=\frac{3}{\sqrt{2}}\left(\hat{k}_{i}\hat{e}_{j}^{\pm}+\hat{k}_{j}\hat{e}_{i}^{ \pm}\right),\qquad\mathfrak{e}_{ij}^{(\pm 2)}=2\hat{e}_{i}^{\pm}\hat{e}_{j}^{\pm}. \tag{5.52}\]
The factor \(\sqrt{2}\) arises because we adhere to the convention of [5], where \(\mathfrak{e}_{i}^{(\pm 1)}\mathfrak{e}_{i}^{*(\pm 1)}=2\). Following our discussion in Section 2, we can derive the following mode functions
\[\Phi_{1,2}^{\pm 1}(\eta,k) =e^{i\pi(\nu+1/2)/2}Z_{2}^{+1}\left(-k\eta\right)^{1/2}H_{\nu}^{(1)}(-k\eta)\, \tag{5.53}\] \[\Phi_{2,2}^{\pm 2}(\eta,k) =e^{i\pi(\nu+1/2)/2}Z_{2}^{+2}\left(-k\eta\right)^{-1/2}H_{\nu}^{(1)}(-k\eta)\, \tag{5.54}\] \[\Phi_{2,2}^{\pm 1}(\eta,k) =\frac{i}{2}e^{i\pi(\nu+1/2)/2}Z_{2}^{+1}\left(-k\eta\right)^{-1/2}\left[k\eta\left(H_{\nu+1}^{(1)}(-k\eta)-H_{\nu-1}^{(1)}(-k\eta)\right)-3H_{\nu}^{(1)}(-k\eta)\right]\,,\] \[=-\frac{i}{2}e^{i\pi(\nu+1/2)/2}Z_{2}^{+1}\left(-k\eta\right)^{-1/2}\left[2k\eta\,H_{\nu-1}^{(1)}(-k\eta)+(3+2\nu)H_{\nu}^{(1)}(-k\eta)\right]\,, \tag{5.55}\]
where \(Z_{s}^{|h|}\) is given by (2.40), and in the last line of \(\Phi_{2,2}^{\pm 1}\) we have used the recursion relation of the Hankel function to simplify the expression. As in the previous example, we consider three contributions to the final parity-odd trispectrum arising from the exchange of these three modes. We denote the different components as \(B_{(n,h)}^{\zeta}\), and therefore the full trispectrum is
\[B_{4}^{\zeta}=B_{(1,\pm 1)}^{\zeta}+B_{(2,\pm 1)}^{\zeta}+B_{(2,\pm 2)}^{\zeta}. \tag{5.56}\]
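As an aside, the equivalence of the two forms quoted in (5.55) follows from the Hankel recursion relation \(H^{(1)}_{\nu+1}(z)+H^{(1)}_{\nu-1}(z)=(2\nu/z)H^{(1)}_{\nu}(z)\), and can be checked numerically; the values of \(\nu\) and \(k\eta\) in the sketch below are illustrative.

```python
# Check that the two forms quoted in (5.55) agree, using the Hankel recursion
# H1_{nu+1}(z) + H1_{nu-1}(z) = (2 nu / z) H1_{nu}(z). Parameter values are illustrative.
from mpmath import mp, hankel1

mp.dps = 25
nu, keta = mp.mpf('0.37'), mp.mpf('-1.9')          # eta < 0, so z = -k*eta > 0
z = -keta

form1 = keta * (hankel1(nu + 1, z) - hankel1(nu - 1, z)) - 3 * hankel1(nu, z)
form2 = -(2 * keta * hankel1(nu - 1, z) + (3 + 2 * nu) * hankel1(nu, z))
print(form1, form2, abs(form1 - form2))            # forms agree to working precision
```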
The computation of each component is the same as what we have been through above. Once again, considering light fields allows us to focus on the cubic wavefunction coefficients only (i.e. (5.3)), before obtaining the density matrix coefficients and summing over helicities. The result for heavy fields then follows from analytic continuation. In short, we find
\[B^{\zeta,\text{PO}}_{(1,\pm 1)}= -i\Delta_{\zeta}^{4}(2\pi)^{3}\cos(\pi\nu)H^{2}\big|Z_{2}^{+1}(s)\big|^{2}\left(\frac{H}{\Lambda_{3}}\right)^{3}\left(\frac{H}{\Lambda_{4}}\right)^{4}\frac{\left(\mathbf{k}_{1}\cdot\mathbf{k}_{2}\right)\left(\mathbf{k}_{3}\cdot\mathbf{k}_{4}\right)}{k_{1}k_{2}k_{3}k_{4}}\frac{\mathbf{s}\cdot(\mathbf{k}_{1}\times\mathbf{k}_{3})}{k_{2}^{2}k_{4}^{2}s^{9}}\] \[\times\left(1-k_{2}\frac{\partial}{\partial k_{2}}\right)\left(1-k_{4}\frac{\partial}{\partial k_{4}}\right)\mathcal{I}_{3}^{0}(s,k_{12},\nu)\mathcal{I}_{4}^{0}(s,k_{34},\nu)+7\text{ perms}\] \[+(t\text{-channel})+(u\text{-channel}). \tag{5.57}\] \[B^{\zeta,\text{PO}}_{(2,\pm 2)}= 2i\Delta_{\zeta}^{4}(2\pi)^{3}\cos(\pi\nu)H^{2}\big|Z_{2}^{+2}(s)\big|^{2}\left(\frac{H}{\Lambda_{1}}\right)^{2}\left(\frac{H}{\Lambda_{2}}\right)^{3}\frac{\left[s^{2}\,\mathbf{k}_{2}\cdot\mathbf{k}_{4}-\left(\mathbf{k}_{2}\cdot\mathbf{s}\right)\left(\mathbf{k}_{4}\cdot\mathbf{s}\right)\right]}{k_{1}k_{2}k_{3}k_{4}}\frac{\mathbf{s}\cdot(\mathbf{k}_{1}\times\mathbf{k}_{3})}{k_{2}^{2}k_{4}^{2}s^{9}}\] \[\times\left(1-k_{2}\frac{\partial}{\partial k_{2}}\right)\left(1-k_{4}\frac{\partial}{\partial k_{4}}\right)\mathcal{I}_{1}^{0}(s,k_{12},\nu)\mathcal{I}_{2}^{0}(s,k_{34},\nu)+7\text{ perms}\] \[+(t\text{-channel})+(u\text{-channel}). \tag{5.58}\] \[B^{\zeta,\text{PO}}_{(2,\pm 1)}= 9i\Delta_{\zeta}^{4}\pi^{3}\cos(\pi\nu)H^{2}\big|Z_{2}^{+1}(s)\big|^{2}\left(\frac{H}{\Lambda_{1}}\right)^{2}\left(\frac{H}{\Lambda_{2}}\right)^{3}\frac{\left(\mathbf{k}_{2}\cdot\mathbf{s}\right)\left(\mathbf{k}_{4}\cdot\mathbf{s}\right)}{k_{1}k_{2}k_{3}k_{4}}\frac{\mathbf{s}\cdot(\mathbf{k}_{1}\times\mathbf{k}_{3})}{k_{2}^{2}k_{4}^{2}s^{9}}\] \[\times\left(1-k_{2}\frac{\partial}{\partial k_{2}}\right)\left[2\mathcal{I}_{2}^{0}(s,k_{12},\nu-1)+(3+2\nu)\mathcal{I}_{1}^{0}(s,k_{12},\nu)\right]\] \[\times\left(1-k_{4}\frac{\partial}{\partial k_{4}}\right)\left[2\mathcal{I}_{3}^{0}(s,k_{34},\nu-1)+(3+2\nu)\mathcal{I}_{2}^{0}(s,k_{34},\nu)\right]+7\text{ perms}\] \[+(t\text{-channel})+(u\text{-channel}). \tag{5.59}\]
Again we see that each component is purely imaginary and has the correct scaling with momenta (in checking this point we use the fact that \(Z_{2}^{h}(s)\sim s^{1/2}\)). We also see that for the special case of \(\nu=1/2\) the trispectrum vanishes. This limit corresponds to the partially-massless limit where the massive spin-2 field has only four propagating degrees of freedom (since the \(h=0\) modes do not contribute here we don't need to worry about the divergence of the corresponding two-point function in this PM limit). In this limit the mode functions simplify to exponentials and again the no-go theorem of [37] dictates that the result should vanish.
## 6 Conclusions and outlook
In this paper we have derived some reality theorems relating to cosmological wavefunction coefficients of massless scalars, massless gravitons and conformally coupled scalars. We have shown that the maximally-connected part of these coefficients, which is \(i)\) the most difficult to compute since it contains the maximum number of nested time integrals and \(ii)\) the part that can be singular as the total-energy goes to zero \(k_{T}\to 0\), is a purely real function of the kinematics. Our results allow for the exchange of states with any mass and integer spin, and in deriving our results we considered two distinct descriptions for the dynamics of massive spinning fields during inflation: cosmological condensed matter physics (where states are representations of the group of rotations) and cosmological collider physics (where states are representations of the de Sitter group). Furthermore, if all exchanged fields are in the complementary series i.e. they have light masses, then our reality theorem extends beyond the maximally-connected part to the full wavefunction coefficient. Our results apply under the following assumptions:
* **Tree-level approximation**: we considered tree-level Feynman diagrams which allowed us to avoid having to analytically continue the spacetime dimensions. This meant that we could work with fixed external propagators which have simple properties in \(D=3+1\), namely that they are real after Wick rotation. In Appendix C we offer an alternative proof of our theorems using Hermitian analyticity of all propagators, and scale invariance. Here we use the relation \(V=I+1\) where \(V\) is the number of vertices and \(I\) is the number of internal propagators. This relation holds only at tree-level. By
relaxing the tree-level approximation the maximally-connected parts of the wavefunction coefficients can have imaginary parts [42].
* **Bunch-Davies vacuum conditions**: this assumption enabled us to rotate all time variables by \(90^{\circ}\) in the complex plane as a tool for computing wavefunction coefficients. The fact that we are computing the vacuum wavefunction for which fields vanish exponentially fast in the far past allowed us to close the contour and drop any contributions from the arc such that integration along the real line could be replaced by integration along the imaginary line. This assumption is relevant for our proof in Appendix C since Hermitian analyticity is closely tied to having Bunch-Davies vacuum conditions. If we relax this assumption, for example with the Ghost Condensate, then the maximally-connected parts of the wavefunction coefficients can have imaginary parts [37].
* **Scale invariance**: this assumption ensured that the vertex operators were real after Wick rotation. Indeed, scale invariance ensures that time derivatives enter as \(\eta\partial_{\eta}\) and spatial momenta enter as \(i\eta\mathbf{k}\). This could then be combined with the reality properties of the propagators to prove that various integrands are purely real. If we relax this assumption, for example by allowing for time-dependent couplings or going to general FLRW spacetimes (except when the scale factor is an odd-power-law function of the conformal time), then the maximally-connected parts of the wavefunction coefficients can have imaginary parts [37, 125].
* **IR-convergence**: this assumption enabled us to make the leap from proving the reality of integrands to the reality of the integrated result. Indeed, IR-convergence meant that the final result was independent of \(\eta_{0}\) and so we did not need to worry about rotating this cut-off. In the presence of an IR-divergence we would need to also rotate \(\eta_{0}\) and this can yield imaginary parts with a logarithmic-divergence, for example. In Appendix C we combined scale invariance of the bulk interactions with IR-convergence to use the fact that the wavefunction coefficients have a fixed scaling with momenta yielding simple transformation properties as all momenta and energies flip sign. If we relax this assumption, for example by allowing for IR-divergent bulk interactions as occurs for the minimal coupling between the inflaton and the massless graviton, then the maximally-connected parts of the wavefunction coefficients can have imaginary parts [101].
The reality of the maximally-connected part of wavefunction coefficients is not just of theoretical interest; rather, it can make the computation of phenomenologically relevant ([97, 98, 99, 102, 103]) cosmological correlators a far simpler task than naively expected. We have shown this in Section 5 by considering the parity-odd trispectra of curvature perturbations due to the coupling of the inflaton with another sector with massive spinning fields and parity-violation. Since parity-odd correlators depend on the imaginary part of the maximally-connected wavefunction coefficients, these trispectra are factorised and computed by considering cubic diagrams only. We presented a number of examples, considering both the CCM and CC scenarios, both light and heavy fields (with the final answers related by analytic continuation), and both parity-violation arising from the free theory of the massive spinning fields and from the bulk interactions. In particular, we considered a parity-violating correction to the action of a massive vector field in Section 5.1 and compared our result with that computed in [104] using a non-local EFT arising from integrating out the massive vector field. Our result recovers the EFT result in the appropriate limit, but also gives an exact result in the regime where the EFT breaks down. This example also includes axion-\(U(1)\) gauge field inflation. We also considered examples with massive spin-2 fields in Sections 5.2 and 5.3. For all spins we allowed for a chemical potential in our analysis, and for hierarchies between the speed of the inflaton and the exchanged fields, such that the size of trispectra can be enhanced.
There are many avenues for future research directions and here we outline a few:
* **Moving on to loops:** it would be interesting to see if any of our reality theorems hold at loop level. As we mentioned above, total-energy poles coming from loops can be imaginary, but perhaps the
structure of such imaginary terms can be constrained given that the reality properties of bulk-bulk propagators still hold. In fact, in the original \(D=3+1\) spacetime dimensions, the loop integrand for \(\psi_{n}\) of light fields still appears to be purely real after Wick rotation, but the dimensional regulator demands an evaluation in \(D=4-\epsilon\) dimensions. The Wick rotation in such a case is expected to bring factors of \((-\eta)^{\epsilon}=(-i\chi)^{\epsilon}=(1-i\pi\epsilon/2)\chi^{\epsilon}\). The \(\epsilon\)-suppression of the imaginary part turns out to be compensated by the \(1/\epsilon\) divergences, giving finite imaginary contributions to \(\psi_{n}\). This has been demonstrated for massless and conformally coupled theories in [42]. It would be interesting to see if such a phenomenon persists for general massive fields and at higher loops.
* **Distinguishing between massive spinning field set-ups**: we have considered two different descriptions of massive spinning fields during inflation (CC and CCM). It would be interesting to investigate if these two set-ups could be distinguished from each other at the level of massless scalar and massless graviton correlation functions.
* **Kramers-Kronig for correlators**: it would be interesting to investigate if the parity-even part of a correlator can be constrained given the parity-odd part. This is conceivable for examples where the parity-violation is driven by a chemical potential correction to the free theory. Perhaps consistency of higher-point functions could be used to constrain lower-point ones in this regard. Since the parity-odd part is always imaginary and the parity-even part is always real, such reconstruction of a full correlator from its imaginary part, if possible, would be an interesting cosmological analogue of the celebrated Kramers-Kronig relations in electromagnetism.
* **EAdS perspective**: the Wick rotation used extensively throughout this work is in fact a contour deformation, i.e. a change of integrated bulk time which are dummy variables. However it is tempting to conceive a further Wick rotation for the boundary time \(\eta_{0}=i\chi_{0}\) which is explicit. In doing so, one arrives at a quantity _different_ from the wavefunction of the universe. The reality of this quantity is especially transparent since the whole spacetime becomes Euclidean. In fact, a further rotation of the Hubble parameter \(H=-i/L_{\text{AdS}}\) yields a theory defined in Euclidean Anti-de Sitter (EAdS) space. The wavefunction then becomes the partition function of the boundary CFT of the EAdS bulk [71]. The difficulty, however, is in the fact that the continuations of \(\eta_{0}\) and \(H\) do not seem to commute with the recipe of obtaining correlators from wavefunction coefficients, which relies on the unitarity of the wavefunction. It would be helpful to understand how to correctly perform this continuation, i.e. how to understand in-in correlators in Euclidean field theories.
* **Classifying singularities**: we have concentrated on the total-energy singularities in this work, yet they are by no means the only singularities of the wavefunction. Notably, there are also partial-energy singularities when the total energy flowing into a sub-component of a Feynman diagram approaches zero. For IR-convergent massless or conformally coupled theories in de Sitter and flat spacetime, these singularities are always poles at tree-level since the wavefunction coefficients are rational functions of momenta [28, 78]. However, in more general theories with arbitrary mass, spin, sound speed and chemical potential, these singularities have not yet been classified even at tree-level. The classification of these singularities would crucially serve as a first step toward a complete understanding of the analytic structure of wavefunction of the universe.
**Acknowledgements** We thank Paolo Benincasa, Zongzhe Du, Sadra Jazayeri, Austin Joyce, Hayden Lee, Arthur Lipstein, Scott Melville, Enrico Pajer, Sebastien Renaux-Petel, Denis Werth, Yi Wang, Zhong-Zhi Xianyu and Yang Zhang for helpful discussions. D.S. is supported by a UKRI Stephen Hawking Fellowship [grant number EP/W005441/1] and a Nottingham Research Fellowship from the University of Nottingham. XT and YZ are supported in part by the National Key R&D Program of China (No. 2021YFC2203100). D.S. thanks The Hong Kong University of Science and Technology for kind hospitality.
For the purpose of open access, the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript version arising.
## Appendix A General solution of \(\Delta G_{\sigma}^{(h)}\)
In this appendix we construct the general solution of \(\Delta G_{\sigma}^{(h)}\) such that the connected propagator \(C_{\sigma}^{(h)}=G_{\sigma}^{(h)}+\Delta G_{\sigma}^{(h)}\) is helically real after Wick rotation. We allow for a chemical potential \(\kappa\) in the massive field's dispersion relation in the CCM scenario such that the relevant mode function is given by a Whittaker function, c.f. (2.14). Recall that we need to decompose the bulk-bulk propagator into two parts,
\[C_{\sigma}^{(h)}(\eta_{1},\eta_{2},k) =[\sigma_{h}(\eta_{1},k)\sigma_{h}^{*}(\eta_{2},k)\theta(\eta_{1}- \eta_{2})+(\eta_{1}\leftrightarrow\eta_{2})]+\Delta G_{\sigma}^{(h)}(\eta_{1}, \eta_{2},k)\,\] (A.1) \[F_{\sigma}^{(h)}(\eta_{1},\eta_{2},k) =-\frac{\sigma_{h}(\eta_{0},k)}{\sigma_{h}^{*}(\eta_{0},k)} \sigma_{h}^{*}(\eta_{1},k)\sigma_{h}^{*}(\eta_{2},k)-\Delta G_{\sigma}^{(h)}( \eta_{1},\eta_{2},k)\.\] (A.2)
In order to ensure that the connected part still satisfies the propagator equation (3.7), the added term must satisfy the homogeneous equation
\[\left(\eta_{1}^{2}\frac{\partial^{2}}{\partial\eta_{1}^{2}}-2\eta_{1}\frac{\partial}{\partial\eta_{1}}+c_{h,S}^{2}k^{2}\eta_{1}^{2}-2c_{h,S}\tilde{\kappa}\,k\eta_{1}+\frac{m^{2}}{H^{2}}\right)\Delta G_{\sigma}^{(h)}(\eta_{1},\eta_{2},k)=0\.\] (A.3)
The UV convergence of bulk time integrals requires the Bunch-Davies initial condition at \(\eta\to-\infty\), namely
\[\lim_{\eta_{1},\eta_{2}\to-\infty\left(1-i\epsilon\right)}\Delta G_{\sigma}^{( h)}(\eta_{1},\eta_{2},k)=0\.\] (A.4)
Note that in contrast to (3.8), we do not impose any boundary condition at \(\eta_{0}\) for \(\Delta G^{(h)}\). Upon symmetrizing over \(\eta_{1}\leftrightarrow\eta_{2}\), we are left with
\[\Delta G_{\sigma}^{(h)}(\eta_{1},\eta_{2},k)=\mathcal{A}_{h}\,\sigma_{h}^{*}( \eta_{1},k)\sigma_{h}^{*}(\eta_{2},k)\.\] (A.5)
Here \(\mathcal{A}_{h}=\mathcal{A}_{h}(\kappa,\mu)\) is a helicity-dependent constant to be determined. Now we demand that this connected propagator is helically real after rotation, i.e.
\[\left[\tilde{C}_{\sigma}^{(h)}(\chi_{1},\chi_{2},k)\right]^{*}=\tilde{C}_{ \sigma}^{(-h)}(\chi_{1},\chi_{2},k)\.\] (A.6)
Due to the symmetry between \(\chi_{1}\leftrightarrow\chi_{2}\), we are free to pick \(\chi_{1}<\chi_{2}\) without loss of generality. Thus the Wick-rotated connected propagator becomes
\[\tilde{C}_{\sigma}^{(h)}(\chi_{1},\chi_{2},k) =\left[\sigma_{h}(ie^{i\epsilon}\chi_{1},k)+\mathcal{A}_{h}\sigma_{h}^{*}(i\chi_{1},k)\right]\sigma_{h}^{*}(i\chi_{2},k)\] \[=-\frac{H^{2}\chi_{1}\chi_{2}}{2c_{h,S}k}e^{-\pi\tilde{\kappa}}\] \[\qquad\times\left[W_{i\tilde{\kappa},i\mu}(-2e^{i\epsilon}c_{h,S}k\chi_{1})+\mathcal{A}_{h}W_{-i\tilde{\kappa},-i\mu}(2c_{h,S}k\chi_{1})\right]W_{-i\tilde{\kappa},-i\mu}(2c_{h,S}k\chi_{2})\.\] (A.7)
Notice that Whittaker functions have a branch cut along the negative real axis, which is why we have kept the \(e^{i\epsilon}\) factor to ensure that the branch cut is never crossed. However, the complex conjugation of Whittaker functions is most transparent when their argument lies along the positive real axis,
\[\left[W_{a,b}(z)\right]^{*}=W_{a^{*},b^{*}}(z)\,\quad\left[M_{a,b}(z)\right]^{*}=M _{a^{*},b^{*}}(z)\,\quad z\in\mathbb{R}_{+}\.\] (A.8)
To inspect the complex conjugation \([\tilde{C}_{\sigma}^{(h)}]^{*}\), we can first expand the Whittaker \(W\)-functions in terms of Whittaker \(M\)-functions,
\[W_{a,b}(z)=\frac{\Gamma(-2b)}{\Gamma\left(\frac{1}{2}-b-a\right)}M_{a,b}(z)+ \frac{\Gamma\left(2b\right)}{\Gamma\left(\frac{1}{2}+b-a\right)}M_{a,-b}(z)\,\] (A.9)
and then rotate away arguments lying below the branch cut using
\[M_{a,b}(-e^{i\epsilon}z)=-ie^{-i\pi b}M_{-a,b}(z)\.\] (A.10)
This yields
\[\tilde{C}_{\sigma}^{(h)}(\chi_{1},\chi_{2},k)=-\frac{H^{2}\chi_{1} \chi_{2}}{2c_{h,S}k}e^{-\pi\tilde{\kappa}}\] \[\qquad\times\left\{\frac{\Gamma(-2i\mu)^{2}}{\Gamma\left(\frac{1}{ 2}+i\tilde{\kappa}-i\mu\right)^{2}}\left[\mathcal{A}_{h}-\frac{ie^{\pi\mu} \Gamma\left(\frac{1}{2}+i\tilde{\kappa}-i\mu\right)}{\Gamma\left(\frac{1}{2}- i\tilde{\kappa}-i\mu\right)}\right]M_{-+}(1)M_{-+}(2)\right.\] \[\qquad+\left.\frac{\pi\operatorname{csch}(2\pi\mu)}{2\mu\,\Gamma \left(\frac{1}{2}+i\tilde{\kappa}-i\mu\right)\Gamma\left(\frac{1}{2}+i\tilde {\kappa}+i\mu\right)}\left[\mathcal{A}_{h}-\frac{ie^{-\pi\mu}\Gamma\left( \frac{1}{2}+i\tilde{\kappa}+i\mu\right)}{\Gamma\left(\frac{1}{2}-i\tilde{ \kappa}+i\mu\right)}\right]M_{--}(1)M_{-+}(2)\right.\] \[\qquad+\left.(\mu\to-\mu)\right\}\,,\] (A.11)
where we have abbreviated
\[M_{\pm\pm}(j)\equiv M_{\pm i\tilde{\kappa},\pm i\mu}(2c_{h,S}k \chi_{j})\,\quad M_{\pm\mp}(j)\equiv M_{\pm i\tilde{\kappa},\mp i\mu}(2c_{h,S}k \chi_{j})\,\quad j=1,2\.\] (A.12)
Now we can perform a complex conjugation on the Wick-rotated propagator and obtain
\[[\tilde{C}_{\sigma}^{(h)}(\chi_{1},\chi_{2},k)]^{*}=-\frac{H^{2} \chi_{1}\chi_{2}}{2c_{h,S}k}e^{-\pi\tilde{\kappa}}\] \[\qquad+\left(\mu\to-\mu\right)\right\}\,,\] (A.13)
while a helicity flip gives
\[\tilde{C}_{\sigma}^{(-h)}(\chi_{1},\chi_{2},k)=-\frac{H^{2}\chi_ {1}\chi_{2}}{2c_{h,S}k}e^{\pi\tilde{\kappa}}\] \[\qquad+\left(\mu\to-\mu\right)\right\}\,.\] (A.14)
By comparing the coefficients of (A.13) and (A.14), we conclude that reality of the connected propagator requires us to satisfy:
\[e^{\pi\tilde{\kappa}}\mathcal{A}_{-h}-e^{-\pi\tilde{\kappa}} \mathcal{A}_{h}^{*}=\frac{2i\pi}{\Gamma\left(\frac{1}{2}+i\tilde{\kappa}-i\mu \right)\Gamma(\frac{1}{2}+i\tilde{\kappa}+i\mu)}\.\] (A.15)
In principle, \(\mathcal{A}_{h}\) can be chosen to be an arbitrary complex constant as long as it satisfies (A.15). However, it is often convenient to pick a symmetric choice such that
\[\mathcal{A}_{-h}=-\mathcal{A}_{h}^{*}\,\] (A.16)
which yields (3.31) in the main text, i.e.
\[\mathcal{A}_{h}=\frac{i\pi\operatorname{sech}(\pi\tilde{\kappa})}{ \Gamma\left(\tfrac{1}{2}-i\tilde{\kappa}-i\mu\right)\Gamma(\tfrac{1}{2}-i \tilde{\kappa}+i\mu)}\.\] (A.17)
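This choice can be checked against the constraint (A.15) numerically. The sketch below assumes that flipping the helicity \(h\to-h\) amounts to \(\tilde{\kappa}\to-\tilde{\kappa}\), as suggested by the \(e^{\pm\pi\tilde{\kappa}}\) prefactors in (A.13) and (A.14); the values of \(\tilde{\kappa}\) and \(\mu\) are illustrative.

```python
# Check that the symmetric choice (A.17) satisfies the constraint (A.15),
# assuming the helicity flip h -> -h acts as kappa_t -> -kappa_t (an assumption
# suggested by the structure of (A.13)-(A.14)). Parameter values are illustrative.
from mpmath import mp, gamma, pi, sech, exp, conj

mp.dps = 30
kappa_t, mu = mp.mpf('0.7'), mp.mpf('1.3')

def A(kt):
    return 1j * pi * sech(pi * kt) / (gamma(mp.mpf(1) / 2 - 1j * kt - 1j * mu)
                                      * gamma(mp.mpf(1) / 2 - 1j * kt + 1j * mu))

lhs = exp(pi * kappa_t) * A(-kappa_t) - exp(-pi * kappa_t) * conj(A(kappa_t))
rhs = 2j * pi / (gamma(mp.mpf(1) / 2 + 1j * kappa_t - 1j * mu)
                 * gamma(mp.mpf(1) / 2 + 1j * kappa_t + 1j * mu))
print(lhs, rhs, abs(lhs - rhs))                    # lhs and rhs agree
```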
This concludes the construction of the desired connected propagator in the CCM scenario.
In the CC scenario, we turn off the chemical potential, and the general constraint (A.15) for the mode with \(n=|h|\) reduces to
\[\mathcal{A}_{-h,|h|}-\mathcal{A}_{h,|h|}^{*}=2i\cosh\pi\mu_{S}\.\] (A.18)
The simplest choice is to demand \(\mathcal{A}_{-h,|h|}=\mathcal{A}_{h,|h|}=-\mathcal{A}_{h,|h|}^{*}\), which gives
\[\mathcal{A}_{h,|h|}=i\cosh\pi\mu_{S}\.\] (A.19)
Note that this solution establishes the reality of \(C^{h}_{|h|,S}\), while that of \(C^{h}_{n,S},\ n>|h|\) follows trivially from acting with real derivative operators on \(C^{h}_{|h|,S}\), as we explained in the main text.
Finally, we point out that although the inclusion of heavy fields motivated the \(\Delta G\) piece that achieves the reality of \(C\), the same procedure works equally well for light fields except when \(\nu\) is a half-integer. It is easy to see that all the derivations above straightforwardly go through with the replacement of \(\mu\to-i\nu\). In fact, after choosing the specific choice (A.17) and replacing \(\mu\) by \(-i\nu\), the connected propagator becomes identical to the original bulk-bulk propagator in the \(\eta_{0}\to 0\) limit (c.f., (3.13)), i.e. \(C=G\) and \(F=0\). For the case of \(\nu=n/2\), \(n\in\mathbb{N}_{+}\), the equation (A.9) reaches a singularity, rendering the proof invalid. However, it is easy to check the validity of (A.17) by inserting it into (A.6), using the same derivation that appeared in Section 3.1. This crucially demonstrates that the proof of \(k_{T}\)-reality does not make any artificial distinction between light fields and heavy fields; we can extract the connected propagator part of all the internal lines regardless of their masses, and make use of their reality property to show the total-energy poles are always real.
## Appendix B Proofs via the in-in/Schwinger-Keldysh formalism
In this appendix we streamline the derivation of the reality and parity-odd factorisation theorems in the language of the more conventional in-in and Schwinger-Keldysh formalisms. Both of them focus directly on the observable, i.e. the \(n\)-point correlation function itself, without introducing intermediate quantities such as the wavefunction representation of the quantum state of the universe. Therefore, we shall translate the \(k_{T}\)-reality theorem for wavefunction coefficients (4.2) to the correlator level, and show that consequently all parity-odd correlators are factorised at tree-level. Here we will only deal with the CCM case, and the proof straightforwardly extends to the CC case as shown in the main text.
The perturbative computation of correlators can be organized using diagrammatics with slightly different Feynman rules from those of the cosmological wavefunction (see, for example, [8]). In short, every \(n\)-point correlator is computed by a set of diagrams with coloured vertices indicating whether they are time-ordered (black or "+") or anti-time-ordered (white or "\(-\)"). Thus in general, a diagram of \(V\) vertices will consist of \(2^{V}\) coloured copies which need to be summed. A change from a black vertex to a white one (i.e. from \(+\) to \(-\)) corresponds to a local complex conjugation plus a flip of the direction of all the 3-momenta flowing into the vertex. The internal propagators connecting these vertices are thus classified into four types according to the colour of their vertices:
\[\mathcal{G}_{++}(\eta_{1},\eta_{2},k) =\theta(\eta_{1}-\eta_{2})\varphi(\eta_{1},k)\varphi^{*}(\eta_{2},k)+\theta(\eta_{2}-\eta_{1})\varphi^{*}(\eta_{1},k)\varphi(\eta_{2},k)\,\] (B.1a) \[\mathcal{G}_{+-}(\eta_{1},\eta_{2},k) =\varphi^{*}(\eta_{1},k)\varphi(\eta_{2},k)\,\] (B.1b) \[\mathcal{G}_{-+}(\eta_{1},\eta_{2},k) =\varphi(\eta_{1},k)\varphi^{*}(\eta_{2},k)\,\] (B.1c) \[\mathcal{G}_{--}(\eta_{1},\eta_{2},k) =\theta(\eta_{1}-\eta_{2})\varphi^{*}(\eta_{1},k)\varphi(\eta_{ 2},k)+\theta(\eta_{2}-\eta_{1})\varphi(\eta_{1},k)\varphi^{*}(\eta_{2},k)\.\] (B.1d)
The external propagators are simply obtained from sending one of the vertices to the boundary in the internal propagators,
\[\mathcal{K}_{+}(\eta,k) =\mathcal{G}_{-+}(\eta_{0},\eta,k)=\varphi(\eta_{0},k)\varphi^{*}( \eta,k)\,\] (B.2a) \[\mathcal{K}_{-}(\eta,k) =\mathcal{G}_{+-}(\eta_{0},\eta,k)=\varphi^{*}(\eta_{0},k)\varphi( \eta,k)\.\] (B.2b)
All of these propagators satisfy the Bunch-Davies (or anti-Bunch-Davies for anti-time-ordered vertices) initial condition in the far past, while no boundary condition at \(\eta_{0}\) is imposed on them. Instead, they satisfy the conjugation rule
\[\left[\mathcal{G}_{\rm ab}(\eta_{1},\eta_{2},k)\right]^{*} =\mathcal{G}_{\rm(-a)(-b)}(\eta_{1},\eta_{2},k)\,\] (B.3) \[\left[\mathcal{K}_{\rm a}(\eta,k)\right]^{*} =\mathcal{K}_{\rm-a}(\eta,k)\,\qquad\qquad{\rm a,b=\pm}\.\] (B.4)
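These conjugation rules are easy to illustrate explicitly. The sketch below uses the standard Bunch-Davies massless mode function \(\varphi(\eta,k)=\frac{H}{\sqrt{2k^{3}}}(1+ik\eta)e^{-ik\eta}\) (a conventional choice, not fixed by the text above) and checks (B.3) for all four vertex colourings.

```python
# Illustration of the conjugation rule (B.3) for the Schwinger-Keldysh propagators,
# using the standard Bunch-Davies massless mode function
# phi(eta, k) = H/sqrt(2 k^3) (1 + i k eta) exp(-i k eta)  (an assumed convention).
import numpy as np

H = 1.0

def phi(eta, k):
    return H / np.sqrt(2 * k**3) * (1 + 1j * k * eta) * np.exp(-1j * k * eta)

def G(a, b, eta1, eta2, k):
    # Schwinger-Keldysh propagators G_{ab}, a, b = +1 / -1, as in (B.1)
    if (a, b) == (1, 1):
        return phi(eta1, k) * np.conj(phi(eta2, k)) if eta1 > eta2 else np.conj(phi(eta1, k)) * phi(eta2, k)
    if (a, b) == (1, -1):
        return np.conj(phi(eta1, k)) * phi(eta2, k)
    if (a, b) == (-1, 1):
        return phi(eta1, k) * np.conj(phi(eta2, k))
    return np.conj(phi(eta1, k)) * phi(eta2, k) if eta1 > eta2 else phi(eta1, k) * np.conj(phi(eta2, k))

eta1, eta2, k = -2.3, -0.7, 1.7
for a, b in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
    print((a, b), np.isclose(np.conj(G(a, b, eta1, eta2, k)), G(-a, -b, eta1, eta2, k)))
```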
Notice that the flip of vertex colour is accompanied by a flip of momentum in the kinematic structure in addition to the complex conjugation.
We start by noticing that the total-energy singularities can only reside in _monochromatic_ diagrams where the vertices are either all-black or all-white. This is simply a consequence of the factorised nature of the Wightman functions \(\mathcal{G}_{\pm\mp}(\eta_{1},\eta_{2},k)\), i.e. any polychromatic diagram is necessarily disconnected in time at the internal line with opposite colours. In the wavefunction formalism language, they correspond to contributions from the factorised third term in the bulk-bulk propagator (1.7) together with sewing disconnected \(\rho_{n}\) in (4.48). Since the all-white diagram is just the complex conjugation plus momentum inversion of the all-black diagram,
\[\left\langle\varphi(\mathbf{k}_{1})\cdots\varphi(\mathbf{k}_{n}) \right\rangle^{\prime}_{-}=\left\langle\varphi(-\mathbf{k}_{1})\cdots\varphi( -\mathbf{k}_{n})\right\rangle^{\prime\,*}_{+}\,\] (B.5)
we will focus on the all-black diagram without loss of generality,
\[\left\langle\varphi(\mathbf{k}_{1})\cdots\varphi(\mathbf{k}_{n}) \right\rangle^{\prime}_{+}=\int_{-\infty(1-i\epsilon)}^{0}\left[\prod_{v=0}^{ V}d\eta_{v}\,(+i)\lambda_{v}\,D_{v}\right]\left[\prod_{e=1}^{n}\mathcal{K}_{e,+} \right]\left[\prod_{e^{\prime}=1}^{I}\mathcal{G}_{e^{\prime},++}\right]\,\] (B.6)
where \(D_{v}\) is given by (4.1) for the CCM scenario. As with the derivation for the cosmological wavefunction \(\psi_{n}\), we perform a Wick rotation \(\eta=i\chi\),
\[\left\langle\varphi(\mathbf{k}_{1})\cdots\varphi(\mathbf{k}_{n}) \right\rangle^{\prime}_{+}=(-1)^{V}\int_{0}^{\infty}\left[\prod_{v=0}^{V}d \chi_{v}\,\lambda_{v}\,\tilde{D}_{v}\right]\left[\prod_{e=1}^{n}\tilde{ \mathcal{K}}_{e,+}\right]\left[\prod_{e^{\prime}=1}^{I}\tilde{\mathcal{G}}_{e^ {\prime},++}\right]\.\] (B.7)
The reality of the vertices and external propagators becomes automatic assuming scale invariance and massless external fields,
\[\tilde{D}^{*}_{v}=\tilde{D}_{v}\,\quad\tilde{\mathcal{K}}^{*}_{e,+}= \tilde{\mathcal{K}}_{e,+}\.\] (B.8)
For the internal Feynman propagators, reality can be achieved for the connected part via adding and subtracting a homogeneous solution to the equations of motion,
\[\tilde{\mathcal{G}}_{e^{\prime},++}=\tilde{\mathcal{C}}_{e^{ \prime},++}+\tilde{\mathcal{F}}_{e^{\prime},++}\,\quad\left(\tilde{\mathcal{C}}_{e^{\prime},++}\right)^{*}= \tilde{\mathcal{C}}_{e^{\prime},++}\,\] (B.9)
where
\[\tilde{\mathcal{C}}_{e^{\prime},++} \equiv\tilde{\mathcal{G}}_{e^{\prime},++}+\Delta\tilde{G}\,\] (B.10) \[\tilde{\mathcal{F}}_{e^{\prime},++} \equiv-\Delta\tilde{G}\.\] (B.11)
The solution of \(\Delta\tilde{G}\) is identical to that in the wavefunction approach (see Appendix A). Thus after expanding the internal propagators of the all-black diagram, we deduce that the maximally-connected contribution
\[\langle\varphi({\bf k}_{1})\cdots\varphi({\bf k}_{n})\rangle^{\prime C}_{+}=(-1) ^{V}\int_{0}^{\infty}\Bigg{[}\prod_{v=0}^{V}d\chi_{v}\,\lambda_{v}\,\tilde{D}_{ v}\Bigg{]}\left[\prod_{e=1}^{n}\tilde{\mathcal{K}}_{e,+}\right]\Bigg{[}\prod_{e^{ \prime}=1}^{I}\tilde{\mathcal{C}}_{e^{\prime},++}\Bigg{]}\,\] (B.12)
is purely real (see the diagrammatic illustration in Figure 6). Consequently, all the total-energy poles inside the full tree-level correlator must also be real,
\[\text{Im}\ \langle\varphi({\bf k}_{1})\cdots\varphi({\bf k}_{n})\rangle^{ \prime C}_{+}=\text{Im}\operatorname*{Res}_{k_{T}\to 0}\left[k_{T}^{m}\ \langle\varphi({\bf k}_{1})\cdots\varphi({\bf k}_{n}) \rangle^{\prime}\right]=0\,\ m,n\in\mathbb{N}\.\] (B.13)
Other total-energy singularities are also real when understood as analytically continued from the positive-\(k_{T}\) direction, see the discussions in Section 4.2. The factorisation of parity-odd correlators then directly follows from the \(k_{T}\)-reality,
\[\langle\varphi({\bf k}_{1})\cdots\varphi({\bf k}_{n})\rangle^{ \prime\text{PO}} =\frac{1}{2}\left[\langle\varphi({\bf k}_{1})\cdots\varphi({\bf k }_{n})\rangle^{\prime}-\langle\varphi(-{\bf k}_{1})\cdots\varphi(-{\bf k}_{n })\rangle^{\prime}\right]\] \[=\frac{1}{2}\left[\langle\varphi({\bf k}_{1})\cdots\varphi({\bf k }_{n})\rangle^{\prime C}_{+}+\langle\varphi({\bf k}_{1})\cdots\varphi({\bf k} _{n})\rangle^{\prime C}_{-}\right.\] \[\qquad-\left.\langle\varphi(-{\bf k}_{1})\cdots\varphi(-{\bf k} _{n})\rangle^{\prime C}_{+}-\langle\varphi(-{\bf k}_{1})\cdots\varphi(-{\bf k }_{n})\rangle^{\prime C}_{-}\right]+\text{factorised}\] \[=0+\text{factorised}\,\] (B.14)
where we have applied (B.5) and (B.13). Finally, we comment that in the in-in/Schwinger-Keldysh formalism, there is no explicit reference to the boundary time \(\eta_{0}\) except for the external propagator \(\mathcal{K}_{\pm}\), which can be trivialised by sending \(\eta_{0}\to 0\) first, and consequently the parity-odd correlators factorise in the same fashion for light fields and heavy fields. This generalises the proof of [37], which applied to massless and conformally-coupled mode functions, by allowing for general massive mode functions on the internal lines.
Figure 6: A five-point illustration of the \(k_{T}\)-reality in the in-in/Schwinger-Keldysh diagrammatics. The all-black vertices indicate the diagram is a fully time-ordered diagram \(\langle\varphi^{5}\rangle_{+}\). We then expand the internal Schwinger-Keldysh propagators (solid lines) into the connected (double lines) and factorised parts (dashed lines), and use the reality of the connected propagator to conclude the reality of the maximally-connected first term and the total-energy singularities therein.
## Appendix C Reality from Hermitian analyticity
In [37], the Cosmological Optical Theorem (COT) of [22] was used to deduce that contact diagrams of massless scalars arising from IR-finite interactions are purely real, which in turn implies that such diagrams do not contribute to parity-odd trispectra. This result itself suggests that such a trispectrum is a very nice probe of exotic inflationary physics. The same result was also derived in [86] using Wick rotations. It is therefore tempting to wonder if the more general results we have derived in this paper, namely the inclusion of exchange processes and massive fields, can also be understood using the COT (or more generally without invoking Wick rotations). In other words, can we deduce from the COT that the imaginary part of wavefunction coefficients is factorised? Let us first review the argument for contact diagrams. The COT states that
\[\psi_{n}(\{k\},\{{\bf k}\})+\psi_{n}^{*}(\{-k\},\{-{\bf k}\})=0\,\] (C.1)
where in the second term we flip both the energies and the momenta. Note that the COT relies on our ability to analytically continue the momenta away from the physical region. The COT follows from having real coupling constants (by unitarity) and _Hermitian analyticity_ of the massless bulk-boundary propagators, \(K(\eta,k)=K^{*}(\eta,-k)\) c.f. (4.3), and the spatial momenta which enter as \(i{\bf k}\). We refer the reader to [22, 23, 24, 29] for full details. If we have exact scale invariance then \(\psi_{n}\sim k^{3}\), where the cubic scaling is there to cancel the scaling of the momentum-conserving delta function in three spatial dimensions, and therefore (C.1) becomes
\[\psi_{n}(\{k\},\{{\bf k}\})-\psi_{n}^{*}(\{k\},\{{\bf k}\})=0\qquad\implies \qquad\text{Im }\psi_{n}=0\.\] (C.2)
Since it is the imaginary part of the wavefunction coefficient that contributes to the parity-odd correlator, c.f. (4.46), this tells us that contact diagrams do not contribute. Here we have used exact scale invariance of the wavefunction coefficients, which of course requires scale invariance of the bulk interactions, but also IR-convergence of the time integrals; otherwise, scale invariance is broken by the IR cut-off \(\eta_{0}\). In that case the wavefunction can indeed have an imaginary part [37].
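For orientation, the Hermitian analyticity property invoked above can be verified in one line for the massless bulk-boundary propagator. The sketch below assumes the standard massless de Sitter expression in the \(\eta_{0}\to 0\) limit, \(K(\eta,k)=(1-ik\eta)e^{ik\eta}\):

```python
import sympy as sp

eta, k = sp.symbols('eta k', real=True)

# standard massless de Sitter bulk-boundary propagator with eta_0 -> 0 (assumed form)
K = (1 - sp.I*k*eta) * sp.exp(sp.I*k*eta)

# Hermitian analyticity: K*(eta, -k) = K(eta, k)
print(sp.simplify(sp.conjugate(K.subs(k, -k)) - K))   # -> 0
```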
Now consider a four-point exchange diagram. For an \(s\)-channel diagram the COT reads [22]
\[\psi_{4,s}(\{k\},s,\{{\bf k}\})+\psi_{4,s}^{*}(\{-k\},s,\{-{\bf k}\})=\text{ factorised}\,\] (C.3)
where the factorised RHS depends on the three-point sub-diagrams that contribute to this four-point coefficient. In addition to unitarity and Hermitian analyticity of the bulk-boundary propagators and spatial momenta, this expression holds since the real part of the bulk-bulk propagator is factorised [24] (we remind the reader that in this paper our bulk-bulk propagator differs by a factor of \(i\) from that of [24] which is why the real part rather than the imaginary part is factorised). We can see this explicitly. Indeed,
\[G(\eta_{1},\eta_{2},k)=\sigma(\eta_{1},k)\sigma^{*}(\eta_{2},k)\theta(\eta_{1} -\eta_{2})+\sigma(\eta_{2},k)\sigma^{*}(\eta_{1},k)\theta(\eta_{2}-\eta_{1}) +\text{factorised}\,\] (C.4)
and therefore
\[G(\eta_{1},\eta_{2},k)+G^{*}(\eta_{1},\eta_{2},k) =[\sigma(\eta_{1},k)\sigma^{*}(\eta_{2},k)+\sigma^{*}(\eta_{1},k) \sigma(\eta_{2},k)][\theta(\eta_{1}-\eta_{2})+\theta(\eta_{2}-\eta_{1})]+ \text{factorised}\] \[=\text{factorised}\.\] (C.5)
This straightforwardly generalises to spinning fields. The LHS of the COT is therefore picking out the factorised part of the bulk-bulk propagator. The fact that the RHS is factorised suggests that such a
relation could be used to derive similar results to what we have found in this paper, namely that the imaginary part of wavefunction coefficients is factorised (under the assumptions of scale invariance and IR-convergence). We would then naturally want to use scale invariance to pick out the imaginary part like we did for contact diagrams, however this leads to
\[\psi_{4,s}(\{k\},s,\{\mathbf{k}\})-\psi_{4,s}^{*}(\{k\},-s,\{\mathbf{k}\})= \text{factorised}\,\] (C.6)
since under a scale transformation \(s\) is rescaled too. In general, wavefunction coefficients can contain both odd and even terms in \(s\), since the bulk-bulk propagator does not enjoy any symmetry property under \(s\to-s\), so we cannot conclude that the imaginary part is factorised from this expression alone.
However, this discussion suggests that instead we need to use a property of the bulk-bulk propagator that requires us to flip the sign of \(s\). This is precisely Hermitian analyticity of the bulk-bulk propagator which for massive scalars and spinning fields within the CCM set-up was discussed in detail in [24]. Let us explain how we can use this property to offer another perspective on the results we have found in this paper. As one might expect, the story for the CCM and CC scenarios are slightly different, and in each case light and heavy fields are slightly different. Let us therefore take each possibility in turn.
**Cosmological condensed matter physics.** For concreteness we will restrict ourselves to \(\tilde{\kappa}=0\) since Hermitian analyticity has been well-established in this case. To match the notation of [24] we define \(\sigma_{h}^{-}(\eta,k)\equiv-i\sigma_{h}(\eta,k)\) and \(\sigma_{h}^{+}(\eta,k)\equiv+i\sigma_{h}^{*}(\eta,k)\). We then have21
Footnote 21: These two solutions are complex conjugate to each other for both light and heavy fields.
\[\sigma_{h}^{-}(\eta,k) =-i\frac{H\sqrt{\pi}}{2}(-\eta)^{3/2}e^{i\pi(\nu+1/2)/2}H_{\nu}^ {(1)}(-c_{h,S}k\eta)\,\] (C.7) \[\sigma_{h}^{+}(\eta,k) =+i\frac{H\sqrt{\pi}}{2}(-\eta)^{3/2}e^{-i\pi(\nu+1/2)/2}H_{\nu}^ {(2)}(-c_{h,S}k\eta)\,\] (C.8)
and the helical bulk-bulk propagator can be written as
\[G_{\sigma}^{(h)}(\eta_{1},\eta_{2},k)=\sigma_{h}^{+}(\eta_{1},k)\sigma_{h}^{+ }(\eta_{2},k)\left(\frac{\sigma_{h}^{-}(\eta_{1},k)}{\sigma_{h}^{+}(\eta_{1},k )}-\frac{\sigma_{h}^{-}(\eta_{0},k)}{\sigma_{h}^{+}(\eta_{0},k)}\right)\theta (\eta_{1}-\eta_{2})+(\eta_{1}\leftrightarrow\eta_{2})\.\] (C.9)
Since the new mode functions only differ from the old ones by a phase, the bulk-bulk propagator is unchanged. Now as shown in [24], the mode functions satisfy the properties:
\[\left[\sigma_{h}^{+}(\eta,-k^{*})\right]^{*} =i\sigma_{h}^{+}(\eta,k)\,\] (C.10) \[\left[\sigma_{h}^{-}(\eta,-k^{*})\right]^{*} =i\sigma_{h}^{-}(\eta,k)+2\cos(\pi\nu)\sigma_{h}^{+}(\eta,k)\,\] (C.11)
from which one can show that the helical bulk-bulk propagator is anti-Hermitian analytic:
\[\left[G_{\sigma}^{(h)}(\eta_{1},\eta_{2},-k^{*})\right]^{*}=-G_{\sigma}^{(h)} (\eta_{1},\eta_{2},k)\.\] (C.12)
Note that this property holds for both light and heavy fields. Here \(-k^{*}\) indicates that we must include a small negative imaginary contribution, in addition to the negative real part, such that we do not cross any branch cuts. We can now attempt to put this property to good use to conclude some reality properties of wavefunction coefficients. Consider the general diagram structure of (4.4):
\[\psi_{n}(\{k\},\{p\},\{\mathbf{k}\})=\int\left[\prod_{v=1}^{V}d\eta_{v}\,i \lambda_{v}\,D_{v}\right]\left[\prod_{e=1}^{n}K_{e}(k_{e})\right]\left[\prod_ {e^{\prime}=1}^{I}G_{e^{\prime}}(p_{e^{\prime}})\right]\,\] (C.13)
where we have only indicated the energy dependence of the propagators. We can now use the various Hermitian analyticity properties to write
\[\psi^{*}_{n}(\{-k\},\{-p^{*}\},\{-\mathbf{k}\})=-\int\left[\,\prod_{v=1}^{V}d \eta_{v}\,i\lambda_{v}\,D_{v}\right]\left[\,\prod_{e=1}^{n}K_{e}(k_{e})\right] \left[\,\prod_{e^{\prime}=1}^{I}G_{e^{\prime}}(p_{e^{\prime}})\right]\,\] (C.14)
where we have used the fact that at tree-level we have \(V=I+1\). For wavefunction coefficients of massless scalars that adhere to exact scale invariance, i.e. have no \(\eta_{0}\) dependence and therefore scale as \(\psi_{n}\sim k^{3}\), we can then write
\[\psi^{*}_{n}(\{k\},\{p\},\{\mathbf{k}\})=\int\left[\,\prod_{v=1}^{V}d\eta_{v} \,i\lambda_{v}\,D_{v}\right]\left[\,\prod_{e=1}^{n}K_{e}(k_{e})\right]\left[\, \prod_{e^{\prime}=1}^{I}G_{e^{\prime}}(p_{e^{\prime}})\right]=\psi_{n}(\{k\}, \{p\},\{\mathbf{k}\})\,\] (C.15)
which establishes the reality of \(\psi_{n}\). Here it was crucial that there was no \(\eta_{0}\) dependence. If there is an \(\eta_{0}\) dependence then flipping the signs of all energies and momenta does not simply yield an overall minus sign since \(\eta_{0}\) itself carries a conformal weight. As we have discussed a number of times in this paper, the bulk-bulk propagator is independent of \(\eta_{0}\) only for light fields, whereas for heavy fields it does indeed depend on \(\eta_{0}\). This proof therefore applies to light fields only, and offers a complementary proof of the result we derived in Section 4 using Wick rotations.
For heavy fields this argument does not hold (and indeed we wouldn't expect it to hold since we have already seen that for heavy fields \(\psi_{n}\) is not real), but the discussion and the C-F decomposition we made in Section 3 suggests a clear way forward. Indeed, consider the connected bulk-bulk propagator c.f. (3.24),
\[C^{(h)}_{\sigma}(\eta_{1},\eta_{2},k)=\sigma^{+}_{h}(\eta_{1},k)\sigma^{+}_{h }(\eta_{2},k)\left(\frac{\sigma^{-}_{h}(\eta_{1},k)}{\sigma^{+}_{h}(\eta_{1}, k)}-i\cos(\pi\nu)\right)\theta(\eta_{1}-\eta_{2})+(\eta_{1}\leftrightarrow\eta_{2})\,\] (C.16)
where we have taken the minimal solution of \(\mathcal{A}_{h}\) which we derived in Appendix A. The relative minus sign between the two terms in the brackets comes from the fact we have written this propagator in terms of \(\sigma^{-}\) and \(\sigma^{+}\) rather than \(\sigma\) and \(\sigma^{*}\). Here we have written the solution for \(\mathcal{A}_{h}\) that is valid for both light and heavy fields. We can then use the Hermitian analytic properties of the mode functions to conclude that this connected bulk-bulk propagator is anti-Hermitian analytic:
\[\left[C^{(h)}_{\sigma}(\eta_{1},\eta_{2},-k^{*})\right]^{*}=-C^{(h)}_{\sigma} (\eta_{1},\eta_{2},k)\,\] (C.17)
which holds for both light and heavy fields. In Appendix A we saw that we could add any real term to \(\mathcal{A}_{h}\) while maintaining the reality of the connected bulk-bulk propagator after Wick rotation. Here we can also add any real term to \(\mathcal{A}_{h}\) and still realise the anti-Hermitian analyticity given (C.10). We can now run the same argument as above but with the full bulk-bulk propagator replaced by the connected one, and with the crucial difference that the connected propagator is independent of \(\eta_{0}\) such that we can use exact scale invariance,22 to conclude that \(\psi^{C}_{n}\) is real which complements the proof we derived in Section 4 using Wick rotations.
Footnote 22: Note that the connected propagator has the same conformal weight as the full bulk-bulk propagator, so it remains the case that \(\psi^{C}_{n}\sim k^{3}\) by scale invariance.
**Cosmological collider physics.** As expected, things are a little more involved for the CC scenario since the mode functions are more complicated; however, here we prove anti-Hermitian analyticity of the full bulk-bulk propagator, and of its connected part, for each helicity mode. To the best of our knowledge this has not been shown before.
As always we start with the \(n=|h|\) modes with mode functions given in (2.39) which are the same as in the CCM scenario up to some real factors, and integer powers of \(k\). We can therefore immediately conclude that the bulk-bulk propagator is anti-Hermitian analytic:
\[\left[G^{h}_{|h|,S}(\eta_{1},\eta_{2},-k^{*})\right]^{*}=-G^{h}_{|h|,S}(\eta_{1}, \eta_{2},k)\.\] (C.18)
As we discussed at length in Section 3, the bulk-bulk propagator for the other modes can be written in terms of \(\Phi^{h}_{|h|,S}\) by iteratively using the relation (2.36):
\[G^{h}_{n,S}(\eta_{1},\eta_{2},k)= \hat{\mathcal{D}}^{*}_{h,n}(i\eta_{1},k)[\Phi^{h*}_{|h|,S}(\eta_{ 1},k)]\hat{\mathcal{D}}^{*}_{h,n}(i\eta_{2},k)[\Phi^{h*}_{|h|,S}(\eta_{2},k)]\] \[\times \left(\frac{\hat{\mathcal{D}}_{h,n}(i\eta_{1},k)[\Phi^{h}_{|h|,S} (\eta_{1},k)]}{\hat{\mathcal{D}}^{*}_{h,n}(i\eta_{1},k)[\Phi^{h*}_{|h|,S}(\eta _{1},k)]}-\frac{\hat{\mathcal{D}}_{h,n}(i\eta_{0},k)[\Phi^{h}_{|h|,S}(\eta_{0 },k)]}{\hat{\mathcal{D}}^{*}_{h,n}(i\eta_{0},k)[\Phi^{h*}_{|h|,S}(\eta_{0},k)] }\right)\theta(\eta_{1}-\eta_{2})+(\eta_{1}\leftrightarrow\eta_{2})\.\] (C.19)
The main observation that allows us to make progress is that the differential operators \(\hat{\mathcal{D}}_{h,n}(i\eta,k)\) are Hermitian analytic. This follows straightforwardly from the fact that the differential operator in (2.36) is Hermitian analytic (and so acting with it iteratively also yields something Hermitian analytic). The same Hermitian analyticity relations we used above for CCM can then be used to infer that this bulk-bulk propagator is anti-Hermitian analytic:
\[\left[G^{h}_{n,S}(\eta_{1},\eta_{2},-k^{*})\right]^{*}=-G^{h}_{n,S}(\eta_{1}, \eta_{2},k)\.\] (C.20)
Note that in arriving at this conclusion we also had to make use of the fact that \(\hat{\mathcal{D}}_{h,n}(i\eta,k)\) are either purely real, for even \(n-|h|\), or purely imaginary, for odd \(n-|h|\), as we discussed in Section 3. This leads to
\[\frac{\hat{\mathcal{D}}_{h,n}(i\eta_{1},k)[\Phi^{h*}_{|h|,S}(\eta_{1},k)]}{ \hat{\mathcal{D}}^{*}_{h,n}(i\eta_{1},k)[\Phi^{h*}_{|h|,S}(\eta_{1},k)]}=(-1)^ {n-|h|}\.\] (C.21)
This is used to cancel the \(\cos(\pi\nu)\) pieces that come from (C.11). We can now use a similar argument as above for the CCM scenario, using the general wavefunction coefficients we discussed in Section 4 for the CC scenario, to conclude that the wavefunction coefficients of massless scalars exchanging such massive spinning fields are purely real, as long as the \(\eta_{0}\) dependence cancels out. As with the CCM case, this is only the case for light fields c.f. (3.44). This complements the proof we detailed in Section 4 using Wick rotations.
The situation for heavy fields now follows in the same way as for the CCM scenario: the connected bulk-bulk propagator for all modes is Hermitian analytic from which we can easily deduce that the connected part of wavefunction coefficients is purely real. In this CC scenario the connected bulk-bulk propagator is given by (3.48) with reality of this propagator after Wick rotation fixing
\[\mathcal{A}_{h,n}(\mu)=(-1)^{n-|h|}i\cos(\pi\nu_{S})\,\] (C.22)
which we have again written in a way that is valid for both light and heavy fields. We can then again use the Hermitian analyticity of \(\hat{\mathcal{D}}_{h,n}(i\eta,k)\), and the fact that these differential operators are purely real for even \(n-|h|\) and purely imaginary for odd \(n-|h|\), to conclude that this connected bulk-bulk propagator is anti-Hermitian analytic:
\[\left[C^{h}_{n,S}(\eta_{1},\eta_{2},-k^{*})\right]^{*}=-C^{h}_{n,S}(\eta_{1}, \eta_{2},k)\.\] (C.23)
**A caution on locality and the anomaly of Hermitian analyticity.** In all of the above proofs, locality of the general form of the interactions, meaning that the vertex operator \(D_{v}\) is composed of derivatives but
not inverse derivatives, is an implicit assumption. In momentum space, locality tells us that \(D_{v}\sim(ik)^{n}\) is a polynomial in the energy variable \(k=|{\bf k}|\) with \(n\in\mathbb{N}\). Such a local interaction vertex trivially satisfies Hermitian analyticity, i.e. \(D_{v}^{*}(-k)=D_{v}(k)\), leading to (C.14) and the reality theorems. However, if the assumption of locality is dropped, and \(D_{v}(k)\) is allowed to have a non-polynomial dependence on \(k\), there can be an intriguing "anomaly" of Hermitian analyticity after performing the time integrals, thereby invalidating the conclusions about reality of wavefunction coefficients.
To demonstrate the essential idea, consider the following toy model of a scale-invariant non-local interaction:
\[\mathcal{L}=\lambda\phi^{\prime 2}\frac{1}{1-\mathbf{\nabla}^{2}/(aH)^{2}} \phi^{\prime 2}\,\] (C.24)
where \(\mathbf{\nabla}^{2}=\delta_{ij}\partial_{i}\partial_{j}\) is the three-dimensional Laplacian in flat space. This non-local theory can be understood as describing massless \(\phi\) particles interacting via a Yukawa-like force. It can be derived from a UV theory of \(\phi\) and a massive scalar \(\sigma\), by integrating out \(\sigma\) and taking the leading order contribution from the time-derivative expansion [34]. The resulting \(s\)-channel contact wavefunction is23
Footnote 23: By “\(s\)-channel” here we mean the contribution to the full four-point wavefunction coefficient that has the same symmetries as an \(s\)-channel exchange diagram.
\[\psi_{4}(\{k\},s)=i\lambda(k_{1}k_{2}k_{3}k_{4})^{2}\int_{-\infty}^{0}d\eta \frac{\eta^{4}}{1+s^{2}\eta^{2}}e^{ik_{T}\eta}\.\] (C.25)
Taking the Hermitian-analytic conjugate, we obtain
\[\psi_{4}^{*}(\{-k^{*}\},-s)=-i\lambda(k_{1}k_{2}k_{3}k_{4})^{2}\int_{-\infty}^ {0}d\eta\frac{\eta^{4}}{1+s^{2}\eta^{2}}e^{ik_{T}\eta}\,\] (C.26)
indicating that \(\psi_{4}\) is anti-Hermitian analytic,
\[\psi_{4}(\{k\},s)+\psi_{4}^{*}(\{-k^{*}\},-s)=0\,\quad(\text{before time integration})\,\] (C.27)
as expected from (C.14). We might then be tempted to use scale invariance and conclude that such a wavefunction coefficient is purely imaginary. However, this is not the case. Indeed, we can carry on and compute the time integral to obtain
\[\psi_{4}(\{k\},s)=i\lambda\frac{(k_{1}k_{2}k_{3}k_{4})^{2}}{s^{5}}\left[i \left(\text{Ci}\frac{ik_{T}}{s}\sinh\frac{k_{T}}{s}-\text{Shi}\frac{k_{T}}{s} \cosh\frac{k_{T}}{s}+\frac{s}{k_{T}}+\frac{2s^{3}}{k_{T}^{3}}\right)+\frac{ \pi}{2}\cosh\frac{k_{T}}{s}\right]\,\] (C.28)
which, under Hermitian-analytic conjugation, becomes
\[\psi_{4}^{*}(\{-k^{*}\},-s)=i\lambda\frac{(k_{1}k_{2}k_{3}k_{4})^{2}}{s^{5}} \left[-i\left(\text{Ci}\frac{-ik_{T}^{*}}{s}\sinh\frac{k_{T}}{s}-\text{Shi} \frac{k_{T}}{s}\cosh\frac{k_{T}}{s}+\frac{s}{k_{T}}+\frac{2s^{3}}{k_{T}^{3}} \right)+\frac{\pi}{2}\cosh\frac{k_{T}}{s}\right]\,\] (C.29)
where
\[\text{Ci}(x)=-\int_{x}^{\infty}\frac{\cos t}{t}dt\,\quad\text{Shi}(x)=\int_{0}^ {x}\frac{\sinh t}{t}dt\,\] (C.30)
are the cosine integral and hyperbolic sine integral functions, respectively. Adding (C.28) and (C.29) together, we see that in contradiction to our naive expectation (C.27), (anti-)Hermitian analyticity is violated,
\[\psi_{4}(\{k\},s)+\psi_{4}^{*}(\{-k^{*}\},-s)=i\pi\lambda\frac{(k_{1}k_{2}k_{3 }k_{4})^{2}}{s^{5}}e^{-k_{T}/s}\neq 0,\quad(\text{after time integration})\.\] (C.31)
Consequently, the wavefunction coefficient \(\psi_{4}\) is complex in general (rather than being purely imaginary).
Such an anomaly of Hermitian analyticity stems from the non-analytic behaviour of the vertex function with respect to the energy variable \(s\): at any finite time \(\eta\), there are poles at \(s=\pm i/\eta\) which affect the definition of the Hermitian analytic image. One can choose to continue path-wise in the \(s\)-plane from either side of the poles, but since the integration time \(\eta\) ranges from the origin all the way to infinity, there is no uniform way to perform the continuation throughout time (see Figure 7). The Hermitian analytic properties of the integrand therefore do not imply some simple relations for the final integrated result when there is some element of non-locality in the interactions.
Things are somewhat clearer from the Wick rotation perspective which we have primarily used in this paper: the pole at \(\eta_{c}=i/s\) on the complex \(\eta\)-plane prevents the deformation of the integration contour into the Wick-rotated one. Instead, we must include half of the residue at \(\eta_{c}\) to account for the half-circle touring around the pole, which gives exactly (C.31). In fact, this is precisely how parity violation is generated in the non-local single-field EFT model [104].
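The residue bookkeeping in the last paragraph can be made explicit with a few lines of sympy (a minimal sketch; the symbol `P2` stands for the prefactor \((k_{1}k_{2}k_{3}k_{4})^{2}\)). As described above, each of the two terms in (C.31) picks up a half-circle contribution \(i\pi\) times the residue at \(\eta_{c}=i/s\), so their sum equals the full residue contribution:

```python
import sympy as sp

eta, s, kT, lam, P2 = sp.symbols('eta s k_T lambda P2', positive=True)

# time integrand of the non-local contact wavefunction (C.25); the prefactor i*lam*P2 is kept outside
integrand = eta**4 / (1 + s**2 * eta**2) * sp.exp(sp.I * kT * eta)

# simple pole of the non-local vertex on the positive imaginary eta-axis
res = sp.residue(integrand, eta, sp.I / s)
print(res)                                        # expected: -I*exp(-k_T/s)/(2*s**5)

# half-circle (i*pi*res) for each of the two terms -> full residue (2*pi*i*res) in total
anomaly = sp.simplify(sp.I * lam * P2 * (2 * sp.pi * sp.I) * res)
print(anomaly)                                    # I*pi*P2*lambda*exp(-k_T/s)/s**5, i.e. (C.31)
```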
## Appendix D Beyond scale invariance: reality in other FLRW spacetimes
The discussion in Appendix C suggests a strong connection between the reality properties we have derived in this work, namely that of wavefunction propagators after Wick rotation, and their Hermitian analyticity properties that have been discussed in the literature. Indeed, with the exact scale invariance of de Sitter space, these two properties are equivalent at the level of equations of motion. To see this equivalence more clearly, recall the equation of motion for a free bosonic field without the chemical potential term:
\[\hat{\mathcal{E}}(\eta,k)\sigma_{h}(\eta,k)=0\,\quad\hat{\mathcal{E}}(\eta,k) \equiv\eta^{2}\frac{\partial^{2}}{\partial\eta^{2}}-2\eta\frac{\partial}{ \partial\eta}+c_{h,S}^{2}k^{2}\eta^{2}+\frac{m^{2}}{H^{2}}\.\] (D.1)
Our reality theorems rely on the fact that after Wick rotation, \(\eta=i\chi\), the equation of motion remains real:
\[\left[\hat{\mathcal{E}}(i\chi,k)\right]^{*}=\hat{\mathcal{E}}(i\chi,k)\,\quad \text{or}\quad\hat{\mathcal{E}}^{*}(-i\chi,k)=\hat{\mathcal{E}}(i\chi,k)\.\] (D.2)
Hermitian analyticity, on the other hand, states that sending energies to minus energies, while doing a complex conjugation, is a unit transformation,
\[\left[\hat{\mathcal{E}}(\eta,-k)\right]^{*}=\hat{\mathcal{E}}(\eta,k)\,\quad \text{or}\quad\hat{\mathcal{E}}^{*}(\eta,-k)=\hat{\mathcal{E}}(\eta,k)\.\] (D.3)
Figure 7: Left panel: At any fixed time \(\eta\), one can choose to continue from \(s\) to \(-s^{*}\) by either passing above or below the singularity at \(i/\eta\). Right panel: After finishing the time integral, the singularities merge into a branch cut that goes all the way from zero to infinity, preventing a uniform definition of the analytic continuation.
For a scale-invariant free theory in de Sitter space, the equation of motion operator must be a function of the combination
\[\hat{\mathcal{E}}(\eta,k)=f(\eta\partial_{\eta},k\eta)\,\] (D.4)
which means (D.2) and (D.3) are equivalent (at least for a vanishing chemical potential):
\[\hat{\mathcal{E}}^{*}(-i\chi,k)=f^{*}(\chi\partial_{\chi},-ik\chi)=f^{*}(\eta \partial_{\eta},-k\eta)=\hat{\mathcal{E}}^{*}(\eta,-k)\.\] (D.5)
However, in the absence of scale invariance, reality and Hermitian analyticity are drastically different notions, since they constrain the functional dependence on different variables in \(\hat{\mathcal{E}}(\eta,k)\): reality constrains the _time_ dependence \((\eta,\cdot)\), whereas Hermitian analyticity constrains the _energy_ dependence \((\cdot,k)\). For fields with more complicated dispersion relations, as can appear in general FLRW spacetimes, the time and energy dependence decouples, and it is easy to find examples where one of them is satisfied but not the other. For instance, a scale-dependent mass alters the equation of motion to
\[\hat{\mathcal{E}}_{\alpha}(\eta,k)=\eta^{2}\frac{\partial^{2}}{ \partial\eta^{2}}-2\eta\frac{\partial}{\partial\eta}+c_{h,S}^{2}k^{2}\eta^{2 }+\left(\frac{m^{2}}{H^{2}}+\alpha k\right),\] (D.6)
which satisfies reality but not Hermitian analyticity, while a time-dependent sound speed
\[\hat{\mathcal{E}}_{\beta}(\eta,k)=\eta^{2}\frac{\partial^{2}}{ \partial\eta^{2}}-2\eta\frac{\partial}{\partial\eta}+\left(c_{h,S}^{2}+\beta \eta\right)k^{2}\eta^{2}+\frac{m^{2}}{H^{2}}\,\] (D.7)
satisfies Hermitian analyticity but not reality. Notice that if one assumes the usual dispersion relation \(\omega^{2}=c_{s}^{2}k_{p}^{2}+m^{2}\), Hermitian analyticity is valid for most theories in general FLRW spacetimes with a Bunch-Davies vacuum [24], whereas reality is more stringent and is only valid for certain spacetimes.
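These two statements can be checked mechanically. The sympy sketch below encodes each operator of (D.6) and (D.7) by its coefficient functions and applies the Wick rotation as \(\eta\to i\chi\), \(\partial_{\eta}\to-i\partial_{\chi}\) (a minimal illustration; the symbol names are ours):

```python
import sympy as sp

eta, chi, k, c, m, H, alpha, beta = sp.symbols('eta chi k c_s m H alpha beta', positive=True)

# encode  sum_n a_n(eta, k) d^n/d eta^n  by its coefficient list [a_2, a_1, a_0]
E_alpha = [eta**2, -2*eta, c**2*k**2*eta**2 + m**2/H**2 + alpha*k]       # (D.6): scale-dependent mass
E_beta  = [eta**2, -2*eta, (c**2 + beta*eta)*k**2*eta**2 + m**2/H**2]    # (D.7): time-dependent sound speed

def real_after_wick(op):
    # eta -> i*chi, d/d eta -> -i d/d chi: real iff every rotated coefficient has zero imaginary part
    rotated = [(-sp.I)**n * a.subs(eta, sp.I*chi) for n, a in zip((2, 1, 0), op)]
    return all(sp.simplify(sp.im(sp.expand(r))) == 0 for r in rotated)

def hermitian_analytic(op):
    # coefficient-wise check of conj(E(eta, -k)) == E(eta, k) for real eta, k
    return all(sp.simplify(sp.conjugate(a.subs(k, -k)) - a) == 0 for a in op)

print(real_after_wick(E_alpha), hermitian_analytic(E_alpha))   # True  False
print(real_after_wick(E_beta),  hermitian_analytic(E_beta))    # False True
```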
To see how far we can go without assuming scale invariance, consider theories in a power-law FLRW spacetime,
\[ds^{2}=a^{2}(\eta)(-d\eta^{2}+d\mathbf{x}^{2})\,\quad a(\eta)=\left( \frac{\eta_{*}}{\eta}\right)^{p}\,\quad p\geqslant 0\,\] (D.8)
where \(p=1\) corresponds to the case of inflation. The equation of motion operator reads
\[\hat{\mathcal{E}}(\eta,k)=\frac{1}{a^{2}(\eta)}\left(\frac{ \partial^{2}}{\partial\eta^{2}}-\frac{2p}{\eta}\frac{\partial}{\partial\eta} +c_{s}^{2}k^{2}\right)+m^{2}\.\] (D.9)
This operator is apparently Hermitian analytic for any \(p\in\mathbb{R}_{+}\), and under some assumptions the corresponding propagators are Hermitian analytic [24]. How about reality after Wick rotation? Replacing \(\eta\to i\chi\), we find
\[\hat{\mathcal{E}}(i\chi,k)=i^{2p}\frac{1}{a^{2}(\chi)}\left(- \frac{\partial^{2}}{\partial\chi^{2}}+\frac{2p}{\chi}\frac{\partial}{\partial \chi}+c_{s}^{2}k^{2}\right)+m^{2}\,\] (D.10)
which is real only for \(p\) being an integer. Thus the propagator realities for \(K_{e},G_{e^{\prime}}\) will continue to hold for \(p\in\mathbb{N}\). To further check the \(\psi_{n}\)-reality and \(k_{T}\)-reality, we need to examine how the vertex \(D_{v}\) transforms under Wick rotation:24
Footnote 24: We stress that the power of the scale factor in \(D_{v}\) is fixed by diffeomorphism invariance (gauge redundancy) rather than scale invariance (isometry), and so is the same for any FLRW spacetime.
\[D_{v} =a^{4-k_{v}-l_{v}}(\eta)\left[\left(\delta_{ij}\right)^{p_{v}} \left(\epsilon_{ijk}\right)^{q_{v}}\left(\partial_{\eta}\right)^{k_{v}}\left( i\,k_{i}\right)^{l_{v}}\right]_{\text{partially contract}}\] \[=i^{(-4+k_{v}+l_{v})p}\ i^{-k_{v}}\ i^{l_{v}}\times a^{4-k_{v}-l _{v}}(\chi)\left[\left(\delta_{ij}\right)^{p_{v}}\left(\epsilon_{ijk}\right)^ {q_{v}}\left(\partial_{\chi}\right)^{k_{v}}\left(k_{i}\right)^{l_{v}}\right]_ {\text{partially contract}}\] \[=i^{(k_{v}+l_{v})(p+1)}\times\text{real}\,\] (D.11)
which is real for arbitrary couplings (i.e. all \(k_{v},l_{v}\in\mathbb{N}\)) only if \(p\) is an odd integer.25 Therefore, we conclude with
**Corollary D.1**.: _In odd-power-law FLRW spacetimes with a Bunch-Davies vacuum and IR convergence, \(\psi_{n}\)-reality, \(k_{T}\)-reality and parity-odd factorisation theorems are still valid even in the absence of scale invariance._
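As a quick numerical sanity check of the phase factor \(i^{(k_{v}+l_{v})(p+1)}\) in (D.11) underlying this corollary, one can scan small values of the exponents:

```python
# reality of i**((k_v + l_v)*(p + 1)) for all k_v, l_v in N holds only for odd p
for p in range(6):
    ok = all(((1j) ** ((kv + lv) * (p + 1))).imag == 0 for kv in range(6) for lv in range(6))
    print(p, ok)   # True only for p = 1, 3, 5
```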
|
2305.19682 | Directional planar antennae in polariton condensates | We report on the realization of all-optical planar microlensing for
exciton-polariton condensates in semiconductor microcavities. We utilize
spatial light modulators to structure a nonresonant pumping beam into a
planoconcave lens-shape focused onto the microcavity plane. When pumped above
condensation threshold, the system effectively becomes a directional polariton
antenna, generating an intense focused beam of coherent polaritons away from
the pump region. The effects of pump intensity, which regulates the interplay
between gain and blueshift of polaritons, as well as the geometry of
lens-shaped pump are studied and a strategy to optimize the focusing of the
condensate is proposed. Our work underpins the feasibility to guide nonlinear
light in microcavities using nonresonant excitation schemes, offering
perspectives on optically reprogrammable on-chip polariton circuitry. | Denis Aristov, Stepan Baryshev, Julian D. Töpfer, Helgi Sigurðsson, Pavlos G. Lagoudakis | 2023-05-31T09:26:06Z | http://arxiv.org/abs/2305.19682v2 | # Reservoir microlensing in polariton condensates
###### Abstract
We report on the realization of all-optical planar microlensing for exciton-polariton condensates in semiconductor microcavities. We utilize spatial light modulators to structure a nonresonant pumping beam into a planoconcave lens-shape focused onto the microcavity plane. When pumped above condensation threshold, the system effectively becomes a directional polariton antenna, generating an intense focused beam of coherent polaritons away from the pump region. The effects of pump intensity, which regulates the interplay between gain and blueshift of polaritons, as well as the geometry of lens-shaped pump are studied and a strategy to optimize the focusing of the condensate is proposed. Our work underpins the feasibility to guide nonlinear light in microcavities using nonresonant excitation schemes, offering perspectives on optically reprogrammable on-chip polariton circuitry.
Guiding of light waves in planar structures at the microscale is an important step in the development of miniaturized optical technologies, like optical circuits and logic gates [1; 2]. As a consequence, a variety of different methods of light guiding and focusing have been realized in, e.g., metamaterials [3; 4; 5], surface plasmon-polaritons [6; 7; 8; 9], phase-change materials [10], and photonic crystals [11; 12]. However, a shortcoming of many optical devices is their weak Kerr nonlinear response. Techniques to instead guide highly nonlinear exciton-polariton waves [13; 14; 15; 16; 17] in the strong light-matter coupling regime could open new possibilities in future light-based circuitry and logic [18]. However, so far, guiding of polariton quantum fluids usually relies on resonant injection techniques or irreversible sample fabrication steps, which limits their flexibility in field-programmable on-chip technologies [19].
Exciton-polaritons (from here on _polaritons_) are bosonic quasiparticles from strongly coupled photonic and excitonic modes in semiconductor microcavities [20]. Polaritons inherit a light effective mass, around \(10^{-5}\) of the electron mass, from their photonic component, and strong interactions from their excitonic component. These features permit nonequilibrium Bose-Einstein condensation of polaritons at elevated temperatures [21; 22] and ballistic outflow from localized pumping spots [23]. Today, structured nonresonant excitation is an established method of inducing localized regions of polariton gain leading to the condensate amplification [23], trapping [24; 25], vortex manipulation [26], analogue simulators [27], and artificial lattices [28]. Flexibility in excitation control paves the way for creation of optical devices such as polariton transistors [14; 29], logic gates [30] and interferometers [31]. These practical applications, in conjunction with rapid advances in room-temperature materials [19; 25; 33], make polaritons prospective candidates for future technologies based on optical information processing or simulation [18].
Due to their large nonlinearities, direct resonant excitation of polaritons was shown as an all-optical method for switching [34; 35; 36] and to control their planar flow [13; 29; 32]. But resonant injection demands careful calibration of the excitation beam incident angle and energy which inhibits implementation in integrated on-chip technologies. Instead, nonresonant excitation schemes for controlling the state [37; 38] and flow [14; 39; 40] of condensate polaritons offers a more practical integration into polaritonic devices. Recently, it was proposed that nonresonant excitation beams structured into planar microlenses could act as directional antennas for polariton condensates [41]. The reported _reservoir optics_ scheme exploited the strong interactions and small effective mass of polaritons. In brief, the nonresonant pump photoexcites a co-localized exciton reservoir
Figure 1: Schematic of an all-optical polariton microlensing effect. Lens-shaped nonresonant pump profile generates a potential landscape for excited polariton waves, which follow the shape of concave lens and focus in the focal region. Yellow arrows in the potential landscape illustrate polariton condensate flow direction. The bottom layer shows experimentally measured cavity photoluminescence corresponding to the condensate density.
which in turn generates and blueshifts polaritons via repulsive polariton-exciton interactions [22]. Pumped above condensation threshold, the excited polaritons become macroscopically phase-coherent and thus can interfere constructively when they ballistically flow and refract out of the structured pumping region.
In this letter, we provide an experimental realization of said all-optical plano-concave microlens to guide and focus ballistically propagating condensate polaritons (see Fig. 1). We employ a strain-compensated planar microcavity with embedded InGaAs quantum wells [42]. The sample is held at 4 K and is pumped nonresonantly with a single-mode continuous-wave laser (see Supplementary Material for experimental parameters). Figures 2(a) and 2(b) show the recorded spatially resolved photoluminescence (PL), corresponding to the condensate density, at \(1.2\times P_{th}\) and \(1.5\times P_{th}\), where \(P_{th}\) corresponds to the pumping excitation density at condensation threshold. The white dotted lines indicate the boundary of the pumped region and the yellow arrows schematically illustrate the polariton flow. We observe that, with increasing excitation density, polaritons propagate further away from the excitation area in the direction dictated by the lens shape, implying more efficient focusing [see the scan of PL line profiles along the "lens axis" in Fig. 2(c)]. However, above \(1.5\times P_{th}\), the position and shape of the PL at the focal area start to become fixed, indicating a saturation effect. We note that the in-plane attenuation of the condensate flow is mostly due to the relatively short polariton lifetime, \(\approx 5\) ps.
In the lower pumping intensity regime, we observe a nonlinear increase of the condensate's in-plane propagation speed and population with increasing pump intensity just above threshold. The focal distance and the polariton wavevector in this regime were predicted to change in proportion to the excitonic reservoir density, which in turn is proportional to the pump intensity [41]. This can be observed in a narrow region of pump intensities between \(1.1\times P_{th}\) and \(1.5\times P_{th}\). In the higher pumping intensity regime, i.e. above \(1.5\times P_{th}\), where the reservoir saturates, we observe a slowing down of the change of the effective refractive index, so that the position of the brightest focal point remains virtually unaffected by the pump intensity. We point out that the horizontal PL modulations seen in the focal region [see Fig. 2(c)] can be attributed to weak multi-modal condensation within the pump spot. These results show that the strongest response of the polariton microlens system occurs at low pump intensities just above threshold.
Figure 2: Experimental PL for a planoconcave lens-shaped pump profile at two different intensities (a,b). Line profile along the "optical axis" of the lens for varying pump intensity (c). Simulated PL results for corresponding parameters (d,e) and k-space distribution (f). Each panel is normalized independently to increase visibility. Vertical red lines indicate the pumped region. White dotted lines represent pump profiles - the contour at which the pump intensity is half of its maximum. Yellow arrows illustrate the condensate flow direction.
A generalized Gross-Pitaevskii model describes the mean field dynamics of the pumped condensate coupled to an exciton reservoir [22] and can be used to qualitatively predict the focusing abilities of reservoir optics elements (see Supplemental Material). The results of numerically solving the coupled nonlinear partial differential equations from a random initial seed recreate the experimentally observed steady state real space PL [Fig. 2(d,e)]. We additionally provide in Fig. 2(f) the calculated \(k\)-space image of panel (e). The results show that a lens-shaped pump profile creates a polariton steady state condensate wavefunction characterized by a beam of polaritons propagating mostly in one direction (in this instance, the left direction).
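For readers who want to reproduce the qualitative behaviour, a minimal split-step sketch of such a simulation is given below. It integrates the widely used driven-dissipative Gross-Pitaevskii equation coupled to an incoherent reservoir rate equation; all parameter values, the grid, and the lens geometry are illustrative placeholders rather than the fitted values of the Supplemental Material.

```python
import numpy as np

# --- illustrative parameters (hbar = 1 units); placeholders, not the fitted sample values ---
m, gC, gR, R = 0.5, 0.005, 0.01, 0.05      # effective mass, polariton/reservoir interactions, scattering rate
gamma_C, gamma_R = 0.2, 1.5                # polariton and reservoir decay rates
P0 = 12.0                                  # pump amplitude, chosen above the condensation threshold

# --- grid ---
N, L = 256, 100.0                          # grid points, box size (um)
x = np.linspace(-L/2, L/2, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing='ij')
k = 2*np.pi*np.fft.fftfreq(N, d=L/N)
KX, KY = np.meshgrid(k, k, indexing='ij')
K2 = KX**2 + KY**2

# --- plano-concave pump: flat right face, circular arc carved out of the left face ---
T, aperture, Rcurv = 9.0, 20.0, 10.0       # thickness, aperture, radius of curvature (um)
xc = Rcurv - T + 1.0                       # centre of the carving circle, leaving ~1 um of pump on axis
rect = (np.abs(Y) <= aperture/2) & (X >= 0.0) & (X <= T)
carve = (X + xc)**2 + Y**2 < Rcurv**2
pump = P0 * (rect & ~carve).astype(float)

# --- split-step time evolution from a weak random seed ---
dt, steps = 0.02, 6000
rng = np.random.default_rng(0)
psi = 1e-2 * (rng.standard_normal((N, N)) + 1j*rng.standard_normal((N, N)))
nR = np.zeros((N, N))
kinetic = np.exp(-1j * K2 / (2*m) * dt)    # exact kinetic propagator in k-space

for _ in range(steps):
    nR += dt * (pump - (gamma_R + R*np.abs(psi)**2) * nR)     # reservoir rate equation
    V = gC*np.abs(psi)**2 + gR*nR + 0.5j*(R*nR - gamma_C)     # blueshift + saturable gain/loss
    psi *= np.exp(-1j * V * dt)                               # potential / nonlinear step
    psi = np.fft.ifft2(kinetic * np.fft.fft2(psi))            # kinetic step

density = np.abs(psi)**2   # steady-state density (PL proxy); a focused lobe is expected on the concave side
```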
Other reservoir lens parameters, such as aperture (N), thickness (T), and radius of curvature (R), can also be modified freely in our experiment which allows tuning the intensity and propagation distance of guided polaritons. Figure 3(a) shows a reservoir lens with a curvature radius larger than its half-aperture, resulting in a low condensate fraction in the focal region. Two bright condensate lobes appear within the pump region but are unable to constructively interfere outside. By decreasing the radius of curvature to half of lens aperture and increasing its thickness, we are able to create a highly focused region of polaritons [Fig. 3(b)]. In general, we observe that lenses of larger thickness, like in Fig. 3(b-d), result in more localized focal region moving closer to the pumping area. In contrast, thinner lenses like in Fig. 3(a) result in low focusing. We note that, since the reservoir lens is technically a directional antenna for polaritons, the condensate mode which forms within the pump region plays an essential role in the focusing abilities of the lens. Indeed, we see from all panels in Fig. 3 that complicated "wavefront sources" are being generated within the pump regions which subsequently form complicated refraction patterns, affecting the focusing ability of the lens. More sophisticated pumping geometries can potentially inhibit the different modes forming in the pump region.
For each case we scan the pump intensity from 1 to 4 P/P\({}_{\text{th}}\) and plot the _focusing strength_ of the lenses [see Fig. 3(e)], defined as the ratio of the average PL intensity in the focal region to that in the pump region, \(\Sigma_{F}/\Sigma_{L}\). Here, \(\Sigma_{L,F}=\frac{1}{S_{L,F}}\int_{S_{L,F}}I(\mathbf{r})d\mathbf{r}\), and \(S_{L,F}\) correspond to the areas enclosed by the yellow and white dotted boundaries in Fig. 3(a-d). We stress that \(P_{\text{th}}\) is different for different lens shapes. Around threshold the PL is mostly emitted from the pump region, giving small values of the focusing strength. Increasing the pump intensity, we observe how the thickness of the lens dictates its focusing ability, with the T = 3 \(\mu\)m lens having the smallest focusing strength and the T = 9 \(\mu\)m lens the highest. One can see that the pump intensity curves for (a,b,c) have a point of maximal focusing strength after which the value drops. Since, theoretically, the focusing strength as well as the condensate shape should stay the same beyond several \(P_{th}\) due to saturation, we attribute this effect in the experimental results to non-ideality in the generated pump profile (see supplementary for more details) and to phase-space filling [43] (i.e., a decrease of the light-matter coupling). This affects the polariton dispersion more strongly in the high-pump regime for lenses of small thickness (a,b), less for the medium-thickness lens (c), and is not visible for the thickest lens (d). We also demonstrate the decrease of the threshold with increasing lens thickness T by showing the input-output relationship of the average PL in the focal region as a function of pump density in Fig. 3(f).
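In code, this metric reduces to a ratio of masked means over the measured PL image. The helper below mirrors the definition of \(\Sigma_{F}/\Sigma_{L}\) above; the boolean masks marking the lens and focal areas are assumed to be given.

```python
import numpy as np

def focusing_strength(pl_image, lens_mask, focal_mask):
    """Sigma_F / Sigma_L: mean PL intensity in the focal area divided by that in the lens area."""
    sigma_L = pl_image[lens_mask].mean()
    sigma_F = pl_image[focal_mask].mean()
    return sigma_F / sigma_L
```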
We point out that thicker lenses have a larger gain region and therefore a lower pump density threshold for condensation (i.e., "activation") [see Fig. 3(f) where orange line rises first]. Therefore, the size of the lens can be used to fine-tune the balance between the condensate gain and blueshift coming from the exciton reservoir. As expected, the size of the estimated focusing region becomes smaller since, for standard lenses, it should scale with the wavelength of the wavefront passing through the lens. In general, we observe that thicker lenses pumped at high pump intensities have the strongest focusing abilities as we see in Figs. 3(d).
Summarizing, we have experimentally demonstrated all-optical and tunable planar microlenses capable of generating and focusing polariton condensate flows up to 25 \(\mu m\) away from the pump region using only nonresonant pumping.
Figure 3: (a-d) Cavity PL for four different lens shapes with curvature radius (R), aperture (N) and thickness (T). White dots outline the lens area and yellow dots the focal area (\(\Sigma_{L,F}\) respectively). Each panel is normalized independently. (e) Corresponding focusing strength \(\Sigma_{F}/\Sigma_{L}\) of each lens and (f) normalized intensities of the focal area \(\Sigma_{F}\) as a function of the normalized pumping intensity for each lens shape. The four thick dots on panels (e,f) indicate pump intensities at which PL is shown in panels (a-d).
Although referred to as _reservoir lenses_ [41], our pump patterns can also be regarded as optically reprogrammable directional planar antennas for polaritons (i.e., highly nonlinear light). Different lens shapes were presented and analysed, as well as the dependence of their focusing ability on the pump intensity. We showed that tuning of three lens parameters, namely curvature radius, aperture, and thickness, yields control over the focusing strength and the focal distance. Another advantage of working in the strong light-matter coupling regime is free tuning of the polariton light-matter composition through their photon and exciton Hopfield fractions [22]. For this purpose, wedged cavities [44] offer an additional parameter to tune the focusing ability of our reservoir lenses. Moreover, we have presented results for relatively short-lifetime polaritons (\(\tau_{p}\approx 5\) ps), which causes strong attenuation as they flow from the pump region. Higher quality cavities with \(\tau_{p}\sim 100\) ps [45] would allow polaritons to propagate further and possibly offer more focused polariton beams further away from the reservoir lens.
Polaritonic microlenses offer an advantage in all-optical techniques for polariton manipulation and add to the promising prospect of all-optical polariton computational devices [18].
See the supplementary material for sample specifications, theoretical model used for simulations, and additional data on evolution of the condensate over different pump densities.
The authors acknowledge the support of the European Union's Horizon 2020 program, through a FET Open research and innovation action under the grant agreements No. 899141 (PoLLoC) and No. 964770 (TopoLight). H.S. acknowledges the Icelandic Research Fund (Rannis), Grant No. 239552-051.
## I Author declarations
### Conflict of interest
The authors have no conflicts to disclose.
### Author contribution
**Denis Aristov** : Conceptualization (equal); Data curation (lead); Formal analysis (lead); Investigation (lead); Methodology (equal); Software (equal); Simulations (lead); Validation (equal); Visualization (lead); Writing - original draft (lead); Writing - review and editing (equal). **Stepan Baryshev** : Investigation (equal); Project administration (equal); Resources (equal); Writing - review and editing (equal). **Helgi Sigurdsson**: Supervision (equal); Validation (equal); Writing - review and editing (lead). **Julian D. Topfer** : Software (lead); Writing - review and editing (equal). **Pavlos Lagoudakis** : Conceptualization (lead); Resources (lead); Supervision (lead);Validation (equal); Writing - review and editing (equal).
|
2309.09744 | Towards Better Modeling with Missing Data: A Contrastive Learning-based
Visual Analytics Perspective | Missing data can pose a challenge for machine learning (ML) modeling. To
address this, current approaches are categorized into feature imputation and
label prediction and are primarily focused on handling missing data to enhance
ML performance. These approaches rely on the observed data to estimate the
missing values and therefore encounter three main shortcomings in imputation,
including the need for different imputation methods for various missing data
mechanisms, heavy dependence on the assumption of data distribution, and
potential introduction of bias. This study proposes a Contrastive Learning (CL)
framework to model observed data with missing values, where the ML model learns
the similarity between an incomplete sample and its complete counterpart and
the dissimilarity between other samples. Our proposed approach demonstrates the
advantages of CL without requiring any imputation. To enhance interpretability,
we introduce CIVis, a visual analytics system that incorporates interpretable
techniques to visualize the learning process and diagnose the model status.
Users can leverage their domain knowledge through interactive sampling to
identify negative and positive pairs in CL. The output of CIVis is an optimized
model that takes specified features and predicts downstream tasks. We provide
two usage scenarios in regression and classification tasks and conduct
quantitative experiments, expert interviews, and a qualitative user study to
demonstrate the effectiveness of our approach. In short, this study offers a
valuable contribution to addressing the challenges associated with ML modeling
in the presence of missing data by providing a practical solution that achieves
high predictive accuracy and model interpretability. | Laixin Xie, Yang Ouyang, Longfei Chen, Ziming Wu, Quan Li | 2023-09-18T13:16:24Z | http://arxiv.org/abs/2309.09744v1 | Towards Better Modeling with Missing Data: A Contrastive Learning-based Visual Analytics Perspective
###### Abstract
Missing data can pose a challenge for machine learning (ML) modeling. To address this, current approaches are categorized into feature imputation and label prediction and are primarily focused on handling missing data to enhance ML performance. These approaches rely on the observed data to estimate the missing values and therefore encounter three main shortcomings in imputation, including the need for different imputation methods for various missing data mechanisms, heavy dependence on the assumption of data distribution, and potential introduction of bias. This study proposes a Contrastive Learning (CL) framework to model observed data with missing values, where the ML model learns the similarity between an incomplete sample and its complete counterpart and the dissimilarity between other samples. Our proposed approach demonstrates the advantages of CL without requiring any imputation. To enhance interpretability, we introduce _CIVis_, a visual analytics system that incorporates interpretable techniques to visualize the learning process and diagnose the model status. Users can leverage their domain knowledge through interactive sampling to identify negative and positive pairs in CL. The output of _CIVis_ is an optimized model that takes specified features and predicts downstream tasks. We provide two usage scenarios in regression and classification tasks and conduct quantitative experiments, expert interviews, and a qualitative user study to demonstrate the effectiveness of our approach. In short, this study offers a valuable contribution to addressing the challenges associated with ML modeling in the presence of missing data by providing a practical solution that achieves high predictive accuracy and model interpretability.
Explainable AI, missing data, data imputation, contrastive learning
## 1 Introduction
Missing data indicates that the values in a dataset are not recorded due to a variety of factors such as inherent characteristics, privacy concerns, and difficulties in data collection. The missing data issue is prevalent in the field of machine learning (ML) and can be observed during both the training and inference phases, which makes effective modeling of the observed data challenging. If the missing values, denoted as "NaN", are included in the input of the model, the ML program will typically throw an error. Also, the exclusion of missing values may lead to weak statistical conclusions due to a reduction in sample size [1]. Consequently, effective resolution of problems caused by missing data is essential for modeling observed data with missing values.
In order to obtain better performance in ML modeling, existing methods focus mainly on dealing with missing data, and they are classified into two categories: _feature imputation_ [2, 3, 4] and _label prediction_ [5, 6, 7]. Feature imputation fills in missing values based on the observed data distribution (Figure 1), such as joint modeling with expectation-maximization [3], multiple imputation by chained equations (MICE) [2], and matrix completion [4]. However, the diverse missing mechanisms, missing proportions, and distributions of missing data [8] require different imputation models. In addition, assumptions about the data distribution may severely affect the imputation accuracy and introduce specific biases. Meanwhile, label prediction accomplishes the downstream ML task directly from the observed data with missing values. Particularly, the missing values are imputed by trainable parameters. Specifically, label prediction is an end-to-end strategy that adjusts these parameters by feeding only data with missing values in the training or inference phase and optimizing the prediction accuracy for downstream tasks (Figure 1). For example, the missing values are filled in by a parametric density [6] or by intermediate prediction results [7]. In other words, most existing label prediction methods include a trainable data imputation component in their predictions, which also introduces the aforementioned disadvantage of induced bias. To summarize, the existing methods more or less impute missing values and inevitably introduce biases (Figure 1).
Recent advances in Contrastive Learning (CL) have shown impressive performance in various domains [9]; in CL, an anchor (i.e., a reference point) sample is pulled together with its positive samples and pushed apart from the negative samples in the latent space. In this study, CL motivates us to address the issue of modeling observed data with missing values from a CL perspective: _for an incomplete sample, the ML model should learn the similarity between it and its complete counterpart and the dissimilarity between it and any other samples._ In particular, we formulate ML modeling with missing data as a subproblem of CL and propose a CL-based
framework to address the above-mentioned challenges. Our approach does not require data imputation. Instead, our approach introduces expert knowledge in both positive and negative strategies. Moreover, in contrast with label prediction approaches, our proposed framework integrates human expertise as a guide and co-adaption of humans and systems to facilitate knowledge generation [10]. Specifically, the framework consists of a full model and a semi-model, trained from different sets of the original dataset. The complete data records (i.e., data without missing features) are fed into the full model, and the full model should have a higher numerical performance than the semi-model. The semi-model receives incomplete records (i.e., data with specified features) and is trained to be consistent with the full model. In the inference phase, the semi-model works alone to make predictions in downstream tasks.
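To make the idea concrete, the snippet below sketches one way to train such a pair of models with an InfoNCE-style objective. This is a minimal PyTorch sketch of the general idea only: the encoder architecture, the temperature, and the names `full_model`/`semi_model` are illustrative, and the actual system additionally lets experts steer the positive/negative sampling interactively.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Illustrative embedding network; the paper's architecture may differ."""
    def __init__(self, d_in, d_emb=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_emb))
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)        # unit-norm embeddings

d_full, d_obs, tau = 10, 6, 0.1                         # all features / always-observed features / temperature
full_model = Encoder(d_full)                            # fed complete records
semi_model = Encoder(d_obs)                             # fed only the specified (never-missing) features

def contrastive_step(x_complete, obs_idx):
    """x_complete: (B, d_full) complete counterparts; obs_idx: indices of the observed features."""
    z_full = full_model(x_complete)                     # anchors from the full model
    z_semi = semi_model(x_complete[:, obs_idx])         # embeddings of the incomplete view
    logits = z_semi @ z_full.t() / tau                  # (B, B) cosine similarities
    targets = torch.arange(x_complete.size(0))          # the matching complete record is the positive
    return F.cross_entropy(logits, targets)             # other records in the batch act as negatives
```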
However, transferring the idea of CL to the case of missing data is nontrivial due to the following challenges. First, CL-based solutions do not necessarily converge to global optimization, and the training phase may be trapped in local optimization. Accordingly, measuring and diagnosing the training phase from different perspectives are concerns that previous work has rarely attempted to address. Second, when selecting positive and negative samples, they do not necessarily have the same dimensionality, posing a challenge when we want to fully utilize these incomplete data records. Considerable interactive modifications are required when using the sampling strategy [11, 12, 13, 14, 15] in our scenarios. To address the issues encountered when adapting CL concepts in our scenario, we further propose Contrastive Imputation Visualization (_CIVis_), a visual analytics system that facilitates the exploration, analysis, and development of CL-based modeling with missing data in a user-friendly and transparent manner. Specifically, to address the first challenge, we derive a series of interpretable techniques [16, 17] to measure and diagnose model status, enabling users to choose an appropriate sampling strategy and visualize the root cause of model collapse1[18] and the improvement/degradation of model performance. _CIVis_ adapts several well-established sampling strategies in selecting positive and negative samples to address the second challenge. In negative sampling, users can apply various sampling strategies on the training set and observe the changes in its distributions. Meanwhile, in positive sampling, users can interactively determine the mapping relationships between incomplete and complete data. Our main contributions are as follows.
Footnote 1: A typical local optimization in CL, in which the deep learning model generates the same embedding regardless of the fed instance.
* We present a novel CL-based framework that allows model prediction with incomplete data, which does not require data imputation and can be easily generalized to different ML models.
* We develop _CIVis_, a visual analytics system, based on the proposed CL-based framework and new visualization capabilities, to support users in understanding, analyzing, and improving ML modeling of observed data with missing values.
* We confirm the efficacy of our approach through two usage scenarios with different downstream tasks (i.e., value prediction of house prices and classification of credit card repayments), quantitative experiments, expert interviews, and a qualitative user study.
## 2 Related Work
### _Methods Dealing with Missing Data_
In order to obtain better performance in ML modeling, existing approaches mainly focus on dealing with missing data and address it in two ways: _feature imputation_ and _label prediction_. Specifically, the statistics-based feature imputation [2, 3, 4, 19] is useful when the dataset satisfies the algorithm assumptions. For example, joint modeling [3] is used to populate multiple imputed values with different parameters under a specific probability density function. Matrix completion [4] requires that the matrix satisfies certain otherwise mathematically unsolvable properties. MICE [2] and KNN [19] are subject to the assumption of data distribution. Meanwhile, deep learning-based feature imputation [20, 21] requires that the missing values fit the distribution of the observed data. Therefore, these algorithms still fill a value according to their assumptions when the missing values are not within a certain assumption, thus biasing the dataset.
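For reference, the two statistics-based baselines mentioned above are available off the shelf; a minimal scikit-learn illustration on toy data (not the experimental setup of this paper) looks as follows:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401  (enables IterativeImputer)
from sklearn.impute import IterativeImputer, KNNImputer

X = np.array([[1.0, 2.0, np.nan],
              [3.0, np.nan, 6.0],
              [7.0, 8.0, 9.0],
              [np.nan, 5.0, 4.0]])

X_mice = IterativeImputer(max_iter=10, random_state=0).fit_transform(X)  # MICE-style chained equations
X_knn = KNNImputer(n_neighbors=2).fit_transform(X)                       # KNN imputation
print(X_mice, X_knn, sep="\n")
```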
To overcome the limitations of feature imputation, label prediction methods [5, 6, 7] predict the outcome of downstream tasks with incomplete data. The tree-based model [5] is heuristic but time-consuming when the data volume is large. You et al. [7] explicitly captured the relationship between data records and features by Graph Neural Network, wherein the label predictor is based on the imputed data. Smeja et al. [6] imputed the corresponding positions in the feature map and predicted the labels by using a neural network. These imputed values and the neural network are trained to improve the prediction accuracy. Nonetheless, imputation becomes highly challenging to diagnose when the imputed data are unavailable. Instead of imputation, we formulate modeling observed data with missing values as a subproblem of CL in this study and allow for label prediction after removing dimensions that may contain missing data. In contrast with the black box nature of the existing label prediction methods, our proposed CL framework is transparent. Based on our proposed CL framework, experts can interactively choose a sampling strategy that is consistent with their domain knowledge.
### _Contrastive Learning_
CL is a popular method in self-supervised learning [14, 22, 23, 24, 25]. This method is used to relabel data
Fig. 1: Existing methods (i.e., label prediction and feature imputation) impute the missing data, while our approach directly utilizes the incomplete data with missing values.
and its augmentation (e.g., cropped image, flipped image) as positive and negative samples to fully utilize the data. CL first demonstrated impressive performance in the field of computer vision (e.g., classification) [14, 22]. Some works followed the InfoNCE loss [26] and proposed new positive and negative sampling strategies to extend CL to other domains [23, 24, 25]. Li et al. [15] applied CL to horizontal federated learning, wherein the global model and the local model of the last epoch generate positive and negative samples, respectively. Yao et al. [24] discussed CL in object detection, wherein the samples take the form of Regions of Interest (RoIs), and highly overlapping RoIs are negative samples of each other. In addition, negative mining has long been studied in Deep Metric Learning [11, 12, 13], with the aim of finding similar samples from different classes, denoted as hard negative samples. Suh et al. [12] defined an anchor for each class, and classes with close anchor points are neighbors and are negative samples of each other. Schroff et al. [11] defined a semi-hard sample as one that is slightly less distant from a positive sample than from its negative counterpart. In contrast with the above-mentioned work that treats a domain or a particular sampling strategy as a case-by-case study, whose validity and choice of sampling strategy depend on the particular usage scenario, _CIVis_ provides a comprehensive analysis of the different sampling strategies from the perspective of model performance and data distribution. Users can visualize the direct effect (i.e., data changes) and consequences (i.e., model changes) of choosing a specific sampling strategy.
### _Visualization for Machine Learning_
A substantial body of visualization work enhances different stages of ML. One branch focuses on the assessment and correction of labeled data to improve the quality of data used to train ML models [27, 28, 29, 30, 8]. These approaches combine multiple metrics, including those for missing data, to assess data quality, identify anomalies, and correct labels. Our work focuses on the use of missing data, rather than on assessment or correction.
Post hoc interpretable visualizations [31, 32, 33, 34, 35, 36] are regarded as another key technique for understanding the ML black-box. This mechanism is used to build bridges between the input data and the hidden layers of a particular model to reveal how high-level information is captured layer by layer. Our work is the first to provide post hoc interpretation capabilities for CL. Several visual analytics approaches have been proposed to develop novel ML models in a progressive manner [37, 38, 39, 40, 41], reducing the manual effort in iterative activities. Particularly, these models diagnose the training process, explain the model status, and refine unsatisfactory performance. The progressive pipeline of our visual analytics system starts with several epochs of training, diagnoses the model with visual cues, refines the sampling strategy, and proceeds to another round of model training.
Some studies [23, 42] explain CL from a theoretical perspective. Zhu et al. [23] provided visual evidence (i.e. statistical plots) to diagnose the performance of CL models and perform targeted feature transformations to fix errors. Inspired by [23], _CIVis_ also visualizes statistics, such as variance and mean of positive/negative sample distances, which facilitate model diagnosis during the training stage.
## 3 Observational Study
### _Experts' Conventional Practice and Bottleneck_
To understand how (well) the issue of modeling observed data with missing values is tackled in practice, we worked with a team of experts from a collaborating local AI provider institution, including a product manager (E1, male, 34), and two ML practitioners (E2, male, 28; E3, male, 27). A considerable part of their work involves providing Big Data and AI solutions to their clients for their specific business requirements. These experts shared with us a recent case that they encountered where they designed and developed a precision marketing strategy in a real estate scenario. Specifically, the real estate enterprise wanted to use its data to train an ML model to predict whether a particular customer would come and visit their real estate sales office and to understand the customer's characteristics. The real estate enterprise can find more suitable customers who are likely to visit the sales office from their large customer pool by using this trained ML model.
When utilizing ML in this context, the ML practitioners (E2 and E3) first identified training samples from the pool, namely, positive samples (i.e., those customers who have visited the sales office) and negative samples (i.e., those customers showing no interest in visiting the sales office). Although the total number of available training samples had reached approximately \(25,000\), not all of the samples were good enough for training purposes. Most of the records were collected from field sales representatives and described a rough picture of customer characteristics, such as _gender_, _age_, _occupation_, _income level_, _marital status_, and _family structure_. E1 commented that _"if we exclude these missing records from our modeling, it may lead to weak statistical conclusions"_. Specifically, removing a large number of data records with missing characteristics may not adequately represent the population as a whole. E1 further noted that _"in the surveys we use for real estate sales offices, non-response to certain items does not occur randomly; only those with certain characteristics refuse to respond to specific questionnaires"_. Removing these missing records may bias the model training, and the remaining records may not be representative of the entire population.
In discussing how to fill in more records for model training, E2 and E3 attempted data imputation to fill in these missing values. However, a consequence is that the team had no certainty about the model performance. Specifically, the missing data are present in the training and inference samples. During the training phase, E2 and E3 performed specific data imputation strategies, such as averaging and regression, to train a good enough ML model. In the inference phase, the team must also discover any missing data in the inference samples and ensure that the feature space is the same as the training samples to ensure a smooth operation of the ML model. Although various data imputation methods exist, E2 and E3 cannot easily choose an optimal imputation model because it requires a specific level of business knowledge. Moreover, the missing data of different variables jointly determine which imputation
model to use, and the use of different imputation models largely affects the quality of the data after data imputation. Therefore, the team was envisioning a more flexible way to deal with missing data rather than data imputation.
### _Experts' Needs and Expectations_
We discussed the possibility of dealing with the issue of modeling observed data with missing values without any form of data imputation. We interviewed experts (E1-E3) to identify their primary needs and concerns. At the end of the interviews, the need to fully exploit the dataset with missing values and steer a more transparent and interactive model that incorporates domain expertise emerged as a key theme in the feedback collected. We summarize their specific requirements below.
**R.1 Understand the overall missing features.** Prior label prediction methods have neglected the issue of missing rates and the importance of features. E2 argued that the absence of critical features can render even human judgments unreliable. Hence, it is crucial for domain experts to possess a comprehensive understanding of the number and location of missing features, in addition to discerning the importance of each feature. Such insights enable experts to make informed decisions on how to utilize features and skip records that contain substantial amounts of missing data.
**R.2 Support interactive model configuration.** In end-to-end label prediction models, domain knowledge typically takes the form of strict assumptions that are difficult to modify or change. E3 commented that _"analyzing the imputation performance of label prediction is more challenging"_ because he did not know how to integrate it with his domain knowledge. Therefore, the experts wanted to represent and integrate their domain knowledge by interacting with the model in a way that would promote its interpretability and further increase their confidence in the prediction results.
**R.3 Evaluate model performance during training.** A subsequent response of experts is to evaluate the performance of the model and understand how much knowledge has been learned from the data and how many features have been captured. However, a potential risk is that the induced knowledge may cause the model to crash or that the model does not converge well (i.e., model collapse). Evaluation from multiple aspects can facilitate the diagnosis of the training phase and how domain knowledge is induced. According to E2, _"when a model performs well in the training phase, we have more confidence in its predictive power in the inference phase"_.
**R.4 Inspect the quality of prediction results.** In the inference stage, the missing features are quite unpredictable and do not provide ground truth. The primary concern of experts is to assess the quality of prediction results based on the missing data. The experts need additional auxiliary information to decide whether to accept the prediction results. According to E3, _"we will consider the missing features, the model prediction, and its interpretation together to make the final decision"_. Therefore, checking the quality of the prediction results and assessing their credibility are crucial in such a co-adaptive human-computer decision-making process.
## 4 Approach Overview
To meet the above-mentioned requirements, we propose a CL-based visual analytics system, _CIVis_, to support experts in understanding, analyzing, and improving modeling observed data with missing values. Figure 2 describes the pipeline of our approach. Specifically, the experts first determine the model and features based on their missing data rates. Accordingly, _CIVis_ automatically divides the data into two subsets (i.e., semi-data and full data) with different feature dimensions (i.e., complete and incomplete). The data are fed to two models with the same architecture for pretraining, which are named "full model" and "semi-model", respectively. Subsequently, the experts identify the positive and negative samples in an interactive manner in the positive sampling (section 6.2) and negative sampling views (section 6.1). Then, the selected positive and negative samples and the full model are fed into our CL-based framework to train the semi-model. During the training phase, multiple criteria are presented in the training and comparison views (section 6.3) to facilitate checking embedding differences and diagnosing potential model collapse. In addition, _CIVis_ allows users to refine their sampling strategy by considering the performance of all metrics together to obtain a sufficiently satisfactory model. In the inference phase, the data are clipped and fed into a trained semi-model for prediction. To ensure that the results can be trusted, _CIVis_ employs a well-established interpretability technique, _GradCAM_[16],
Fig. 2: The system pipeline. _CIVis_ includes a back-end engine and a front-end visualization. The back-end engine serves our proposed contrastive framework, based on a motivation that _for an incomplete sample, the ML model should learn the similarity between its complete counterpart and the dissimilarity between any other samples_. The front-end visualization consists of four components (i.e., feature selection, sampling, explanation, and inference) and interacts with the back-end engine.
to calculate the contribution of features to the prediction results, as shown in the inference view (section 6.4). Thus, experts can use their knowledge and the predictive capability of the model to achieve co-adaptive decisions on the final prediction outcome for any incoming data (either complete or incomplete) without ground truth.
## 5 Back-end Model
To mitigate the influence of imputation bias, we present a novel approach in which the task of modeling observed data with missing values is formulated as a subproblem of CL. Our proposed framework is grounded in the principles of CL and is specifically designed to address this issue.
### _Background of CL_
CL has a key design principle: aligning positive samples and maintaining uniformity of data distribution [43]. Specifically, **alignment** represents the similarity between a record and its positive samples, while **uniformity** favors a uniform distribution of record embeddings, which depends on the negative samples. For example, InfoNCE [14] is a mathematical reflection of the design principles and the underlying CL loss function, which is described below.
\[L_{con}=-\log\frac{\exp(\sigma(q,k_{+})/T)}{\exp(\sigma(q,k_{+})/T)+\sum_{i=1}^{K}\exp(\sigma(q,k_{i})/T)}, \tag{1}\]
where \(\sigma\) denotes cosine similarity, \(q\) denotes a given sample, \(k_{+}\) denotes positive pairs, \(k_{i}\) denotes all negative pairs, and \(T\) is a temperature hyperparameter that controls the alignment level of the learned embeddings [44]. A positive pair shares the same label, while a negative pair has different labels. According to the analysis in [43], a large \(\sigma(q,k_{+})\) represents alignment, and a small \(\sigma(q,k_{i})\) denotes uniformity. Alignment and uniformity are achieved by committing to the "hardness" [23, 45] in positive and negative sampling, respectively. By "hardness" we mean hard samples that are similar and difficult to distinguish from each other. We introduce positive and negative sampling in CL as follows: for positive samples, we generate or find similar samples from the given samples [46, 47, 48]; for negative samples, we look for hard negative samples [11, 12, 13]. When an invalid sampling strategy is chosen, the beneficial hardness becomes harmful indistinguishability. An undesired local solution is reached when all the embeddings generated by the model are constant (i.e., model collapse). When we use the collapsed embeddings in fine-tuning, a negative impact is observed, and the performance of the model drops to the same level as the randomly initialized embeddings [23]. Accordingly, selecting a sampling strategy becomes critical. To our knowledge, only one work in the image domain [45] has proposed a method for locating the best positive sampling strategy, and no general measure for promising sampling strategies has been created.
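For concreteness, the following is a minimal PyTorch sketch of the InfoNCE loss in Equation 1 for a single query; the tensor shapes, the helper name, and the toy embeddings are illustrative assumptions rather than the exact implementation behind _CIVis_.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(q, k_pos, k_negs, temperature=1.0):
    """InfoNCE loss of Equation 1 for one query embedding.
    q: (d,) given sample, k_pos: (d,) positive sample, k_negs: (K, d) negatives."""
    pos = F.cosine_similarity(q, k_pos, dim=0) / temperature                 # sigma(q, k+)/T
    negs = F.cosine_similarity(q.unsqueeze(0), k_negs, dim=1) / temperature  # sigma(q, k_i)/T
    logits = torch.cat([pos.view(1), negs])
    # -log( exp(pos) / (exp(pos) + sum_i exp(neg_i)) )
    return -F.log_softmax(logits, dim=0)[0]

# Toy usage with random 128-d embeddings.
q = F.normalize(torch.randn(128), dim=0)
loss = info_nce_loss(q,
                     F.normalize(torch.randn(128), dim=0),
                     F.normalize(torch.randn(5, 128), dim=1),
                     temperature=0.07)
```

Minimizing this loss pulls the positive similarity up (alignment) and pushes the negative similarities down (uniformity), which is exactly the trade-off that the sampling strategies below aim to control.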
We present an illustrative example of predicting house prices based on critical features including _house style, built year_, and _street location_. We use the output embeddings of a house generated by a CL model as evidence to estimate its price. When houses share the same street location, built year, and style, they should have similar embeddings, indicating alignment. Conversely, when houses have any differences in these features, their embeddings should be distinguishable, indicating uniformity. To achieve both alignment and uniformity in CL, the concept of hardness is introduced. We showcase negative samples with hardness, which are houses sharing similar built years and street locations but possessing slightly different building styles and significantly different sale prices. We also showcase positive samples with hardness, which are houses also sharing similar features but possessing similar sale prices. Model collapse occurs when embeddings cannot accurately reflect house prices. For instance, when the embeddings of houses built in different years are identical, this leads to the loss of information about the year of construction and ultimately results in inaccurate price estimation.
### _Architecture of Back-end Model_
The back-end architecture is inspired by MOON [15] and MoCo [14], which we introduce first.
**MOON.** The architecture of MOON is shown in Figure 3(a), where a party \(i\) denotes a participant holding a batch of data, the local and global models share the same architecture (dashed box) but are fed with different data, and \(t\) represents the current epoch. The global and local models are simultaneously trained using all data and the partial data from a party, respectively. Thus, the global model is usually more accurate, and the outputs of the two models construct a positive pair, while the negative pair is generated by the local model of the last epoch (\(t-1\)). MOON optimizes the local model by minimizing the sum of the contrastive loss of the two pairs and the supervised loss of the downstream task.
**MoCo.** The architecture of MoCo is shown in Figure 3(b), where \(x^{query}\) is augmented from a given instance \(x\), the encoder and the momentum encoder share the same neural network architecture but have different update policies, a queue going through the entire dataset outputs \(x^{key}\), and \(q\) and \(k\) are embeddings. The momentum encoder is updated from the encoder's parameters with a small weight; \(q\) and the embedding of \(x\) construct a positive pair, while all \(k\) and the embedding of \(x\) construct negative pairs. MoCo optimizes the encoder by minimizing the contrastive loss.
**Architecture of our model.** The way MOON constructs positive and negative pairs suggests that a positive pair in CL can consist of embeddings from different models trained on different data. The momentum encoder in MoCo provides us with a prototype for generating embeddings of negative pairs that cover a variety of data and differ from the original embedding. When we transfer the global and local models in MOON to complete and incomplete data and use the momentum encoder in MoCo to generate negative embeddings, the label prediction task in our case can be formulated as a subproblem of CL: _For an incomplete sample, the ML model should learn the similarity to its complete counterpart and the dissimilarity to any other samples._ Figure 3(c) shows the specific model architecture of our approach, which includes **positive sampling** and learning similarity for alignment, **negative sampling** and learning dissimilarity for uniformity, and **contrastive loss** construction (i.e., the optimization objective). Specifically, we first measure the sample distance and sample the hard positive pairs. Then, a
new adaptive MoCo technique is used to identify the hard negative samples in the case of incomplete data. Finally, the similarity between positive pairs and the dissimilarity between negative pairs constitute the contrastive loss. Our objective function is a combination of task and contrastive losses to further consider the downstream task. The details will be explained in the following subsections.
#### 5.2.1 Positive Sampling
In our scenario, the positive pair \(x^{pos}\) (Figure 3(c)) is used as the "complete form" of the given data. The complete form of the data does not contain any missing values and dimensions (i.e., full data). Meanwhile, the given data come from the data after removing some feature dimensions (i.e., semi-data). We remove the unselected features (usually features with high missing rates) to generate semi-data, which come from two types of raw data: 1) raw data without missing values or 2) raw data with missing values. In the former case, the positive pairs of the semi-data are naturally the original data. In the latter case, for a semi-data, we score all the full data by similarity, with the highest being the positive sample.
Equation 2 measures the label and feature similarities between the semi-data and the full data. We follow the definition of hard positive samples (i.e., high label similarity with small embedding distance) [17] in classification and transfer it to regression (Equation 2), where \(\sigma\) denotes cosine similarity, \(label_{s}\) and \(label_{f}\) denote the labels of the semi-data and full data, \(x\) stands for either 1) the "embedding" representation: the embedding generated by the current model, or 2) the "raw" representation: the feature vector of the raw data, and \(x_{s}\) and \(x_{f}\) are \(x\) for the semi-data and full data, respectively. The former representation of \(x\) is preferred when we expect samples with similar embeddings to be positive, and the latter is preferred when we expect samples with similar features to be positive. In addition, the second representation of \(x\) is particularly useful when the pre-trained model performs poorly. We normalize the values of \(abs(label_{s}-label_{f})\) and \(abs(label_{a}-label)\) to values between \(0\) and \(1\).
\[score_{p}=\sigma(x_{s},x_{f})-\frac{abs(label_{s}-label_{f})}{\max\{abs(label_{s}-label_{f})\}} \tag{2}\]
However, the samples found in this limited search space may still be infeasible positive pairs. For example, a semi-data record may differ from all the full data, so even the closest full data record can hardly be regarded as its complete form. Accordingly, we set a condition that when \(score_{p}>\mathbb{E}_{(s,f)\sim P}[\sigma(x_{s},x_{f})]\), where \(P\) denotes all positive pairs, no suitable positive sample is found, and we set the coefficient \(\mu\) in Equation 5 (introduced later) to \(0\). By default, \(\mu\) is \(1\) when full data are fed and \(0.5\) when semi-data are fed, because the positive sample of a semi-data record comes from another part of the data than the full data. Specifically, \(\mu\) is used to adjust the contribution of the discriminant sample and stabilize the model training. Finally, the selected positive samples are fed into the full model to produce the embedding \(k_{+}\) in Equation 1.
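As an illustration, the following NumPy sketch searches for the best positive sample of one semi-data record by maximizing \(score_{p}\) of Equation 2; the array layout and helper names are assumptions, and the \(\mu\)-gating step described above is left out of the sketch.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def best_positive(x_s, label_s, X_f, labels_f):
    """Return the index of the full-data record maximizing score_p (Equation 2).
    x_s / X_f may be either embeddings or raw feature vectors, as in the text."""
    label_diff = np.abs(label_s - labels_f)
    label_diff = label_diff / (label_diff.max() + 1e-12)        # normalize to [0, 1]
    score_p = np.array([cosine(x_s, x_f) for x_f in X_f]) - label_diff
    return int(score_p.argmax()), score_p.max()
```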
#### 5.2.2 Negative Sampling
A negative sample for a given data point is a data point with a different label. MoCo automatically includes the fed data as part of the negative samples for each batch. To induce domain knowledge, the total set of negative samples is instead given interactively with the support of the front-end visualization described later, rather than being drawn from the batch data and potentially covering the entire dataset. This modification helps the experts specify the part of the data that they want the model to push away from in the embedding space.
To facilitate the experts in constructing the total set, we initialize two strategies for negative sampling, namely **random sampling** [44] and **hard negative sampling** (i.e., low label similarity with small embedding distance [17]), which we transfer to regression in Equation 3. The notation of Equation 3 is similar to that of Equation 2, except that the suffix \(a\) denotes a random anchor point against which the other samples are scored. The total set of negative samples can be interactively adjusted, for example, by adding a new negative sample or removing one after the initial sampling.
\[score_{n}=\sigma(x_{a},x)+\frac{abs(label_{a}-label)}{\max\{abs(label_{a}-label)\}} \tag{3}\]
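The hard strategy can be sketched analogously to the positive search, reusing the `cosine` helper and NumPy import from the previous sketch; the random strategy simply draws a uniform subset instead. Names and array layout are again illustrative assumptions.

```python
def hard_negative_scores(x_a, label_a, X, labels):
    """score_n of Equation 3: similarity to the anchor plus normalized label
    difference; records with the highest scores are taken as hard negatives."""
    label_diff = np.abs(label_a - labels)
    label_diff = label_diff / (label_diff.max() + 1e-12)
    sims = np.array([cosine(x_a, x) for x in X])
    return sims + label_diff

# e.g., keep the top fraction of lassoed records as the hard negative set:
# chosen = np.argsort(-hard_negative_scores(x_a, label_a, X, labels))[: int(0.6 * len(X))]
```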
Thereafter, we generate the embedding of negative samples according to MoCo, which consists of two parts (Figure 3(c)). First, MoCo maintains a queue to extend the
Fig. 3: Model architecture: (a) MOON inspires our design in terms of positive sampling; (b) we adopt MoCo’s negative sampling in our scenario; (c) our model, where \(N\) is the batch size. The architecture of our model differs from MoCo in two aspects. First, our positive embedding is generated by another model. Second, the queue precedes the momentum encoder to construct a batch of negative samples. (a) is a reprint of Figure 3 of [15] and (b) is a reprint of Figure 1 of [14].
sources of negative samples. In our model, the semi-data and full data have different dimensions, and our semi- and full-queue follow the same update mechanism. In each training batch, the current sample in the queue is regarded as a negative sample. The queue is then updated by the selected data, first in, first out. Subsequently, the negative samples are fed to the momentum encoder to generate the negative embedding \(k\) for Equation 1. We also have two momentum encoders in our model. After the embedding is generated, the momentum encoder is updated by the parameters of the full/semi model only, instead of back-propagation:
\[\hat{\theta}_{k}=m\,\hat{\theta}_{k}+(1-m)\,\theta_{k},\quad k\in\{s,f\}, \tag{4}\]
where \(\theta_{s}\) and \(\theta_{f}\) denote the parameters of the semi and full model, respectively; and the \(\hat{\theta}_{s}\) and \(\hat{\theta}_{f}\) denote the parameters of the semi and full momentum encoder, respectively. When \(m=1\), the semi- and full momentum encoders are equal to the semi- and full model, respectively. In summary, the hardness in the negative sampling can be adjusted in three ways: 1) the sampling strategy; 2) the samples to which the strategy applies; and 3) the hyperparameter \(m\) in the momentum encoder.
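In code, the queue and the momentum update could look like the following PyTorch sketch, assuming the semi/full models are `nn.Module`s and the queues are plain Python lists; the queue size is an illustrative choice rather than the value used in _CIVis_.

```python
import torch

@torch.no_grad()
def momentum_update(momentum_encoder, model, m=0.99):
    """Equation 4: the momentum encoder drifts slowly toward its model;
    a larger m means a slower update."""
    for p_hat, p in zip(momentum_encoder.parameters(), model.parameters()):
        p_hat.data.mul_(m).add_((1.0 - m) * p.data)

def update_queue(queue, batch, max_size=4096):
    """First-in, first-out queue of negative samples (kept separately for
    semi-data and full data because of their different dimensionalities)."""
    queue.extend(batch)
    del queue[: max(0, len(queue) - max_size)]
```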
#### 5.2.3 Objective Function
In the training phase, the loss function of the semi-model includes _task loss_ (\(L_{task}\)) and _contrastive loss_ (\(L_{con}\)):
\[L=L_{task}(q,y|\theta_{f})+M\cdot\mu\cdot L_{con}(q|\theta_{s}). \tag{5}\]
The task loss depends on the downstream task (e.g., the MSE loss of the house price prediction scales from zero to one). The observed data \(q\) and its label \(y\) are fed into the task loss to optimize \(\theta_{f}\), i.e., the full model. Contrastive loss (Equation 1) uses the resulting positive and negative embeddings (depending on the input \(q\)) to improve the performance of the semi-model \(\theta_{s}\). \(M\) is the weight of \(L_{con}\) and \(\mu\) is the coefficient for the positive pairs mentioned in section 5.2.1. Then, the full model is trained together with the semi-model. In the inference phase, the separately trained semi-model provides the prediction results for the downstream task.
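Putting the pieces together, one evaluation of Equation 5 could be sketched as follows, reusing `info_nce_loss` and the `F` import from the earlier sketch; the model interfaces (a prediction head on the full model and an embedding head on the semi-model) and the default weights are assumptions of this sketch.

```python
def combined_loss(q_full, q_semi, y, full_model, semi_model,
                  k_pos, k_negs, M=0.1, mu=0.5, T=1.0):
    """Equation 5: task loss on the full model plus M * mu times the
    contrastive loss on the semi-model's embedding of the clipped input."""
    task_loss = F.mse_loss(full_model(q_full), y)     # e.g., house-price regression
    q_emb = semi_model.embed(q_semi)                  # assumed embedding head of the semi-model
    con_loss = info_nce_loss(q_emb, k_pos, k_negs, temperature=T)
    return task_loss + M * mu * con_loss
```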
## 6 Front-end Visualization
Based on the preceding requirements and the proposed CL-based framework, we develop _CIVis_, a visual analytics system (Figure 4), which visualizes a set of information to facilitate the interactive selection of positive and negative sampling strategies and handle the issue of modeling observed data with missing values. We went through several iterations during the development process. The initial version of the system was designed based on the team's requirements and contained various functionalities to support flexible operation during training. Later, the question of whether the interface needed to be simplified was raised, followed by a lengthy discussion about reducing unnecessary elements and adding user guidance. The discussion resulted in the development of two design principles for the interface: 1)
Fig. 4: (1) The overview view enables the user to select various datasets and models, displays statistical information on the missing rate, and permits the specification of feature dimensions for the subsequent procedures. (2) The guidance view shows the current process and prompts for the next step. (3) The negative sampling view includes a circle-based glyph for examining the embedding distribution of the data records (right) and a control panel for further selecting the appropriate negative collection through the negative sampling strategies (left). (4) The positive sampling view displays the relationship between the data and its positive sample via a bipartite glyph and supports interactive adjustments. (5) The comparison view allows comparison and diagnosis during the training phase of both models. (6) The training view allows specifying hyperparameters and performing supervision during the training phase. (7) The inference view shows the predicted values of each test data and their activation. (8) Logs support model saving and switching between different trained models.
visual updates after the operations should be automatic to reduce user manipulations and direct their attention to the next step, and 2) visual design of a progress bar is necessary to provide guidance for users and direct them to different stages of model development. Therefore, we applied the two principles in the final version of the visual analytics system.
The whole pipeline can be summarized in the following steps: _data preparation, model configuration, training, and inference_. The layout of _CIVis_ allows the user to iterate through the entire pipeline from left to right. The leftmost column is like a control panel (Figure 4(1) and (8)). The user can select the dataset and the appropriate model structure, understand the overall missing features and select the necessary ones (**R.1**), recover the trained models, and switch between them. The middle column (Figure 4(2), (3) and (4)) is the CL-oriented area in which we design glyphs to reveal the sampling and learning procedures for CL (**R.2**). In the comparison view (Figure 4(5)), the multi-aspect metrics plot the training procedure (**R.3**). The rightmost column (Figure 4(6) and (7)) refers to the training and inference phases. The user can adjust hyperparameters, such as learning rate, monitor the training process in the training view, and check the predictions for each unlabeled test data in the inference view for increasing confidence (**R.4**). In the following subsections, we introduce the visual design of the negative sampling view, the positive sampling view, the training and comparison view, the inference view, and the guidance view.
### _Negative Sampling View_
The negative sampling view (Figure 4(3)) yields the total negative collection used for CL training. Previous studies [23, 44, 49] demonstrated their validity by projecting high-dimensional latent representations onto a sphere (Figure 5(a)). Inspired by their designs, we propose a circle-based visualization method to facilitate experts to inspect the embedding distribution of data records and further select the appropriate negative collection (Figure 5(c)) (**R.2**). We also show the raw information of the selected data (i.e., the values of the feature dimensions in the bottom right of Figure 4(3)) to induce domain knowledge.
**Visualization.** In Figure 5(c), we randomly choose one data record's embedding as the "anchor", drawn at the rightmost point with degree \(0\). The other embeddings are drawn on the innermost circle, where the angular difference from the anchor is the arccosine of the cosine distance. The green and red colors indicate semi-data and full data embeddings, respectively (Figure 5(1)). The role of the visualization is to convey uniformity, so the absolute positions of points carry no meaning. According to the uniformity principle, embeddings close to each other should receive more attention. To help check for "uniformity", we append two arcs, with the arc length indicating the variance on the left and the mean on the right (Figure 5(2)), to imply the uniformity of the selected data records. These two arcs grow from the bottom to the top and from the top to the bottom, respectively. However, potential visual clutter may appear on the innermost arc (i.e., many points are crowded together) when the model is not robust enough to discriminate the data records well. In this case, the nodes in the innermost circle will overlap, and their angular distribution is difficult to examine in a small area. To solve this problem, we plot the selected points on a larger scale as a preview (Figure 5(4)) and draw a purple curve (Figure 5(3)) to visualize the density distribution of the selected points along the preview. As mentioned in section 5.2.2, we provide two sampling strategies: random sampling and hard negative sampling. The sampled points are added to the total negative collection, indicated by the blue arcs around the inner part of the preview (Figure 5(5)). In addition, we project the original data records onto a 2-dimensional plane by _t-SNE_ [50] inside the innermost circle (Figure 5(6)). Upon user interaction with the innermost circle, each blue projected node is linked to its corresponding point on the innermost circle, and the blue projected nodes are also included in the negative sample set.
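The angular placement behind this glyph can be summarized in a few lines of NumPy; interpreting the "cosine distance" above as the cosine similarity to the anchor (so that the result is the geometric angle between embeddings) is an assumption of this sketch, as are the function and variable names.

```python
import numpy as np

def glyph_angles(anchor, embeddings):
    """Angular positions on the innermost circle: the anchor sits at 0 degrees
    and each embedding is placed at the arccosine of its similarity to it."""
    sims = embeddings @ anchor / (
        np.linalg.norm(embeddings, axis=1) * np.linalg.norm(anchor) + 1e-12)
    angles = np.degrees(np.arccos(np.clip(sims, -1.0, 1.0)))
    return angles, angles.mean(), angles.var()   # the mean/variance drive the two arcs
```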
**Interaction.** The negative sampling view presents the nodes and projections within the innermost circle. The selection of points of interest (PoI) allows the negative sampling view to show information such as the corresponding link to the projection, mean, and variance of the arcs, and a preview on the outermost circle. Only the projection nodes that correspond to the PoI are visible. The expert can select a strategy and adjust the sampling rate to negatively sample the PoI, resulting in the sampled nodes turning blue in the projection. Furthermore, the raw data corresponding to any point in the projection area can be displayed in the table on the right by clicking on it (Figure 4(3)). The expert can also search for a specific feature value by entering a keyword. Finally, the momentum coefficient in MoCo, which
Fig. 5: Design alternatives (a-b). (c) Visualization of the circle-based negative sampling view: From outermost to innermost, (1) indicates semi-data and full data; (2) indicates the variance and mean of the angle of the selected points; (3) indicates the distribution of the preview; (4) is an enlarged preview of the selected green/red points; (5) indicates the points added to the total negative collection; and (6) shows the projection of the selected data, where the points added to the negative sample collection are in blue. (a) is a figure reprinted from Figure 2 of [44]. (b) is the counterpart of (a) in the 2-dimensional space.
determines the update speed of the momentum encoder, can be adjusted using a parameter slider bar named "m". The expert can configure the negative sample collection by clicking the _set Negative_ button.
**Design Alternative**. The design is derived from the sphere [23, 44, 49] commonly used in the embedding analysis. Figure 5(a) depicts a 3-dimensional sphere, but a 128D unit sphere is implied in its original paper [44]. The purpose of the "glyph design" is to facilitate understanding of their method. However, this 3-dimensional design does not facilitate direct access to information, such as distances compared with common 2-dimensional visualizations. Accordingly, we improved the design of Figure 5(a) to Figure 5(b), where an arrow indicates the embedding projected on a 2-dimensional unit circle. The graphics directly shows the cosine distance between embeddings. Although an arrow naturally represents a vector, a node on the circle is sufficient to represent an embedding. In addition, we need to present the projection of the original data, as shown in our final design (Figure 5(c)).
### _Positive Sampling View_
The positive sampling view (Figure 4(4)) is designed to generate and examine the positive samplings for each semi-data (**R.2**). Specifically, positive sampling is a many-to-many mapping, in which the semi-data look for the most similar full data, which is naturally a bipartite graph. However, directly drawing a bipartite graph with many nodes or edges often causes severe visual clutter, which will prevent experts from clearly understanding the mapping and further adjusting it according to their domain knowledge. Therefore, we develop a Sankey-like visualization [51] to describe many-to-many mapping relationships.
**Visualization**. To begin, we partition the semi-data and full data into \(K\)-labeled bins, where the node labels correspond to the bins, and maintain a bipartite graph. The selection of \(K\) is determined based on experimental results for a specific dataset, which enables us to visualize the Sankey-based extended glyph clearly, as shown in Figure 6(b). In Figure 6(a), the node on the left side denotes a bin of semi-data. The diameter of the node is equal to the width of the corresponding intermediate link, which represents the mapping between bins. The width of the link indicates the amount of data in the bin that satisfies the mapping. We employ a Bezier curve [52] to illustrate the links clearly, where the control points are located on the upper node on the left and the lower node on the right. A curve is drawn next to the nodes to display the amount of data in the bin, i.e., a histogram of the data labels. By clicking on the nodes, we can present the maximum and minimum values and feature averages in the bins.
**Interaction.** The positive sampling view initializes the mapping obtained from the search method mentioned in section 5.2. Two interactions are related to the generation of the default mappings. The _set Row_ bar is used to adjust the number of bins (i.e., the number of nodes on both sides) according to the downstream task (e.g., for classification), and the number of bins and the number of classes should be equal. Furthermore, the drop-down box provides _embedding_ and _raw_ options (introduced in section 5.2.1) to specify the representation used in the algorithm. Clicking the _set Positive_ button will finalize the mapping between the semi-data and the full data.
**Design Alternative**. We performed a functional simplification in the positive sampling view during the design iterations. The Sankey-like alternative (Figure 6(b)) describes a one-to-one mapping of an edge in the positive sampling view. Experts can adjust a mapping when they find an anomaly. However, adjusting a mapping requires at least seven steps, and checking the mapping one by one is tedious and confusing for experts. After discussion with the experts, we decided that the automatic method for the positive sampling (section 5.2.1) provides a promising mapping for the positive sampling, so fine-grained adjustment is a dispensable requirement, especially when it requires an unacceptable manual effort. Consequently, we remove this option to reduce the number of operational steps in our system. In addition, we went through one design iteration when we used Sankey glyphs to draw the many-to-many bipartite graph in the positive sampling view (Figure 6(c)). The drawback of Sankey graphs is that the thin links and rectangles are almost invisible. The narrow links typically lead to a bin with a range of different labels and reveal potential anomalies. Therefore, we improved the design by limiting the diameter and width of the nodes and links. Those thin but important links are clearly displayed, and the smaller nodes can be easily interacted with.
### _Training and Comparison View_
The training and comparison view (Figure 4(5) and (6)) controls and supervises the training phase (**R.3**). The expert can adjust hyperparameters, including _Learning rate_, _Epoch_,
Fig. 6: (a) The node on the left denotes a bin containing semi-data, and the node on the right represents a bin containing full data. The links on both sides represent a bin-to-bin mapping relationship. The node diameter and link width represent the number of mapping relationships. (b) and (c) are design alternatives.
_Temperature_ in Equation 1, and \(M\) in Equation 5, and control the training state via the _train_ and _stop_ buttons. Then, _CIVis_ updates the training-related metrics in real time.
In the training view, the training loss in _Train_ and the validation error in _Validation_ are updated; in the comparison view, _Var_neg_ indicates the variance of the negative scores (the cosine distance between the fed data and the negative collection), _Mean_neg_ indicates the mean of negative scores, and _Mean_pos_ denotes the mean of the positive scores. After the training is completed, the circle glyph on the left side of the view and _Activation_ in the lower right corner will be updated (Figure 4(5)). The encoding of the circle glyph is the same as the innermost circle of the circle-based glyph in _negative sampling view_. The _Activation_ describes the contribution of a feature to the predictive result.
The current and next rounds of the training yield the "first" and "second" models. The above-mentioned visual cues in the comparison view are indicated in purple and blue to facilitate comparison. Hovering any figure in the training and comparison view puts the curve in the front to avoid visual clutter, and the exact value is displayed by the tooltip. At this point, the trained model (i.e., semi-model) has gone through the entire pipeline of _CIVis_ and can be saved by clicking on the _add_ button, which appends a single line to _Logs_ (Figure 4(8)). Clicking on a single line switches the system to the corresponding model. Meanwhile, clicking on the _delete_ button also deletes the single line of records. Thus, _CIVis_ allows experts to analyze and refine a model in an iterative and interactive way.
**Design Alternative**. In the comparison view, the layout of two circular glyphs undergoes one design iteration when they share the same center but have different radii. This alternative design shows advantages in comparing the distribution of points within the same central angle. However, it does not take into account the visual misinterpretation caused by the different radii, for which the inner circumference covering the same angle is naturally shorter. In other words, even if the inner distribution of points is similar to the outer one, this alternative can cause the misconception that the inner distribution is visually narrower. Therefore, we improved the layout to ensure that the circle glyphs had the same shape and placed them side by side, which eliminated the visual misunderstanding and caused little inconvenience when comparing point distributions within the same range.
### _Inference View_
_CIVis_ preserves a trained semi-model that experts can use in the inference phase (Figure 4(7)) (**R.4**). Although in reality, we cannot guarantee which features are missing in the inference phase, and no ground truth exists, we discard those data records that have missing values in the selected features for experimental simulation. The expert can re-run the feature selection process if the discarded data unexpectedly appear. The remaining data records (i.e., those with values in the selected features) are fed into the trained semi-model, and the _inference view_ will display the predicted values and activation values based on GradCAM [16]. In the upper part, each row refers to the prediction of a data record; in the lower part, each point refers to a feature importance. The expert can further check whether the appropriate features are activated to verify the predicted values.
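As an illustration of how such activation values could be obtained for the CNN in scenario I, the following is a minimal Grad-CAM-style sketch in PyTorch; the chosen layer, the hook-based implementation, and the scalar regression output are assumptions, not the exact code behind _CIVis_.

```python
import torch

def grad_cam(model, conv_layer, x):
    """Grad-CAM for one record: channel weights are the spatially averaged
    gradients at conv_layer, and the map is the ReLU of the weighted activations."""
    acts, grads = {}, {}
    h1 = conv_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = conv_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    try:
        model(x.unsqueeze(0)).squeeze().backward()      # scalar prediction (regression head)
        w = grads["g"].mean(dim=(2, 3), keepdim=True)   # global-average-pool the gradients
        cam = torch.relu((w * acts["a"]).sum(dim=1)).squeeze(0)
        return cam / (cam.max() + 1e-12)                # normalized importance map
    finally:
        h1.remove(); h2.remove()
```

The normalized map can then be related back to the input feature positions to yield the per-feature contributions displayed in the view.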
### _Guidance View_
_CIVis_ includes the entire process of training an ML model and many components must be integrated. In order to enhance clarity and mitigate confusion in the utilization of the system, prescribing guidance is integrated into our approach, as recommended by Ceneda et al. [53]. This prescribing guidance suggests view-by-view actions for optimal utilization of the system. The completion of one view automatically triggers the update of the next view, and the guidance view at the top of the positive sampling view (Figure 4(2)) shows the current progress. For example, when the selection of models, datasets, and features change the positive and negative sampling views, the guidance progress goes from _Specification_ to _Sampling_. In particular, clicking _switch_ in _Logs_ triggers an update of the positive and negative sampling views, and the guidance progress returns to the _Sampling_ step because _CIVis_ supports progressive development. Finally, when experts test the saved model in the inference view, they reach the end of the pipeline. Across the entire pipeline, the updates that are triggered ought to attract the users' attention and direct them towards the subsequent view that requires their focus.
## 7 Evaluation
In this section, we evaluate the effectiveness of _CIVis_ from several aspects of the visualization community [54]. First, we describe two usage scenarios with our collaborative experts (E1-E3) who participated in our user-centered design process. Second, we conduct a quantitative experiment comparing _CIVis_ with several selected data imputation baseline methods. Third, we interview the experts to obtain their feedback on _CIVis_. Finally, we conduct a qualitative user study to further evaluate the effectiveness of _CIVis_.
### _Usage Scenario I: House Price Prediction_
The modeling data of the real estate industry mentioned in the observational study are subject to confidentiality rules and cannot be used as experimental data. As an alternative, E1 suggested that we could take a publicly available house price dataset [55], which is similar to their scenario. The dataset includes 79 feature dimensions, and the task is to predict house prices using a convolutional neural network (CNN). In particular, this usage scenario illustrates how the expert (i.e., E2) understood the training phase, steered the CL-based framework, and induced his knowledge into the trained model to deal with the issue of modeling observed data with missing values with _CIVis_. The value types of the features include numbers and strings. Consequently, we apply dummy encoding in _Pandas_ [56]. Specifically, if a feature dimension has four possible values, then we extend the feature to four dimensions and use a one-hot encoding. We split the first \(10\%\) of the dataset as the validation set. We also split a test set (\(10\%\)) for the inference phase to facilitate the evaluation of the model performance. The CNN model consists of three 2-dimensional convolutional layers and three fully connected layers.
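A minimal Pandas sketch of this preprocessing might look as follows; the file name, the label column, and the exact placement of the test split are assumptions (the text only states that the first 10% is used for validation and a further 10% is held out for inference).

```python
import pandas as pd

df = pd.read_csv("train.csv")                        # assumed file of the house-price dataset [55]
X = pd.get_dummies(df.drop(columns=["SalePrice"]))   # one-hot encode string-valued features
y = df["SalePrice"]                                  # "SalePrice" assumed as the label column

n_val = int(0.1 * len(df))
n_test = int(0.1 * len(df))
X_val, y_val = X.iloc[:n_val], y.iloc[:n_val]                                  # first 10%
X_test, y_test = X.iloc[n_val:n_val + n_test], y.iloc[n_val:n_val + n_test]    # held-out 10%
X_train, y_train = X.iloc[n_val + n_test:], y.iloc[n_val + n_test:]
```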
E2 first specified the features to be used and the "to-be-improved" model, as shown in Figure 7(1). After loading the dataset and CNN model into _CIVis_, E2 checked the overall missing data rates in _Overview_. To avoid missing data, he eliminated \(15\) features with missing rates greater than \(0\) and further eliminated \(19\) features that were not clearly defined, such as "MiscVal" representing miscellaneous feature values. Finally, the \(45\) features were retained. Then he clicked the _select Feature_ button and waited for pre-training in _CIVis_. After pre-training, he clicked the _add_ button in _Logs_ to save the pre-trained semi-model as a baseline. Note that _CIVis_ pre-trains the full model by all features and the semi-model by selected features.
**Negative Sampling Interaction.** When the guidance view proceeds to _Sampling_, the negative sampling and positive sampling views are automatically updated, and the expert moved to the negative sampling view. Initially, the green dots (the dashed circle in Figure 7(2)) cover a particularly narrow range, which means that the variance of the semi-data is small. Then, E2 lassoed all the green dots for verification. The updated variance (lower left corner of Figure 7(2)) is relatively short, and the updated preview (right side of Figure 7(2)) draws a green region where the semi-data are dense. The expert then concluded that the current embedding distribution violates the principles of "alignment" and "uniformity". He then wanted to refine the distribution to ensure that at least one significant gap exists between points (e.g., the red points in Figure 4(3)) and added the samples to the negative collection, _"samples added to the negative collection should be pushed away on the circle and have a larger variance"_. Accordingly, he again lassoed all the semi-data, ticked the "hard" sampling strategy, dragged the _Rate_ slider to \(0.6\), and clicked the _apply_ button, which automatically added \(60\)% of the selected samples into the negative collection (Figure 7(2)). The updated blue arc in the preview (on the right side of Figure 7(2)) presents the added samples. Their density indicates that the distribution of the added samples is reasonable (i.e., the dense areas in the preview are also dense in the corresponding areas of the blue arcs). Therefore, he clicked the _set Negative_ button and moved on to the positive sampling view.
**Positive Sampling Interaction.** E2 then went to the positive sampling view. He witnessed some thick links in the middle of the view (Figure 7(3)). Then, he clicked the corresponding bins of the thick links to check the label ranges. Since their feature averages (left and right columns of Figure 7(3)) and label ranges (upper left and upper right corners of Figure 7(3)) are similar between the semi-data and positive bins, E2 was satisfied with the default results. So he just clicked on the _set Positive_ button and the guidance view proceeded to _Train_.
**Training and Model Collapse.** After specifying the negative collection and checking the many-to-many mapping to find the positive samples, E2 dragged the slider in the training view to set the hyperparameter based on his domain experience. The _Temperature_ parameter is designed to amplify the cosine distance between a sample and its positive and negative samples. The expert set the _Temperature_ parameter
Fig. 8: Model collapse. The mean of cosine distance is one.
Fig. 7: Interaction and observation in usage scenario I. The notation \(1-3\) demonstrate how to configure positive and negative sampling based on visual cues; notation \(4-7\) highlight and explain the benefits that _CIVis_ brings. The loss curves in (4) look like an area because the loss sharply and frequently jumps up and down.
to \(0.07\) according to MoCo and set a small number of epochs (\(100\)) to see its effect on the model training. The pre-trained full model would be trained at the same time. After clicking the _train_ button, _Var_neg_, _Mean_neg_, and _Mean_pos_ in the comparison view, and _Loss_ and _MSE_ in the training view are updated in real time. Within a few epochs, _Var_neg_ goes down to around zero, and _Mean_pos_ and _Mean_neg_ quickly rise to one, implying that the training embeddings are all similar (Figure 8). The expert concluded that model collapse had occurred. He then hypothesized that a _Temperature_ that is too small might be breaking the training phase. To validate this hypothesis, he set _Temperature_ to one and repeated the above-mentioned procedure (Figure 7(4)). After clicking the _train_ button, the expert found that _Mean_neg_ and _Mean_pos_ started to evolve slowly, just like the first few epochs in Figure 9(a); thus, the problem of model collapse was solved.
**Training with the Semi-data.** After fixing the model collapse, E2 started another round of training with more epochs to see how the model converged, observing the updated distribution and activation in the comparison view. The purple curve in Figure 9(a) depicts the training phase of CL. The _Mean_neg_ drops from \(1\) to about \(0.95\), implying that negative pairs are pushed away by one contribution of CL. The _Mean_pos_ rises from close to \(0\) to \(0.98\), implying that the distance between the positive pairs becomes smaller owing to the other CL contribution. In summary, the purple curve reflects the training phase exactly as CL intends: pushing away the negative pairs and pulling the positive pairs closer. The distribution of the semi-data is wider compared with the distribution produced by the pre-trained model (green in Figure 7(2)), which is consistent with the expert's purpose of pushing them apart (purple circle in Figure 7(5)). In the training view, the expert observed that the contrastive loss slowly decreases, and the validation MSE of the semi-model catches up with that of the full model (Figure 9(b)). Clicking the _add_ button in _Logs_ records the hyperparameters for this training and the MSE error (\(0.13486\)).
**Training with the Full Data.** The above training adds only the semi-data to the negative collection (the first run). E2 wanted to compare this with the case wherein the full data are added to the negative collection (the second run). He clicked on the first row in _Logs_ and then clicked on the _switch_ button to restore the pre-trained model as a baseline, similar to the first run. The guidance view returns to _Sampling_, and the positive and negative sampling views are automatically updated. Nevertheless, a negative collection consisting of the full data increases the possibility that a full data instance serves as a positive pair while also being in the negative collection, resulting in an abrupt increase in the CL loss and instability of the learning process. Therefore, he used the same procedure as above and lassoed the full data at a sampling rate of \(90\%\), adding a sample of \(470\) records to the negative collection. After another round of training, he clicked the _add_ button in _Logs_ and found that the MSE decreased (i.e., from \(0.13486\) to \(0.12008\), as shown in Figure 7(6)).
Then, E2 moved to the comparison view (Figure 9(a)) to find the reason for the decrease. As the blue curve in _Mean_neg_ becomes smooth, the blue curve in _Var_neg_ tends to 0; _Var_neg_ indicates the degree of convergence. In _Mean_neg_, the blue curve starts from \(0\), opposite to the purple curve, and reaches below the purple curve. According to E2, _"this observation yields the following three insights"_: 1) The 0 starting point of the blue curve indicates that embeddings of the full data are initially distinguishable. Meanwhile, the 1 starting point of the purple curve indicates that embeddings of the semi-data are difficult to distinguish because they have fewer features compared with the full data. 2) In _Mean_neg_ and _Mean_pos_, the purple curve lying below the blue curve indicates that all instances are more significantly different from the full data. Accordingly, the loss of the semi-model decreases faster and finally reaches a lower level in the second round (Figure 9(b)). Zhu et al. [23] also confirmed that discriminative positive samples can improve CL performance. 3) The rise of the blue curve in _Mean_neg_ means that the instances are getting closer to the full data in the negative collection, which is against the purpose of CL at first glance. However, the full data in the negative collection also serve as positive samples, so the training phase of CL finds a reasonable distance between the instances and the full data. _"Through this scenario, I learned profoundly that CL is about trade-offs between alignment and uniformity, not simply pushing away the negative pair and pulling close the positive pair,"_ concluded E2. Meanwhile, the circle glyph in the comparison view (Figure 7(6)) shows a wider distribution of the full data compared with the first run. The blue and purple curves rise in _Mean_pos_. These observations lead to the same conclusion as the first run: the embedding of the data added to the negative collection becomes more discriminative, while the embeddings of the positive samples become similar.
We summarize the contributions of _CIVis_: 1) _CIVis_ helps the expert in understanding the training phase and deepening
Fig. 9: Explanation of the training phase of CL. The difference between the first and the second rounds are with only the negative collection. (a) reveals the distance change between positive and negative pairs during training and _Activation_ describes the feature importance. (b) plots the loss curves of the full and semi-model during training.
his understanding of CL; 2) _CIVis_ supports a highly interactive mechanism to generate discriminative samples and control their degree of discriminability (depending on how many negative samples are added); 3) the generated discriminative samples improve the performance on the semi-data.
**Inference.** After the training, E2 was more confident about the trained model. Then, he clicked the _infer_ button to trigger inference. Subsequently, he proceeded to verify the predictions in the inference view. Specifically, he clicked on the row with the smallest house price prediction, and the highest activation was _YrSold_ (the year the house was sold) (Figure 7(7)). _"In my opinion, the year the house was sold should be similar to the year the house was built, and older houses may have problems such as safety reasons and outdated facilities that may affect the price of the house"_, said E2. He also inspected several other rows and was satisfied with the predictions, saying that _"I can trust its prediction a lot."_
### _Usage Scenario II: Credit Card Bill Prediction_
We use another credit card dataset [57] with a model named _AutoInt_ [58] to verify the efficacy of _CIVis_ in achieving a promising alignment, as recommended by E3. The credit card dataset includes \(25\) feature dimensions, and the task is to predict whether a customer will pay the bill next month. _AutoInt_ uses unordered features and is designed for classification. We still divide the training/test set at the same scale to be consistent with usage scenario I. However, the dataset has no missing data, so we randomly remove \(20\%\) of the values in three dimensions (_PAY_AMT1_, _PAY_0_, and _BILL_AMT1_) (Figure 10(A)).
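This simulated missingness can be reproduced with a few lines of NumPy/Pandas; here `df` is assumed to hold the credit card records, and blanking 20% of each of the three columns independently is an assumption about how the removal is applied.

```python
import numpy as np

rng = np.random.default_rng(0)
for col in ["PAY_AMT1", "PAY_0", "BILL_AMT1"]:
    mask = rng.random(len(df)) < 0.2      # randomly blank 20% of this column
    df.loc[mask, col] = np.nan
```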
**Anomalous Positive Sampling.** The _Set Row_ function is adjusted to \(2\), and the positive sampling representation is set to _embedding_ by default due to binary classification (Figure 10(B)(1)). In Figure 10(A), E3 first deselected the \(3\) dimensions with missing values and clicked the _select Feature_ button, assuming no missing data during the inference process. The positive and negative sampling views are then updated accordingly, and the guidance view proceeds to _Sampling_. In the case of binary classification, two thin rows appear in the positive sampling view (right side in Figure 10(B)(2)). The thin links imply that the positive sampling is highly homogeneous. _"Model collapse occurs or deteriorates when most semi-data are trained closer to the few full data"_, E3 pointed out, and considered the current positive sampling to be anomalous. Thus, E3 conjectured that one of the following three scenarios occurred: 1) Model collapse occurs in the semi-model (i.e., most full data correspond to a few semi-data in terms of their embeddings); 2) Model collapse occurs in the full model (i.e., most semi-data correspond to a few full data in terms of their embeddings); 3) No model collapse occurs (i.e., semi-data and full data have few correspondences in their raw data).
To validate these hypotheses, E3 first switched to an alternative representation _raw_ for positive sampling, applying the positive sampling algorithm to the raw data instead of the embedding. Two parallel curves with thicker widths are shown (left side in Figure 10(B)(2)). Then, E3 switched to the negative sampling view to diagnose the model collapse from the perspective of uniformity. Most of the green dots have a narrower aggregation range than the red dots, indicating semi-data and full data, respectively (Figure 10(D)(2)). E3 attributed the anomalous positive sampling problem to the semi-model, and the resulting embeddings are not sufficiently different from each other. Thereafter, he was satisfied with the _raw_ option for positive sampling and clicked the _set Positive_ button.
**Alignment in the Prediction.** Based on the narrow green and wide red distributions shown in the negative sampling view (Figure 10(C)), E3 sampled \(90\%\) of the semi-data and \(20\%\) of the full data by lassoing the points and applying the _random_ strategy. He then dragged the slider in the _random_ strategy to sample \(90\%\) of the selected samples and clicked _set Negative_ after dragging \(m\) to \(0.99\) to induce the hardness of negative sampling and avoid potential model collapse. The guidance view proceeded to _Train_. Before training, he clicked the _add_ button in the Logs subview to save the initialized model and started the 0-epoch training to trigger the update of the comparison view. The current distribution in the negative sampling view is preserved in the comparison view to allow the comparison of the results before and after training. In the training view, he set _Temperature_ to \(0.07\), _Epoch_ to \(20\), and \(M\) to \(0.1\) and then clicked the _train_ button to start training. After the training is completed, the validation curve shows a clear and promising result. With a split at \(22\) on the x-axis, the blue curve is lower than the green curve (i.e., the losses given by the semi- and the full models), but they share a similar shape (left panel in Figure 10(D)(1)). Then, the blue curve dives deeper, and the green curve follows and in some cases exceeds it (right panel in Figure 10(D)(1)). This observation suggests that the training phase brings the semi-model in line with the full model.
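The _Temperature_ parameter set in the training view plays the role of the temperature in a standard InfoNCE-style contrastive loss. The snippet below is a minimal, generic sketch of such a loss for one sampled pair, not _CIVis_'s actual implementation; the function name, the cosine similarity, and the toy embeddings are all illustrative assumptions.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.07):
    """Minimal InfoNCE-style contrastive loss for one anchor embedding.

    anchor:    (d,)   embedding of the semi-data instance
    positive:  (d,)   embedding of its matched full-data instance
    negatives: (n, d) embeddings of the sampled negative collection
    """
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

    pos_logit = cos(anchor, positive) / temperature
    neg_logits = np.array([cos(anchor, n) for n in negatives]) / temperature
    logits = np.concatenate([[pos_logit], neg_logits])
    # Negative log-softmax of the positive logit: a lower temperature sharpens
    # the softmax over negatives and emphasizes hard negative samples.
    log_prob_pos = pos_logit - np.log(np.exp(logits).sum())
    return -log_prob_pos

# Toy usage with random embeddings:
rng = np.random.default_rng(0)
anchor, positive = rng.normal(size=16), rng.normal(size=16)
negatives = rng.normal(size=(8, 16))
print(info_nce_loss(anchor, positive, negatives, temperature=0.07))
```

In this reading, the low temperature of \(0.07\) chosen above strengthens the pull of hard negatives, which connects directly to the hardness principle discussed later.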
E3 moved to the automatically updated comparison view to determine what brought the performance improvement. In the embedding distribution, the green points follow the distribution of the red points scattered on the circle. _"I am surprised that the semi-model could exactly mimic the distribution of the full model"_, said E3. In addition, he found that the blue dots rise above the purple dots in the activation of the comparison view (Figure 10(D)(3)). _"I think that the training distributions of the semi-model are caused by using more feature dimensions to distinguish from each other"_, said E3. Thereafter, he clicked the _add_ button again to test the overall classification error, which decreased from 0.2195 to 0.2126. E3 was satisfied with the trained model and the reasons for the improvement. He finally switched to the inference view to remove his last concern about overfitting. _"In binary classification, the accuracy is still high and the percentage is higher when the model predicts only one label"_, said E3. In this case, the dependence of activation on feature values is very low. He clicked on the _infer_ button and then used the rows with predicted values of zero or one to check the feature importance (Figure 10(E)). The model makes predictions based on the features with high importance, so the display of all these features increases E3's confidence in the model's performance (i.e., accuracy).
### _Quantitative Experiment_
We performed a quantitative experiment comparing _CIVis_ with classic data imputation algorithms. Following what we found in the first usage scenario of regression, we looked for the best ratio between the full data and the semi-data added to the negative collection and used the best semi-model found. We experimented with house price prediction using a test set (\(10\%\)). Few imputation algorithms can be directly utilized because the features may be numbers or strings. Accordingly, we chose _KNN_ and _Most frequent_[59]. We first imputed the missing values and then used the trained full model to predict the house prices. In the validation set, the MSE of the trained full model is \(0.1123\), while the MSE of our best semi-model is \(0.11895\). The results on the test data show that our semi-model achieved the best performance (Table I). In addition, we trained a semi-model without the help of _CIVis_ to fairly demonstrate the performance gain due to _CIVis_, which is shown as _Pure semi-model_ in Table I. The _Pure semi-model_ is the worst of the four models (\(0.207861\)). Thus, we consider that _CIVis_ brings a \(62.32\%\) performance improvement.
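For reference, the imputation baselines in this comparison can be reproduced in outline as follows. The sketch below uses scikit-learn's `KNNImputer` and `SimpleImputer(strategy="most_frequent")` on a toy numeric table with \(20\%\) of the entries removed at random; the `full_model_predict` callable is a hypothetical stand-in for the trained full model, and no attempt is made to match the exact preprocessing of the house price dataset.

```python
import numpy as np
from sklearn.impute import KNNImputer, SimpleImputer
from sklearn.metrics import mean_squared_error

def evaluate_imputation_baselines(X_missing, y_true, full_model_predict):
    """Impute missing values, then score a (hypothetical) trained full model.

    X_missing:          (n, d) array with np.nan marking missing entries
    y_true:             (n,)   ground-truth targets (e.g., house prices)
    full_model_predict: callable mapping an (n, d) array to predictions
    """
    results = {}
    for name, imputer in [
        ("KNN", KNNImputer(n_neighbors=5)),
        ("Most frequent", SimpleImputer(strategy="most_frequent")),
    ]:
        X_imputed = imputer.fit_transform(X_missing)
        results[name] = mean_squared_error(y_true, full_model_predict(X_imputed))
    return results

# Toy usage with a trivial stand-in for the trained full model:
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X[rng.random(X.shape) < 0.2] = np.nan      # 20% missing, as in the usage scenario
y = rng.normal(size=100)
print(evaluate_imputation_baselines(X, y, lambda X_: X_.mean(axis=1)))
```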
We found the best full and semi-models according to the second usage scenario following a similar procedure and evaluated their performance on the test set (\(10\%\)). The differences lie in the dataset (i.e., the credit card dataset for classification) and the measurements (i.e., accuracy and AUC), which are common in classification. In Table I, _Most frequent_ achieves the best result in both measures. However, the results of _CIVis_ are comparable, with its accuracy dropping by only \(2.33\%\) and its AUC dropping by \(4.78\%\). In comparison with the _Pure semi-model_, _CIVis_ brought an accuracy loss of \(0.66\%\) (\(0.8013\to 0.796\)), but the AUC improved by \(4.28\%\) (\(0.5869\to 0.612\)).
The different performances in the two scenarios lead to the following observations: **1) CL yields different improvements depending on the task.** CL pushes negative pairs apart and pulls positive pairs together, a necessary benefit for latent representation learning, but the impact on downstream tasks is uncertain. For example, the regression task in usage scenario I gains a surprising boost from the properties of CL because predicting floating-point numbers benefits from the subtle differences between the pushed-apart embeddings. By contrast, the classification task requires clear-cut distinctions, for which such detailed refinement may be ignored, while the pushed embeddings moderate salient features and thus classification accuracy. Meanwhile, the harmful pushing prevents overfitting and increases the AUC, acting like a regularization term. **2) CL may conflict with the selected model.** In the classification scenario, the selected _AutoInt_ is already powerful because it derives dynamic feature weights from its attention module. The best performance is obtained by _Most frequent_, which implies that the involvement of _CIVis_ compromises the effectiveness of _AutoInt_. The CL approach may induce negative effects when the model is already sufficiently powerful. **3) Results of _KNN_ and _Most frequent_ depend on the missing data.** _KNN_ and _Most frequent_ have strong assumptions on the distribution of missing values.
|  | KNN | Most frequent | CIVis | Pure semi-model |
| --- | --- | --- | --- | --- |
| MSE | 0.136286 | 0.136439 | **0.078315** | 0.207861 |
| ACC | 0.8107 | **0.8150** | 0.796 | 0.8013 |
| AUC | 0.6401 | **0.6407** | 0.612 | 0.5869 |

TABLE I: Comparison with two imputation algorithms.
Fig. 10: Usage scenario II. The notations from A to G refer to the steps in usage scenario II, in which we use visual tips to highlight the key operations or key discovery.
When we perform another random removal (discussed in Section 7.2), their results differ in terms of accuracy and AUC, while _CIVis_ and the _Pure semi-model_ remain consistent. Thus, from the perspective of missing data, _CIVis_ is the best solution when stability matters enough to justify sacrificing a reasonable amount of accuracy.
### _Expert Interview_
We conducted a one-hour semi-structured interview with our collaborating experts (E1-E3) to assess whether our approach helped them induce domain knowledge when parts of the data are missing.
**System capabilities and learning curve.** All experts appreciated the ability of _CIVis_ to support interactive configurations to make predictions by using full and semi-data. The experts also expressed satisfaction with the customized visual design and interaction of the system. We deliberately selected familiar visual metaphors from the CL domain to help the experts quickly become familiar with our visual encoding. In addition, we conducted a user-centered design process. After introducing the system, the experts could develop a customized exploration path.
**Scalability.** The experts pointed out the advantages and disadvantages of instance-level visualization. E1 argued that presenting raw data is important for users to understand examples of algorithmic output. However, E2 discussed his concerns about scalability. Modern open-source datasets are always large-scale, and visual designs that render each instance pose serious performance problems, such as latency. E3 suggested moving our system to a high-performance platform, such as WebGL, when we work with large-scale datasets.
### _Qualitative User Study_
We conducted a user study to further evaluate the effectiveness of _CIVis_ in four areas: _Informativeness_, _Decision Making_, _Visual Design and Interaction_, and _Usability and Perception_, following a four-layer taxonomy [60].
**Participants.** We recruited \(12\) participants for the user study (\(4\) females, \(8\) males, \(age_{mean}=24.5\), \(age_{sd}=3.55\)) via word-of-mouth, including students and practitioners from the major of Computer Science at ShanghaiTech University and Tencent. The participants all have at least \(2\) years of ML experience. Ten of the participants are students, and the remaining two participants are ML practitioners from the company. In particular, four of the participants have experience or knowledge about CL, and we pay more attention to their ratings and comments. The study was conducted face-to-face, and the users participated in tutorials, experienced the system, performed tasks, and filled out questionnaires. We use \(P1-P12\) to denote the participants, \(M/F\) to denote the participants' gender as male/female, and \(C/NC\) to denote the participants' experience with/without CL. For example, the \(9^{th}\) participant is male, is \(25\) years old, and has experience with CL, denoted by \(P9\) (\(M,25,C\)).
**Tasks.** In our user study, each participant was asked to complete a task in turn. The task had to be completed within \(45\) min and had two purposes: 1) A CNN model is trained to infer house prices from the incomplete house values, and 2) CL is used to refine the performance of the semi-model to approach that of the full model. Given that some participants (i.e., students) are not real-world ML practitioners, we primarily examine the integrity and usability of the system design. For more discussion on system effectiveness, please refer to the two usage scenarios above.
**Procedure.** The entire process lasted for approximately \(1\) h. Before we demonstrated our system, the participants were informed that their responses and feedback would be anonymously collected in a questionnaire. Then, we conducted a 15-20 min tutorial session during which we demonstrated the complete workflow of _CIVis_ while explaining all the operations of each view. Specifically, the system tutorial showed how CL can help solve the problem of modeling observed data with missing values. When the participants felt ready, they were given the aforementioned tasks to complete individually. After completing the task, a questionnaire was distributed with questions on a \(7\)-point Likert scale, with \(1\) representing "strongly disagree" and \(7\) representing "strongly agree" (Table II). The other comments that were not included in the questionnaire were also recorded for reference.
| ID | Statement |
| --- | --- |
|  | **Informativeness** |
| Q1 | The overall situation of missing data is easy to access. |
| Q2 | The information for the sampling and learning process of CL is detailed and rich. |
| Q3 | _CIVis_ provides sufficient information about the workflow. |
|  | **Decision Making** |
| Q4 | _CIVis_ facilitates inducing domain knowledge into the process of contrastive learning. |
| Q5 | _CIVis_ provides clues to understand and debug. |
| Q6 | _CIVis_ helps my decision-making during inference. |
|  | **Visual Design and Interaction** |
| Q7 | The workflow is formed reasonably and logically. |
| Q8 | The interaction facilitates the workflow. |
| Q9 | The visual design is intuitive and easy to understand. |
|  | **Usability and Perception** |
| Q10 | The system can help me train an accurate model. |
| Q11 | I am willing to trust the results given by _CIVis_. |
| Q12 | _CIVis_ is easy to learn and convenient to use. |
| Q13 | I would like to recommend _CIVis_ to other scenarios. |

TABLE II: Assessment of _CIVis_ in terms of _informativeness_ (Q1-Q3), _decision making_ (Q4-Q6), _visual design and interaction_ (Q7-Q9), and _usability and perception_ (Q10-Q13).

**Results.** The detailed statistics of the questionnaire are shown in Figure 11. _CIVis_ was appreciated by most participants, with no ratings below \(4\) and an overall average rating of \(6.14\). This evaluation result has several interesting features. First, we observed only ratings of \(6\) or \(7\) in \(Q2\) and \(Q10\), with average ratings of \(6.33\) and \(6.5\), indicating that the system design satisfies the main objective of training a transparent CL model (**R.2 and R.3**). _"Even though I know little about CL, I can understand the obvious benefit of CL in pushing the negative samples away and pulling the positive samples together"_, said \(P2\) (\(F,23,NC\)). Second, the high average ratings of informativeness and decision-making (\(6.33\) and \(6.22\)) cover all the requirements (**R.1-R.4**). This means that the implementation of _CIVis_ succeeded in achieving the blueprint we had at the beginning. _"I am impressed with the direct control of the embedding distribution"_: \(P5\) (\(M,23,C\)), who has CL experience, praised the ability to induce domain knowledge into the model training. Third, \(Q13\) had the lowest mean rating (\(5.25\)), and usability and perception had the lowest mean rating among the four areas (\(5.92\)). We discuss these two observations together because this question set also includes the highly rated question \(Q10\). Accordingly, we could infer that it is the relatively low rating of \(Q13\) that leads to the relatively low rating of usability and perception. Although the concept of CL and its benefits for solving the problem of modeling observed data with missing values took some time to understand, participants could learn and use _CIVis_. When we asked the participants about their reasons for the learning curves, their answers centered on understanding the background of CL and figuring out our innovations in using CL to solve the problem of modeling observed data with missing values. _"I think my difficulty is figuring out how to use CL to improve models with missing features, not the usability of the system"_, said \(P3\) (\(M,23,NC\)). This circumstance inspired us to compare participants with and without CL experience, who had mean ratings of \(6.33\) and \(6.05\), respectively. Thus, we conclude that the learning curve of CL and its application to modeling observed data with missing values led to relatively low ratings in terms of usability and perception. Finally, the visual design and interaction in _CIVis_ received an average rating of \(6.11\), which is a bit lower than the overall average (\(6.14\)). _"The visualization is easy to understand, but there is still room for improvement, for example, in the positive sampling view, the highlight should be obvious when I click on a circle"_, commented \(P12\) (\(M,33,NC\)), who has previous experience using visual analytics systems.
## 8 Discussion and Limitations
In this section, we discuss the high-level insights, the design implications, and the limitations of our approach.
**Different Path to Resolving Modeling Observed Data with Missing Values**. Current approaches to improving ML modeling performance generally prioritize dealing with missing data, which can be problematic because imputation techniques only estimate the missing values (section 2.1). While ML has enabled increasingly accurate missing value estimation, this approach can never perfectly replicate the true distribution. Sarma et al. [8] addressed the visualization of missing data uncertainty through scatterplots, analyzed missingness patterns, and conducted a crowdsourcing study to assess the impact of visual representation on reasoning. However, this work remains confined to automatic imputation. Our approach involves removing missing features and training prediction models using fewer features to avoid the need for imputation. Furthermore, _CIVis_ and imputations are influenced by domain knowledge in different ways. While _CIVis_ is guided by CL theory, imputations depend on the uncertainty of missing data. We believe that our method can help facilitate the modeling of observed data with missing values and provide a novel approach for integrating domain knowledge into the model building process.
**Contribution Over Previous Studies of CL**. The scope of prior works on design principles for CL is limited to the image domain. These studies suggest two key principles: (1) **alignment** and **uniformity**, which entail ensuring that sampled embeddings are close to their positive pairs and far from their negative pairs [43, 45], and (2) **hardness**, which refers to the existence of a suitable level of difficulty for positive pairs [18, 42, 45, 23]. We have incorporated these principles into our system, and their validity has been confirmed on a dataset of another modality, namely the unordered features typically seen in recommendation systems, as shown in the two usage scenarios. Our work extends the existing theoretical analysis to a broader domain, thus highlighting the general applicability of CL. However, the two principles outlined above are somewhat contradictory in that induced hardness reduces alignment. This phenomenon has been presented in a previous study [42], which analyzed how state-of-the-art CL models are implemented and gave a model-dependent explanation, but a theoretical solution for the optimal trade-off between alignment and hardness remains open. Our system, _CIVis_, allows users to explore potential solutions for this problem by adjusting the sampling ratio between semi- and full data and can serve as a source of inspiration for future research in this area.
**Implication for Explainable ML**. _CIVis_ introduces the use of CL for model debugging, enabled by visual analytics. The CL-based debugging method offers several advantages. First, it is agnostic to the specific deep learning model, as demonstrated by its successful application in two different usage scenarios. Second, the circular visualization of data distributions represents a novel approach that can uncover the reasons behind both positive and negative model performance in a post hoc explanation. Third, runtime metrics for sampled pairs facilitate the rapid detection of anomalies during the training process, providing an advantage over post hoc techniques that require longer waiting periods. Previous research efforts in the field of model interpretability, as discussed in section 2.3, have incorporated one or two of the aforementioned features, but few have integrated all of them. Thus, the proposed CL-based debugging methodology has the potential to motivate further research endeavors that exploit its features and benefit a wider range of machine learning models through its multi-phased interpretability, both during and after the training phase.
**Limitation.** This study has a few limitations that should be taken into consideration. First, the scalability of _CIVis_ may be an issue when a large number of nodes are involved in the projection. A potential solution to this issue is to eliminate invisible points (i.e., points that are not visible due to overlapping) and select samples based on their positions.
Fig. 11: The results of the user study were divided into four areas. The average rate from high to low was \(6.33\) for informativeness, \(6.22\) for decision-making, \(6.11\) for visual design and interaction, and \(5.96\) for usability and perception.
Second, while experts may be able to train an acceptable ML model using _CIVis_, the learning curve at the beginning is steep, which may require users to have a background in ML. Third, when most features or the key features are missing, _CIVis_ fails to perform effectively. In such cases, no method may be suitable for the task at hand.
## 9 Conclusion and Future Work
This study presents a framework based on CL that is capable of modeling observed data with missing values in the ML pipeline. Our proposed framework utilizes both full data and semi-data, without any form of imputation. In addition, we introduce a visual analytics system called _CIVis_, which integrates the CL-based framework to assist experts in leveraging their domain knowledge and engaging in co-adaptive decision-making. The effectiveness of our approach has been validated through two usage scenarios, a quantitative experiment, expert interviews, and a qualitative user study. As a future direction, we plan to extend our CL-based framework to other models and downstream tasks.
|
2305.20017 | Controlling the Photon Number Coherence of Solid-state Quantum Light
Sources for Quantum Cryptography | Quantum communication networks rely on quantum cryptographic protocols
including quantum key distribution (QKD) using single photons. A critical
element regarding the security of QKD protocols is the photon number coherence
(PNC), i.e. the phase relation between the zero and one-photon Fock state,
which critically depends on the excitation scheme. Thus, to obtain flying
qubits with the desired properties, optimal pumping schemes for quantum
emitters need to be selected. Semiconductor quantum dots generate on-demand
single photons with high purity and indistinguishability. Exploiting two-photon
excitation of a quantum dot combined with a stimulation pulse, we demonstrate
the generation of high-quality single photons with a controllable degree of
PNC. Our approach provides a viable route toward secure communication in
quantum networks. | Yusuf Karli, Daniel A. Vajner, Florian Kappe, Paul C. A. Hagen, Lena M. Hansen, René Schwarz, Thomas K. Bracht, Christian Schimpf, Saimon F. Covre da Silva, Philip Walther, Armando Rastelli, Vollrath Martin Axt, Juan C. Loredo, Vikas Remesh, Tobias Heindel, Doris E. Reiter, Gregor Weihs | 2023-05-31T16:46:00Z | http://arxiv.org/abs/2305.20017v1 | Controlling the Photon Number Coherence of Solid-state Quantum Light Sources for Quantum Cryptography
###### Abstract
Quantum communication networks rely on quantum cryptographic protocols including quantum key distribution (QKD) using single photons. A critical element regarding the security of QKD protocols is the photon number coherence (PNC), i.e. the phase relation between the zero and one-photon Fock state, which critically depends on the excitation scheme. Thus, to obtain flying qubits with the desired properties, optimal pumping schemes for quantum emitters need to be selected. Semiconductor quantum dots generate on-demand single photons with high purity and indistinguishability. Exploiting two-photon excitation of a quantum dot combined with a stimulation pulse, we demonstrate the generation of high-quality single photons with a controllable degree of PNC. Our approach provides a viable route toward secure communication in quantum networks.
## I Introduction
Single photons are an essential resource for future high-security communication networks, with applications like measurement-based or distributed quantum computing and quantum cryptography [1; 2; 3]. Every quantum information protocol has its unique set of practical requirements [4]. While early quantum key distribution (QKD) protocols [5; 6] primarily relied on high single-photon purity, more advanced schemes have further requirements such as high indistinguishability, for example in quantum repeaters or measurement-device-independent (MDI)-QKD, which relies on remote two-photon interference [7; 8]. The search for efficient single-photon sources has led to semiconductor quantum dots [9], thanks to their high single-photon purity [10], brightness [11], indistinguishability [12], scalability [13] and above all, versatility in emission wavelength selection.
Photon number coherence (PNC) [28; 29] is another crucial quantity relevant for the security of single photon quantum cryptography schemes. It must vanish for most protocols [30], compromising security otherwise [31; 32] due to side-channel attacks enabled by the fixed relative phase between different photon number states [33; 34]. While there are more general security proofs that allow a non-zero PNC, they lead to lower key rates, as some of the bits must be devoted to compensate for the additional information leakage towards an eavesdropper [35]. To achieve zero PNC in practice, actively phase-randomized single photons can be used, as typically implemented for faint laser pulses [36; 37]. Otherwise, a suitable excitation scheme without PNC must be chosen, which might however deteriorate the other single photon properties [28; 30].
In this work, we achieve tailored degrees of PNC, on demand, while maintaining high purity and indistinguishability. We implement optical excitation protocols and experimentally demonstrate single-photon generation from quantum dots. The photon output in our scheme can be increased up to twice as high compared to more commonly used methods like resonant excitation. We therefore set the stage for the quantum dot platform to be used for advanced cryptographic implementations.
Due to its versatility, the excitation scheme presented here covers the requirements of a broad range of quantum cryptographic protocols. An overview of various applications in the context of PNC and indistinguishability requirements is given in Fig. 1a. For example, established protocols like BB84 [5], decoy-BB84 [14], 6-state-protocol [15], SARG04 [16], LM05 [17] and primitives like strong quantum coin flipping [21; 22], unforgeable quantum tokens [19; 20], quantum bit commitment [18] or quantum oblivious transfer [38] require the absence of PNC to ensure, for instance, security in QKD or fairness in coin-flipping protocols. On the other hand, there exist protocols that benefit from a finite amount of initial PNC like MDI-QKD when done with phase encoding [8] or twin-field QKD protocols [27] to know and set the initial phase [30].
Our excitation protocols are based on resonant two-photon excitation (TPE) of a quantum dot from the ground state \(|g\rangle\) into the biexciton state \(|xx\rangle\)[39; 40], yielding Rabi rotations. A simulation of TPE Rabi rotations is shown in Fig. 2**b** (red dashed line) and experimental data in Fig. 2**c** (blue dots). From the biexciton state, the system relaxes into either the horizontally (\(|x_{H}\rangle\)) or the vertically (\(|x_{V}\rangle\)) polarized exciton state, from which we collect only horizontally (\(H\)) polarized photons. We call this scheme _relaxation into the exciton_ (reX). The reX scheme is advantageous over the direct, resonant excitation of the exciton due to the suppressed re-excitation and therefore provides high-purity photon states [10]. Because the exciting laser energy is different from the emitted photon energy, a challenging cross-polarization filtering is avoided and the photon count rate can be increased by up to a factor of two, which is also achieved by several other recently proposed excitation schemes [41, 42, 43, 44, 45, 46, 47]. However, the indistinguishability of the single photons from the reX scheme suffers greatly from the spontaneous decay of the biexciton [48], and if a specific polarization is required, the photon output is reduced due to the two available decay channels.
An improved protocol to overcome these problems uses an additional stimulation laser pulse following the TPE pulse [49]. This _stimulated preparation of the exciton_ (stiX) scheme can generate exciton photons with higher indistinguishability due to the reduced time jitter [50, 51, 52]. Because the stimulation pulse determines the polarization of the emitted photon, the photon count in that polarization state is also enhanced by up to a factor of two (see also Fig. 2**c**). Although the presence of PNC under resonant excitation and reX has been investigated before [28, 30], it remains to be seen whether PNC exists in the stiX scheme. Additionally, assessing the controllability of PNC is essential for advancing optical preparation schemes of quantum dot states for quantum cryptography applications.
## II Results
### Definition of photon number coherence
In a pure state \(|\Psi\rangle=\sum_{n=0}^{\infty}c_{n}|n\rangle\) in the photon number Fock basis with eigenstates \(|n\rangle\) and the complex coefficients \(c_{n}\), we define PNC as the absolute value of the coherence between the Fock states. For QKD based on single photons, as considered in this paper, the PNC refers to the coherence between the Fock states \(|0\rangle\) and \(|1\rangle\). More generally, we employ a density matrix description using
\[\rho=\left(\begin{array}{cc}\rho_{0,0}&\rho_{0,1}\\ \rho_{1,0}&\rho_{1,1}\end{array}\right)\qquad\text{with}\qquad\text{PNC}=|\rho _{0,1}|, \tag{1}\]
with \(\rho_{1,1}\) (\(\rho_{0,0}\)) being the occupation of the one (zero)-photon state and \(\rho_{0,1}\) being the coherence. We recall that \(|\rho_{0,1}|^{2}\leq\rho_{1,1}\,\rho_{0,0}\), with equality in the case of a pure state. The inequality implies that for \(\rho_{1,1}=1\) or \(\rho_{0,0}=1\) the PNC vanishes, while for \(\rho_{0,0}=\rho_{1,1}=1/2\) it can be maximal.
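As a simple numerical illustration of Eq. (1) and of the bound above, the following sketch evaluates the PNC of a maximally coherent pure state and of its phase-randomized counterpart; the states are illustrative and not derived from the measured data.

```python
import numpy as np

def pnc(rho):
    """Photon number coherence |rho_{0,1}| of a 2x2 density matrix in the {|0>, |1>} basis."""
    return abs(rho[0, 1])

# Pure state |psi> = (|0> + |1>)/sqrt(2): maximal PNC for rho_00 = rho_11 = 1/2.
c = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho_pure = np.outer(c, c.conj())
print(pnc(rho_pure))                  # 0.5 = sqrt(rho_00 * rho_11)

# Fully phase-randomized (mixed) state with the same occupations: the PNC vanishes.
rho_mixed = np.diag(np.diag(rho_pure))
print(pnc(rho_mixed))                 # 0.0

# The bound |rho_01|^2 <= rho_00 * rho_11 holds for any physical density matrix.
assert pnc(rho_pure) ** 2 <= rho_pure[0, 0].real * rho_pure[1, 1].real + 1e-12
```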
There are several factors that affect the PNC. One aspect is the non-perfect preparation of the photon state. The properties of photons from a quantum dot depend on the preparation fidelity of the quantum dot electronic state. This fidelity, in turn, is affected by the interaction with the environment of the quantum dot, most strongly by the interaction with phonons [53]. Phonons can also degrade the coherence properties of the photons [54] and therefore also the PNC. Additional losses of photons into other modes, which are not detected, further affect the photon properties.
It is likewise important to consider the measurement process. To detect PNC, a phase-evolving Mach-Zehnder interferometer (MZI) is employed [28]. The outputs of the MZI are simultaneously recorded with two avalanche photodiodes (APDs). The count rates \(N_{1},N_{2}\) at the APDs yield the visibility
\[v_{i}=\frac{N_{i}^{\text{max}}-N_{i}^{\text{min}}}{N_{i}^{\text{max}}+N_{i}^{ \text{min}}}\,. \tag{2}\]
In the case of an ideal measurement and perfectly indistinguishable photons, the visibility is connected to the PNC via
\[v=\frac{|\rho_{0,1}|^{2}}{\rho_{1,1}}\,. \tag{3}\]
In the MZI used in the experiment, the interference of subsequent single photons takes place. Phase scrambling between subsequent emission events leads to a further reduction of the visibility in addition to the aforementioned phonon and loss effects. It should be kept in mind that a vanishing visibility in the experiment can therefore result either from vanishing PNC, phase scrambling, or a combined effect of both.

Figure 1: **Overview of various quantum information protocols:** The protocols are sorted by their requirements on indistinguishability \(\mathcal{I}\) and PNC, specifically focusing on discrete variable cryptographic protocols using polarization or time-bin encoding. Protocols that require low PNC and need no high \(\mathcal{I}\) are BB84 [5], decoy-BB84 [14], 6-state-protocol [15], SARG04 [16], LM05 [17], QBC: quantum bit commitment [18], UT: unforgeable quantum tokens [19, 20], CF: quantum coin flipping [21, 22], OT: Oblivious Transfer [23]; low PNC and high \(\mathcal{I}\) is required by MDI-QKD [8], quantum repeaters and entanglement swapping for QKD [7, 24], quantum teleportation for QKD [25], DI-QKD with single photons [26]; high PNC and variable \(\mathcal{I}\) are needed in TF: twin-field QKD with single photons [27]. Note that carrying out protocols from the low PNC column with phase encoding would require an initially defined phase, before randomizing it in a reversible way, which requires PNC in the beginning. To illustrate this we have also added MDI-QKD with phase encoding to the diagram.
### Theoretical expectations
We perform theoretical simulations to estimate the PNC for both reX and stiX for a quantum dot modeled as a four-level system driven by a (classical) laser field and coupled to discrete photon modes. The simulation included three photon number states per mode; however, only the zero- and one-photon states were found to be noticeably populated. We further account for coupling with acoustic phonons within a numerically exact path integral formalism [55]. In addition, we include relaxation between the quantum dot states, accounting for photons not being emitted into the relevant modes (see Methods section for details of the model and calculation). We assume ideal detection, i.e., no phase scrambling and perfect indistinguishability, and model the visibility via Eq. (3).

Figure 2: **Generating single photons with variable PNC:** (**a**) Level scheme of a quantum dot consisting of ground state \(\ket{g}\), two linearly polarized exciton states \(\ket{x_{V/H}}\) and biexciton state \(\ket{xx}\). Straight lines indicate laser excitation, while dashed lines denote relaxation processes with rate \(\gamma\). Both schemes start with a two-photon excitation (TPE) from \(\ket{g}\rightarrow\ket{xx}\). In stiX, an additional H-polarized laser pulse stimulates the transition \(\ket{xx}\rightarrow\ket{x_{H}}\). We only collect \(H\)-polarized photons. (**b**) Theoretically calculated biexciton (\(\ket{xx}\)) occupation showing Rabi rotations as a function of the TPE pulse area (red curve) and the corresponding coherence between \(\ket{g}\) and \(\ket{xx}\) (purple curve). (**c**) Exciton photon counts recorded under reX (blue dots) and stiX (red dots), manifesting the enhancement of photon counts under stimulation. (**d**) Sketch of the experimental setup: a Ti:Sapphire laser source producing \(\approx\)\(2\,\mathrm{ps}\)-long laser pulses, with a spectral FWHM of \(0.5\,\mathrm{nm}\), is used to spectrally shape TPE and stim. pulses at appropriate wavelengths \(\lambda_{\text{TPE}}\) and \(\lambda_{\text{stim}}\) using two \(4f\) pulse shapers. A fiber-optic delay line enables the time control of the stim. pulse with respect to the TPE pulse. An electronic variable optical attenuator (VOA) helps sweep the laser power. The two pulses meet at a 10:90 beamsplitter (BS) and propagate to the cryostat which holds the quantum dot at \(1.5\,\mathrm{K}\). Emitted single photons from the quantum dot are spectrally filtered by a notch filter (NF) and sent to an unbalanced Mach-Zehnder interferometer with a freely evolving phase on one arm (labeled as PNC setup). Two single-photon sensitive avalanche photodiodes (APD1 and APD2) detect the single photon counts at the output arms of the interferometer. pol: linear polarizer, HWP: half-wave plate, QWP: quarter-wave plate, BS: beamsplitter, PBS: polarizing beam splitter, FBS: fiber beam splitter.
The time evolution of the biexciton and the \(|x_{H}\rangle\) exciton occupation together with the PNC \(|\rho_{0,1}|\) is shown in Fig. 3**a**,**b**. Both schemes start with excitation from the ground into the biexciton state induced by a Gaussian-shaped laser pulse with a TPE pulse area of \(\pi/2\). In the reX scheme, the biexciton state then relaxes into the exciton states via the emission of photons. However, these photons are at a different wavelength and are therefore ignored. The exciton state is only transiently occupied because it rapidly relaxes further into the ground state, generating the desired photon. The corresponding PNC (blue curve in Fig. 3**a**) is almost vanishing because the incoherent biexciton-exciton relaxation destroys the electronic coherence. The remaining PNC can be traced back to deviations from the ideal case caused by the phonon interaction, radiative losses, as well as relaxation into other (undesired) states in the quantum dot.
In contrast, in the stiX scheme, the stimulating pulse brings the biexciton coherently into \(|x_{H}\rangle\) by the application of a \(\pi\)-pulse resonant with the \(|xx\rangle\rightarrow|x_{H}\rangle\) transition, as evidenced in Fig. 3**b**. The small oscillations on top of the population exchange result from the off-resonant driving of the complementary transition \(|x_{H}\rangle\rightarrow|g\rangle\). Because the transition to the exciton state \(|x_{H}\rangle\) is coherent, the electronic coherence, which translates to the PNC, is preserved. Accordingly, in Fig. 3**a** (red curve), we see that as soon as the stimulating pulse sets in, the PNC becomes very high. In other words, a timed stimulated preparation of the exciton state recovers the PNC that is lost in the reX scheme.
By controlling the electronic coherence through the pulse areas of the exciting pulses, we can thus manipulate the PNC. To ensure the best comparability, we fix the stimulating pulse to a \(\pi\) pulse and vary the pulse area of the TPE pulse, which results in an oscillating coherence as shown in Fig. 2**b**. The time-integrated occupation of the one-photon state \(\mathrm{occ}^{\mathrm{calc}}\) follows the Rabi rotations of the biexciton. We have checked that under the present conditions, the higher Fock states always have negligible occupations. The highest coherence is expected for pulse areas \((2n+1)\pi/2\), where also the electronic coherence is maximal. However, due to the incoherent relaxation process from the biexciton into the exciton, in reX the PNC is close to zero for all pulse areas, as expected. This is confirmed by the numerical results in Fig. 3**e**. Only for large pulse areas, detrimental processes due to phonons or losses lead to some residual PNC. Accordingly, for the reX scheme also the calculated visibility \(v^{\mathrm{calc}}\) in Fig. 3**d** is vanishing.
For stiX, the coherence is preserved and we find an oscillating behaviour as a function of the TPE pulse area, with maxima of the PNC occurring for pulse areas \((2n+1)\pi/2\) and minima for pulse areas \(n\pi\). Ideally, the PNC should be zero for pulse areas \(n\pi\). In the full simulation including finite pulse lengths and losses, the TPE pulse does not fully invert the system, leading to a residual PNC even for a TPE \(\pi\)-pulse. The visibility behaves differently: while a clear minimum at \(\pi\) is recovered, the visibility is not maximal at \(\pi/2\). Instead, due to its definition, \(v^{\mathrm{calc}}\) increases for even smaller TPE pulse areas. Still, compared to reX, the visibility for stiX shows a strong dependence on the TPE pulse area.
### Experimental data
We perform the reX and stiX experiments to test the theoretical predictions on a single quantum dot in our setup displayed in Fig. 2**d**. We note that in stiX we fix the time delay between the TPE pulse and the stimulating pulse to \(7\,\mathrm{ps}\), where the photon count is maximal. A detailed description of the experiment is provided in the Methods Section.
We start by quantifying the photon properties for reX and stiX at various powers (see SI Table S1), measuring the single photon purity in a Hanbury Brown and Twiss (HBT) setup and the indistinguishability via Hong-Ou-Mandel (HOM) measurements. At \(\pi\) power of both reX and stiX we validate that the generated photons have high purity with \(g_{\mathrm{reX}}^{(2)}(0)=0.0004(1)\) and \(g_{\mathrm{stiX}}^{(2)}(0)=0.0009(1)\). For the indistinguishability, the HOM visibility reaches only 58(3)% under reX, while for stiX it increases to 95(6)%, in line with previous observations [50, 51, 52]. These results already underline that stiX is an advantageous scheme compared to reX.
We then sweep the TPE pulse area under reX and stiX, yielding Rabi rotations of the exciton (X) photon counts (blue and red curves in Fig. 4), and investigate the PNC. For each TPE pulse power, we analyze the spectrally filtered X photons using a phase-evolving MZI [28]. Its outputs are simultaneously recorded with two avalanche photodiodes (APDs) for \(20\,\mathrm{s}\) each. In the bottom panel of Fig. 4, we display exemplary time traces (denoted by green and magenta curves, representing the two detector outputs of the MZI, see Fig. 2**d**) at TPE powers \(0.5\pi\), \(1\pi\), and \(1.5\pi\). From the time traces, we compute the visibility according to Eq. (2) from the normalized detector counts, taking the average of the two detectors as \(v^{\mathrm{exp}}=(v_{1}+v_{2})/2\).
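In practice, the extraction of \(v^{\mathrm{exp}}\) from the recorded time traces amounts to applying Eq. (2) to each detector and averaging. The following sketch illustrates this on synthetic, anti-phased fringe traces that merely mimic the two APD outputs; the modulation depth, count rate, and phase-drift frequency are placeholders, not measured values.

```python
import numpy as np

def visibility(counts):
    """Eq. (2): fringe visibility of a single detector's count-rate trace."""
    return (counts.max() - counts.min()) / (counts.max() + counts.min())

# Synthetic stand-ins for the two APD traces of the phase-evolving MZI
# (anti-phased fringes on a common background), not measured data.
t = np.linspace(0.0, 20.0, 2000)            # 20 s acquisition window
phase = 2 * np.pi * 0.3 * t                 # freely evolving interferometer phase
n1 = 1000 * (1 + 0.6 * np.cos(phase))
n2 = 1000 * (1 - 0.6 * np.cos(phase))

v_exp = 0.5 * (visibility(n1) + visibility(n2))
print(v_exp)                                # ~0.6 for this illustrative modulation depth
```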
The visibility \(v^{\mathrm{exp}}\) as a function of pulse area is displayed alongside the respective Rabi rotations in Fig. 4 as black dots. Under reX, the visibility \(v^{\mathrm{exp}}\) is vanishing for all TPE pulse areas and no clear dependence is found. This is in agreement with the exemplary time traces (displayed in the bottom panel of Fig. 4**a**), where indeed no oscillations are seen for different pulse areas.
In contrast, for stiX, the PNC shows a more interesting behaviour: in the exemplary time traces of the MZI outputs (displayed in the bottom panel of Fig. 4**b**), we observe clear oscillations for TPE powers \(0.5\pi\) and \(1.5\pi\) and almost no oscillations at \(\pi\). Accordingly, the visibilities vary from \(\approx\) 0.6 at TPE power \(0.5\pi\) to being minimal at \(\pi\) and then rise again towards \(1.5\pi\).
From the visibilities, using the formalism from Ref. [28], we extract the PNC\({}^{\text{exp}}\) shown as the yellow
line in Fig. 4. The data clearly confirms the trend expected from the theory: We find minima of PNC when exciting with TPE pulses of pulse area \(n\pi\) and maxima at \((2n+1)\pi/2\). This behaviour is evident in stiX, while in reX only a small modulation is found.
Hence, we conclude that the PNC is negligible in reX, while in stiX we have tuneable PNC controlled via the TPE pulse area.
## III Discussion
We now set our results in the context of finding the optimal photon source for high-security quantum networks. As indicated before, purity, indistinguishability, and PNC are the key parameters that must be known when choosing an excitation scheme. We have shown that reX generates high-purity photons, while the indistinguishability and PNC are low; moreover, if only a single polarization is filtered, the photon output is reduced. Looking back at Fig. 1**a**, we find that reX produces photons in the bottom left corner with low PNC and low indistinguishability, which limits the number of applicable protocols.
Within stiX, photons with high purity and high indistinguishability are generated. More importantly, the PNC can be controlled via the pulse area. If the TPE power is set to \((2n+1)\pi/2\), one obtains high PNC, enabling protocols in the top right corner of the diagram in Fig. 1**a**. By changing the pulse area to \(n\pi\), the PNC becomes minimal, which allows performing protocols in the top left corner of the diagram in Fig. 1**a**. For all TPE powers, stiX is suitable for protocols that require a high indistinguishability. Besides power control, the time delay of the stimulating pulse also controls the PNC (see SI Section E). The largest PNC is obtained when the time separation between the TPE and the stimulating pulse is optimal.
In summary, we showed a controlled generation of single photons with variable degrees of PNC as well as high purity, high indistinguishability, and high brightness via a stimulated two-photon excitation. This is a big step forward towards the realization of secure quantum networks based on single photons.
## IV Methods
### Theoretical model
For the theoretical modelling, we set up the Hamiltonian consisting of the quantum dot system \(\hat{H}^{\text{QD}}\), the outcoupling to two-photon modes \(\hat{H}^{\text{photon}}\), the excitation of the TPE \(\hat{H}^{\text{TPE}}\) and the stimulating laser pulse \(\hat{H}^{\text{stim}}\), as well as the coupling to phonons
\[\hat{H}=\hat{H}^{\text{QD}}+\hat{H}^{\text{photon}}+\hat{H}^{\text{TPE}}+\hat {H}^{\text{stim}}+\hat{H}^{\text{phonon}}\,. \tag{4}\]
In addition, we consider radiative decay and losses by a Lindblad operator \(\mathcal{L}\). In the following, we describe the individual terms in detail.
Figure 3: **Theoretical predictions**: Left: Dynamics of the four-level system coupled to two photon modes including phonons and losses, calculated via a numerically exact path integral formalism. The exciting laser pulses, the TPE pulse (orange) and the stimulating pulse (red), are shown in (**c**). The occupations of the biexciton and exciton state for stiX are displayed in (**b**), with the dashed line indicating the behaviour for reX. The PNC for stiX (red) and reX (blue) is displayed in (**a**). Note the logarithmic scale. During the stimulating pulse the exciton becomes occupied, resulting in a rise of the PNC. Right: Time-integrated coherence \(\text{PNC}^{\text{calc}}\) (**e**) and visibility \(v^{\text{calc}}\) (**d**) as a function of the TPE pulse area for both reX (blue, magnified) and stiX (red). The TPE pulse areas of \(\pi\) and \(\pi/2\) are marked by vertical lines. The time-integrated occupation of the one-photon Fock state \(\text{occ}^{\text{calc}}\) is shown as a green dashed line. Due to the relaxation process the PNC is almost lost in the reX case. In stiX, we find that the PNC is controlled via the TPE pulse area.

The quantum dot is modeled using four states (see also Fig. 2**a**), denoted by \(|g\rangle\) as the ground state, \(|x_{H}\rangle\) and \(|x_{V}\rangle\) as the two excitons, and \(|xx\rangle\) as the biexciton. The ground-state energy is set to zero, while both excitons have the same energy \(\hbar\omega_{x}\), i.e., no fine-structure splitting is assumed. The biexciton has a binding energy \(E_{B}\) such that its energy is given by \(\hbar\omega_{xx}=2\hbar\omega_{x}-E_{B}\).
\[\hat{H}^{\text{QD}}= \hbar\omega_{x}\left(|x_{H}\rangle\langle x_{H}|+|x_{V}\rangle \langle x_{V}|\right)+\hbar\omega_{xx}|xx\rangle\langle xx| \tag{5}\]
The quantum dot is coupled to two photon modes with polarisations \(V\) and \(H\) for the out-coupling of the photons, similar to positioning the quantum dot in a photonic cavity. We model the photon modes by the Fock states \(|n_{H}\rangle\) and \(|n_{V}\rangle\) with the frequency \(\omega_{c}\) via the annihilation (creation) operators \(\hat{a}_{H/V}(\hat{a}_{H/V}^{\dagger})\). The photonic modes are coupled to the quantum dot transitions with the same strength via the coupling constant \(\hbar g=0.05\) meV, yielding
\[\begin{split}\hat{H}^{\text{photon}}=&\hbar\omega_{c }\left(\hat{a}_{H}^{\dagger}\hat{a}_{H}+\hat{a}_{V}^{\dagger}\hat{a}_{V}\right) \\ +&\hbar g\,\hat{a}_{H}\left(|x_{H}\rangle\langle g|+| xx\rangle\langle x_{H}|\right)+h.c.\\ +&\hbar g\,\hat{a}_{V}\left(|x_{V}\rangle\langle g |+|xx\rangle\langle x_{V}|\right)+h.c.\\ =&\hat{H}_{0}^{\text{photon}}+\hat{H}_{\text{ coupl.}}^{\text{photon}}.\end{split} \tag{6}\]
Figure 4: **Controlled generation of PNC on a single quantum dot for reX (a, left) and stiX (b, right)**: **Top panel**: Measured purity of the generated single photons. For \(g_{\text{reX}}^{(2)}(0)\) (blue), the TPE pulse power is kept at \(\pi\)-power, and the stim pulse is absent. For \(g_{\text{stiX}}^{(2)}(0)\) (red), TPE and stim pulses are kept at \(\pi\)-power. For \(\text{HOM}_{\text{reX}}\), blue and gray shaded curves represent HOM coincidences recorded for parallel and orthogonal polarizations, respectively, for \(\pi\) TPE pulse area. For \(\text{HOM}_{\text{stiX}}\), red and gray shaded curves represent HOM coincidences recorded for parallel and orthogonal polarizations, respectively; TPE and stim pulses are kept at \(\pi\)-power. **Middle panel**: Extracted visibilities \(v^{\text{exp}}\) (dark-blue dots) and the reconstructed \(\text{PNC}^{\text{exp}}\) (yellow) at different TPE pulse areas alongside the measured X counts (blue curve for \(X_{\text{reX}}\) and red curve for \(X_{\text{stiX}}\)). The X counts are normalized to their respective values at \(\pi\)-power. **Bottom panel**: Exemplary time traces recorded at the two detector outputs of the PNC setup at TPE pulse areas of \(0.5\pi\), \(\pi\), and \(1.5\pi\).

We use the Hamiltonian in a rotating frame with \(\omega=\omega_{l}=\omega_{x}-E_{B}/(2\hbar)\), which corresponds to the frequency of the TPE laser pulse. With this, the QD-photon Hamiltonian has the form
\[\begin{split}\hat{H}^{\text{QD-photon}}=&\hbar\Delta \omega_{x-l}\left(|x_{H}\rangle\langle x_{H}|+|x_{V}\rangle\langle x_{V}|\right) \\ +&(2\hbar\Delta\omega_{x-l}-E_{B})|xx\rangle \langle xx|\\ +&\hbar\Delta\omega_{c-l}\left(\hat{a}_{H}^{\dagger} \hat{a}_{H}+\hat{a}_{V}^{\dagger}\hat{a}_{V}\right)\\ +&\hat{H}_{\text{coupl.}}^{\text{photon}}.\end{split} \tag{7}\]
The index convention of frequency differences is chosen such that the second index is subtracted from the first, e.g., \(\Delta\omega_{x-l}=\omega_{x}-\omega_{l}\). We choose the photon mode to be resonant with the quantum dot transition from the ground to the excited state, i.e., \(\hbar\Delta\omega_{c-x}=0\) meV.
The TPE is modelled by an external classical laser field with diagonal polarization in dipole and rotating wave approximation. We consider a resonant TPE process and accordingly set the detuning \(\Delta\omega_{x-l}=E_{B}/(2\hbar)\). With this, the Hamiltonian reads
\[\begin{split}\hat{H}^{\text{TPE}}(t)=-\frac{\hbar}{2}f^{\text{TPE }}(t)&(|g\rangle\langle x_{H}|+|g\rangle\langle x_{V}|\\ &+|x_{H}\rangle\langle xx|+|x_{V}\rangle\langle xx|+h.c.).\end{split} \tag{8}\]
Here, \(f^{\text{TPE}}(t)\) denotes the instantaneous Rabi frequency as given by the product of dipole moment and electric field. We use Gaussian pulses
\[f^{\text{TPE}}(t)=\frac{\Theta_{\text{TPE}}}{\sqrt{2\pi}\,\sigma_{\text{TPE}}}\,\mathrm{e}^{-\frac{t^{2}}{2\sigma_{\text{TPE}}^{2}}}, \tag{9}\]
with the pulse area \(\Theta_{\text{TPE}}\) and the pulse width \(\sigma_{\text{TPE}}\). We assign the TPE pulse area \(\pi\) in the plots (cf. Fig. 3) to the one which results in the first maximum of the biexciton occupation and the TPE pulse area of \(\pi/2\) to the first maximum of the electronic coherence. In the calculations, these values were determined numerically.
We describe the stimulating laser with the same approximations, but assume it to be horizontally polarized. Its frequency is set to match the \(|xx\rangle\rightarrow|x_{H}\rangle\) transition, such that
\[\hat{H}^{\text{stim}}(t)=-\frac{\hbar}{2}f^{\text{stim}}(t)\,\mathrm{e}^{\mathrm{i}\Delta\omega_{l}^{\text{stim}}t}\left(|g\rangle\langle x_{H}|+|x_{H}\rangle\langle xx|\right)+h.c.\,. \tag{10}\]
Here, \(\Delta\omega_{l}^{\text{stim}}=\omega_{l}^{\text{stim}}-\omega_{l}=-E_{B}/(2 \hbar)\). The stimulating laser's envelope function \(f^{\text{stim}}\) is delayed by a time \(\Delta t\) compared to the TPE laser. We also assume a Gaussian envelope for the stimulating pulse
\[f^{\text{stim}}(t)=\frac{\Theta_{\text{stim}}}{\sqrt{2\pi}\,\sigma_{\text{ stim}}}\text{e}^{-\frac{(t-\Delta t)^{2}}{2\sigma_{\text{stim}}^{2}}}. \tag{11}\]
with the pulse area \(\Theta_{\text{stim}}\) and the pulse length \(\sigma_{\text{stim}}\). Here, a "\(\pi\)-pulse" refers to a full inversion of the resonantly driven transition for ideal conditions (without losses/phonons).
In addition, we consider the coupling to longitudinal-acoustic (LA) phonons via the deformation potential coupling. Here, \(\hat{b}_{\mathbf{k}}\) (\(\hat{b}_{\mathbf{k}}^{\dagger}\)) annihilates (creates) a phonon of mode \(\mathbf{k}\) with energy \(\hbar\omega_{\mathbf{k}}\). We consider the typical pure-dephasing type coupling in the standard Hamiltonian [56; 57]
\[\hat{H}^{\text{phonon}}=\hbar\sum_{\mathbf{k}}\omega_{\mathbf{k}}\hat{b}_{ \mathbf{k}}^{\dagger}\hat{b}_{\mathbf{k}}+\hbar\sum_{\mathbf{k},S}\left(\gamma _{\mathbf{k}}^{S}\hat{b}_{\mathbf{k}}^{\dagger}+\gamma_{\mathbf{k}}^{S^{*}} \hat{b}_{\mathbf{k}}\right)|S\rangle\langle S|, \tag{12}\]
coupling each mode \(\mathbf{k}\) to the quantum dot state \(|S\rangle\), where \(S\in\{x_{H},x_{V},xx\}\). The coupling constant \(\gamma_{\mathbf{k}}^{S}\) and the material parameters are taken to be the same as in Ref. [55].
Both the cavity and the quantum dot are subject to losses into the free photonic field outside of the cavity. These losses are described by Lindblad superoperators acting on the density operator \(\hat{\rho}\),
\[\mathcal{L}_{\hat{O},\delta}[\hat{\rho}]=\delta\left(\hat{O}\hat{\rho}\,\hat{O}^{\dagger}-\frac{1}{2}\left[\hat{\rho},\hat{O}^{\dagger}\hat{O}\right]_{+}\right), \tag{13}\]
where \(\hat{O}\) is an operator, \(\delta\) a rate and \([.,.]_{+}\) the anti-commutator. We assume that the decay processes of the quantum dot take place with rate \(\gamma\) and losses of the photonic modes go with the rate \(\kappa\), such that Lindblad-superoperators are
\[\begin{split}\mathcal{L}[\hat{\rho}]&:=\mathcal{L}_{ \hat{a}_{H},\kappa}[\hat{\rho}]+\mathcal{L}_{\hat{a}_{V},\kappa}[\hat{\rho}]\\ &+\mathcal{L}_{|g\rangle\langle x_{H}|,\gamma}[\hat{\rho}]+ \mathcal{L}_{|g\rangle\langle x_{V}|,\gamma}[\hat{\rho}]\\ &+\mathcal{L}_{|x_{H}\rangle\langle xx|,\gamma}[\hat{\rho}]+ \mathcal{L}_{|x_{V}\rangle\langle xx|,\gamma}[\hat{\rho}]\,.\end{split} \tag{14}\]
The rates are chosen such that we are in the weak coupling regime.
With the Hamiltonian and the Lindbladian terms we calculate the dynamics of the system states via the Liouville-von Neumann equation
\[\frac{\text{d}}{\text{d}t}\hat{\rho}=-\frac{i}{\hbar}\left[\hat{H}(t),\hat{ \rho}\right]+\mathcal{L}[\hat{\rho}]. \tag{15}\]
As the initial state we assume that the quantum dot is in its ground state and no photonic excitation exists. For the numerical integration of Eq. (15), we use a numerically complete path-integral method, which is described in Refs. [55] and [58], with the parameters from Tab. 1.
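To make the structure of Eq. (15) concrete, the sketch below integrates the Lindblad master equation for the bare four-level system driven by the TPE and stimulation pulses of Eqs. (8)-(11), using a simple fixed-step Runge-Kutta scheme. It deliberately omits the quantized photon modes and the phonon coupling that the path-integral calculation includes, and all parameter values (binding energy, decay rate, pulse widths, nominal pulse areas) are illustrative rather than the values of Tab. 1; in particular, the effective TPE \(\pi\)-area would still have to be determined numerically as described above.

```python
import numpy as np

# Basis ordering: |g>, |x_H>, |x_V>, |xx>; hbar = 1, frequencies in 1/ps, times in ps.
G, XH, XV, XX = 0, 1, 2, 3
E_B = 4.0                              # illustrative biexciton binding energy
gamma = 0.002                          # illustrative radiative decay rate
sig_tpe = sig_stim = 1.0               # illustrative pulse widths
delay = 7.0                            # stim. pulse delay, as in the experiment
theta_tpe, theta_stim = np.pi, np.pi   # nominal pulse areas (illustrative)

def ket_bra(i, j):
    m = np.zeros((4, 4), dtype=complex)
    m[i, j] = 1.0
    return m

# Rotating-frame QD part of Eq. (7) without photon modes: excitons sit at +E_B/2.
H0 = 0.5 * E_B * (ket_bra(XH, XH) + ket_bra(XV, XV))

def gauss(t, area, sigma, t0=0.0):
    return area / (np.sqrt(2.0 * np.pi) * sigma) * np.exp(-(t - t0) ** 2 / (2.0 * sigma ** 2))

def hamiltonian(t):
    # TPE drive, Eq. (8), and stimulating drive, Eq. (10), with its rotating phase factor.
    drive = -0.5 * gauss(t, theta_tpe, sig_tpe) * (
        ket_bra(G, XH) + ket_bra(G, XV) + ket_bra(XH, XX) + ket_bra(XV, XX)
    ) - 0.5 * gauss(t, theta_stim, sig_stim, t0=delay) * np.exp(-0.5j * E_B * t) * (
        ket_bra(G, XH) + ket_bra(XH, XX)
    )
    return H0 + drive + drive.conj().T

# Radiative decay channels of Eq. (14) (no cavity losses in this reduced model).
jumps = [ket_bra(G, XH), ket_bra(G, XV), ket_bra(XH, XX), ket_bra(XV, XX)]

def drho(t, rho):
    """Right-hand side of the Liouville-von Neumann equation with Lindblad terms, Eq. (15)."""
    h = hamiltonian(t)
    out = -1j * (h @ rho - rho @ h)
    for L in jumps:
        out += gamma * (L @ rho @ L.conj().T
                        - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L))
    return out

rho = np.zeros((4, 4), dtype=complex)
rho[G, G] = 1.0                        # initial state: ground state, no excitation

dt = 1e-3
for t in np.arange(-5.0, 40.0, dt):    # fixed-step fourth-order Runge-Kutta
    k1 = drho(t, rho)
    k2 = drho(t + dt / 2, rho + dt / 2 * k1)
    k3 = drho(t + dt / 2, rho + dt / 2 * k2)
    k4 = drho(t + dt, rho + dt * k3)
    rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print("biexciton occupation:", rho[XX, XX].real)
print("electronic coherence |rho_{g,xH}|:", abs(rho[G, XH]))
```

Within this reduced model, the coherence \(\rho_{g,x_{H}}\) plays the role of the electronic coherence that is transferred to the photonic PNC by the cavity outcoupling in the full calculation.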
We obtain results for the full density matrix, from which we can obtain the reduced density matrices for the quantum dot, \(\rho_{S,S^{\prime}}^{\text{QD}}\) with \(S\in\{g,x_{H},x_{V},xx\}\), and for the photons, \(\rho_{n_{i},n_{i}^{\prime}}^{\text{photon}}\) with \(i\in\{H,V\}\), by tracing out the other degrees of freedom. We are interested in the coherence \(\rho_{0,1}=\rho_{0_{H},1_{H}}^{\text{photon}}\); its absolute value is referred to as the PNC.
As a measure for the overall PNC at a given pulse area, we introduce the time-integrated absolute value of the instantaneous PNC
\[\text{PNC}^{\text{calc}}\propto\tilde{\rho}_{0,1}=\int|\rho_{0_{H},1_{H}}^{ \text{photon}}|dt\,. \tag{16}\]
\(\text{PNC}^{\text{calc}}\) is the calculated quantity that corresponds to the experimental quantity \(\text{PNC}^{\text{exp}}\) below in Eq. 20.
Analogously, we define the time-integrated occupation of the one-photon number states as
\[\text{occ}^{\text{calc}}\propto\tilde{\rho}_{1,1}=\int\rho_{1_{H},1_{H}}^{\text{ photon}}dt\,. \tag{17}\]
We assume that the photonic space can be reduced to a two-level system consisting of \(|0_{H}\rangle\) and \(|1_{H}\rangle\). This is reasonable because the higher-order Fock states are not occupied. We then follow Ref. [28] to calculate the visibility \(v\), as measured in an MZI for a mixed state, as
\[v^{\text{calc}}=\frac{\tilde{\rho}_{0,1}^{2}}{\tilde{\rho}_{1,1}}\,. \tag{18}\]
We stress that this is an estimate of the visibility, which does not account for the imperfection of the beam splitter, higher photon states, phase scrambling, or reduced indistinguishability. Nonetheless, we expect the qualitative behaviour to agree with the experiment.
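A minimal sketch of Eqs. (16)-(18), assuming the time-resolved photonic matrix elements are already available on a grid (here replaced by hypothetical pulse-like profiles, since the actual traces come from the density-matrix simulation above):

```python
import numpy as np

# Sketch of Eqs. (16)-(18): time-integrated PNC, one-photon occupation and the
# visibility estimate. rho01(t), rho11(t) would come from the density-matrix
# simulation; here they are replaced by illustrative analytic profiles.
t = np.linspace(0.0, 200.0, 2001)                           # ps
rho11 = 0.5 * np.exp(-t / 60.0) * (1 - np.exp(-t / 5.0))    # toy occupation
rho01 = 0.4 * np.exp(-t / 60.0) * (1 - np.exp(-t / 5.0))    # toy |coherence|

pnc_calc = np.trapz(np.abs(rho01), t)   # Eq. (16), up to a prefactor
occ_calc = np.trapz(rho11, t)           # Eq. (17), up to a prefactor
v_calc = pnc_calc**2 / occ_calc         # Eq. (18)

print(f"integrated PNC ~ {pnc_calc:.2f}, occupation ~ {occ_calc:.2f}, v ~ {v_calc:.2f}")
```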
### Experimental setup
Our setup (Fig. 2**d**) consists of a Ti:Sapphire laser source (Tsunami 3950, SpectraPhysics) producing \(2.7\,\mathrm{ps}\) pulses (measured as intensity autocorrelation FWHM), which is tuned to \(793\,\mathrm{nm}\), enabling spectral shaping of both the TPE and stimulating (stim.) pulses via two independent \(4f\) pulse shapers. The intensities of the TPE and stim. pulses are individually controlled via electronic variable optical attenuators (VOA, V800PA, Thorlabs) and the arrival time of the stimulating pulse is precisely controlled via a fiber optic delay line (ODL-300, OZ Optics). The two beams are combined at a 10:90 beamsplitter near the optical window of a closed-cycle cryostat (base temperature \(1.5\,\mathrm{K}\), ICEOxford) where the quantum dot sample is mounted on a three-axis piezoelectric stage (ANPx101/ANPz102, attocube systems AG). The two beams are focused on a single quantum dot with a cold objective (numerical aperture 0.81, attocube systems AG).
Our sample consists of GaAs/AlGaAs quantum dots with exciton emission centered around 790 nm grown by the Al-droplet etching method [59, 60]. The dots are embedded in the center of a lambda-cavity placed between a bottom (top) distributed Bragg reflector consisting of 9 (2) pairs of \(\lambda/4\) thick Al\({}_{0.95}\)Ga\({}_{0.05}\)As/Al\({}_{0.2}\)Ga\({}_{0.8}\)As layers.
The quantum dot emission is collected via the same path as the excitation, where the exciton (X) photons are spectrally separated from the scattered laser light and phonon side-bands using a home-built monochromator equipped with two narrow-band notch filters (BNF-805-OD3, FWHM \(0.3\,\mathrm{nm}\), Optigrate). To improve the suppression of the reflected TPE pulse we employ a cross-polarized configuration in which two orthogonal linear polarizers on excitation and collection paths block any residual laser scattering. In fact, this would not be necessary for a sufficiently narrow laser spectrum, as the TPE energy is detuned from the exciton energy. To measure the spectra, collected photons are routed to a single-photon sensitive spectrometer (Acton SP-2750, Roper Scientific) equipped with a liquid Nitrogen cooled charge-coupled device camera (Spec10 CCD, Princeton Instruments). For lifetime measurements, we use an avalanche photodiode (SPAD, Micro Photon Device) together with time-tagging electronics.
**Phase scan HOM setup**: To measure the indistinguishability, the filtered X photons are sent through a Mach-Zehnder Interferometer (MZI) with a path-length difference of \(12.5\,\mathrm{ns}\), to interfere with successively emitted photons from the quantum dot in a 50:50 fiber beam splitter (TW805R5A2, Thorlabs) for HOM measurement. The two output ports of the fiber beam splitter are monitored by avalanche photodiodes (SPCM-NIR, Excelitas). The arrival times of the photons are recorded using a time tagger (Time Tagger Ultra, Swabian Instruments), and coincidence counting is employed to determine the correlation between the photons. In the HOM measurement, the polarization in both MZI arms is controlled individually, enabling a comparison between the co-polarized scenario with maximum indistinguishability and the cross-polarized situation with distinguishable photons to obtain the HOM visibility.
For PNC measurements, a phase shifter is placed into one of the arms of the unbalanced MZI. The phase shifter consists of a motorized rotation stage (ELL14K, Thorlabs) holding a half-wave plate positioned between two quarter-wave plates that are oriented orthogonally with respect to each other's fast axis. This arrangement effectively acts as a variable phase shifter for linearly polarised input light since:
\[\text{J}(\theta) =\text{QWP}\left(\frac{\pi}{4}\right)\cdot\text{HWP}\left(\theta \right)\cdot\text{QWP}\left(-\frac{\pi}{4}\right) \tag{19}\] \[=-\frac{i}{2}\begin{bmatrix}1&-i\\ -i&1\end{bmatrix}\begin{bmatrix}\cos^{2}\theta-\sin^{2}\theta&2\sin\theta\cos \theta\\ 2\sin\theta\cos\theta&\sin^{2}\theta-\cos^{2}\theta\end{bmatrix}\begin{bmatrix} 1&i\\ i&1\end{bmatrix}\] \[=\begin{bmatrix}0&e^{-i2\theta}\\ -e^{i2\theta}&0\end{bmatrix}.\]
Here \(\theta\) is the orientation of the fast axis of the half
\begin{table}
\begin{tabular}{l l|l} \hline QD-cavity detuning & \(\hbar\Delta\omega_{c-x}\) & 0 meV \\ QD-laser detuning & \(\hbar\Delta\omega_{x-l}\) & 2 meV \\ detuning stim. pulse & \(\hbar\Delta\omega_{l}^{\text{stim}}\) & 2 meV \\ duration stim. pulse & \(\text{FWHM}_{\text{stim}}\) & 3 ps \\ duration TPE pulse & \(\text{FWHM}_{\text{TPE}}\) & 4.5 ps \\ delay between pulses & \(\Delta t\) & 15 ps \\ QD-cavity coupling & \(\hbar g\) & 0.05 meV \\ Binding energy & \(E_{B}\) & 4 meV \\ cavity loss rate & \(\kappa\) & 0.577 ps\({}^{-1}\) \\ QD loss rate & \(\gamma\) & 0.001 ps\({}^{-1}\) \\ QD size & \(a\) & 3 nm \\ temperature & \(T\) & 1.5 K \\ \end{tabular}
\end{table}
Table 1: Parameters used in the simulation. Material parameters are taken as in Ref. [55].
wave plate. By rotating the half-wave plate at a fixed speed, the phase in one of the arms is varied continuously without changing the polarization, while the phase in the other arm remains constant on the timescale of the rotation. The two arms are then recombined at the fiber beam splitter, where the interference occurs and photons are directed towards two separate single-photon detectors. The matching of the timing and relative polarization of the two arms was ensured by interfering the excitation laser with itself and maximizing the contrast, which yielded a visibility of 98 %.
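The identity in Eq. (19) can be checked numerically; the sketch below reproduces the (unnormalized) matrix convention used there and confirms that rotating the half-wave plate only changes the phases of the off-diagonal elements by \(\pm 2\theta\).

```python
import numpy as np

# Numerical check of Eq. (19): QWP(pi/4) . HWP(theta) . QWP(-pi/4) acts as a
# pure phase shifter, so rotating the HWP only shifts the relative phase by
# 2*theta. The matrices copy the (unnormalized) convention of Eq. (19).
def stack(theta):
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    Q_plus = np.array([[1, -1j], [-1j, 1]])    # ~ QWP(+pi/4), prefactor lumped below
    Q_minus = np.array([[1, 1j], [1j, 1]])     # ~ QWP(-pi/4)
    H = np.array([[c, s], [s, -c]])            # HWP with fast axis at theta
    return -0.5j * Q_plus @ H @ Q_minus

for theta in np.linspace(0, np.pi, 5):
    J = stack(theta)
    expected = np.array([[0, np.exp(-2j * theta)], [-np.exp(2j * theta), 0]])
    assert np.allclose(J, expected)
print("Eq. (19) verified: off-diagonal phases are exp(-i 2 theta) and -exp(+i 2 theta)")
```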
### Extraction of the PNC from data
We follow Ref. [28] to compute the PNC from the visibility. We recall that we only consider the \(H\)-polarized photons and stay in the approximation of the two-level system composed of the \(|0\rangle\) and \(|1\rangle\) Fock states. From the detector counts, we obtain the visibility \(v^{\text{exp}}\), which is proportional to the occupation \(\rho_{0,0}\). In the next step, we decompose the density matrix \(\rho=\lambda\rho_{\text{pure}}+(1-\lambda)\rho_{\text{mixed}}\) into a part corresponding to a pure state and a part being a statistical mixture with the off-diagonal elements being zero. Note that we are only interested in the absolute value of the coherence and not in its phase. Following Ref. [28], the visibility can be approximated by \(v\approx\lambda^{2}\rho_{0,0}\sqrt{V_{\text{HOM}}}\) with \(0\leq\lambda\leq 1\) and \(V_{\text{HOM}}\) being the photon indistinguishability. Considering the slope of the visibility as a function of \(\rho_{0,0}=(1-\rho_{1,1})\) allows us to extract \(\lambda\) (see SI Section F). Together with the knowledge of \(\rho_{1,1}\) via the photon counts, we can estimate the PNC as
\[\text{PNC}^{\text{exp.}}=\lambda\sqrt{\rho_{1,1}(1-\rho_{1,1})}\,. \tag{20}\]
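A sketch of this extraction under simplifying assumptions (hypothetical data points, a straight-line fit through the origin for the slope \(\lambda^{2}\) of \(v/\sqrt{V_{\text{HOM}}}\) versus \(\rho_{0,0}\), and Eq. (20) applied point by point):

```python
import numpy as np

# Sketch of the PNC extraction, Eq. (20). Hypothetical inputs: visibilities at
# several pulse areas, the corresponding occupations rho_11 from photon counts,
# and the HOM visibility V_HOM.
rng = np.random.default_rng(0)
V_HOM = 0.90
rho11 = np.array([0.10, 0.25, 0.40, 0.55, 0.70])             # hypothetical occupations
v_exp = 0.6 * (1 - rho11) * np.sqrt(V_HOM) \
        * (1 + 0.02 * rng.normal(size=rho11.size))           # hypothetical visibilities

# v ~ lambda^2 * rho_00 * sqrt(V_HOM): least-squares slope through the origin
rho00 = 1 - rho11
lam2 = np.sum(rho00 * v_exp / np.sqrt(V_HOM)) / np.sum(rho00**2)
lam = np.sqrt(lam2)                                           # clip to [0,1] for noisy data

pnc_exp = lam * np.sqrt(rho11 * (1 - rho11))                  # Eq. (20)
print(f"lambda ~ {lam:.2f}")
print("PNC estimate per point:", np.round(pnc_exp, 3))
```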
## V Acknowledgements
The authors gratefully acknowledge insightful discussions with Stefan Frick, Robert Keil, Tommaso Faleo, Mathieu Bozzio and Serkan Ates. Nils Kewitz and Bhavana Panchumarthi supported the early phases of the experiment. YK, FK, RS, VR and GW acknowledge financial support through the Austrian Science Fund FWF projects W1259 (DK-ALM Atoms, Light, and Molecules), FG 5, TAI-556N (DarkEneT) and I4380 (AEQuDot). DAV and TH acknowledge financial support by the German Federal Ministry of Education and Research (BMBF) via projects 13N14876 ('QuSecure') and 16KISQ087K (tubLAN Q.0). TKB and DER acknowledge financial support from the German Research Foundation DFG through project 428026575 (AEQuDot). A.R. and SFCdS acknowledge the FWF projects FG 5, P 30459, I 4320, the Linz Institute of Technology (LIT) and the European Union's Horizon 2020 research, and innovation program under Grant Agreement Nos. 899814 (Qurope), 871130 (ASCENT+) and the QauntERA II Programme (project QD-E-QKD). LMH, PW and JCL acknowledge financial support from the European Union's Horizon 2020 and Horizon Europe research and innovation programme under grant agreement No 899368 (EPIQUS), the Marie Sklodowska-Curie grant agreement No 956071 (AppQInfo), and the QuantERA II Programme under Grant Agreement No 101017733 (PhoMemtor); FWF through F7113 (BeyondC), and F65 (Research Group 5); from the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development and the Christian Doppler Research Association. For the purpose of open access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
## VI Author contributions
The experimental setup was built by Y.K., F.K., R.S., V.R., J.C.L., D.A.V., L.M.H., and the measurements were performed by Y.K., F.K., D.A.V. The numerical calculations were done by P.C.A.H. The sample was provided by C.S., S.F.CdS, A.R. The first draft of the manuscript was written by D.A.V., F.K., Y.K., P.C.A.H., V.R., D.E.R.. Conceptual work and supervision was done by G.W., P.W., V.M.A, D.E.R., V.R., J.C.L., T.H., A.R.. All authors discussed the results and were involved in writing the manuscript.
|
2309.06463 | Neutrino mass models at $μ$TRISTAN | We study the prospects of probing neutrino mass models at the newly proposed
antimuon collider $\mu$TRISTAN, involving $\mu^+e^-$ scattering at $\sqrt{s}=
346$ GeV and $\mu^+\mu^+$ scattering at $\sqrt{s}= 2$ TeV. We show that
$\mu$TRISTAN is uniquely sensitive to leptophilic neutral and doubly-charged
scalars naturally occurring in various neutrino mass models, such as Zee,
Zee-Babu, cocktail, and type-II seesaw models, over a wide range of mass and
coupling values, well beyond the current experimental constraints. It also
allows for the possibility to correlate the collider signals with neutrino
mixing parameters and charged lepton flavor violating observables. | P. S. Bhupal Dev, Julian Heeck, Anil Thapa | 2023-09-12T18:00:00Z | http://arxiv.org/abs/2309.06463v2 | # Neutrino mass models at \(\mu\)TRISTAN
###### Abstract
We study the prospects of probing neutrino mass models at the newly proposed antimuon collider \(\mu\)TRISTAN, involving \(\mu^{+}e^{-}\) scattering at \(\sqrt{s}=346\,\)GeV and \(\mu^{+}\mu^{+}\) scattering at \(\sqrt{s}=2\,\)TeV. We show that \(\mu\)TRISTAN is uniquely sensitive to leptophilic neutral and doubly-charged scalars naturally occurring in various neutrino mass models, such as Zee, Zee-Babu, cocktail, and type-II seesaw models, over a wide range of mass and coupling values, well beyond the current experimental constraints. It also allows for the possibility to correlate the collider signals with neutrino mixing parameters and charged lepton flavor violating observables.
## I Introduction
The origin of neutrino mass and mixing remains one of the important open questions in fundamental physics [1; 2]. It clearly requires the introduction of new particles beyond the particle content of the Standard Model (SM). Qualitatively, we can expect these new particles to induce novel experimental signatures, such as lepton number violation (LNV) and charged lepton flavor violation (LFV), which are either forbidden or highly suppressed in the SM. Arguably, the cleanest method to identify the new particle(s) would be via their direct production at a high-energy collider. By studying the subsequent decays of these new particles to SM particles, preferably involving LNV and/or LFV to reduce SM background, one might be able to pinpoint the underlying neutrino mass model. A summary of existing collider constraints on various neutrino mass models can be found in Refs. [3; 4]. Similarly, a summary of the LFV constraints can be found in Refs. [5; 6].
All past and current high-energy colliders constructed so far [7] involve electron or proton beams and are therefore particularly sensitive to new particles that couple to electrons or quarks. An entirely new class of couplings could be probed using muon colliders, originally proposed long ago [8]. The main advantage is that leptons provide a much cleaner collision environment than hadrons, and muon beams suffer less synchrotron radiation loss than electron beams, thus making muon colliders capable of reaching higher center-of-mass energies with a reasonable-size circular ring design [9; 10]. They have gained considerable attention in recent years [11; 12; 13; 14; 15], as novel muon cooling techniques are now available [16], and other technical difficulties related to the muon lifetime and radiation seem solvable [15], making muon colliders an increasingly realistic and desirable option. Most work has been done in the context of future \(\mu^{+}\mu^{-}\) colliders [17], which would mimic LEP [18] and could reach a center of mass energy of \(10\,\)TeV or more.
Here, we will focus on a different experimental setup, \(\mu\)TRISTAN [19], which is a proposed high-energy lepton collider using the ultra-cold antimuon technology developed at J-PARC [20]. It can run in the \(\mu^{+}e^{-}\) mode with \(\sqrt{s}=346\) GeV, and later, in the \(\mu^{+}\mu^{+}\) mode [21] with \(\sqrt{s}=2\) TeV or higher. It can serve as a Higgs factory and do precision physics [22]. Other new physics studies for the \(\mu^{+}e^{-}\) and \(\mu^{+}\mu^{+}\) collider options can be found in Refs. [23; 24; 25] and [26; 27; 28], respectively. As we will show in this article, the unique initial states of \(\mu\)TRISTAN make it especially sensitive to neutrino mass models involving leptophilic neutral and/or doubly-charged scalars, allowing for direct production and study of these new scalars in regions of parameter space otherwise untestable. We take examples from both tree- and loop-level neutrino mass models. Specifically, we use the Zee model [29], Zee-Babu model [30; 31], cocktail model [32], and type-II seesaw model [33; 34; 35; 36; 37] as concrete examples, and we consider the cleanest final states (with the least SM background), i.e., the LFV channels \(\mu^{+}e^{-}\to\ell_{\alpha}^{+}\ell_{\beta}^{-}\) and \(\mu^{+}\mu^{+}\to\ell_{\alpha}^{+}\ell_{\beta}^{+}\) mediated by the scalars, as well as the associated production of scalars with a photon or \(Z\) boson.1 We show that \(\mu\)TRISTAN can provide unprecedented sensitivity well beyond existing constraints and complementary to future low-energy LFV searches.
Footnote 1: All models under consideration also generate LNV signatures, such as \(\mu^{+}\ell_{\alpha}^{\pm}\to W^{+}W^{\pm}\), but since these are typically suppressed by a product of many couplings or even the neutrino mass, we will focus on LFV processes.
The rest of this article is organized as follows: in Sec. II we briefly describe the details of the \(\mu\)TRISTAN collider. In Sec. III we go through several neutrino mass models (both radiative and tree-level), derive \(\mu\)TRISTAN's sensitivity and compare to other LFV observables, notably lepton flavor violation. We conclude in Sec. IV.
## II \(\mu\)TRISTAN
The ultra-cold antimuon technology developed for the muon anomalous magnetic moment and electric dipole moment experiment at J-PARC [20] uses laser ionization
of muonium atoms to provide a low-emittance \(\mu^{+}\) beam, which can be re-accelerated to high energies [38]. Allowing a \(1\,\mathrm{TeV}\)\(\mu^{+}\) beam to collide with a high-intensity \(e^{-}\) beam at the TRISTAN (Transposable Ring Intersecting Storage Accelerators in Nippon [39]) energy of \(30\,\mathrm{GeV}\) in a storage ring of the same size as TRISTAN (\(3\,\mathrm{km}\) circumference), one can realize the \(\mu^{+}e^{-}\) mode of \(\mu\)TRISTAN with a center-of-mass energy \(\sqrt{s}=346\,\mathrm{GeV}\).2 Taking into account muon decay, the deliverable instantaneous luminosity for a single detector at any collision point in the storage ring is estimated as \(4.6\times 10^{33}\)\(\mathrm{cm}^{-2}\)\(\mathrm{s}^{-1}\)[22], which translates to an integrated luminosity of \(100\) fb\({}^{-1}\) year\({}^{-1}\).
Footnote 2: A larger storage ring allows for higher-energy collisions. One can reach \(\sqrt{s}=775\,\mathrm{GeV}\) with \(50\,\mathrm{GeV}\) electrons and \(3\,\mathrm{TeV}\) muons.
Using the same \(3\,\mathrm{km}\) storage ring and \(1\,\mathrm{TeV}\)\(\mu^{+}\) beams, one can also consider a \(\mu^{+}\mu^{+}\) collider [21] with \(\sqrt{s}=2\,\mathrm{TeV}\) (or \(6\) TeV for the larger ring option). The beam intensity will be lower than in the \(\mu^{+}e^{-}\) mode due to both muons decaying in the storage ring. The instantaneous luminosity is estimated as \(5.7\times 10^{32}\)\(\mathrm{cm}^{-2}\)\(\mathrm{s}^{-1}\)[22], which translates to an integrated luminosity of \(12\) fb\({}^{-1}\) year\({}^{-1}\).
The precise luminosity numbers depend on various efficiencies for the muon production, as well as the detailed designs of the muon accelerator and storage ring. For instance, a higher luminosity is, in principle, achievable with better focusing of the \(e^{-}\) beam (compared to the \(\mu^{+}\) beam [20]), following the SuperKEKB design [40]. We will use the numbers given above from Ref. [22] as realistic but conservative order-of-magnitude estimates to work with. Assuming negligible SM background for the LFV signals we study below, the above-mentioned luminosities correspond to a minimum signal cross section of \(0.09\) (\(0.75\)) fb in the \(\mu^{+}e^{-}\) (\(\mu^{+}\mu^{+}\)) mode in order to achieve \(3\sigma\) sensitivity with \(1\) year runtime. To be conservative, we will use a signal cross section of \(0.1\) (\(1\)) fb in the \(\mu^{+}e^{-}\) (\(\mu^{+}\mu^{+}\)) mode to derive our sensitivity limits. These limits can be easily scaled for a longer runtime. For instance, \(10\) years of runtime with \(1\) ab\({}^{-1}\) integrated luminosity can achieve the same level of sensitivity with a signal cross section ten times smaller, thus being capable of probing a larger model parameter space than what is shown here.
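The quoted benchmark cross sections follow from simple event counting; the sketch below assumes (our assumption, consistent with the numbers above) that roughly nine signal events are needed for \(3\sigma\) evidence in a nearly background-free channel.

```python
# Counting estimate behind the quoted minimum cross sections.
# Assumption (ours): ~9 signal events for 3-sigma evidence in an essentially
# background-free channel; this reproduces the 0.09 fb and 0.75 fb numbers.
n_min = 9.0
lumi_per_year = {"mu+ e-": 100.0, "mu+ mu+": 12.0}     # fb^-1 / year

for mode, lumi in lumi_per_year.items():
    for years in (1, 10):
        sigma_min = n_min / (lumi * years)             # fb
        print(f"{mode}: {years:2d} yr -> sigma_min ~ {sigma_min:.3f} fb")
```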
Since the details of the \(\mu\)TRISTAN detector design and acceptance efficiencies are currently unknown, we will only impose basic trigger-level cuts on the transverse momenta and pseudorapidity of the outgoing leptons and photons, i.e., the default MadGraph5 cuts \(p_{T}^{\ell,\gamma}>10\) GeV and \(|\eta^{\ell,\gamma}|<2.5\)[41] while calculating the cross sections in the \(\mu^{+}\mu^{+}\) option. For the asymmetric beams in the \(\mu^{+}e^{-}\) option, we only keep the trigger-level \(p_{T}\) cuts and remove the \(\eta\) cuts because the final state particles are boosted in the \(\mu^{+}\) direction; the detector should be designed to cover the small-angle region from the beam direction.
We will use unpolarized beams for both \(\mu^{+}e^{-}\) and \(\mu^{+}\mu^{+}\) modes to derive our sensitivity limits. Although the surface antimuons produced by the \(\pi^{+}\) decay are \(100\%\) polarized due to the \(V-A\) nature of the weak interaction, the final polarization of the antimuon beam depends on a detailed understanding of the beam emittance under the applied magnetic field, which in some cases can reduce the polarization down to \(25\%\)[22]. Similarly, the beam polarization option for the \(e^{-}\) beam is still under discussion for the SuperKEKB upgrade [42]. Including realistic beam polarization effects could modify our cross sections by a factor of few due to the chiral nature of the scalar couplings.
## III Neutrino mass models with leptophilic scalars
The leptonic initial states and clean environment at \(\mu\)TRISTAN provide an unprecedented opportunity to directly probe heavy leptophilic particles with possible LFV interactions. We will mainly focus on the leptophilic neutral and doubly-charged scalars that arise in well-known neutrino mass models, both tree-level and radiative, such as the Zee model [29], Zee-Babu model [30; 31], cocktail model [32], and type-II seesaw model [33; 34; 35; 36; 37]. If kinematically allowed, a neutral scalar \(H\) with sizable LFV coupling \(e\mu\) can be resonantly produced in \(\mu^{+}e^{-}\) collisions either by itself or in association with a photon or \(Z\) boson, as shown in Fig. 1(a) and (b) respectively, thus providing unparalleled sensitivity to the LFV scalar sector. Even for \(m_{H}>\sqrt{s}\), the dilepton channels \(\mu^{+}e^{-}\to\ell_{\alpha}^{+}\ell_{\beta}^{-}\) and \(\mu^{+}\mu^{+}\to\ell_{\alpha}^{+}\ell_{\beta}^{+}\), shown in Fig. 1(c) and (d), respectively, are sensitive to the LFV couplings of \(H\) and give rise to a contact-interaction-type bound on the scalar parameter space. Similarly, a doubly-charged scalar can be resonantly produced at a \(\mu^{+}\mu^{+}\) collider, either by itself or in association with a photon or \(Z\) boson (see Fig. 3). The higher center-of-mass energy of the \(\mu^{+}\mu^{+}\) option at \(\mu\)TRISTAN allows us to probe doubly-charged scalars beyond the current LHC constraints [43]. We only focus on the LFV final states, as they are free from the SM background (modulo lepton misidentification, whose rate is negligible at lepton colliders [44; 45]). Also, we do not consider processes involving singly-charged scalars, as they necessarily involve neutrinos in the final state, making it harder to separate our signal from the SM background.
### Zee model
In the Zee model [29], the SM scalar sector with one Higgs doublet \(H_{1}\) is extended by adding a second Higgs doublet \(H_{2}\) and an \(SU(2)_{L}\)-singlet charged scalar \(\eta^{+}\). The relevant Lagrangian terms are given by
\[\mathcal{L}\supset\mu H_{1}H_{2}\eta^{-}-f\bar{L}^{c}L\eta^{+}-\tilde{Y}\bar{ \ell}L\tilde{H}_{1}-Y\bar{\ell}L\tilde{H}_{2}+\mathrm{H.c.}\,, \tag{1}\]
where the superscript \(c\) stands for charge conjugate and \(\tilde{H}_{a}\equiv i\sigma_{2}H_{a}^{*}\) (\(a=1,2\), \(\sigma_{2}\) is the second Pauli matrix). We have suppressed the flavor and \(SU(2)_{L}\) indices. Note that the Yukawa coupling matrix \(f\) is anti-symmetric in flavor space, while \(Y\) is an arbitrary complex coupling matrix. We go to the Higgs basis [46; 47], where only \(H_{1}\) acquires a vacuum expectation value, \(\langle H_{1}\rangle\equiv v/\sqrt{2}\simeq 174\,\mathrm{GeV}\)
and the charged leptons obtain a diagonal mass matrix \(M_{\ell}=\tilde{Y}v/\sqrt{2}\). We work in the alignment limit [48], as preferred by the LHC Higgs data [49], where the neutral scalars of \(H_{2}\) (the CP-even \(H\) and the CP-odd \(A\)) do not mix with the neutral Higgs contained in \(H_{1}\) that can be identified as the SM Higgs boson. The \(\mu\) term in the Lagrangian (1) will induce a mixing of \(\eta^{+}\) with the charged scalar contained in \(H_{2}\) upon electroweak symmetry breaking; we denote the mixing angle by \(\phi\) and the two mass eigenstates by \(h^{+}\) and \(H^{+}\), see Refs. [50; 51] for details.
The simultaneous presence of \(f\), \(Y\), and \(\mu\) breaks lepton number by two units and leads to a one-loop Majorana neutrino mass matrix
\[M^{\nu}=\kappa\left(fM_{\ell}Y+Y^{T}M_{\ell}f^{T}\right), \tag{2}\]
with prefactor \(\kappa\equiv(16\pi^{2})^{-1}\sin 2\phi\log(m_{h^{+}}^{2}/m_{H^{+}}^{2})\). This matrix is manifestly symmetric and can be diagonalized as usual via
\[M^{\nu}=U\,\text{diag}(m_{1},m_{2},m_{3})\,U^{T}\,, \tag{3}\]
where \(U\) is the unitary Pontecorvo-Maki-Nakagawa-Sakata matrix and \(m_{j}\) the neutrino masses. Through neutrino oscillations we have obtained information about the mass splittings and the three mixing angles in \(U\). The overall neutrino mass scale, ordering, and CP phases are unknown, although their ranges are partially restricted [52].
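For the texture discussion below it is useful to evaluate the entries of \(M^{\nu}\) in Eq. (3) numerically. A minimal sketch, using illustrative normal-ordering oscillation parameters (approximate NuFit-like central values), vanishing Majorana phases, and the lightest mass as a free input:

```python
import numpy as np

# Sketch: build M^nu = U diag(m1,m2,m3) U^T of Eq. (3) from oscillation
# parameters. All inputs are illustrative (approximate central values, normal
# ordering, Majorana phases set to zero); lightest mass m1 is a free input.
th12, th13, th23 = np.radians([33.4, 8.6, 49.0])
delta = np.radians(197.0)
dm21_sq, dm31_sq = 7.4e-5, 2.51e-3                     # eV^2
m1 = 0.01                                              # lightest mass in eV
m = np.array([m1, np.sqrt(m1**2 + dm21_sq), np.sqrt(m1**2 + dm31_sq)])

s12, c12 = np.sin(th12), np.cos(th12)
s13, c13 = np.sin(th13), np.cos(th13)
s23, c23 = np.sin(th23), np.cos(th23)
e = np.exp(-1j * delta)
U = np.array([
    [c12 * c13,                        s12 * c13,                        s13 * e],
    [-s12 * c23 - c12 * s23 * s13 / e,  c12 * c23 - s12 * s23 * s13 / e,  s23 * c13],
    [s12 * s23 - c12 * c23 * s13 / e,  -c12 * s23 - s12 * c23 * s13 / e,  c23 * c13],
])
M_nu = U @ np.diag(m) @ U.T                            # eV
labels = ["e", "mu", "tau"]
for i in range(3):
    for j in range(i, 3):
        print(f"|M_{labels[i]}{labels[j]}| = {abs(M_nu[i, j]) * 1e3:6.2f} meV")
```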
With the parametrization of Refs. [53; 54] we can express \(Y\) in terms of \(M^{\nu}\) and \(f\). The \(\mu^{+}e^{-}\) run of \(\mu\)TRISTAN will be uniquely sensitive to \(Y_{e\mu}\) and \(Y_{\mu e}\), see Fig. 1(a)-(c), so we investigate \(Y\) textures where one of these entries is non-vanishing, which is hardly a restriction. The simultaneous presence of \(Y_{e\mu}\) and \(Y_{ee}\) (or \(Y_{\mu\mu}\)) however would induce large LFV amplitudes, e.g. \(\mu\to e\gamma\) and \(\mu\to 3e\)[55; 56; 57; 58; 59], leaving little parameter space for \(\mu\)TRISTAN to probe. To evade LFV constraints and simplify our analysis, we will set as many \(Y\) entries to zero as possible, leading to the four benchmark textures
\[Y_{A_{1}} \propto\begin{pmatrix}0&1&0\\ 0&0&-\frac{2m_{e}}{m_{\mu}}\frac{M^{\nu}_{\mu e}}{M^{\nu}_{\mu\nu}}\\ 0&0&0\end{pmatrix}\sim\begin{pmatrix}0&1&0\\ 0&0&0.0035\\ 0&0&0\end{pmatrix}, \tag{4}\] \[Y_{B_{2}} \propto\begin{pmatrix}0&1&0\\ -\frac{m_{e}}{m_{\mu}}\frac{M^{\nu}_{\mu e}}{M^{\nu}_{\mu\nu}}&0&0\\ 0&0&0\end{pmatrix}\sim\begin{pmatrix}0&1&0\\ 0.013&0&0\\ 0&0&0\end{pmatrix}, \tag{5}\] \[Y_{B_{3}} \propto\begin{pmatrix}0&0&1\\ -\frac{m_{e}}{2m_{\mu}}\frac{M^{\nu}_{\mu e}}{M^{\nu}_{\mu\nu}}&0&0\\ 0&0&0\end{pmatrix}\sim\begin{pmatrix}0&0&1\\ 0.0023&0&0\\ 0&0&0\end{pmatrix}, \tag{6}\] \[Y_{B_{4}} \propto\begin{pmatrix}0&1&0\\ 0&0&0\\ -\frac{m_{e}}{2m_{\tau}}\frac{M^{\nu}_{\mu e}}{M^{\nu}_{\mu\nu}}&0&0\end{pmatrix}\sim\begin{pmatrix}0&1&0\\ 0&0&0\\ 0.00013&0&0\end{pmatrix}. \tag{7}\]
All these \(Y\) textures lead to viable two-zero textures in \(M^{\nu}\)[60], indicated by their common name as a subscript, following the nomenclature of Ref. [61]. The \(M^{\nu}\) two-zero textures predict the unknown parameters in the neutrino sector, i.e., the lightest neutrino mass and the three phases. We show in Tab. 1 the predictions for the sum of neutrinos masses \(\sum_{j}m_{j}\) (testable via cosmology [62]), the effective mass parameter for neutrinoless double beta decay \(\langle m_{\beta\beta}\rangle=\sum_{i}U_{ei}^{2}m_{i}\) (testable in the next-generation experiments [63]), and the Dirac CP phase (testable in neutrino oscillation experiments [64; 65]). Notice that the \(\sum m_{\nu}\) predictions of the \(B\) textures are already in tension [66] with limits from cosmology, \(\sum m_{\nu}<0.12\,\text{eV}\)[67],3 but perfectly in line with laboratory constraints [71].
Footnote 3: Even stronger limits have been obtained in Refs. [68; 69], while mild indications of a nonzero sum of neutrino masses (in tension with the stringent Planck limits) was suggested in Ref. [70].
The many zeros in these four \(Y\) benchmarks ensure highly suppressed LFV. Indeed, neither of them give rise to the most stringent LFV modes, \(\mu\to e\gamma\) and \(\mu\to 3e\), despite the non-zero \(e\mu\) entry in \(Y\). However, all cases induce muonium-antimuonium oscillation [72; 73; 74; 75] through those \(e\mu\) entries, which will turn out to be an important constraint. In addition, all textures except for \(Y_{B_{2}}\) also give rise to LFV tauon decays. Furthermore, all textures contribute to \((g-2)_{\mu}\), although the \(2\sigma\)-preferred region turns out to be already excluded by the muonium constraint.
The overall scale of \(Y\) is degenerate with \(f\) and \(\kappa\) from Eq. (2) and can effectively be adjusted at will. The \(e\mu\) entry of \(Y\) is then a free parameter, subject only to perturbative unitarity constraints. The second non-zero entry of \(Y\) is not free, however, but rather predicted by lepton masses and neutrino mass matrix entries. The latter are essentially predicted due to the two-zero textures in \(M^{\nu}\), allowing us to predict the \(Y\) entries, as already shown above. For \(A_{1}\), \(B_{2}\), and \(B_{4}\), we find a large \(e\mu\) entry in \(Y\) that drives the \(H\) production at \(\mu\)TRISTAN, plus a suppressed second \(Y\) entry that induces LFV. For \(B_{3}\), the \(e\tau\) entry dominates and \(\mu\)TRISTAN's reach is severely limited by tau LFV. Notice that we are focusing on such extreme textures just for the sake of illustration to emphasize \(\mu\)TRISTAN's complementarity to other experimental probes.
Assuming \(H\) to be the lightest scalar, the textures \(Y_{A_{1}}\), \(Y_{B_{3}}\), and \(Y_{B_{4}}\) lead to \(\tau^{-}\to\mu^{-}\mu^{\pm}e^{\mp}\), \(\tau^{-}\to e^{-}\mu^{\pm}e^{\mp}\), and \(\tau^{-}\to e^{-}e^{\pm}\mu^{\mp}\), respectively, which give limits of order \(|Y_{\tau\alpha}Y_{\beta\delta}|<(m_{H}/5\,\text{TeV})^{2}\), as shown by the solid black lines in Fig. 2. For all textures except \(B_{3}\) these are very suppressed by the small \(Y_{\tau\alpha}\) entry. For those textures,
\begin{table}
\begin{tabular}{c|c|c|c|c} name & texture zeros & \(\sum_{i}m_{j}/\text{eV}\) & \(\langle m_{\beta\beta}\rangle/\text{eV}\) & \(\delta_{\text{CP}}/^{\circ}\) \\ \hline \(A_{1}\) & \(M_{ee}\), \(M_{\mu e}\) & 0.062–0.071 & 0 & 44–341 \\ \(B_{2}\) & \(M_{\tau\tau}\), \(M_{\mu e}\) & \(>0.13\) & \(>0.036\) & 85-90 \(\wedge\) 270-275 \\ \(B_{3}\) & \(M_{\mu\mu}\), \(M_{\mu e}\) & \(>0.16\) & \(>0.047\) & 87-90 \(\wedge\) 270-273 \\ \(B_{4}\) & \(M_{e\tau}\), \(M_{\tau\tau}\) & \(>0.14\) & \(>0.039\) & 90-94 \(\wedge\) 266-270 \\ \end{tabular}
\end{table}
Table 1: Predictions for the sum of neutrino masses \(\sum_{j}m_{j}\), the effective \(0\nu\beta\beta\) Majorana neutrino mass \(\langle m_{\beta\beta}\rangle\), and the Dirac CP phase \(\delta_{\text{CP}}\) from the texture zeros employed in the Zee model, using the \(3\sigma\) normal-ordering ranges for the oscillation parameters from NuFit 5.2 [52].
as well as for the \(Y_{B_{2}}\) texture which does not give rise to tau (or muon) LFV decay, the most important LFV process is the \(|\Delta L_{\mu}|=|\Delta L_{e}|=2\) conversion of muonium (\(M=e^{-}\mu^{+}\)) to antimuonium (\(\bar{M}=e^{+}\mu^{-}\)) [72; 73; 74; 75], which only requires the \(Y_{e\mu}\) entry we are interested in for \(\mu\)TRISTAN. The conversion probability is currently limited to \(P(M\leftrightarrow\bar{M})<8.3\times 10^{-11}\) at 90% C.L. by the MACS experiment at PSI [76], while a sensitivity at the level of \(\mathcal{O}(10^{-14})\) is expected in the future by the proposed MACE experiment [77]. The current MACS limit sets stringent constraints on the Yukawa couplings \(Y_{e\mu}\) and \(Y_{\mu e}\):
\[|Y_{e\mu,\mu e}|<\frac{m_{H}}{0.85\,\mathrm{TeV}}\,. \tag{8}\]
This is the most important limit for \(\mu\)TRISTAN, as shown in Fig. 2 by the gray-shaded region (current) and black dotted line (future).
The muonium limit can be significantly weakened due to destructive interference in the \(M-\bar{M}\) amplitude [78] if we choose \(m_{A}\simeq m_{H}\), which renders even the future MACE projection insensitive to our parameter space of interest. However, for \(m_{H}\simeq m_{A}\ll m_{H^{+}}\), we would generate large oblique parameters due to custodial symmetry breaking [79; 80]; this puts an upper limit on the mass splitting between the neutral and charged scalars in the Zee model [54; 78]. On the other hand, the leptophilic charged scalars in this model are constrained from slepton searches at the LHC because the slepton decay \(\tilde{\ell}^{+}\to\ell^{+}\tilde{\chi}^{0}\) mimics a charged scalar decay \(H^{+}\to\ell^{+}\nu\) in the massless neutralino limit. The current LHC bound is \(m_{H^{+}}>425\) GeV at 90% CL [81] for BR(\(H^{+}\to\mu^{+}\nu_{e}\)) = 1. To evade the muonium bound while satisfying the global electroweak precision constraint [82; 83], we then require \(m_{H}\simeq m_{A}\gtrsim 320\,\mathrm{GeV}\), making direct \(H\) production in \(\mu\)TRISTAN's \(\mu^{+}e^{-}\) mode difficult. To extend our analysis to lighter \(H\), we therefore assume the scalar hierarchy \(m_{H}\ll m_{A}\simeq m_{H^{+}}\), subject to the muonium constraint from Eq. (8).4 Moreover, to set the scale of neutrino masses, we choose the \(f\) couplings to be much smaller than \(Y\) and can hence neglect the \(\eta^{\pm}\)-mediated processes at \(\mu\)TRISTAN entirely.
Footnote 4: Note that our results are symmetric under \(m_{H}\leftrightarrow m_{A}\); we simply choose \(H\) to be the lighter one for concreteness.
Having established our benchmark scenarios and relevant LFV signatures, we can study this region of the Zee-model parameter space at \(\mu\)TRISTAN. The relevant Feynman diagrams and processes are shown in Fig. 1. Away from the \(s\)-channel resonance at \(\sqrt{s}\sim m_{H}\), the dilepton cross section takes on the simple form
\[\sigma(\mu^{+}e^{-}\to\mu^{-}e^{+})\simeq\frac{|Y_{e\mu}|^{4}}{64\pi s}\times\begin{cases}1\,,&m_{H}\ll\sqrt{s}\,,\\ \dfrac{s^{2}}{12m_{H}^{4}}\,,&m_{H}\gg\sqrt{s}\,.\end{cases} \tag{9}\]
This was numerically verified in MadGraph5_aMC@NLO[41] using the general 2HDM FeynRules model file [84]. The exact analytic expression for the cross section is not very illuminating, and therefore, we do not show it here. We demand this cross section to be of order 0.1 fb (after applying the cuts specified in Sec. II) for a discovery, since this flavor-violating channel is background-free. The textures \(A_{1}\), \(B_{2}\), and \(B_{4}\) dominantly induce this channel.5 We show the \(\mu\)TRISTAN reach of this process \(\mu^{+}e^{-}\to\mu^{-}e^{+}\) in Fig. 2 (solid red curve), after applying the basic trigger cuts. We find that the \(\mu\)TRISTAN sensitivity surpasses the current limit from muonium conversion for \(m_{H}>50\,\mathrm{GeV}\). The \(B_{3}\) texture is the only one that is already too constrained by tau LFV to give large \(\sigma(\mu^{+}e^{-}\to\ell_{\alpha}^{+}\ell_{\beta}^{-})\). Future muonium data
Figure 2: \(\mu\)TRISTAN sensitivity to the Zee model parameter space for various channels as shown in Fig. 1. The shaded regions are excluded: Purple (pink) shaded from LEP (LHC) dilepton data, green shaded from \((g-2)_{\mu}\), and gray shaded from muonium oscillation. The future muonium (ILC) sensitivity is shown by the black (purple) dashed line (curve). The solid black lines show the \(\tau\) LFV constraints for different \(Y\) textures \((A_{1},B_{3},B_{4})\).
Figure 1: Relevant Feynman diagrams for the processes involving the neutral scalar \(H\) in the Zee model at \(\mu\)TRISTAN.
can cover almost the entire relevant parameter space for \(\mu\)TRISTAN's dilepton mode in the Zee model, offering confirmation potential in case of a discovery.
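A rough numerical illustration of Eq. (9) is given below: it evaluates the two limiting expressions in fb (using 1 GeV\({}^{-2}\simeq 3.894\times 10^{11}\) fb) and the coupling needed to reach the 0.1 fb benchmark. The resonance region and kinematic cuts are ignored in this sketch.

```python
import numpy as np

# Sketch of Eq. (9), valid away from the s-channel resonance.
GeV2_to_fb = 3.894e11
sqrt_s = 346.0                     # GeV, mu+ e- run
s = sqrt_s**2

def sigma_light_fb(Y):             # m_H << sqrt(s)
    return Y**4 / (64 * np.pi * s) * GeV2_to_fb

def sigma_heavy_fb(Y, mH):         # m_H >> sqrt(s), contact-like regime
    return Y**4 / (64 * np.pi * s) * s**2 / (12 * mH**4) * GeV2_to_fb

print(f"light H, |Y_emu| = 0.1: sigma ~ {sigma_light_fb(0.1):.2f} fb")
for mH in (500.0, 1000.0, 2000.0):
    Y_needed = (0.1 / sigma_heavy_fb(1.0, mH)) ** 0.25
    print(f"m_H = {mH:5.0f} GeV: |Y_emu| ~ {Y_needed:.2f} needed for 0.1 fb")
```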
In Fig. 2, we also show the existing collider constraints from LEP \(e^{+}e^{-}\to\mu^{+}\mu^{-}\) data (purple shaded) [85; 86] and from LHC \(pp\to e\mu\) data (pink shaded) [87; 88].6 The future ILC sensitivity from \(e^{+}e^{-}\to\mu^{+}\mu^{-}H\) is also shown by the pink dashed curve [51; 89; 90] for comparison with the \(\mu\)TRISTAN sensitivity. The green-shaded region is excluded by demanding the \(H\) contribution to \((g-2)_{\mu}\) not to exceed \(5\sigma\) deviation between the world average of the SM prediction [91] and the experimental value [92].7
Footnote 6: As noted in Ref. [78], the \(3.8\sigma\) CMS excess in the \(e\mu\) channel [88] can be explained by \(H\) using lepton PDF, but only for \(m_{H}\simeq m_{A}\) to avoid the muonium limit.
Footnote 7: Taking the BMW result [93] instead of the world average [91] for the SM prediction does not make much difference to our allowed parameter space, which is dominated by the muonium limit.
For the associated production of \(H\) with a photon or a \(Z\) boson (cf. Fig. 1(b)), the cross sections for small \(m_{H}\ll\sqrt{s}\) take the form
\[\sigma(\mu^{+}e^{-}\to H\gamma)\simeq\frac{\alpha_{\rm EM}|Y_{e \mu}|^{2}}{8s}\,\log\left(\frac{s}{m_{e}m_{\mu}}\right), \tag{10}\] \[\sigma(\mu^{+}e^{-}\to HZ)\simeq\frac{\alpha_{\rm EM}|Y_{e\mu}|^{2} \left(s-m_{Z}^{2}\right)}{32s_{w}^{2}c_{w}^{2}s^{2}}\left[\frac{s}{4m_{Z}^{2}}\right.\] (11) \[\left.-(1-2s_{w}^{2}+4s_{w}^{4})-(1-4s_{w}^{2}+8s_{w}^{4})\log \left(\frac{m_{H}m_{Z}}{s-m_{Z}^{2}}\right)\right],\]
where \(\alpha_{\rm EM}\) is the electromagnetic fine-structure constant, and \(s_{w}\equiv\sin\theta_{w}\) (\(c_{w}\equiv\cos\theta_{w}\)) is the (co)sine of the weak mixing angle. These cross sections are typically larger than the dilepton channel but are open only for \(m_{H}\lesssim\sqrt{s}\) for the photon case (or \(\sqrt{s}-m_{Z}\) for the \(Z\) case). The photon cross section exhibits an infrared divergence for \(\sqrt{s}\to m_{H}\) that is regulated by the cut \(p_{T}^{\gamma}>10\,\)GeV, reducing the total cross section compared to the analytical expression above. The \(Z\) cross section is well behaved near the kinematic threshold but diverges for \(m_{H}\to 0\), which is of no concern for us. As can be seen in Fig. 2, both modes are important for \(\mu\)TRISTAN and cover parameter space that cannot be probed with other colliders or LFV.8 The \(H\) scalars subsequently decay promptly into \(\mu^{\pm}e^{\mp}\), half of which are background-free even without any momentum reconstruction.
Footnote 8: The only exception is texture \(B_{3}\), for which only a tiny region survives the tau LFV bound.
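For orientation, Eq. (10) can be evaluated directly in the light-\(H\) limit; the sketch below converts it to fb for a few couplings. The \(p_{T}^{\gamma}>10\) GeV cut is not applied here and would reduce the rate, especially near threshold.

```python
import numpy as np

# Sketch of Eq. (10): associated H-gamma production in the m_H << sqrt(s) limit.
GeV2_to_fb = 3.894e11
alpha_em = 1.0 / 137.036
s = 346.0**2                                      # GeV^2
m_e, m_mu = 0.511e-3, 0.1057                      # GeV

def sigma_H_gamma_fb(Y):
    return alpha_em * Y**2 / (8 * s) * np.log(s / (m_e * m_mu)) * GeV2_to_fb

for Y in (1e-3, 3e-3, 1e-2, 0.1):
    print(f"|Y_emu| = {Y:7.3g}: sigma(mu+ e- -> H gamma) ~ {sigma_H_gamma_fb(Y):8.3g} fb")
```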
The Zee model also makes predictions for \(\mu\)TRISTAN's \(\mu^{+}\mu^{+}\) mode, as there are \(t\)-channel diagrams for \(\mu^{+}\mu^{+}\to\ell^{+}\ell^{\prime+}\) (cf. Fig. 1(d)). All textures except \(B_{3}\) induce the background free \(\mu^{+}\mu^{+}\to e^{+}e^{+}\), with testable allowed cross sections for \(m_{H}>300\,\)GeV, as shown in Fig. 2 by the brown curve. We find that the \(H\) sensitivity in this channel is worse than or comparable to the dilepton channel in the \(\mu^{+}e^{-}\) mode, so it can only be used as a secondary channel for verifying any signal found in \(\mu\)TRISTAN's first run.
Before we move on to other neutrino mass models, let us briefly comment on the discrepancy in the muon magnetic moment [92]. While the status of the SM prediction is currently unclear, it is worthwhile to entertain the possibility that the discrepancy is real and a sign for new physics. The benchmark values taken above are incapable of explaining \((g-2)_{\mu}\) due to LFV constraints. A recent study [54] has shown that the Zee model is in principle able to explain \((g-2)_{\mu}\), but this requires one of the following textures:
\[Y=\begin{pmatrix}0&0&0\\ 0&\times&\times\\ 0&\times&\times\end{pmatrix}\,\,\text{or}\,\,\begin{pmatrix}\times&0&\times\\ 0&\times&0\\ \times&0&\times\end{pmatrix}. \tag{12}\]
The first (second) requires \(M_{ee}^{\nu}=0\) (\(M_{\mu\mu}^{\nu}=0\)) and effectively conserves electron (muon) number, which makes it obvious that muon LFV is evaded, including muonium conversion. The first texture could only show up in \(\mu\)TRISTAN's \(\mu^{+}\mu^{+}\) run via \(\mu^{+}\mu^{+}\to\mu^{+}\tau^{+}\) or \(\tau^{+}\tau^{+}\); the second texture can give \(\mu^{+}e^{-}\to\mu^{+}\tau^{-}\) in \(\mu\)TRISTAN's first run. A dedicated study of this scenario will be postponed until the \((g-2)_{\mu}\) anomaly is clarified.
Overall, we see that \(\mu\)TRISTAN could probe the Zee model in regions of parameter space that are inaccessible by other means. An exhaustive study of the Zee model at \(\mu\)TRISTAN goes beyond the scope of this work, but the benchmarks discussed here indicate a very promising situation.
### Zee-Babu model
In the Zee-Babu model [30; 31], we extend the SM by two \(SU(2)_{L}\)-singlet scalars \(h^{+}\) and \(k^{++}\) with hypercharge 1 and 2, respectively, which have the following couplings relevant for neutrino masses:
\[-\mathcal{L}\supset f\bar{L}^{c}Lh^{+}+g\bar{\ell}^{c}\ell\,k^{++}+\mu h^{-}h^ {-}k^{++}+\text{H.c.} \tag{13}\]
The matrix \(g\) (\(f\)) is symmetric (antisymmetric) in flavor space. Taken together, these couplings break lepton number and generate a Majorana neutrino mass matrix
\[M^{\nu}\simeq 16\mu\,I(m_{h},m_{k})\,fM_{\ell}g^{*}M_{\ell}f\,, \tag{14}\]
where \(I(m_{h},m_{k})\) is a two-loop function [94; 95]. The antisymmetry of \(f\) leads to \(\det M^{\nu}=0\) and thus predicts one massless neutrino.
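The statement that the antisymmetry of \(f\) forces \(\det M^{\nu}=0\), and hence one massless neutrino, is easy to verify numerically with random matrices:

```python
import numpy as np

# Quick check: for a 3x3 antisymmetric f, M^nu ~ f Ml g* Ml f (Zee-Babu
# structure) has vanishing determinant, i.e. one massless neutrino.
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
f = A - A.T                                        # antisymmetric
S = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
g = S + S.T                                        # symmetric
Ml = np.diag([0.511e-3, 0.1057, 1.777])            # charged-lepton masses, GeV

M_nu = f @ Ml @ g.conj() @ Ml @ f                  # up to the overall loop factor
print("det M^nu =", np.linalg.det(M_nu))           # ~0 up to rounding
svals = np.linalg.svd(M_nu, compute_uv=False)
print("singular values:", svals)                   # the smallest one is ~0
```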
Similar to the Zee model, we can make the overall scale of \(g\) as large as we want and compensate for that with a smaller \(f\) matrix or \(\mu\) coupling. For simplicity we assume \(h^{+}\) to be very heavy and the \(f\) couplings to be small, effectively decoupling \(h^{+}\). This leaves us with the doubly charged \(k^{++}\) with coupling matrix \(g\). At \(\mu\)TRISTAN's \(\mu^{+}\mu^{+}\) run, this \(k^{++}\) leads to dilepton and associated production signatures as long as \(g_{\mu\mu}\neq 0\), see Fig. 3(a)-(b). We show \(\mu\)TRISTAN's reach and competing constraints in Fig. 4, having computed the cross sections with MadGraph5_aMC@NLO [41] using the model file given in Ref. [96].
\(\mu\)TRISTAN can easily probe a large region of parameter space as long as \(g_{e\mu}\) is somewhat suppressed compared to \(g_{\mu\mu}\) to evade the \(\mu\to e\gamma\) constraint. This is hardly a
restriction and we can even find \(g\) textures that eliminate almost all LFV constraints, e.g.
\[g^{*}\propto\begin{pmatrix}0&0&0\\ 0&1&-\frac{m_{\mu}}{m_{\tau}}\frac{M^{\nu}_{\mu\tau}}{M^{\nu}_{\tau\tau}}\\ 0&-\frac{m_{\mu}}{m_{\tau}}\frac{M^{\nu}_{\mu\tau}}{M^{\nu}_{\tau\tau}}&\frac{m_{\mu}^{2}}{m_{\tau}^{2}}\frac{M^{\nu}_{\mu\mu}}{M^{\nu}_{\tau\tau}}\end{pmatrix}\sim\begin{pmatrix}0&0&0\\ 0&1&0.1\\ 0&0.1&5\times 10^{-3}\end{pmatrix}. \tag{15}\]
This structure does not lead to any LFV involving electrons. The only process we could worry about is \(\tau\to 3\mu\), which is however not particularly stringent and could be further suppressed by tuning \(|M^{\nu}_{\mu\tau}/M^{\nu}_{\tau\tau}|\ll 1\). \(\mu\)TRISTAN has a large region of testable parameter space even without this tuning. Notice that the dominant \(g_{\mu\mu}\) entry here leads to the dominant channels \(\mu^{+}\mu^{+}\to\mu^{+}\mu^{+}\) and \(\mu^{+}\mu^{+}\to\gamma/Z\,(k^{++}\to\mu^{+}\mu^{+})\); these are not exactly background free, even though invariant mass distributions and angular observables can be used to isolate new-physics contributions. The subleading channels \(\mu^{+}\mu^{+}\to\mu^{+}\tau^{+}\) and \(\mu^{+}\mu^{+}\to\gamma/Z\,(k^{++}\to\mu^{+}\tau^{+})\) on the other hand are smoking-gun observables.
The texture from Eq. (15) does not induce any interesting signatures in the \(\mu^{+}e^{-}\) run, but other textures might, see Fig. 3. For example, a \(\mu\mu\) and \(ee\) entry in \(g\) would give the very clean \(\mu^{+}e^{-}\to\mu^{-}e^{+}\) (in addition to \(\mu^{+}\mu^{+}\to e^{+}e^{+}\)), allowed by current muonium-conversion constraints, as shown in Fig. 4.
We also show other relevant constraints in Fig. 4. The \((g-2)_{\mu}\) excluded region is shown by the black shaded region on top left corner. The vertical pink shaded region is the current LHC bound [43], and the vertical pink dashed line is the future HL-LHC sensitivity [96]. Thus, we find that \(\mu\)TRISTAN will probe a wide range of the Zee-Babu model parameter space well beyond the HL-LHC sensitivity. Similar sensitivities are also achievable at a future \(\mu^{+}\mu^{-}\) collider [97].
### Cocktail model
The cocktail model [32] is an SM extension by two \(SU(2)_{L}\)-singlet scalars \(h^{-}\) and \(k^{++}\), as well as a second Higgs doublet \(H_{2}\). The field content is reminiscent of the Zee and Zee-Babu models, but here an extra \(\mathbb{Z}_{2}\) symmetry is imposed under which \(h^{-}\) and \(H_{2}\) are odd, which leaves the following relevant terms in the Lagrangian:
\[\begin{split}-\mathcal{L}&\supset g\bar{\ell}^{c}\ell\,k^{++}+\mu h^{-}h^{-}k^{++}+\kappa\tilde{H}_{2}^{\dagger}H_{1}h^{-}\\ &\quad+\xi\tilde{H}_{2}^{\dagger}H_{1}h^{+}k^{--}+\frac{\lambda_{5}}{2}(H_{1}^{\dagger}H_{2})^{2}+\text{H.c.}\,,\end{split} \tag{16}\]
where \(g\) is once again a symmetric Yukawa matrix in flavor space. Lepton number is broken explicitly if all the above couplings are non-zero. We assume parameters in the scalar potential so that \(\langle H_{2}\rangle=0\), leaving the \(\mathbb{Z}_{2}\) unbroken. In that case, Majorana neutrino masses arise at three-loop level:
\[M^{\nu}\simeq\frac{F_{\text{cocktail}}}{(16\pi^{2})^{3}\,m_{k^{++}}}\,M_{\ell}gM_{\ell}\,, \tag{17}\]
where \(F_{\text{cocktail}}\) is a complicated dimensionless loop function that depends on scalar masses and couplings [98; 99]. The three-loop suppression factor and additional suppression by charged-lepton masses require large entries in \(g\) that are easily in the non-perturbative regime, even when all scalar masses are close to their experimental limits and the scalar-potential couplings as large as allowed by perturbative unitarity. To keep \(g\) perturbative and evade stringent constraints from muon LFV, one is more or less forced to consider the two-zero texture
Figure 3: Relevant Feynman diagrams for the doubly-charged scalars in the Zee–Babu, cocktail, and triplet seesaw models.
Figure 4: \(\mu\)TRISTAN sensitivity to the Zee–Babu and cocktail model parameter space for various channels as shown in Fig. 3. The shaded purple region is excluded from LHC dilepton data [43], the dashed purple line shows the HL-LHC reach [96]. The diagonal non-solid lines indicate LFV constraints on the coupling products \(|g_{\mu\mu}g_{\alpha\beta}|\). For the Zee–Babu \(g\) texture from Eq. (15), only \(g_{\mu\mu}\) and \(g_{\mu\tau}\) are relevant. For the cocktail-model texture from Eq. (18), mainly \(g_{\mu\mu}\) and \(g_{e\tau}\) are relevant.
for \(M^{\nu}\)[98; 99], which then results in a \(g\) matrix
\[g\propto\begin{pmatrix}0&0&1\\ 0&\frac{m_{e}m_{\tau}}{m_{\mu}^{2}}\frac{M^{\nu}_{\mu\mu}}{M^{\nu}_{e\tau}}&\frac{m_{e}}{m_{\mu}}\frac{M^{\nu}_{\mu\tau}}{M^{\nu}_{e\tau}}\\ 1&\frac{m_{e}}{m_{\mu}}\frac{M^{\nu}_{\mu\tau}}{M^{\nu}_{e\tau}}&\frac{m_{e}}{m_{\tau}}\frac{M^{\nu}_{\tau\tau}}{M^{\nu}_{e\tau}}\end{pmatrix}\sim\begin{pmatrix}0&0&1\\ 0&0.24&0.01\\ 1&0.01&6\times 10^{-4}\end{pmatrix} \tag{18}\]
and the neutrino-parameter predictions from the first row of Tab. 1. The strongest LFV constraint mediated by \(k^{++}\) then comes from \(\tau^{-}\to e^{+}\mu^{-}\mu^{-}\), requiring \(|g_{e\tau}|<0.17\,m_{k^{++}}/\text{TeV}\), although, by coincidence, \(\mu\to e\gamma\) gives essentially the same limit for this texture.
The LFV constraints of this texture are severe enough that \(\mu\)TRISTAN in the \(\mu^{+}e^{-}\) mode would not observe the characteristic \(\mu^{+}e^{-}\to\tau^{+}\mu^{-}\), see Fig. 4. However, \(\mu\)TRISTAN in the \(\mu^{+}\mu^{+}\) run could potentially see \(\mu^{+}\mu^{+}\to e^{+}\tau^{+}\) or \(\mu^{+}\mu^{+}\to k^{++}\gamma/Z\) followed by prompt \(k^{++}\to e^{+}\tau^{+}\) decays.
Notice that the \(\mathbb{Z}_{2}\) symmetry renders the lightest particle among the \(H_{2}\) and \(h^{-}\) stable. We can choose scalar-potential parameters to make this one of the neutral scalars inside \(H_{2}\), which could then form dark matter. We will not discuss this here since there is very limited connection to \(\mu\)TRISTAN.
### Type-II or triplet seesaw
In the type-II or triplet seesaw mechanism [33; 34; 35; 36; 37], we extend the SM by an \(SU(2)_{L}\)-triplet with hypercharge \(+2\), usually written as the \(SU(2)_{L}\) matrix
\[\Delta=\begin{pmatrix}\Delta^{+}/\sqrt{2}&\Delta^{++}\\ \Delta^{0}&-\Delta^{+}/\sqrt{2}\\ \end{pmatrix}. \tag{19}\]
This triplet couples to the left-handed lepton doublets \(L_{e,\mu,\tau}\) and the SM scalar doublet \(H\), giving rise to the Lagrangian
\[-\mathcal{L}\supset Y\bar{L}^{c}i\sigma_{2}\Delta L+\mu H^{\dagger}i\sigma_{ 2}\Delta H^{*}+\text{H.c.} \tag{20}\]
This Lagrangian breaks lepton number and induces a small vacuum expectation value \(\langle\Delta^{0}\rangle=v_{\Delta}/\sqrt{2}\), which in turn generates the Majorana neutrino mass matrix \(M^{\nu}=\sqrt{2}Yv_{\Delta}\). The Yukawa couplings thus inherit the structure from the neutrino mass matrix but come with an unknown scaling factor \(v_{\Delta}\).
In the limit of \(v_{\Delta}\ll v\), the mass eigenstates that dominantly come from the triplet, \(H^{++}\simeq\Delta^{++}\), \(H^{+}\simeq\Delta^{+}\), \(H\simeq\sqrt{2}\text{Re}\,\Delta^{0}\), and \(A\simeq\sqrt{2}\text{Im}\,\Delta^{0}\), have mass splittings
\[m_{H}^{2}\simeq m_{A}^{2}\simeq m_{H^{+}}^{2}+\frac{\lambda_{4}v^{2}}{4}\simeq m _{H^{++}}^{2}+\frac{\lambda_{4}v^{2}}{2}\,, \tag{21}\]
specified exclusively by the coupling \(\lambda_{4}\,H^{\dagger}\Delta\Delta^{\dagger}H\)[100; 101]. For simplicity we will assume an almost degenerate spectrum here, even though a mass splitting could resolve [102; 103; 104] the recently observed discrepancy in CDF's \(W\)-boson mass measurement [105]. The large Yukawa couplings required to produce \(\Delta^{++}\) at \(\mu\)TRISTAN also lead to strong constraints from searches at the LHC, which exclude masses below 1 TeV [43] and can be improved at the HL-LHC [106].
Even more importantly, the triplet scalars induce LFV decays, for example [107; 108; 109; 101]
\[\text{BR}(\mu\to e\gamma)\simeq\frac{\alpha_{\text{EM}}\left|(M^{\nu\dagger}M^{\nu})_{e\mu}\right|^{2}}{48\pi G_{F}^{2}v_{\Delta}^{4}}\left(\frac{1}{m_{H^{+}}^{2}}+\frac{8}{m_{H^{++}}^{2}}\right)^{2}, \tag{22}\]
\[\text{BR}(\mu^{+}\to e^{+}e^{-}e^{+})\simeq 4\frac{\left|M^{\nu}_{ee}M^{\nu}_{\mu e}\right|^{2}}{G_{F}^{2}v_{\Delta}^{4}m_{H^{++}}^{4}}\,, \tag{23}\]
where \(G_{F}\) is the Fermi coupling constant. \(\mu\to e\gamma\) is particularly important because the prefactor \(\left|(M^{\nu}\mathord{\shortstack{+}{\small 1}\mskip-4.0mu }M^{\nu})_{e\mu} \right|^{2}\) is completely specified by the known neutrino oscillation parameters [110] and is limited from below by \((0.016\,\text{eV})^{4}\), using the \(2\sigma\) range from NuFit 5.2 [52]. The current limit \(\text{BR}(\mu\to e\gamma)<4.2\times 10^{-13}\)[111] then gives \(m_{\Delta^{++}}>1.5\,\text{TeV}(\text{eV}/v_{\Delta})\). The \(\mu\to e\gamma\) limit can be improved by almost an order of magnitude with MEG-II [112; 113] but will eventually be surpassed by muon-conversion in Mu2e [114; 115], which probes the same coupling in our case and effectively has a sensitivity down to \(\text{BR}(\mu\to e\gamma)<2\times 10^{-14}\). This would improve the limit to \(m_{\Delta^{++}}>3\,\text{TeV}(\text{eV}/v_{\Delta})\).
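The quoted bound \(m_{\Delta^{++}}\gtrsim 1.5\,\text{TeV}\,(\text{eV}/v_{\Delta})\) follows directly from Eq. (22); a sketch, assuming degenerate \(m_{H^{+}}=m_{H^{++}}\) and the flavor-guaranteed lower bound \(|(M^{\nu\dagger}M^{\nu})_{e\mu}|\geq(0.016\,\text{eV})^{2}\) mentioned above:

```python
import numpy as np

# Sketch of the mu -> e gamma bound from Eq. (22), assuming m_{H+} = m_{H++} = m
# and |(M^nu+ M^nu)_emu| >= (0.016 eV)^2 as quoted in the text.
alpha_em = 1.0 / 137.036
G_F = 1.1664e-5                       # GeV^-2
BR_limit = 4.2e-13                    # current MEG limit
M2_emu = (0.016e-9) ** 2              # GeV^2

def br_meg(m_GeV, v_delta_eV):
    v = v_delta_eV * 1e-9             # GeV
    return alpha_em * M2_emu**2 / (48 * np.pi * G_F**2 * v**4) * (9.0 / m_GeV**2) ** 2

for v_delta in (1.0, 0.1, 10.0):      # eV
    m_bound = (alpha_em * M2_emu**2 * 81
               / (48 * np.pi * G_F**2 * (v_delta * 1e-9) ** 4 * BR_limit)) ** 0.25
    assert np.isclose(br_meg(m_bound, v_delta), BR_limit)
    print(f"v_Delta = {v_delta:5.1f} eV: m_Delta++ > {m_bound / 1e3:.2f} TeV")
```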
Notice that the other LFV decays, notably \(\mu\to 3e\)[116], could give even stronger limits on \(v_{\Delta}m_{\Delta^{++}}\), especially with the upcoming Mu3e [117], but depend on the so-far unknown neutrino parameters such as the lightest neutrino mass and the Majorana CP phases. These allow us, for example, to set \(M^{\nu}_{ee}=0\) and thus eliminate \(\mu\to 3e\) entirely. For simplicity we will therefore ignore these other LFV processes and only consider the unavoidable \(\mu\to e\gamma\).
In Fig. 5, we show the LFV and LHC constraints together with the \(\mu\)TRISTAN sensitivities in various
Figure 5: \(\mu\)TRISTAN sensitivity to the triplet/type-II seesaw model parameter space for various channels as shown in Fig. 3. We have set \(M^{\nu}_{\mu\mu}=0.05\,\text{eV}\) to fix the \(\Delta^{++}\mu\mu\) coupling, see text for details.
channels. We have implemented the model file in FeynRules[84] and computed the cross sections using MadGraph5_aMC@NLO[41]. To specify the production Yukawa coupling \(Y_{\mu\mu}\) we set \(M^{\nu}_{\mu\mu}=0.05\,\)eV; this satisfies the cosmology bound \(\sum m_{\nu}<0.12\,\)eV[67], otherwise we could go to larger \(M^{\nu}_{\mu\mu}\) values and increase the \(\mu\)TRISTAN cross sections without changing the LFV bound.
The cross section \(\sigma(\mu^{+}\mu^{+}\to\ell^{+}_{\alpha}\ell^{+}_{\beta})\) scales with \(|M^{\nu}_{\mu\mu}|^{2}|M^{\nu}_{\alpha\beta}|^{2}\), at least away from the resonance. The on-shell produced \(\Delta^{++}\) has decay rates into charged leptons proportional to \(|M^{\nu}_{\alpha\beta}|^{2}\). Our current lack of information about the lightest neutrino mass and the CP phases precludes us from making definite predictions for these final states, but this will improve with future neutrino data [118]. Generically, we expect final states with more muons and tauons than electrons at \(\mu\)TRISTAN from \(\Delta^{++}\) processes for normal-ordered neutrino masses. Di-boson decays \(\Delta^{++}\to W^{+}W^{+}\) are heavily suppressed by \(v_{\Delta}\) in our region of interest [119; 120]. Similarly, the cascade decays of \(\Delta^{++}\) involving neutral or singly-charged scalars depend on the choice of mass spectrum and can be ignored here.
Unlike for the doubly charged scalars in the Zee-Babu or cocktail models, the \(\Delta^{++}\) in the triplet model cannot generate clean \(\mu^{+}e^{-}\to\ell^{+}\ell^{-}\) signatures in \(\mu\)TRISTAN's first run, since this region of parameter space is already excluded by \(\mu\to e\gamma\) (Fig. 5).
### Other neutrino mass models
The \(\mu^{+}\mu^{+}\) mode of \(\mu\)TRISTAN will also be uniquely sensitive to the LNV/LFV signatures arising from other neutrino mass models. For instance, the heavy neutral leptons appearing in type-I [121; 122; 123; 124; 125] and type-III [126] seesaw models will induce a clean LNV signal \(\mu^{+}\mu^{+}\to W^{+}W^{+}\to\text{jets}\), which is like an inverse neutrinoless double beta decay [127] but in the muon sector [21]. This channel has been recently analyzed in Refs. [128; 27], so we will not repeat this analysis here. Similarly, the \(\mu\)TRISTAN sensitivities for the neutral and/or doubly-charged scalars derived here can also be applied to other models, such as the left-right symmetric model [129; 130; 131], and other radiative neutrino mass models [58], although the connection to neutrino mass may not be as direct as in the models studied here.
## IV Conclusion
Neutrino masses provide the most convincing laboratory evidence for physics beyond the SM, making searches for the underlying new particles highly motivated. In this article, we have shown that \(\mu^{+}e^{-}\) and \(\mu^{+}\mu^{+}\) colliders in the vein of the recently proposed \(\mu\)TRISTAN experiment offer a new way to search for a variety of neutrino mass models. As exemplified by several benchmark scenarios of the popular Zee, Zee-Babu, cocktail, and triplet seesaw models, we showed that \(\mu\)TRISTAN could probe regions of parameter space that are out of reach of other experiments, be it future hadron colliders or future low-energy LFV searches.
## Acknowledgements
The work of BD is supported in part by the U.S. Department of Energy under grant No. DE-SC 0017987 and by a URA VSP fellowship. The work of JH and AT was supported in part by the National Science Foundation under Grant PHY-2210428. For facilitating portions of this research, BD and AT wish to acknowledge the Center for Theoretical Underground Physics and Related Areas (CETUP*), The Institute for Underground Science at Sanford Underground Research Facility (SURF), and the South Dakota Science and Technology Authority for hospitality and financial support, as well as for providing a stimulating environment. BD and JH would like to thank the Fermilab Theory Group for their hospitality during the completion of this work.
|
2302.10173 | Ghost Particles, Entanglement of Historical Epochs and Time Machine | In the article the possibility of creating a time machine, based on the
mechanism of quantum entanglement of macroscopic ordinary (many)partial
configurations and (many) partially ghostly configurations of different
historical epochs belonging to different parallel Everett universes is
investigated | Alexander K. Guts | 2022-12-28T15:12:21Z | http://arxiv.org/abs/2302.10173v1 | Mathematical
**GHOST PARTICLES, ENTANGLEMENT**
**OF HISTORICAL EPOCHS AND TIME MACHINE**
**A.K. Guts**
Dr.Sc. (Phys.-Math.), Professor, e-mail: [email protected]
Dostoevsky Omsk State University, Omsk, Russia
**Abstract. In this article the possibility of creating a time machine is investigated, based on the mechanism of quantum entanglement of macroscopic ordinary (many-)particle configurations and (many-)particle ghost configurations of different historical epochs belonging to different parallel Everett universes.**
**Keywords: Time machine, ghost particles, entanglement of historical epochs, parallel universes.**
## Introduction
In <<The Fabric of Reality>> [1], Deutsch called a universe that is parallel to our universe, in the sense of Everett's interpretation of quantum mechanics, a _shadow_ universe.
Elena Palesheva in the article [2] linked the shadow particles of a parallel universe with the _ghost particles_ of our Universe. She also supported Deutsch's point that shadow particles, i.e. ghost particles, can weakly interact with the ordinary particles of our universe through quantum interference.
In the articles [3, 4, 5] we suggested linking the space-time trajectories appearing in Wheeler-DeWitt geometrodynamics with real parallel historical epochs, i.e. various time periods of human civilization. The transition from one epoch to another, carried out by launching a special apparatus called a time machine, was realized through the mechanism of quantum entanglement of compact spatio-temporal regions of different historical epochs. However, in those articles we did not say how this entanglement occurs.
In the present article we propose to consider as such a mechanism the entanglement of macroscopic ordinary (many-)particle configurations with (many-)particle ghost configurations. The latter are configurations from a parallel universe, following the idea of Elena Palesheva.
## 1 Ghost particles
A ghost particle is a particle whose energy-momentum tensor is zero but whose current is non-zero; in the case of a bispinor the current is
\[j^{i}=\psi^{+}\gamma^{i}\psi.\]
Therefore, such a particle carries neither energy nor momentum.
Such non-Abelian solutions of the Yang-Mills equations were found by Loos in 1967 [6]. Recall that the quanta of the Yang-Mills fields are vector particles (i.e. bosons with spin 1) with zero mass. However, through the mechanism of spontaneous symmetry breaking, physical Yang-Mills fields can acquire a non-zero mass. When considering the weak interaction, the quantum of the Yang-Mills field is taken to be a W-particle with a charge of +1, 0 or -1. For the strong interaction, the quanta of the Yang-Mills field are the gluons, which hold protons and neutrons together.
In the case of particles propagating in outer space, ghost particles were discovered in 1974 by Griffith [7], and the corresponding solutions of the Einstein-Dirac equations were then found and published in the works [8, 9, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 22, 23] of the 1970s-80s of the XX century and, already in the XXI century, in [24, 25, 26, 27, 28, 29, 30]. Neutrino ghosts and massive bispinor ghosts were found in the Robertson-Walker universes, i.e. in Friedmann's universes [30]. "What is most interesting about the ghost solutions is that the neutrino field propagates in the background space-time without changing it, which means we cannot detect ghost neutrinos by their gravitational effects" [25].
"The first reaction, writes M. Novello in the article "The ghostly foundations for neutrinos" (1974), -- these solutions may be rejected on physical grounds. However, the fact that they seem to be present in any geometry1, makes their less superficial study expedient. To be able to study some of the details of these ghosts, we must find not only one, but also a class of these solutions in the given geometry". And he discovers a surprising result: he gives an example space-time generated by a neutrino, and this itself neutrino is a linear combination of neutrino-ghosts [12].
Footnote 1: It is stated in [13] that ghost neutrinos (described by the generally covariant Weyl equation \(\varphi^{A}_{||A\dot{X}}=0\)) exist only in algebraically special space-times, with the neutrino flux vector parallel to one of the principal null vectors of the conformal tensor.
However, before the work of Elena Palesheva, no one had given any interpretation to ghost particles.
Combining the results of Palesheva and Novello, we can state the following hypothesis.
The universe in which we are aware of our presence consists of real particles, i.e. particles with a non-zero energy-momentum tensor. Ghost particles are guests from parallel universes. But there are infinitely many parallel universes, and all of them are symmetric with respect to our analysis (there is no distinguished "our" Universe); therefore, on their own only ghost particles can exist. Energy and momentum are imparted to a particle by the specific universe under consideration, i.e. the one fixed by someone's consciousness, if, from the mathematical point of view, the particle is a linear combination of ghost particles. But decomposing a particle into such a linear combination requires some mechanism, present in the universe, which brings about and confirms the fact of the decomposition.
Obviously, this is the same mechanism that fixes a particular universe. And this mechanism is consciousness, that is, an observer present and living in this universe.
Consciousness, by its attention to the universe surrounding it, performs an act of creation, expressed in the generation of linear combinations of ghost particles: from the chaos of ghost particles to order ("linear combinations"), from the simple to the complex.
It is important that Palesheva showed that ghost particles, i.e. particles of a parallel universe, interact with our particles, and that this interaction manifests itself in the form of quantum interference.
However, another interaction between the particles of our Universe and a parallel one is possible. This is quantum entanglement. Let us describe this _non-force_ interaction of particles from different universes.
## 2 Examples of ghost particles
Consider spinor particles described by the Dirac equation:
\[i\hbar\gamma^{(k)}\frac{\partial\psi}{\partial x^{k}}-mc\psi=0. \tag{1}\]
Then the bispinor
\[\psi(x)=\left[\begin{array}{c}1\\ 1\\ -1\\ 1\end{array}\right]e^{\frac{mc}{\hbar}x^{2}+f(x^{0}+x^{3})+ig(x^{0}+x^{3})} \tag{2}\]
is a solution to the Dirac equation. Here \(g(x^{0}+x^{3})\) and \(f(x^{0}+x^{3})\) are smooth real functions.
Based on the results of Theorem 12.1 from [31, 32], we obtain that (2) describes a spinor ghost only if \(g(x^{0}+x^{3})=const\in\mathbb{R}\).
Let's take the solution for a real wave in the form:
\[\psi(x)=\left[\begin{array}{c}1\\ 1\\ -1\\ 1\end{array}\right]e^{\frac{mc}{\hbar}x^{2}+i(x^{0}+x^{3})}, \tag{3}\]
and for the spinor ghost we set
\[\phi(y)=\left[\begin{array}{c}1\\ 1\\ -1\\ 1\end{array}\right]e^{\frac{mc}{\hbar}y^{2}}. \tag{4}\]
## 3 Particle entanglement and ghost particles
Let \(|\psi(\pm)\rangle_{r}\) and \(|\phi(\pm)\rangle_{g}\) be the states of a real particle and a ghost corresponding to the bispinor solutions \(\psi(x,\pm)\) and \(\phi(x,\pm)\) of the Dirac equations with different spin projections \(\pm\) [33].
The entangled state of the two particles in this case is described by the ket-vector
\[|\psi(+)\rangle_{r}\otimes|\phi(-)\rangle_{g}+|\psi(-)\rangle_{r}\ \otimes|\phi(+) \rangle_{g}.\]
According to the **ER**=**EPR** statement about the existence of an Einstein-Rosen bridge, or a 3-dimensional wormhole, connecting the locations of entangled particles
(EPR-pairs) [34], we can say that there is a 3-dimensional wormhole connecting a particle of our universe with a shadow particle, or ghost particle, from a parallel universe.
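As a purely illustrative aside, not part of the original argument, the entangled ket-vector written above can be checked numerically. The minimal Python sketch below constructs the normalized state in the basis \(\{|+\rangle,|-\rangle\}\) for each particle and verifies that the reduced state of the real particle is maximally mixed, i.e. it carries one bit of entanglement entropy; all names in the snippet are illustrative.

```python
import numpy as np

# Basis states |+> and |-> for a single spin projection
plus, minus = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# |psi(+)>_r |phi(-)>_g + |psi(-)>_r |phi(+)>_g, normalized
state = np.kron(plus, minus) + np.kron(minus, plus)
state /= np.linalg.norm(state)

# Reduced density matrix of the "real" particle (trace out the ghost index)
psi = state.reshape(2, 2)          # axes: (real, ghost)
rho_r = psi @ psi.conj().T
evals = np.linalg.eigvalsh(rho_r)
entropy = -sum(p * np.log2(p) for p in evals if p > 1e-12)

print("reduced eigenvalues:", np.round(evals, 3))          # [0.5 0.5]
print("entanglement entropy:", round(entropy, 3), "bit")   # 1.0 -> maximally entangled
```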
## 4 How to entangle the particles?
"How to entangle the particles: take a crystal with non-linear optical properties -- that is, such, the interaction of light with which depends on the intensity of this light. For example, triborate lithium, barium beta-borate, potassium niobate. Irradiate it with a laser appropriate wavelength -- and high-energy laser photons radiation will sometimes decay into pairs of entangled photons less energy (this phenomenon is called "spontaneous parametric scattering") and polarized in perpendicular planes. It remains to keep the entangled particles intact and spread as far apart as possible" [35].
As one can see, entanglement mechanisms are already in place, although not yet the ones we need.
## 5 Macroscopic entanglement
To achieve our goal, we need macroscopic multiparticle entanglement. Does it exist? The common opinion is that macroscopic entanglement does not occur in nature outside artificially created situations. What is this opinion based on?
It is known that as the size of physical systems increases, it becomes _harder to completely isolate them from the environment_, and interaction with the environment -- decoherence -- destroys quantum characteristics such as superpositions and entanglement, in particular the entanglement of many-particle configurations from parallel epochs.
The beauty of a quantum description of an object is that such an object is represented by a coherent superposition of waves, which exhibits both an interference pattern and, possibly, entanglement. But even if macro-entanglement exists, it is difficult to detect on a macroscopic scale, since quantum objects interact with a large number of degrees of freedom, and any inaccuracy in the initial data or in the subsequent (coarse-grained) description leads to decoherence: to the loss of coherence between the waves describing all these degrees of freedom, to the disappearance of their interference pattern, and to the loss of entanglement. Consequently, the essence of the quantum description, which offers possibilities that are fantastic from the point of view of classical physics, is lost.
For our goal -- the transfer of a macro-object, a person, to another epoch -- it is necessary to make sure that the laws of quantum mechanics are also valid for macro-objects. To do this, one needs to pay attention to those mechanical degrees of freedom of the macro-object that are well isolated from the environment and therefore well protected from decoherence. We need macroscopic quantum mechanics.
The appearance of such a "mechanics" became possible thanks to the recent progress in quantum optomechanics: physicists were able to use light (sometimes
microwaves) to transform macroscopic mechanical objects into almost pure quantum states <...>. Soon they will be able to allow these mechanical objects to evolve without much decoherence and measure the final states, thereby making comparisons with the predictions of quantum mechanics" [36].
Note that we are using the term "macroscopic" on a purely intuitive level, which is not quite correct for a scientific approach to the problem under study.
It is important to decide in advance what is considered a macroscopic object: "although quantum systems are difficult to maintain and observe at the macroscale, they can be easily created. On the other hand, the question naturally arises: is the obtained state macroscopic? Based on experimental results, we have shown that an entangled state that can be obtained with the help of currently available technologies will include a sufficiently large number of photons, which can be seen with the naked eye" [37]. This makes our approach satisfactory if macroscopicity is a concept related to size. We also mentioned that the components of the entangled state can be easily distinguished by a simple avalanche photodiode, if one looks at the dispersion of the distribution of the number of photons. This pleases those who believe that macro-entangled states should have components that can be easily distinguished. Although our study showed that the resulting state was surprisingly resilient to losses, we showed that it also becomes more and more brittle under phase perturbation as its size increases. So our approach is also satisfactory if macroscopic states are sensitive to decoherence, and it highlights the complexity of possible interactions between a given quantum system and its environment. We also saw that the accuracy of measurement needed to detect the quantum nature of the created state increases with its size. This also makes our scheme satisfactory if macroscopicity is associated with the requirement for measurement accuracy. In conclusion, we note that there are many other candidates for a measure of macroscopicity [38]. Testing each one is "working for the future" [39].
## 6 Entanglement and wormholes
Suppose that in the near future we will learn how to create macroscopic entangled multiparticle configurations. But for our goals related to the time machine this is not enough: it is necessary to entangle such configurations with ghost configurations.
What should we expect if we manage to do this?
The quantum phenomenon of entanglement is closely related to the classical phenomenon of the formation of a 3-dimensional wormhole. Consequently, entanglement in space will give rise to a 3-dimensional or 4-dimensional wormhole between parallel universes, between various historical epochs. Transitions through the throat of such a wormhole constitute the quantum time machine [3, 4].
However, since 3-dimensional wormholes are unstable, one should think about generating 4-dimensional wormholes [43]. The latter most likely points to entanglement in time [44] (Time Entanglement), i.e. the formula **EPR**=**ER** is replaced by the formula **EPR**=**TE**. |
2303.00049 | Thin Films on the Skin, but not Frictional Agents, Attenuate the Percept
of Pleasantness to Brushed Stimuli | Brushed stimuli are perceived as pleasant when stroked lightly on the skin
surface of a touch receiver at certain velocities. While the relationship
between brush velocity and pleasantness has been widely replicated, we do not
understand how resultant skin movements - e.g., lateral stretch, stick-slip,
normal indentation - drive us to form such judgments. In a series of
psychophysical experiments, this work modulates skin movements by varying
stimulus stiffness and employing various treatments. The stimuli include
brushes of three levels of stiffness and an ungloved human finger. The skin's
friction is modulated via non-hazardous chemicals and washing protocols, and
the skin's thickness and lateral movement are modulated by thin sheets of
adhesive film. The stimuli are hand-brushed at controlled forces and
velocities. Human participants report perceived pleasantness per trial using
ratio scaling. The results indicate that a brush's stiffness influenced
pleasantness more than any skin treatment. Surprisingly, varying the skin's
friction did not affect pleasantness. However, the application of a thin
elastic film modulated pleasantness. Such barriers, though elastic and only 40
microns thick, inhibit the skin's tangential movement and disperse normal
force. The finding that thin films modulate affective interactions has
implications for wearable sensors and actuation devices. | Merat Rezaei, Saad S. Nagi, Chang Xu, Sarah McIntyre, Hakan Olausson, Gregory J. Gerling | 2023-02-28T19:46:49Z | http://arxiv.org/abs/2303.00049v1 | # Thin Films on the Skin, but not Frictional Agents, Attenuate the Percept of Pleasantness to Brushed Stimuli
###### Abstract
Brushed stimuli are perceived as pleasant when stroked lightly on the skin surface of a touch receiver at certain velocities. While the relationship between brush velocity and pleasantness has been widely replicated, we do not understand how resultant skin movements - e.g., lateral stretch, stick-slip, normal indentation - drive us to form such judgments. In a series of psychophysical experiments, this work modulates skin movements by varying stimulus stiffness and employing various treatments. The stimuli include brushes of three levels of stiffness and an ungloved human finger. The skin's friction is modulated via non-hazardous chemicals and washing protocols, and the skin's thickness and lateral movement are modulated by thin sheets of adhesive film. The stimuli are hand-brushed at controlled forces and velocities. Human participants report perceived pleasantness per trial using ratio scaling. The results indicate that a brush's stiffness influenced pleasantness more than any skin treatment. Surprisingly, varying the skin's friction did not affect pleasantness. However, the application of a thin elastic film modulated pleasantness. Such barriers, though elastic and only 40 microns thick, inhibit the skin's tangential movement and disperse normal force. The finding that thin films modulate affective interactions has implications for wearable sensors and actuation devices.
## I Introduction
We commonly give and receive touch with others in affective social and emotional interactions. For instance, a caress of another's forearm might provide comfort while in distress, a hug from a loved one might signal remorse or help reestablish a long-awaited connection, and a series of taps and pats might signal gratitude or attention. In these types of affective exchange, the receiver judges emotional valence of the communication, which might be signaled by many interrelated physical factors [1].
Within the field of affective touch, the percept of 'pleasantness' is typically studied by delivering soft brush stimuli to the skin of human volunteers, who evaluate the touch they receive [2, 3, 4]. In addition to brush stimuli, human touch is similarly perceived as pleasant when likewise delivered slowly at low forces, and may help suppress pain and negative emotions [5, 6, 7, 8]. Typically, a soft brush is stroked along the skin of the dorsal forearm at forces about 0.2 to 0.4 N and velocities between 0.1 and 30 cm/s [3]. Psychophysical evaluation shows that, at a group level, the velocity of the stimulus modulates pleasantness in a relationship that resembles an inverted U-shaped curve, with the greatest pleasantness reported at velocities between 1 and 10 cm/s [2, 9]. Both robot controlled and human delivered brushing has produced similar results [10]. Additional efforts have considered distinct body sites, brushes with textured surfaces (e.g., velvet, burlap, cotton, denim), ties to affiliative bonds and social cognition, and inter- versus intra-personal touch, but none have inquired into modulation of the mechanical properties of contact [4, 11, 12, 13, 14].
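One common way to quantify the inverted-U trend described above is to fit a quadratic in the logarithm of the brushing velocity to group-mean ratings, with a negative quadratic coefficient capturing the inverted-U shape and its vertex locating the most pleasant velocity. The sketch below illustrates this; it is not taken from the cited studies, and the velocities and ratings are placeholder values chosen only to mimic the qualitative trend.

```python
import numpy as np

velocities = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])   # cm/s, typical brushing range
ratings = np.array([0.2, 0.8, 1.6, 1.9, 1.5, 0.4])        # hypothetical group-mean pleasantness

x = np.log10(velocities)
a, b, c = np.polyfit(x, ratings, 2)                        # rating ~ a*x^2 + b*x + c
v_peak = 10 ** (-b / (2 * a))                              # vertex of the parabola, back in cm/s

print(f"quadratic coefficient a = {a:.2f} (negative -> inverted U)")
print(f"estimated pleasantness peak near {v_peak:.1f} cm/s")
```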
Aside from the impact of brush velocity, we do not understand the nature of the resultant skin movements that drive our judgment of pleasantness. For instance, a brush stroke stretches the skin laterally, generates a range of forces and force rates, vibrational waves upon contact, and stick-slip events. Such interactions could drive observed firing patterns in certain afferent subtypes, such as C-tactile afferents' preference for 1-10 cm/s stroking velocities, as opposed to A\(\beta\) afferents' linearly increasing firing rate with velocity. Further, high-threshold mechanoreceptors do not respond to a soft brush, but they do respond to a rough (stiffer) brush [15]. At present, we do not understand the origin of such signaling differences, which could be related, in part, to skin mechanics.
Most efforts to directly quantify the deformation and stretch of the skin have focused on contact with transparent glass or elastomer surfaces [16, 17]. Other approaches have imaged contact interactions between human touchers and receivers, though neither for brushing stimuli nor local states of stress [18]. For non-transparent, brush stimuli, visualizing skin movement is particularly difficult. Moreover, placing a sensor or barrier on the receiver's skin changes the nature of the contact interaction. Therefore, approaches using microphones have sought to analyze audible output resulting from skin contact [19]. Furthermore, various engineered devices have sought to produce social touch [20]. In experiments focusing on the pleasantness of performing active touch, as opposed to its passive receipt, various frictional agents have been applied to the skin [21] as well as emollients [22]. Such efforts seek to perturb contact interactions at the skin surface.
This work describes psychophysical experiments to modulate skin movements and evaluate their impact on pleasantness. In contrast to measurements between the brush and skin, our distinct approach 1) varies stimulus properties, by using brushes of distinct bristle stiffness and the human finger, and 2) utilizes skin treatments to isolate attributes of adhesion, friction, film thickness, and lateral mobility.
## II Methods
### _Stimuli and Skin Treatments_
Three brushes were employed with increasing levels of bristle stiffness, Fig. 1A, named 'smooth,' 'hybrid,' and 'rough.' The smooth brush is made of goat hair, similar to those used in prior efforts [2, 3]. The hybrid brush is made of coarser pig hair. The rough brush is made of stiff, synthetic plastic. All brushes were 5 cm wide. The fourth stimulus, only used in Experiment 2, was the ungloved finger, which was marked at a length of 5 cm to maintain about the same contact width as the brushes.
Several treatments were used to alter the properties of the skin across the psychophysical experiments, Table 1. In Experiment 1, a thin film (Tegaderm, 3M, Part 1626W, 40 microns thick, adhesive on one side, 10 by 12 cm), calamine spray (CVS Calamine Plus, active ingredient calamine 8%), and an emollient lotion (Vaseline Advanced Repair) were used. An example application of Tegaderm film on the forearm of one participant is shown in Fig. 1B. Tegaderm film, calamine spray, and emollient lotion create a direct barrier between skin and stimulus, stiffen the skin, and smoothen the skin, respectively. In Experiment 2, hyaluronic acid (Cosmedica Skincare, humectant, main ingredients: distilled water, sodium hyaluronate, benzylalcohol-DHA), room temperature water (washed skin, then patted dry), and soap (washed skin, then patted dry, main ingredient: sodium tallowate) were used. Hyaluronic acid and water increase hydration and therefore friction, and soap decreases friction, as detailed further in _Section III.B_. In Experiments 3, 4, and 5, distinct configurations of Tegaderm film were used to decouple attributes of skin adhesion, film thickness, and friction. Configurations included two layers applied on top of each other (adhesive, 80 microns thick), two layers folded over each other (non-adhesive, 80 microns thick), one layer (9 cm length by 5 cm width), and one layer (6 cm length by 5 cm width).
### _Participants_
Thirty-four participants, balanced roughly by gender, ages 18-35, were recruited across all experiments, with n=14 in Experiment 1, and n=5 in each of Experiments 2, 3, 4, and 5, respectively. No participant was used in more than one experiment to avoid potential biases. The study was approved by the local institutional review board, with informed consent obtained from all participants.
### _Experimental Procedures_
Each participant was seated on the opposite side of a curtain from the trained experimenter, who delivered stimuli by hand using published protocols [2], Fig. 1C. The same site on a participant's dorsal forearm was used for every trial, except when a skin treatment might cause lingering or skin property-changing effects. For example, hyaluronic acid changes the skin's friction. In such situations, both arms of a participant were used interchangeably. In particular, this was the case between conditions in Experiments 1 and 2 of calamine/emollient, hyaluronic acid/water, and hyaluronic acid/soap. In contrast, Tegaderm can leave a tingling sensation when detached from the skin, so participants were given a 5-minute break upon its removal, or a duration necessary for this sensation to cease. The order of the stimuli was selected per trial by a custom computer program which randomized the brush velocity and brush stiffness. The treatment order was counterbalanced between participants.
To reduce variability in delivering the stimuli, the angle of contact between stimulus and skin was kept at 90 degrees, while its normal force was delivered at about 0.4 N [2, 3]. The velocities delivered were a subset of 1, 3, 10, and 30 cm/s, varying by experiment. The experimenter who delivered the
| | Experiment 1 | Experiment 2 | Experiment 3 | Experiment 4 | Experiment 5 |
| --- | --- | --- | --- | --- | --- |
| **Stimulus** | Smooth, Hybrid, Rough Brush | Smooth, Hybrid, Human Finger | Smooth, Rough Brush | Smooth, Rough Brush | Smooth, Hybrid Brush |
| **Skin Treatment** | Tegaderm, Calamine Spray, Emollient | Hyaluronic Acid, Water, Soap | Tegaderm, Folded, 2xTegaderm | Tegaderm | Tegaderm, 9 cm Hole, 6 cm Hole |
| **Velocities** | 1, 3, 10, 30 cm/s | 1, 3, 30 cm/s | 10 cm/s | 10 cm/s | 1, 3, 10 cm/s |
| **Number of Participants** | 14 | 5 | 5 | 5 | 5 |
Table 1: Overview of conditions used in each psychophysical experiment
Figure 1: Experimental setup. **(A)** Brush stimuli increasing in stiffness from top to bottom were presented in randomized order, under various skin treatments, including **(B)** Tegaderm film applied to the dorsal forearm. **(C)** Participants were separated from the experimenter by a curtain and asked to rate stimulus pleasantness per trial using a visual analog scale, ranging from ‘very unpleasant’ to ‘very pleasant.’
stimuli practiced the technique beforehand against a high resolution, pressure sensitive mat (TactArray Sensor, PPS, Hawthorne, CA, USA) to become consistent at delivering this force over the full length of the stroke.
After each trial, participants were asked to rate pleasantness using a graphical user interface with a visual analog scale from 'very unpleasant' to 'very pleasant' with blind values of -5 to 5 [23].
## III Experiments and Results
Five psychophysical experiments were performed, as outlined in Table 1. Their procedures and results are given below. In addition, an instrumented, simulated skin was used to evaluate the force rates delivered across brush stiffness.
### _Experiment 1_
_Procedures._ Three brush stiffness stimuli were employed under four skin conditions: 1) untreated skin; 2) direct barrier (Tegaderm film); 3) stiffened skin (calamine spray); and 4) smoothed skin (emollient lotion). See _Section II.A_ for exact product numbers. The four brush velocities employed were 1, 3, 10, and 30 cm/s.
_Results._ A three-way repeated measures ANOVA was conducted to determine the effects of brush, velocity, and skin treatment on pleasantness. All three factors significantly affected pleasantness ratings, with the largest effect from brush (brush: F = 49.27, p \(<\) 0.05, \(\eta^{2}\) = 0.57; skin treatment: F = 13.0, p \(<\) 0.05, \(\eta^{2}\) = 0.07; velocity: F = 3.4, p \(<\) 0.05, \(\eta^{2}\) = 0.007), Fig. 2A-C. Post-hoc contrast tests reveal that only Tegaderm had a significant effect compared to untreated skin, with an overall improvement of 0.757 (p \(<\) .001), whereas calamine and emollient did not. An increase in brush stiffness consistently decreased pleasantness with no overlap in 95% confidence intervals (smooth: [1.42, 2.68]; hybrid: [-0.54, 0.73]; rough: [-2.6, -1.34]).
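For readers unfamiliar with this kind of analysis, the snippet below sketches how such a fully balanced within-subjects (repeated measures) ANOVA over brush, velocity, and treatment could be set up in Python with statsmodels. It is not the authors' analysis code; the simulated ratings, column names, and effect sizes are placeholders chosen only so that the example runs.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
brushes = ["smooth", "hybrid", "rough"]
velocities = ["v01", "v03", "v10", "v30"]                  # 1, 3, 10, 30 cm/s
treatments = ["normal", "tegaderm", "calamine", "emollient"]

rows = []
for participant in range(14):                              # one rating per cell per participant
    for b_idx, brush in enumerate(brushes):
        for vel in velocities:
            for treat in treatments:
                rating = 2.0 - 2.0 * b_idx + rng.normal(scale=1.0)  # toy effect: stiffer -> less pleasant
                rows.append((participant, brush, vel, treat, rating))

df = pd.DataFrame(rows, columns=["participant", "brush", "velocity", "treatment", "pleasantness"])

res = AnovaRM(df, depvar="pleasantness", subject="participant",
              within=["brush", "velocity", "treatment"]).fit()
print(res)
```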
### _Experiment 2_
_Procedures._ To alter the frictional properties of the skin due to the hydration of the stratum corneum [24], other non-impedimentary skin treatments were introduced. Three treatments were selected: hyaluronic acid (a humectant) and water (washed skin, then patted dry) to increase hydration and therefore friction, and soap (washed, patted dry, main ingredient: sodium tallowate) to decrease friction. We expect untreated skin, hyaluronic acid, water, and soap treatments to yield coefficients of kinetic friction of 0.45-0.65, 1.05-2.62, 0.7-1.0, and \(<\)0.45, respectively [24]. Given that the rough brush had such a significant impact on pleasantness in Experiment 1, which might override any effect of a skin treatment, we focused Experiment 2 on the smooth and hybrid brushes, while introducing the human finger for comparison.
_Results._ Fig. 2D-F shows that even large changes to the surface friction of skin incite little if any change in perceived pleasantness (F = 0.4, p \(>\) 0.5, \(\eta^{2}\) = 0.003), with an overlap of 95% confidence intervals for all skin treatments. Post-hoc contrast tests showed no significant difference in pleasantness compared to untreated skin for any of the skin treatments. This is observed across all brush stimuli. On another note, the pleasantness of the finger as the stimulus (CI [-0.02, 3.07]) was similar to that of the smooth brush (CI [-0.85, 3.93]).
### _Experiment 3_
_Procedures._ To further analyze the various coupled attributes that Tegaderm film might induce, three factors were decoupled: skin adhesion, film thickness, and frictional change. Film thickness and adhesion were varied by using one sheet of Tegaderm (40 microns thick), two stacked sheets of Tegaderm with one adhesive side ('2xTegaderm,' 80 microns thick), and two stacked sheets of Tegaderm with no adhesive side ('Folded Tegaderm', 80 microns thick), held in place with thin strips of tape on the edges. Only the smooth and rough brushes were evaluated, and at a single velocity.
_Results._ As observed for Experiment 1, pleasantness decreased for the rough brush (CI [-1.86, 0.47]), Fig. 2G-H. Likewise, for the rough brush, each of the Tegaderm configurations modulated pleasantness with an increase to a more neutral value. A two-way, repeated measures ANOVA shows no significant effect on pleasantness ratings by skin treatments, in contrast to the stimuli (F = 16.1, p \(<\) 0.05). The Tegaderm configurations do not exhibit significant differences compared to each other. However, with the smooth brush (CI [1.32, 3.65]), the 'Folded Tegaderm' (CI [-0.4, 1.4]) case with no adhesive side impeded pleasantness compared to 'Tegaderm' (CI [0.58, 2.38]) and '2xTegaderm' (CI [0.33, 2.13]), which adhere to the skin. After conducting post-hoc contrast tests, no difference was observed between the adhesive Tegaderm configurations and the 'Normal' (CI [-0.55, 1.25]) case in this experiment (p \(>\) 0.05), in contrast to what had been observed in Experiment 1, thus leading into Experiment 4, which directly investigated the use of one sheet of Tegaderm.
### _Experiment 4_
_Procedures._ A direct comparison was made between the 'Normal' untreated skin and 'Tegaderm' applied cases, for smooth and rough brushes. Only a single velocity was tested, at 10 cm/s. The reasoning behind running this experiment is detailed in the _Results_ of Section 3.3.
_Results._ In the absence of skin treatments other than just a single layer of Tegaderm, the results remained consistent with Experiment 1 for the smooth brush (CI [1.03, 4.74]), Fig. 2I-J. The pleasantness of the rough brush (CI [-1.88, 1.83]) was only slightly more neutral than unpleasant, as in Experiment 1. This could be due to sample size limitations, or may indicate that absolute values of pleasantness are not comparable between experiments with unique skin treatments and stimulus factors.
### _Experiment 5_
_Procedures._ The impact of modulating the skin's lateral motion on pleasantness was investigated by varying rectangular hole sizes in the Tegaderm of 6 cm and 9 cm lengths and 5 cm width, using the smooth and hybrid brushes, Fig. 2K-L. The level of lateral mobility in the skin was hypothesized to decrease in the order of 'Normal', '9 cm Hole', '6 cm Hole', and 'Tegaderm' respectively. To maintain a consistent stroke length and contact duration, all brush strokes were made at a 6 cm length. Brush strokes were executed at 1, 3, and 10 cm/s.
_Results._ As with the other experiments, the smooth brush was more pleasant than the hybrid brush, (linear contrast estimate, -1.68, p \(<\) 0.05). The effects of 'Tegaderm' are
consistent with those of Experiment 1 in the attenuation of pleasantness across brushes (linear decrease in pleasantness: normal: -2.32, Tegaderm: -0.17). However, the use of Tegaderm film with a hole played no role, compared to the normal non-Tegaderm film condition (6 cm hole: -2.13; 9 cm hole: -2.11). This further suggests that the presence of a direct barrier at the contact interface, along with the stiffness of the stimulus, impact pleasantness more than modifications to the skin's friction or stiffness.
### _Quantitative Measurement of Force during Brushing_
_Procedures._ Perceptual differences were observed between the brush stimuli, though their forces and velocities, angles of contact, and location and area on the forearm, were controlled by a trained experimenter. To evaluate the force characteristics produced by each brush, we devised a test rig to measure normal force during brush strokes over a silicone-elastomer substrate (10 cm diameter, 60 kPa modulus, BJB Enterprises, Tustin, CA; TC-5005 A/B/C) lightly covered with baby powder to mimic the elastic and frictional properties of
Figure 2: **Results of psychophysical experiments 1-5. (A-C) Experiment 1 shows the relationships between brushes, velocities (1, 3, 10, 30 cm/s), and skin treatments meant to block direct contact, stiffen, and smoothen skin, respectively. (D-F) Experiment 2 investigated the effect of changes in the frictional properties of the skin on pleasantness, with hyaluronic acid, washing with room temperature water (patted dry), and soap (patted dry) used to drastically increase friction, moderately increase friction, and decrease friction compared to the ‘normal’ condition at velocities of 1, 3, 30 cm/s. (G-J) Experiments 3 and 4 consider Tegaderm as a barrier and its adhesion to the skin, when folded with no adhesion and when adhesive but applied in two layers. (K-L) Experiment 5 shows the relationships between smooth and hybrid brush stimuli, accompanied by modulation of the skin’s lateral movement, achieved by cutting holes of various sizes in the Tegaderm. In summary, brush stiffness and Tegaderm film modulated pleasantness, whereas other skin treatments, notably involving increases and decreases in friction, yielded little to no effect.**
skin, Fig. 3A. Normal force data was captured via a uniaxial load cell (5 kg, 80 Hz, HTC Sensor TAL220, Colorado USA).
Brush strokes were executed at velocities of 1, 3, 10, and 30 cm/s, and at two different force levels. In Fig. 3A-C, the experimental setup is shown with the smooth and rough brushes in contact with the silicone substrate, respectively. 'Regular Force' was the force (0.4 N) used in all prior psychophysical experiments, whereas 'Low Force' denotes a minimal level of contact between the stimulus and substrate, executed for comparative purposes. Brushing procedures were identical to Experiments 1-5 with each trial consisting of three separate, forward, back, and forward motions.
_Results._ Force rate over the first 100 msec of contact was analyzed due to its role as an efficient means of encoding object compliance, as opposed to other cues tied to stimulus velocity [25, 26]. This method was more appropriate due to the placement of the uniaxial load cell, as continuous force readings would not account for unavoidable torques produced during brush strokes. The rough brush has a faster increase in force than the smooth brush, Fig. 3D. Fig. 3E-F show the force rates across all brushes and velocities, highlighting their relationship with respect to using 'low' and 'regular' contact forces. In Fig. 3E, at the 'Low Force' level, peak force rates were consistent between brushes as well as across the velocities. Likewise, for the smooth brush at 'Regular Force,' the force rate remains relatively unchanged between velocities, Fig. 3F, as well as compared to its 'Low Force' level in Fig. 3E. However, for the hybrid and rough brushes, force rates at 'Regular Force' increase significantly over their 'Low Force' levels, as well as compared to the smooth brush at the 'Regular Force' level, Fig. 3F. They also exhibit larger trial-to-trial variability.
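A minimal sketch of such an onset force-rate estimate is given below. It is not the authors' analysis code; the contact-detection threshold, the 100 ms window, and the synthetic 80 Hz load-cell trace are illustrative assumptions.

```python
import numpy as np

def onset_force_rate(t, f, window=0.1, threshold=0.02):
    """Slope (N/s) of a linear fit to the normal force over the first
    `window` seconds after contact onset (force first exceeding `threshold`)."""
    t, f = np.asarray(t), np.asarray(f)
    onset = t[np.argmax(f > threshold)]
    sel = (t >= onset) & (t <= onset + window)
    slope, _ = np.polyfit(t[sel], f[sel], 1)
    return slope

# Toy 80 Hz trace: contact starts at t = 0.5 s and ramps up to ~0.4 N
t = np.arange(0.0, 2.0, 1.0 / 80.0)
f = np.clip((t - 0.5) * 2.0, 0.0, 0.4) + np.random.default_rng(2).normal(scale=0.005, size=t.size)
print(f"force rate over first 100 ms: {onset_force_rate(t, f):.2f} N/s")
```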
## IV Discussion
This effort performs a series of psychophysical experiments to study the role of brush stiffness and skin treatments in encoding pleasantness at skin contact. While the relationship between brush velocity and pleasantness has been widely replicated, we do not yet understand how skin movements - e.g., lateral stretch, stick-slip, normal indentation - drive us to form such judgments. We take a distinct approach by 1) varying the properties of the stimuli, by using brushes of distinct bristle stiffness and the human finger, and 2) utilizing skin treatments that isolate the underlying attributes of adhesion, friction, film thickness, and lateral mobility at the contact interface. Overall, the results indicate that a brush's stiffness influenced pleasantness more than any skin treatment. Velocity has been shown to have selective effects on pleasantness in earlier work, but more recent research suggests that the negative quadratic relationship between velocity and pleasantness ratings does not exist at the individual level [9]. Surprisingly, varying the skin's friction did not affect pleasantness. However, the application of a thin film modulated pleasantness. Such barriers, though elastic and only 40 microns thick, inhibit the skin's tangential movement and disperse normal force.
First, we find that greater brush stiffness decreases pleasantness. Indeed, most prior works on pleasantness tend to use only a smooth brush and vary velocity, but changing brush stiffness decreases pleasantness much more, comparatively, than a change in velocity. Work is still required to understand exactly why. A likely possibility is a higher activation of c-nociceptors [27] in conjunction with c-tactile afferents when increasing brush stiffness. In alignment, in our instrumented force measurement experiment, Fig. 3, we find differences between the brushes in the force rate they produce at the onset of contact. Indeed, higher force rates may be less pleasant, and their modulation may inform the dimension of valence. In Fig. 3, testing the stimuli at a low force level revealed a cross-velocity similarity for force rates, as also observed for hand-held stimuli [10]. Interestingly, the smooth brush's force rate did not vary with increased force application. However, such an increase was observed for the hybrid and rough brushes. Furthermore, since the force rate shows a high correlation with brush stiffness, and the smooth brush was the most pleasant of the stimuli, we can speculate that if the force rate is controlled with sufficient precision, a conventionally stiff stimulus might be made to be perceived as pleasant. That said, since these brushes are composed of different materials, factors other than just bristle stiffness are changing simultaneously, such as contact area and force concentrations on the skin. These factors need to be decomposed individually.
Second, skin treatments such as Tegaderm attenuated the pleasantness of brush stimuli, while the modulation of friction played a minimal role. While initially it might seem intuitive to draw the conclusion that this is solely due to the presence of a direct barrier between skin and stimulus, there are likely more complex phenomena at play. Pleasantness perception has been strongly correlated across the range of velocities from 0.1
Figure 3: **Evaluation of force rate with brush stimuli.** (**A**) Experimental setup to collect force data from brush stimuli using a uniaxial load cell underneath a skin-like silicone-elastomer substrate, at two force levels with ‘Low Force’ meaning barely making contact and ‘Regular Force’ used in the psychophysical experiments, at velocities of 1, 3, 10, 30 cm/s. **(B,C)** Smooth and rough brushes in contact with the surface, respectively. **(D)** Force data over the first 100 msec of contact onset at ‘Regular Force’ for an example trial per brush. The rough brush exhibits a higher force rate than the smooth brush. **(E,F)** Force rate at onset of contact, for all three brushes, again at two force levels and four velocities. Force rates at ‘Low Force’ are stable around 0.5 N/s for all stimuli, but at ‘Regular Force’ the force rate magnitude and variance increase significantly for the stiffer brushes.
to 30 cm/s to the firing frequency of C-tactile afferents, with a lack of correlation to the firing patterns of A\(\beta\) afferents [3]. C-tactile afferents respond optimally to lateral brush strokes of 1-10 cm/s, but no systematic work has been done on the force ranges that either saturate the afferents or fail to evoke a response. Moreover, a comparison between 0.2 and 0.4 N indentation force on the responsiveness of C-tactile afferents to brushing revealed no consistent effect [3]. In addition to vertical inhibition of skin movement and modulation of force, Tegaderm film may also be effective in inhibiting lateral movement of the skin, though simply cutting holes in the Tegaderm film did not attenuate pleasantness; therefore, its role as a direct barrier still seems to be required. It is important to note Tegaderm's side effect of immobilizing hair follicles during brushing. However, a prior study showed that depilation does not affect perception [28].
Finally, the finger as a stimulus was perceived to be close to the smooth brush in pleasantness, Fig. 2D and 2F. We do not know what exactly causes this similarity, since the smooth brush and finger are quite different from each other mechanically in both static and dynamic conditions. Perhaps there are ties to recent work finding that softness, as a psychophysical percept, comprises five separate dimensions of granularity, deformability, viscoelasticity, furriness, and roughness in active, discriminative touch [29].
## Acknowledgments
This work was supported in part by grants from the National Science Foundation (IIS-1908115) and National Institutes of Health (NINDS R01NS105241). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NSF or NIH.
|
2309.14067 | Charge dynamics in the 2D/3D semiconductor heterostructure WSe$_2$/GaAs | Understanding the relaxation and recombination processes of excited states in
two-dimensional (2D)/three-dimensional (3D) semiconductor heterojunctions is
essential for developing efficient optical and (opto)electronic devices which
integrate new 2D materials with more conventional 3D ones. In this work, we
unveil the carrier dynamics and charge transfer in a monolayer of WSe$_2$ on a
GaAs substrate. We use time-resolved differential reflectivity to study the
charge relaxation processes involved in the junction and how they change when
compared to an electrically decoupled heterostructure, WSe$_2$/hBN/GaAs. We
observe that the monolayer in direct contact with the GaAs substrate presents
longer optically-excited carrier lifetimes (3.5 ns) when compared with the
hBN-isolated region (1 ns), consistent with a strong reduction of radiative
decay and a fast charge transfer of a single polarity. Through low-temperature
measurements, we find evidence of a type-II band alignment for this
heterostructure with an exciton dissociation that accumulates electrons in the
GaAs and holes in the WSe$_2$. The type-II band alignment and fast
photo-excited carrier dissociation shown here indicate that WSe$_2$/GaAs is a
promising junction for new photovoltaic and other optoelectronic devices,
making use of the best properties of new (2D) and conventional (3D)
semiconductors. | Rafael R. Rojas-Lopez, Freddie Hendriks, Caspar H. van der Wal, Paulo S. S. Guimarães, Marcos H. D. Guimarães | 2023-09-25T12:03:24Z | http://arxiv.org/abs/2309.14067v1 | # Charge dynamics in the 2D/3D semiconductor heterostructure WSe\({}_{2}\)/GaAs
###### Abstract
Understanding the relaxation and recombination processes of excited states in two-dimensional (2D)/three-dimensional (3D) semiconductor heterojunctions is essential for developing efficient optical and (opto)electronic devices which integrate new 2D materials with more conventional 3D ones. In this work, we unveil the carrier dynamics and charge transfer in a monolayer of WSe\({}_{2}\) on a GaAs substrate. We use time-resolved differential reflectivity to study the charge relaxation processes involved in the junction and how they change when compared to an electrically decoupled heterostructure, WSe\({}_{2}\)/hBN/GaAs. We observe that the monolayer in direct contact with the GaAs substrate presents longer optically-excited carrier lifetimes (3.5 ns) when compared with the hBN-isolated region (1 ns), consistent with a strong reduction of radiative decay and a fast charge transfer of a single polarity. Through low-temperature measurements, we find evidence of a type-II band alignment for this heterostructure with an exciton dissociation that accumulates electrons in the GaAs and holes in the WSe\({}_{2}\). The type-II band alignment and fast photo-excited carrier dissociation shown here indicate that WSe\({}_{2}\)/GaAs is a promising junction for new photovoltaic and other optoelectronic devices, making use of the best properties of new (2D) and conventional (3D) semiconductors.
Transition metal dichalcogenides (TMDs) have received a lot of attention because of their atomically-thin thickness and interesting optical and electronic properties [1; 2; 3]. Their thickness confines the charges in the plane of the monolayer, resulting in strikingly different properties from their bulk counterpart [3; 4; 5]. Additionally, the stacking and/or twisting of consecutive monolayers into heterostructures has been shown to give rise to new physical phenomena and makes them strong candidates for the next generation of nanodevices [6; 7]. Their low dimensionality also makes them very sensitive to local changes, such as defects in the crystal lattice, strain, or impurities [8; 9; 10]. The interaction with the environment can also modify the properties of the two-dimensional (2D) semiconductor through, for instance, the interaction with gases or substrates with different electronic properties [11; 12]. The dielectric environment for the Coulomb interaction that gives place to excitonic phenomena in TMDs is particularly important and has been shown to be able to modulate its optical properties [13]. Therefore, we can use this as an advantage for developing new nanodevices such as gas sensors, photodetectors, and solar cells [14; 15; 16].
Gallium arsenide (GaAs) is one of the most studied semiconductors because of its applications in electronics as well as its very high electronic mobility, which allows for efficient gate-induced quantum confinement to one or two dimensions [17]. In particular, previous studies have demonstrated that the junction of this three-dimensional (3D) semiconductor with TMDs (i.e., 2D semiconductors) is a promising junction for optoelectronic devices [15; 16; 18; 19; 20]. In order to optimize and manipulate such systems for improving the design of new (opto)electronic devices, we need to obtain a high level of understanding of the electronic properties and time-evolution of their excited states. Nonetheless, the charge dynamics and band alignment between these materials are still largely unexplored.
In this work, we study the carrier dynamics in a monolayer of WSe\({}_{2}\) in contact with a GaAs substrate. We use an optical pump-probe approach, by measuring the time-resolved differential reflectivity (TRDR) of the junction and compare it with an electrically-isolated WSe\({}_{2}\), by adding a hexagonal boron nitride (hBN) layer (Fig 1.a). The WSe\({}_{2}\) monolayer in direct contact with the GaAs shows carriers that decay much slower with respect to the isolated WSe\({}_{2}\) at low temperatures. This can be understood through a type-II band alignment that dissociates the optically-excited excitons and creates an excess of electrons in the GaAs substrate and an excess of holes in the WSe\({}_{2}\) layer. Nonetheless, at room temperature we did not observe any important differences in the dynamics between the two regions, indicating a strong role of thermal effects on the relaxation process of photoexcited carriers.
Our samples were fabricated by mechanical exfoliation of WSe\({}_{2}\) and hBN from their bulk crystals (supplied by HQ Graphene). The hBN flakes were exfoliated directly onto a commercial undoped (100) GaAs substrate (supplied by Wafer Technology) and the WSe\({}_{2}\) monolayers transferred on top by the viscoelastic stamp method [21]. We identified WSe\({}_{2}\) monolayers by optical contrast and photoluminescence in an optical microscope. The hBN thickness for the sample for which the results are shown here was (21\(\pm\)2) nm, determined by atomic force microscopy. Time-resolved measurements were performed with a tunable Ti:Sapphire pulsed laser with a pulse width \(<\) 300 fs. We used a single-color (degenerate)
pump-probe technique in a double modulation configuration as described in detail in our previous works [22; 23]. All measurements were carried out at a temperature of 70 K unless otherwise indicated.
Figure 1.b shows the normalized TRDR of the two regions of the sample: the direct contact (WSe\({}_{2}\)/GaAs - green) and the isolated (WSe\({}_{2}\)/hBN/GaAs - blue) heterostructures when excited in resonance with the exciton transition of the WSe\({}_{2}\) layer as identified by TRDR spectroscopy (see below). Our results are well described by a three-process exponential decay fit, \(\Delta R/R=\sum R_{0i}e^{-t/\tau_{i}}\), with \(i\) from 1 to 3. Such a multi-exponential decay has been reported in several works in the literature, but the origin of the different decay processes has been attributed to various sources, depending on the specifics of the system. Overall, it has been observed that radiative processes occur in no longer than a few hundred picoseconds, while non-radiative phenomena may last longer [24; 25; 26; 27; 28].
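As an illustration of this fitting procedure (not the authors' analysis code), the sketch below fits a three-component exponential decay to a synthetic \(\Delta R/R\) trace with scipy; the delay grid, noise level, and initial guesses are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def tri_exp(t, a1, t1, a2, t2, a3, t3):
    """Delta R / R = sum_i a_i * exp(-t / tau_i) with three components."""
    return a1 * np.exp(-t / t1) + a2 * np.exp(-t / t2) + a3 * np.exp(-t / t3)

# Synthetic trace with lifetimes similar to those quoted in the text (in ps)
t = np.geomspace(0.2, 4000.0, 600)      # pump-probe delay, densely sampled at early times
rng = np.random.default_rng(1)
signal = tri_exp(t, 1.0, 4.0, 0.4, 90.0, 0.15, 3500.0) + rng.normal(scale=0.005, size=t.size)

p0 = (1.0, 5.0, 0.3, 50.0, 0.1, 1000.0)  # rough initial guesses for amplitudes and lifetimes
popt, pcov = curve_fit(tri_exp, t, signal, p0=p0, maxfev=20000)
print("fitted lifetimes (ps):", np.round(np.sort(popt[1::2]), 1))
```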
From our results, we observe a longer-lived component determined to be (3.50 \(\pm\) 0.04) ns in the region of direct contact compared with (1.00 \(\pm\) 0.01) ns for the isolated one. To understand the origin of this difference, it is necessary to look into the bandgap alignment between the materials as it provides a picture of the possible charge dynamics in a junction. For instance, previous reports observed that a junction of semiconductors with a type-I band alignment can result in a reduction of the lifetime of the material with the larger band gap when placed in such a junction [29; 30]. This phenomenon can be associated with an energy transfer process where, for instance, the optically generated exciton in one material transfers energy generating an exciton in the other material [31]. On the other hand, a type-II band offset has been observed to increase the lifetime of the studied process [28; 32]. In those cases, the photo-generated excitons dissociate, resulting in a charge transfer, with electrons lying in one material and holes in the other. In light of this, our measurements point towards the existence of a type-II band offset in the WSe\({}_{2}\)/GaAs heterojunction.
Simple band alignment estimations, as shown in Figure 1.c, further corroborate the proposed type-II band offset between monolayer WSe\({}_{2}\) and GaAs. Here, we consider an electron affinity of 3.3 eV and an electronic bandgap of 2.08 eV for WSe\({}_{2}\), as determined in a previous work [33]. While the bandgap \(E_{g}\) and electronic affinity \(\chi\) of GaAs are well-established in the literature, for WSe\({}_{2}\) these values can change from one reference to another. The sensitivity of monolayer TMDs with the electric environment and other experimental and theoretical details can lead to a variation of these values, making it challenging for an accurate determination of these properties in a generic fashion. Nevertheless, even if differences in the exact values may arise, we can set an upper boundary for a type-II band offset. For this condition, the valence band maximum of WSe\({}_{2}\) has to be higher than the valence band of the GaAs: \(\chi_{WSe_{2}}+E_{g(WSe_{2})}<\) 5.49 eV. For simplicity, here we do not take into account band bending effects due to surface states, which should be considered for a more accurate model.
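The arithmetic behind this estimate (Anderson's rule, neglecting band bending as stated above) can be spelled out in a few lines. In the sketch below the WSe\({}_{2}\) values are those used in the text, while the GaAs room-temperature values are assumed textbook numbers consistent with the 5.49 eV bound quoted above.

```python
# Energies in eV, measured downward from the vacuum level
chi_wse2, eg_wse2 = 3.30, 2.08   # monolayer WSe2 electron affinity and electronic gap (values used in the text)
chi_gaas, eg_gaas = 4.07, 1.42   # GaAs at 300 K (assumed textbook values; chi + Eg = 5.49 eV)

cb_wse2, vb_wse2 = -chi_wse2, -(chi_wse2 + eg_wse2)
cb_gaas, vb_gaas = -chi_gaas, -(chi_gaas + eg_gaas)

print(f"conduction-band offset (WSe2 - GaAs): {cb_wse2 - cb_gaas:+.2f} eV")  # +0.77 eV -> electrons fall into GaAs
print(f"valence-band offset    (WSe2 - GaAs): {vb_wse2 - vb_gaas:+.2f} eV")  # +0.11 eV -> holes stay in WSe2
print("staggered (type-II) alignment:", cb_gaas < cb_wse2 and vb_gaas < vb_wse2)
```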
Figure 1: (a) Schematics of our sample indicating the two regions of interest WSe\({}_{2}\)/hBN/GaAs, and WSe\({}_{2}\)/GaAs. (b) Normalized differential reflectivity of the two regions using a laser energy for excitation in resonance with the WSe\({}_{2}\) exciton. (c) Estimated band alignment of the 2D/3D semiconductor heterojunction with \(E_{vac}\) the vacuum energy, CB the energy of the bottom of the conduction band, and VB the top of the valence band. Presented values consider T = 300 K. (d) Representation of the excitons generated in the WSe\({}_{2}\)/GaAs region of the sample. Photo-excited excitons in the monolayer and in the substrate dissociate generating an excess of electrons in the GaAs and of holes in the WSe\({}_{2}\). (e) In the WSe\({}_{2}\)/hBN/GaAs region, the hBN prevents the charge transfer, allowing a more dominant role to radiative recombination processes.
This type-II band alignment implies a dissociation of photo-excited carriers with a charge transfer between the two materials. When the junction - monolayer and substrate - is excited, electrons in the conduction band will accumulate in the GaAs substrate, while the holes in the valence band will concentrate in the WSe\({}_{2}\) monolayer (Figure 1.d). As a result, longer lifetimes of the excited states can be linked to a larger role of non-radiative scattering processes and a lack of available states in the valence (conduction) band for electrons (holes) to relax radiatively. In contrast, when considering the case of WSe\({}_{2}\) isolated by hBN, charge transfer is restrained, and as a result, radiative exciton recombination is again the faster pathway for the relaxation of carriers (see Figure 1.e). Therefore, our results point towards a photoexcited carrier transfer between GaAs and WSe\({}_{2}\).
In Figure 1b, we also observe a fast decay time (\(\tau_{1}\)) of 4 ps for the WSe\({}_{2}\) in direct contact with GaAs and 5 ps for the isolated region. We attribute this fast process in part to a stimulated emission, related to our single-color pump-probe excitation, as well as to exciton recombination out of thermal equilibrium [24; 25; 26; 34]. Despite the resolution of our measurements, we cannot associate the small difference in the \(\tau_{1}\) relaxation times as arising exclusively from the interaction with the substrate, as stress or defects in the monolayer can modify the charge dynamics within the observed difference. Finally, we observed an intermediate decay time (\(\tau_{2}\)) of 90 ps for the monolayer in direct contact and 50 ps in the isolated region, consistent with previous measurements of trion recombination lifetime [25; 27]. We associate the difference in the relaxation times \(\tau_{2}\) with an increase in the density of one type of charge carrier in the WSe\({}_{2}\) that protects the trion from fast recombination. In particular, in our sample, two phenomena can give origin to this imbalance of carriers: the reduction of the Fermi level of the WSe\({}_{2}\) due to the formation of the joint-Fermi level of the heterojunction with the GaAs, and the dissociation and charge transfer from the photo-excited carriers [20]. Moreover, the interaction of the WSe\({}_{2}\) with the GaAs can increase the dielectric disorder, which can result in an increase of recombination centers, such as defects and localized states [13; 35].
In order to gain further insight into the properties of our 2D/3D semiconductor junction, we study the dependence of the dynamics with the excitation wavelength. Figure 2 shows the intensity of the TRDR signal as a function of the laser wavelength at 2, 10, and 50 ps in the two regions of our system, direct contact and hBN separated. We observe that optical resonance is different for the two regions: 705 nm for WSe\({}_{2}\)/GaAs and 708 nm for the WSe\({}_{2}\)/hBN/GaAs region, indicating a blue-shift on the signal of the WSe\({}_{2}\) exciton in direct contact with the GaAs. We associate this effect with a combination of the interaction of the WSe\({}_{2}\) with a different dielectric environment and a possible effect of strain induced by the transfer onto GaAs, which should be reduced in the hBN region due to its higher smoothness and lack of dangling bonds. We also observed that the transient reflectivity of the WSe\({}_{2}\) in contact with the GaAs is smaller, almost half, at the wavelengths of resonance of the free exciton in WSe\({}_{2}\) when compared to the intensity of the hBN isolated region, indicating a higher absorption of the TMD in direct contact. This response can be related to the change of the Fermi level due to the formation of the heterojunction, which reduces the electron density in the TMD. Moreover, the charge transfer at the junction allows for the presence of free states in the WSe\({}_{2}\) conduction band, which can be accessed by the photo-excited electrons, thereby enhancing the absorption of the region. In contrast, in the hBN-isolated area, stimulated emission and photobleaching will play an important role in reducing the absorption of the flake and increasing the reflectivity. For the wavelengths in resonance with the free excitons in GaAs (800 nm - 830 nm), we observe a higher, negative, reflectivity in the sample in direct contact when compared to the isolated one. This observation is consistent with an increment of the photoinduced absorption of GaAs, produced by the larger density of electrons in the substrate resulting from a shift of the bands in the heterojunction.
Figure 2: (a) TRDR intensity as a function of the excitation and probing wavelengths at 2 ps, 10 ps, and 50 ps pump-probe delay time in WSe\({}_{2}\)/GaAs and (b) WSe\({}_{2}\)/hBN/GaAs. For easier comparison, the intensity values are presented as twice their real value in (a) and in the large-wavelength region in (b).
By fitting the TRDR measurements at different wavelengths, we extract the energy dependence of the decay lifetimes in the two regions of interest, which is presented in Figure 3. We did not observe any clear trend with wavelength for the fast decay (\(\tau_{1}\)) other than the slightly faster decay in the sample in direct contact described previously. On the other hand, the results for the second decay time (\(\tau_{2}\)) present a clearer trend, revealing a maximum lifetime at 705 nm for the TMD in direct contact with the GaAs substrate and at 708 nm in the isolated region, which match the WSe\({}_{2}\) exciton recombination resonances of each area. Furthermore, we observe another maximum, and the highest \(\tau_{2}\) value, when exciting at a wavelength of 715 nm, which is related to the weaker, negative differential reflectivity signal in Figure 2.b. Data points at 720 nm in the direct-contact region were discarded because the signal-to-noise ratio was too low to allow fitting. Although negative signals are commonly associated with photoinduced absorption, it has also been observed that in TMDs bandgap renormalization plays an important role in this effect [36]. Therefore, we associate the different lifetime obtained at this wavelength with the different origin of the relaxation path involved. Lastly, the obtained long lifetimes (\(\tau_{3}\)) make clear the longer-lived character of the photo-excited carriers in the TMD in direct contact with the GaAs, close to the resonance.
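For readers who wish to reproduce this kind of lifetime extraction, the sketch below fits a sum of three exponential decays to a TRDR transient. It is a minimal illustration, not the analysis code used for the measurements; the model function, initial guesses, and the synthetic data are assumptions made for the example, and instrument-response convolution and rise dynamics are neglected.

```python
# Minimal sketch: extracting decay times from a TRDR transient by fitting a
# sum of three exponential decays. Names and numbers are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def tri_exp(t, a1, tau1, a2, tau2, a3, tau3, offset):
    """Three-component exponential decay versus pump-probe delay t (ps)."""
    return (a1 * np.exp(-t / tau1)
            + a2 * np.exp(-t / tau2)
            + a3 * np.exp(-t / tau3)
            + offset)

# delays (ps) and normalized dR/R values would come from the measurement;
# here we generate a synthetic trace for illustration
t = np.linspace(0.5, 1500.0, 400)
data = tri_exp(t, 0.5, 4.0, 0.3, 90.0, 0.2, 3500.0, 0.0)
data += np.random.normal(0.0, 0.005, t.size)

p0 = [0.5, 5.0, 0.3, 80.0, 0.2, 2000.0, 0.0]   # initial guesses
popt, pcov = curve_fit(tri_exp, t, data, p0=p0, maxfev=20000)
perr = np.sqrt(np.diag(pcov))                  # 1-sigma uncertainties
print("tau1, tau2, tau3 (ps):", popt[1], popt[3], popt[5])
```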
To determine the role of thermal effects, we measured the TRDR at room temperature (Figure 4), in the two regions of interest, exciting at the WSe\({}_{2}\) exciton resonance, \(\lambda=740\) nm. Our results show an overall shorter lifetime of the generated excitations when compared with the measurements at low temperature. We observe a similar behavior for both regions of the sample with just two clear decay processes: a fast decay of 9 ps and a slower decay of around 40 ps. Compared with our previous analysis at low temperature, we obtain a longer decay time \(\tau_{1}\), a shorter decay time \(\tau_{2}\), and the complete absence of the \(\tau_{3}\) decay process at room temperature. These findings are in agreement with earlier studies reporting longer decay times \(\tau_{1}\) of monolayer WSe\({}_{2}\) when increasing temperature [24; 37]. One possible explanation for this phenomenon is the important role of dark states in tungsten-based TMD monolayers, which is observed, for instance, in the enhancement of the photoluminescence when increasing the temperature [38]. In our experiments at low temperature, the resonant excitation at low laser power results in a reduced source for electrons in dark states to transition into bright states. At high temperatures, electron-phonon interactions mediate the transition and cause an increase in the population and lifetimes of the fastest process \(\tau_{1}\) in both regions of the sample. At the same time, this relaxation path, as well as other intralayer processes, becomes preferred over the charge transfer to the substrate, effectively eliminating at high temperatures the long-lived component \(\tau_{3}\) of the dynamics which is observed at low temperatures. Another possible relaxation channel is a change in the band alignment with the temperature. In this case, the small difference in the valence band maximum considered in Figure 1c could be enough for the band edges to switch positions as the temperature changes. Under this hypothesis, WSe\({}_{2}\)/GaAs would have a type-I band offset at room temperature and switch to a type-II band alignment when reducing the temperature.
Figure 4: Normalized TRDR for the WSe\({}_{2}\)/GaAs (green) and WSe\({}_{2}\)/hBN/GaAs (blue) regions at room temperature. The laser was tuned to be resonant with the WSe\({}_{2}\) exciton, at \(\lambda=740\) nm.
Figure 3: Lifetimes of the TRDR signals as a function of the wavelength extracted from the three exponential decay processes described in the main text. When not shown, the error bars, obtained by the fit, are smaller than the point size.
Our observation of a type-II band alignment and charge transfer between the prototypical 2D/3D semiconductors, WSe\({}_{2}\) and GaAs, indicates the promise of using such junctions in future optical and optoelectronic devices [15; 16; 18; 19]. The long-lived (3.5 ns) photo-excited carriers observed here should allow enough time for these carriers to be transported away from the junction region and used in photovoltaic devices. Additionally, a long decay time is a crucial element for lasers. Therefore, the combination of a long carrier lifetime with the unique spintronic properties of both WSe\({}_{2}\) and GaAs, such as long spin lifetimes and electric control over the spin information [39; 40; 41; 42], makes these junctions particularly appealing for lasers which make use of the spin degree of freedom, i.e., spin lasers [43], which have been shown to be able to operate at much higher modulation frequencies than conventional lasers [44]. We envision that such junctions, as the one shown here, using novel 2D semiconductors in combination with well-established and industrially proven 3D systems, can lead to an easier uptake of 2D materials in industrial settings, leading to new device architectures.
We thank J. G. Holstein, H. de Vries, F. van der Velde, H. Adema, and A. Joshua for their technical support. This work was supported by the Dutch Research Council (NWO -- STU.019.014), the Zernike Institute for Advanced Materials, and the Brazilian funding agencies CNPq, FAPEMIG and the Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior - Brasil (CAPES) - Project code 88887.476316/2020-00. Sample fabrication was performed using NanoLabNL facilities.
|
2309.13708 | Three-component Bose-Einstein condensates and wetting without walls | From Gross-Pitaevskii (GP) theory for ultracold gases it is predicted that
phase-segregated three-component Bose-Einstein condensates (BEC) feature a
wetting phase diagram that depends only on atomic masses and scattering
lengths. This is unique in theories of surface and interfacial phase
transitions and provides a new opportunity for experimental observation of
wetting phenomena in BEC mixtures. Previous GP theory for two-component BEC
relied on an {\it ad hoc} optical wall boundary condition, on which the
character and location of the wetting phase transitions depend sensitively.
This boundary condition dependence is eliminated by adding a third component
and treating the three phases on equal footing. An unequivocal wetting phase
diagram is captured, with phase boundaries calculated analytically using an
extension of the established double-parabola approximation. | Joseph O. Indekeu, Nguyen Van Thu, Jonas Berx | 2023-09-24T17:41:42Z | http://arxiv.org/abs/2309.13708v1 | # Three-component Bose-Einstein condensates
###### Abstract
From Gross-Pitaevskii (GP) theory for ultracold gases it is predicted that phase-segregated three-component Bose-Einstein condensates (BEC) feature a wetting phase diagram that depends only on atomic masses and scattering lengths. This is unique in theories of surface and interfacial phase transitions and provides a new opportunity for experimental observation of wetting phenomena in BEC mixtures. Previous GP theory for two-component BEC relied on an _ad hoc_ optical wall boundary condition, on which the character and location of the wetting phase transitions depend sensitively. This boundary condition dependence is eliminated by adding a third component and treating the three phases on equal footing. An unequivocal wetting phase diagram is captured, with phase boundaries calculated analytically using an extension of the established double-parabola approximation.
pacs: 03.75.-a, 03.75.-b, 03.75.Hk
Ultracold gases provide an arena in which the laws of atomic quantum physics are at work in their theoretically most fundamental and experimentally most accessible manifestations [1; 2]. Interatomic forces are tunable over many orders of magnitude in strength employing Feshbach resonances [3; 4; 5] and, at ultralow temperature, dilute gases display a panoply of cooperative effects [6; 7; 8]. A fascinating role herein is played by multi-component Bose-Einstein condensates (BEC), which can be manipulated directly and precisely at the atomic level to demonstrate surface and interface physics in a way that is impossible in classical "thermal" fluid mixtures, in which thermodynamic fields and densities must also be controlled.
Among interfacial phenomena _wetting_ is a very intriguing one [9]. The discovery of wetting phase transitions [10; 11; 12] provided a plethora of theoretical and experimental challenges [9; 13; 14; 15], phenomenologically connecting utterly diverse domains in surface and interfacial physics. In classical liquid mixtures, theoretically subtle and experimentally elusive _critical wetting_ transitions were observed in 1996 [16] and 1999 [17]. In type-I superconductors, the observation of a first-order interface delocalization (i.e., "wetting") transition [18] came about 12 years after its theoretical prediction [19]. In BEC mixtures, wetting phase transitions were predicted in 2004 [20], but, remarkably, their experimental verification has to our knowledge hitherto not been undertaken.
In this Letter we ask, and provide answers to, the following questions. **i**) Which conceptual leap is needed in the theory in order to make experimental verification of wetting phase transitions in BEC mixtures more compelling? **ii**) Can GP theory provide an unequivocal BEC wetting phase diagram that is independent of wall boundary conditions, and what is its structure (order of transitions, their location, their universality)?
In 2004 first-order wetting phase transitions were predicted for two-component BEC at an optical hard wall [20]. Subsequent extension of the theory, with more general wall boundary conditions, predicted a richer phase diagram with both first-order and critical wetting transitions [21]. Experimentally, wall boundary conditions can be realized using surface traps [22], with, ideally, square-well and flat-bottom confinement of the atoms [7; 8]. However, the need for a wall represents a weakness in this research because theory predicts that details of the boundary condition have an impact on the surface phase equilibria and render the wetting phenomena equivocal. For example, in the phase diagram predicted in [21] the order (first-order or critical) of the wetting transitions depends strongly on the "relative trap displacement", a parameter not accessible in experiment. In order to obtain an unequivocal wetting phase diagram, in a space in which all variables are experimentally accessible, we propose to omit the optical wall and replace it by a third BEC component that is treated on equal footing with the other two. This conceptual leap has been guided by insights from wetting theory in classical fluid mixtures [23; 24].
It has been thoroughly demonstrated, theoretically [25; 26; 27; 28; 29; 30; 31; 32; 33] and experimentally [34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50], that binary BEC mixtures display fascinating phase behavior and dynamical instabilities. Yet, the new physics featured in BEC with more than two components has only recently spurred broad interest [51; 52; 53; 54; 55; 56] and poses new experimental challenges. A timely connection for our proposal is the GP theoretical study of interfacial phenomena in three-component BEC by Jimbo and Saito [54]. A third component, 3, adsorbed at the interface between condensates 1 and 2, can act as a surfactant and lower the 1-2 interfacial tension. Or, when the adsorbed layer is unstable, droplets of 3 form dynamically. Our present investigation of wetting phase transitions is complementary to their study of surfactant behavior.
In the following we adopt the mean-field GP theory at \(T=0\), which captures the physics of experimental interest at ultralow \(T\). In Fig.1 characteristic configurations are depicted. In a nonwet state, three coexisting pure-component bulk phases and their mutual interfaces meet at a common line of contact. Condensates 1, 2 and 3 subtend the dihedral angles \(\hat{1}\), \(\hat{2}\) and \(\hat{3}\). A simple criterion for wetting is "Antonov's rule". For example, the 1-2 interface is **nonwet** by 3 when the following inequality is strictly satisfied, and the 1-2 interface is **wet** by 3 when the equality, aka Antonov's rule, holds [23],
\[\gamma_{12(3)}\leq\gamma_{13}+\gamma_{23}. \tag{1}\]
Here, \(\gamma_{ij}\) is the \(i\)-\(j\) interfacial tension in a two-component BEC [26], and \(\gamma_{12(3)}\) is the _three-component_ 1-2 interfacial tension, allowing for the presence of a thin film of 3 adsorbed at the 1-2 interface. This film is stable if and only if its presence lowers the 1-2 interfacial tension, in which case 3 behaves as a surfactant [54].
For our purposes, the GP theory is cast as follows. The simple-harmonic-oscillator characteristic length of the conventional magnetic trap is assumed to be 5 \(\mu\)m or longer and therefore the confining potential is taken to be constant across the BEC interfaces of interest. In the grand canonical ensemble particle numbers are conveniently controlled by chemical potentials. Three pure-component condensates \(i=1,2,3\) are present in a volume \(V\), with atomic masses \(m_{i}\), chemical potentials \(\mu_{i}\), order parameters \(\psi_{i}\) and (local) mean densities \(n_{i}(\mathbf{r})\equiv|\psi_{i}(\mathbf{r})|^{2}\). The grand potential reads,
\[\Omega=\sum_{i=1}^{3}\int_{V}d\mathbf{r}\,\Big\{\psi_{i}^{*}(\mathbf{r})\left[-\frac{\hbar^{2}}{2m_{i}}\nabla^{2}-\mu_{i}\right]\psi_{i}(\mathbf{r})+\frac{G_{ii}}{2}|\psi_{i}(\mathbf{r})|^{4}\Big\}\] \[+\sum_{i<j}G_{ij}\int_{V}d\mathbf{r}\,|\psi_{i}(\mathbf{r})|^{2}|\psi_{j}(\mathbf{r})|^{2}\ +\text{const.} \tag{2}\]
The coupling constants \(G_{ij}=2\pi\hbar^{2}a_{ij}(1/m_{i}+1/m_{j})\) are linear in the atomic s-wave scattering lengths \(a_{ij}\). In the absence of flow, one may choose the \(\psi_{i}\) to be real-valued.
For pure and homogeneous phase \(i\), the pressure and density are \(P_{i}=\mu_{i}^{2}/2G_{ii}\) and \(n_{i}=\psi_{i}^{2}=\mu_{i}/G_{ii}\), respectively. The relative inter-species (repulsive) interaction strength is
\[K_{ij}\equiv G_{ij}/\sqrt{G_{ii}G_{jj}}=\frac{m_{i}+m_{j}}{2\sqrt{m_{i}m_{j}} }\frac{a_{ij}}{\sqrt{a_{ii}a_{jj}}}\,. \tag{3}\]
Experimentally, using magnetic Feshbach resonance a scattering length, e.g., \(a_{ij}\), can be varied over several orders of magnitude [3; 4; 5]. For sufficiently repulsive interactions, \(K_{ij}>1\), condensates \(i\) and \(j\) demix and phase segregate [26; 56] and we consider the completely immiscible case (cf. \(\xi_{3}^{im}\) in Fig.1 of [56]).
We presuppose two-phase equilibrium of condensates 1 and 2, \(P_{1}=P_{2}\equiv P\), so that a stable 1-2 interface exists. Condensate 3 is either metastable in bulk, \(P_{3}<P\), or coexists with 1 and 2 in a three-phase equilibrium, \(P_{3}=P\). The latter permits the study of wetting transitions, which is our focus here, while the former is suitable for investigating prewetting phenomena [20; 21]. The healing length of condensate \(i\) is \(\xi_{i}=\hbar/\sqrt{2m_{i}\mu_{i}}\). At two-phase coexistence of \(i\) and \(j\), their healing length ratio depends on atomic parameters alone, \(\xi_{i}/\xi_{j}=(m_{j}\,a_{jj}/m_{i}\,a_{ii})^{1/4}\).
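As a quick numerical aid, the following sketch evaluates the relative repulsion strength of Eq. (3) and the healing-length ratio at two-phase coexistence directly from atomic masses and scattering lengths. The numerical values used are placeholders for illustration, not parameters taken from this work.

```python
# Minimal sketch: K_ij of Eq. (3) and the coexistence healing-length ratio
# xi_i/xi_j = (m_j a_jj / (m_i a_ii))^(1/4). Inputs are placeholder values.
import numpy as np

def K(m_i, m_j, a_ii, a_jj, a_ij):
    """Relative inter-species interaction strength K_ij (dimensionless)."""
    return (m_i + m_j) / (2.0 * np.sqrt(m_i * m_j)) * a_ij / np.sqrt(a_ii * a_jj)

def xi_ratio(m_i, m_j, a_ii, a_jj):
    """Healing-length ratio xi_i/xi_j at two-phase coexistence P_i = P_j."""
    return (m_j * a_jj / (m_i * a_ii)) ** 0.25

# placeholder atomic parameters (arbitrary units; only ratios matter here)
m = {1: 87.0, 2: 23.0, 3: 41.0}
a = {(1, 1): 100.0, (2, 2): 52.0, (3, 3): 60.0,
     (1, 2): 300.0, (1, 3): 80.0, (2, 3): 75.0}

K12 = K(m[1], m[2], a[(1, 1)], a[(2, 2)], a[(1, 2)])
print("K_12 =", K12, "-> demixed" if K12 > 1 else "-> miscible")
print("xi_1/xi_2 =", xi_ratio(m[1], m[2], a[(1, 1)], a[(2, 2)]))
```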
To facilitate a transition in which the 1-2 interface is wet by 3, we consider a nonwet state in which condensates 1 and 2 are strongly segregated (\(K_{12}\gg K_{13},K_{23}\)). Suppose the 1-2 interface has no adsorbed film of 3. Its interfacial tension then equals \(\gamma_{12}\) and is higher than either \(\gamma_{13}\) or \(\gamma_{23}\) but lower than their sum, \(\gamma_{12}<\gamma_{13}+\gamma_{23}\). In other words, there is "preferential adsorption" of 3 but no "wetting" by 3. Previous experience with wetting in BEC [20] then suggests that, when we decrease \(K_{13}\) and/or \(K_{23}\) (towards unity), thereby lowering \(\gamma_{13}\) and/or \(\gamma_{23}\) (towards zero), we may reach a state in which \(\gamma_{12}=\gamma_{13}+\gamma_{23}\). This could signify a transition to a 1-2 interface wet by 3, and if so, it would typically be a wetting transition of first order. However, if a surfactant film of 3 develops at the nonwet 1-2 interface, its interfacial tension will decrease, i.e., \(\gamma_{12(3)}<\gamma_{12}\) and consequently \(K_{13}\) and/or \(K_{23}\) must be further lowered in order to satisfy the condition for wetting, \(\gamma_{12(3)}=\gamma_{13}+\gamma_{23}\). In that case, the possibility of a weakly first-order, or, more interestingly, a continuous or "critical" wetting transition, arises. Both scenarios were predicted in GP theory for a two-component BEC adsorbed at an optical wall [20; 21].
Figure 1: Nonwet and wet three-component BEC configurations. Shown are sketches, on a scale of typically 1 \(\mu\)m, of the contact zone where three coexisting phases meet. On a larger scale (\(>10\mu\)m) interfaces drawn straight here, may curve to follow the trap geometry (see, e.g., Fig.7 in [54] and Figs.1 and 2 in [55]). (a) Nonwet: Condensates 1, 2 and 3 meet pairwise at their mutual interfaces, displaying dihedral angles \(\hat{1}\), \(\hat{2}\) and \(\hat{3}\) at a common line of contact. (b) Nonwet, with a microscopically thin film of 3 adsorbed at the 1-2 interface. (c) Wet: Contact angle \(\hat{3}\) is zero and a wetting layer of 3 intrudes between 1 and 2. In (a)-(c), the \(z\)-axis defines the direction of inhomogeneity along which the order parameters vary in the calculation of the interfacial tensions.
To calculate the interfacial tensions it suffices to consider a one-dimensional inhomogeneity, say along \(z\), and to assume translational invariance along \(x\) and \(y\). Condensates 1 and 2 are imposed as the bulk phases at \(z\to-\infty\) and \(z\to\infty\), respectively. The candidate wetting phase is condensate 3. If we perform the rescalings \(\psi_{i}\equiv\sqrt{n_{i}}\,\tilde{\psi}_{i}\), \(z\equiv\xi_{2}\,\tilde{z}\), we arrive at the three coupled GP "equations of motion", with \(i,j\in\{1,2,3\}\),
\[\left(\frac{\xi_{i}}{\xi_{2}}\right)^{2}\frac{d^{2}\tilde{\psi}_{i}}{d\tilde{z }^{2}}=-\tilde{\psi}_{i}+\tilde{\psi}_{i}^{3}+\Sigma_{j\neq i}\,K_{ij}\,\tilde{ \psi}_{j}^{2}\,\tilde{\psi}_{i}, \tag{4}\]
with boundary conditions \(\tilde{\psi}_{1}\to 1,\ \tilde{\psi}_{j\neq 1}\to 0,\ \ \text{for}\ \tilde{z}\to-\infty\), and \(\tilde{\psi}_{2}\to 1,\ \tilde{\psi}_{j\neq 2}\to 0,\ \ \text{for}\ \tilde{z}\to\infty\).
The interfacial tension is the surface excess grand potential of the inhomogeneous state that arises when we fix the bulk states to be two different condensates. For our boundary conditions, invoking the first integral of the GP equations, one derives
\[\frac{\gamma_{12(3)}}{4P\xi_{2}}\equiv\int\limits_{-\infty}^{\infty}d\tilde{z }\;\{\,(\frac{\xi_{1}}{\xi_{2}}\frac{d\tilde{\psi}_{1}}{d\tilde{z}})^{2}+( \frac{d\tilde{\psi}_{2}}{d\tilde{z}})^{2}+(\frac{\xi_{3}}{\xi_{2}}\frac{d \tilde{\psi}_{3}}{d\tilde{z}})^{2}\}. \tag{5}\]
Virtually exact expressions have been derived for two-component \(\gamma_{ij}\)[58; 59]. High-precision numerical computations provide \(\gamma_{12(3)}\) as well as the \(\gamma_{ij}\). However, we can capture the same physics by a simple analytic calculation, an extension to three components of the double-parabola approximation (DPA), which has proven to be reliable for two-component BEC [60]. The error in the DPA wetting phase boundary, as compared with the exact one in GP theory, is less than 10% (see Fig.5 in [60]). Furthermore, from a comparison with precise GP computations for the pair-wise two-component quantity \(\gamma_{13}+\gamma_{23}-\gamma_{12}\) (see Fig.4c in [54]) we infer that the error in our DPA wetting phase boundaries for three-component BEC is less than 10% as well.
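To make the numerical route concrete, the sketch below relaxes the dimensionless GP equations (4) on a one-dimensional grid with the stated boundary conditions and evaluates the integral (5) for the interfacial tension. It is only a crude explicit-relaxation illustration: the grid, time step, couplings and healing-length ratios are assumptions, it is not the high-precision solver or the DPA calculation used for the results quoted here, and the strict \(K_{12}\to\infty\) limit would require a smaller step or an implicit scheme.

```python
# Minimal sketch: explicit relaxation of the dimensionless GP equations (4)
# and evaluation of the tension integral (5). Parameters are illustrative.
import numpy as np

def relax(K, xi, nz=400, L=40.0, dt=2e-3, nsteps=40000):
    """K: dict of K_ij (0-based indices); xi: (xi1, xi2, xi3) in units of xi2."""
    z = np.linspace(-L / 2, L / 2, nz)
    dz = z[1] - z[0]
    psi = np.zeros((3, nz))
    psi[0] = 0.5 * (1 - np.tanh(z))      # condensate 1 on the left
    psi[1] = 0.5 * (1 + np.tanh(z))      # condensate 2 on the right
    psi[2] = 0.3 * np.exp(-z**2)         # seed for condensate 3 at the interface
    for _ in range(nsteps):
        lap = np.zeros_like(psi)
        lap[:, 1:-1] = (psi[:, 2:] - 2 * psi[:, 1:-1] + psi[:, :-2]) / dz**2
        for i in range(3):
            cross = sum(K[(i, j)] * psi[j]**2 for j in range(3) if j != i)
            rhs = xi[i]**2 * lap[i] + psi[i] - psi[i]**3 - cross * psi[i]
            psi[i, 1:-1] += dt * rhs[1:-1]   # endpoints stay at bulk values
    return z, psi

def tension(z, psi, xi):
    """gamma_{12(3)} / (4 P xi_2), cf. Eq. (5)."""
    dpsi = np.gradient(psi, z, axis=1)
    return np.trapz(sum((xi[i] * dpsi[i])**2 for i in range(3)), z)

xi = (0.5, 1.0, 0.5)                         # xi_2/xi_1 = 2, xi_3/xi_1 = 1
K = {(0, 1): 20.0, (1, 0): 20.0,             # strong (not infinite) 1-2 repulsion
     (0, 2): 3.7, (2, 0): 3.7, (1, 2): 7.4, (2, 1): 7.4}
z, psi = relax(K, xi)
print("gamma_12(3)/(4 P xi_2) ~", tension(z, psi, xi))
```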
Since we assume strong segregation between 1 and 2, we consider the limit \(K_{12}\to\infty\), in which 1 and 2 are mutually impenetrable. This does not curtail the panoply of wetting phenomena since 1 and 3, and also 2 and 3 are mutually penetrable. The resulting nonwet and wet order parameter profiles are illustrated in Fig.2.
The DPA consists of defining a piecewise harmonic approximation to the energy density and solving piecewise linear GP equations in adjacent domains. The nonlinear nature of the theory remains present through weak singularities at the domain junctions \(\tilde{z}^{-}(\leq 0)\) and \(\tilde{z}^{+}(\geq 0)\), where order parameters and their first derivatives are continuous. The equilibrium wetting layer thickness, the order parameter associated with wetting, is \(\tilde{L}\equiv\tilde{z}^{+}-\tilde{z}^{-}\). We obtain the following analytic solutions at three-phase coexistence (and their extensions, not given here, for condensate 3 off of three-phase coexistence). In the leftmost domain (\(-\infty<\tilde{z}<\tilde{z}^{-}\)), \(\tilde{\psi}_{2}=0\) and
\[\tilde{\psi}_{1}=1-A_{1}\,e^{\sqrt{2}\frac{\xi_{2}}{\xi_{1}}\tilde{z}}\,,\qquad\tilde{\psi}_{3}=A_{3}\,e^{\sqrt{K_{13}-1}\frac{\xi_{2}}{\xi_{3}}\tilde{z}}. \tag{6}\]
In the rightmost domain (\(\tilde{z}^{+}<\tilde{z}<\infty\)), \(\tilde{\psi}_{1}=0\) and
\[\tilde{\psi}_{2}=1-D_{2}\,e^{-\sqrt{2}\,\tilde{z}}\,,\qquad\tilde{\psi}_{3}=D_{3}\,e^{-\sqrt{K_{23}-1}\frac{\xi_{2}}{\xi_{3}}\tilde{z}}. \tag{7}\]
In the middle domain (\(\tilde{z}^{-}<\tilde{z}<\tilde{z}^{+}\)), we have \(\tilde{\psi}_{1}(\tilde{\psi}_{2})=0\) for \(\tilde{z}>0\,(<0)\), and
\[\tilde{\psi}_{1}=2B_{1}\,\sinh(\sqrt{K_{13}-1}\,\frac{\xi_{2}}{\xi_{1}}\tilde{z}),\qquad\quad\text{for}\ \tilde{z}<0, \tag{8}\] \[\tilde{\psi}_{2}=-2C_{2}\,\sinh(\sqrt{K_{23}-1}\,\tilde{z}),\qquad\quad\text{for}\ \tilde{z}>0, \tag{9}\] \[\tilde{\psi}_{3}=1+B_{3}\,e^{\sqrt{2}\frac{\xi_{2}}{\xi_{3}}\tilde{z}}+C_{3}\,e^{-\sqrt{2}\frac{\xi_{2}}{\xi_{3}}\tilde{z}}. \tag{10}\]
In order to illustrate a rich variety of predicted wetting phenomena we vary the interspecies scattering lengths so that the control parameters are \(K_{13}\) and \(K_{23}\), and fix the healing length ratios asymmetrically, e.g., \(\xi_{2}/\xi_{1}=2\) and \(\xi_{3}/\xi_{1}=1\). The wetting phase transitions and critical phenomena so uncovered belong to three distinct classes: first-order wetting with an energy barrier, critical wetting, and a borderline case of degenerate first-order wetting (without energy barrier). The global wetting phase diagram is shown in Fig.3.
Figure 2: Interfacial order parameter profiles \(\tilde{\psi}_{i}\), \(i=1,2,3\), for \(\xi_{2}/\xi_{1}=2\), \(\xi_{3}/\xi_{1}=1\) and \(K_{12}=\infty\). The variations of the order parameters are shown along the \(z\)-axis of Fig.1(a), (b), and (c). (a) (Nonwet) Stable 1-2 interface for \(K_{13}=5\) and \(K_{23}=2K_{13}\). (b) (Nonwet) Stable 1-2 interface with an adsorbed film of 3, for \(K_{13}=3.698\) and \(K_{23}=2K_{13}\). The matching points (open circles) of the two DPAs lie at \(\tilde{z}^{-}=-0.27\) for 1-3 and \(\tilde{z}^{+}=0.41\) for 2-3. (c) (Wet) Stable 1-2 interface wet by 3, for \(K_{13}=3\) and \(K_{23}=2K_{13}\).
In two outer sectors, \(K_{23}<K_{13}\) and \(K_{23}>3K_{13}\), the wetting transition is of first-order. The equilibrium wetting layer thickness \(L\) jumps from zero to a macroscopic ("infinite") value. Using \(L\) as a constraint, the surface excess grand potential of a (non-equilibrium) configuration with fixed \(L\) defines the "interface potential" \(V(L)\)[21]. The minimum of \(V(L)\) provides the value of \(\Omega\) in equilibrium. At first-order wetting \(V(0)\) and \(V(\infty)\) are equal minima of \(V(L)\) with an energy barrier in between. The slope of the equilibrium \(\Omega\) versus \(K_{13}\) is discontinuous at the wetting transition, where the equilibrium \(\gamma_{12(3)}\) crosses over from \(\gamma_{12}\) to \(\gamma_{13}+\gamma_{23}\). This is illustrated in Fig.4a for a path at constant ratio \(K_{23}/K_{13}\).
In contrast, in the inner sector of the phase diagram (Fig.3), for \(K_{13}<K_{23}<3K_{13}\), _en route_ to the wetting transition, a wetting layer of finite thickness \(L\) develops. It originates at a nucleation transition, which is a quantum phenomenon. Decreasing the interspecies atomic repulsive forces, \(L\) increases to a macroscopic value and, theoretically, diverges at the wetting point. This divergence is logarithmic as expected for systems with exponentially decaying surface forces [13; 61]. Plotting the surface excess grand potential as a function of \(K_{13}\) at constant \(K_{23}/K_{13}\) leads to Fig.4b. The slope of the equilibrium \(\Omega\) is continuous at W, whence the name continuous wetting or "critical" wetting.
At the special points D and D' in the phase diagram nucleation and wetting coincide. This renders the wetting transition degenerate: the grand potential is independent of the wetting layer thickness. This extraordinary wetting transition, first predicted for two-component BEC at a hard optical wall [20], is of first order but without energy barrier. The interface potential \(V(L)\) is a constant [62].
The novel global wetting phase diagram of Fig.3 is our main result. Its variables depend only on atomic masses and scattering lengths and its phase boundaries are unequivocal because there are no wall boundary conditions. The nucleation line, found by studying the onset of stability of an infinitesimal film of 3 at the 1-2 interface, satisfies
\[\xi_{1}+\xi_{2}=\left(\sqrt{K_{13}-1}+\sqrt{K_{23}-1}\right)\frac{\xi_{3}}{ \sqrt{2}}\,. \tag{11}\]
The first-order wetting phase boundary, obtained by requiring \(\gamma_{12}=\gamma_{13}+\gamma_{23}\) (no surfactant), reads
\[\xi_{1}+\xi_{2}=\frac{\sqrt{K_{13}-1}\;(\xi_{1}+\xi_{3})}{\sqrt{2}+\sqrt{K_{13 }-1}}+\frac{\sqrt{K_{23}-1}\;(\xi_{2}+\xi_{3})}{\sqrt{2}+\sqrt{K_{23}-1}}\,. \tag{12}\]
The critical wetting phase boundary, derived by asymptotic analysis, for \(L\rightarrow\infty\), of \(\gamma_{12(3)}\) and by imposing the equality in (1), obeys
\[\frac{\xi_{1}}{\sqrt{K_{13}-1}}+\frac{\xi_{2}}{\sqrt{K_{23}-1}}=\sqrt{2}\;\xi_ {3} \tag{13}\]
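The three boundaries can be traced numerically in the \((K_{13},K_{23})\)-plane. The sketch below solves Eqs. (11)-(13) for \(K_{23}\) at given \(K_{13}\), for the parameter choice \(\xi_{2}/\xi_{1}=2\), \(\xi_{3}/\xi_{1}=1\) used in Fig. 3; it is a minimal illustration of how the phase boundaries follow from the analytic DPA expressions, with units chosen so that \(\xi_{1}=1\).

```python
# Minimal sketch: DPA phase boundaries (11)-(13) for xi_2/xi_1 = 2, xi_3/xi_1 = 1.
import numpy as np
from scipy.optimize import brentq

xi1, xi2, xi3 = 1.0, 2.0, 1.0
s = lambda K: np.sqrt(K - 1.0)            # shorthand for sqrt(K_ij - 1)

def K23_nucleation(K13):
    """Eq. (11): xi1 + xi2 = (s13 + s23) * xi3 / sqrt(2), solved for K_23."""
    s23 = np.sqrt(2.0) * (xi1 + xi2) / xi3 - s(K13)
    return 1.0 + s23**2

def K23_first_order(K13):
    """Eq. (12), solved numerically for K_23."""
    def f(s23):
        return (s(K13) * (xi1 + xi3) / (np.sqrt(2) + s(K13))
                + s23 * (xi2 + xi3) / (np.sqrt(2) + s23)
                - (xi1 + xi2))
    return 1.0 + brentq(f, 1e-6, 1e6)**2

def K23_critical(K13):
    """Eq. (13): xi1/s13 + xi2/s23 = sqrt(2)*xi3 (valid here for K_13 > 1.5)."""
    s23 = xi2 / (np.sqrt(2) * xi3 - xi1 / s(K13))
    return 1.0 + s23**2

for K13 in (2.5, 3.0, 4.0):
    print(K13, K23_nucleation(K13), K23_critical(K13), K23_first_order(K13))
```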
The unanticipated central role of critical wetting in the global phase diagram is of outstanding interest, because **i**) experimental observation of critical wetting in classical liquid mixtures has been a veritable challenge [16; 17], and **ii**) theoretically, critical wetting features fascinating singularities in the surface excess quantities. Non-universal critical exponents are predicted, which vary continuously with a ratio of lengths [63]. In vector models of magnets this ratio depends on the anisotropy [64]. In type-I superconductors the length ratio is that of the magnetic penetration depth and the superconducting coherence length [65]. Here, in quantum gas mixtures, the ratio involves healing lengths and penetration depths.
In conclusion, possible experimental verification of wetting phase transitions in BEC mixtures is made more compelling by omitting the optical wall but adding a third component in GP theory. This conceptual change provides a global wetting phase diagram in which the control parameters are tunable interatomic scattering lengths, and in which the phase boundaries are unequivocal due to the absence of any wall boundary conditions. To our knowledge we present the first wetting phase diagram that only depends on intrinsic atomic parameters (masses and s-wave scattering lengths). A rich diversity of interface phase transitions, including degenerate first-order wetting, first-order wetting and, notably,
Figure 3: Global wetting phase diagram in the \((K_{13},K_{23})\)-plane for fixed \(\xi_{2}/\xi_{1}=2\) and \(\xi_{3}/\xi_{1}=1\), and for strong segregation between condensates 1 and 2. For strong (weak) interspecies repulsion the nonwet (wet) configuration is stable. The wetting phase transition (thick solid line) is of first-order for \(K_{23}/K_{13}>3\) and \(K_{23}/K_{13}<1\), whereas critical wetting takes place for \(1<K_{23}/K_{13}<3\). Critical wetting is preceded by the nucleation (thin solid line) of a film of condensate 3. Mathematical extensions (dashed and dotted lines) indicate that the wetting phase boundary displays corner singularities at the degenerate first-order wetting transitions at D and D’. The three points marked + locate, for descending \(K_{13}\) and fixed \(K_{23}/K_{13}=2\), the calculated interface configurations shown in Fig.2a-c.
(non-)universal critical wetting, are realized in the three-component GP theory without wall boundary conditions.
J.I. gratefully acknowledges the plentiful hospitality of Hanoi Pedagogical University 2, and a sabbatical bench fee (K802422N) from the Research Foundation-Flanders (FWO).
|
2309.04274 | Data-Flow-Based Normalization Generation Algorithm of R1CS for
Zero-Knowledge Proof | The communities of blockchains and distributed ledgers have been stirred up
by the introduction of zero-knowledge proofs (ZKPs). Originally designed to
solve privacy issues, ZKPs have now evolved into an effective remedy for
scalability concerns and are applied in Zcash (internet money like Bitcoin). To
enable ZKPs, Rank-1 Constraint Systems (R1CS) offer a verifier for bi-linear
equations. To accurately and efficiently represent R1CS, several language tools
like Circom, Noir, and Snarky have been proposed to automate the compilation of
advanced programs into R1CS. However, due to the flexible nature of R1CS
representation, there can be significant differences in the compiled R1CS forms
generated from circuit language programs with the same underlying semantics. To
address this issue, this paper uses a data-flow-based R1CS paradigm algorithm,
which produces a standardized format for different R1CS instances with
identical semantics. By using the normalized R1CS format circuits, the
complexity of circuits' verification can be reduced. In addition, this paper
presents an R1CS normalization algorithm benchmark, and our experimental
evaluation demonstrates the effectiveness and correctness of our methods. | Chenhao Shi, Hao Chen, Ruibang Liu, Guoqiang Li | 2023-09-08T11:52:11Z | http://arxiv.org/abs/2309.04274v2 | # Data-Flow-Based Normalization Generation Algorithm of R1CS for Zero-Knowledge Proof
###### Abstract
The communities of blockchains and distributed ledgers have been stirred up by the introduction of zero-knowledge proofs (ZKPs). Originally designed to solve privacy issues, ZKPs have now evolved into an effective remedy for scalability concerns and are applied in Zcash (internet money like Bitcoin). To enable ZKPs, Rank-1 Constraint Systems (R1CS) offer a verifier for bi-linear equations. To accurately and efficiently represent R1CS, several language tools like Circom, Noir, and Snarky have been proposed to automate the compilation of advanced programs into R1CS. However, due to the flexible nature of R1CS representation, there can be significant differences in the compiled R1CS forms generated from circuit language programs with the same underlying semantics. To address this issue, this paper uses a data-flow-based R1CS paradigm algorithm, which produces a standardized format for different R1CS instances with identical semantics. By using the normalized R1CS format circuits, the complexity of circuits' verification can be reduced. In addition, this paper presents an R1CS normalization algorithm benchmark, and our experimental evaluation demonstrates the effectiveness and correctness of our methods.
Zero-knowledge proof; Rank-1 constraint systems; Data flow graph; ZKP programming; Normalization
## I Introduction
_Zero-knowledge proofs (ZKPs)_ are increasingly recognized for their importance in modern cryptography [1], as more and more cryptographic communities seek to address some of the blockchain's most significant challenges: privacy and scalability. ZKPs are also the essential technique behind Zcash [2, 3]. From both users' and developers' perspectives, the heightened emphasis on information privacy and security has led to a greater appreciation for the privacy advantages offered by zero-knowledge proofs. As decentralized finance (DeFi) usage grows, zero-knowledge applications that provide scalability and privacy advantages will have more opportunities to increase industry-wide adoption. However, not all computational problems can be directly addressed using zero-knowledge proofs. Instead, we must transform the issue into the correct form of computation. The _rank-1 constraint system (R1CS)_ describes the execution of statements written in high-level programming languages and is used by many ZKP applications, but there is no standard way of representing them [4]. Circom is a novel domain-specific language for transforming computational problems into R1CS format circuits [5]. In the specific process of a first-order zero-knowledge proof, we first convert the problem into a computational problem in Circom, and then into R1CS format circuits.
Due to the flexible nature of R1CS representation and variations in program organization and compiler optimization levels, there can be significant differences in the compiled R1CS forms generated from circuit language programs with the same underlying semantics, which leads to difficulties in further ZKP program analysis and verification.
This paper proposes a data-flow-based algorithm for generating normalization of R1CS, enabling the conversion of different R1CS constraints into a normal form, facilitating the determination of equivalence and correctness. To achieve this, the algorithm starts by transforming an R1CS into a data flow graph structure resembling an expression tree. It then segments and abstracts the data flow graph, eliminating differences between equivalent R1CS constraints that may arise from the generation process. Finally, sorting rules are proposed to sort the constraints and variables within R1CS, ultimately resulting in a unique normal form for equivalent R1CS.
Moreover, we classify and summarize the reasons and characteristics of the different equivalent R1CS generated, based on the constraint generation logic of mainstream compilers and the expressiveness of R1CS. In addition, based on the identified reasons for producing equivalent R1CS, we create a relatively complete benchmark. Our proposed algorithm, which can pass all test cases in the benchmark, demonstrates that equivalent R1CS can be converted into a unique and identical canonical form under various circumstances.
This work contributes to R1CS optimization by providing a novel algorithm for generating canonical forms of equivalent R1CS constraints. Our algorithm can eliminate unnecessary redundancy and normalize representation, thus improving on existing methods and facilitating the analysis of equivalence and correctness. Furthermore, the effectiveness and practicality of the proposed algorithm are demonstrated through our comprehensive benchmark.
**Related Work** Eli et al. design, implement, and evaluate a zero-knowledge succinct non-interactive argument (SNARG) for R1CS [6]. Historically, research investigating the factors associated with R1CS has focused on satisfiability. In [7], the prover incurs finite field operations to prove the satisfiability of an n-sized R1CS instance. Alexander et al. introduce Brakedown, the first built system that provides linear-time SNARKs for NP [8].
Considering Circom and R1CS format circuits as two languages before and after compilation, research on the generation of the R1CS paradigm is more akin to research on semantic consistency in compilation. Currently, patent applications and research papers propose ideas and solutions for generating compilation paradigms in other languages, mainly exploring data flow [9], syntax tree [10], or semantic mapping [11] aspects. These studies offer crucial insights into the fundamental information that semantically identical programs carry through the compilation process. However, due to the inherent constraints embedded within the R1CS form, this paper ultimately elects to use data flow as a starting point for research.
**Paper Organization** The paper is organized as follows: In the next section, a brief preliminary review of zero-knowledge proof and related tools. Section 3 provides the process of the proposed algorithm in this paper. The technical exposition in Section 4 explains in detail the logic of the critical steps and their formal description. Section 5 presents the specific categories of benchmarks and their corresponding experimental results. Lastly, Section 6 concludes the present study.
## II Preliminaries
This section introduces the basic concepts and principles of zero-knowledge proofs and discusses the roles of R1CS and Circom in zero-knowledge-proof systems. It also explores the limitations of existing normalization techniques for R1CS and presents the proposed data-flow-based normalization generation algorithm, which is motivated by these limitations.
### _Zk-SNARKs_
A zk-SNARK, which stands for Zero-Knowledge Succinct Non-interactive Argument of Knowledge, is a type of zero-knowledge proof introduced in a 2014 paper [12]. The objective of a zk-SNARK is to enable one party to prove to another that they possess specific knowledge without revealing the knowledge itself, and to do so concisely and efficiently. The working principle of a zk-SNARK can be simplified into several steps. First, users convert the information they wish to verify into a mathematical problem called a computation. This computation can be implemented using any high-level programming language, such as OCaml, C++, or Rust, or hardware description languages like Circom.
Next, the computation result is usually transformed into an arithmetic circuit, a computational model for computing polynomials. An arithmetic circuit consists of inputs and multiple gates, performing an essential arithmetic operation such as addition or multiplication. The entire arithmetic circuit can be used to generate a specific format (R1CS), which takes the form of a constraint-based formula system (rank-1 constraint system) expressing the constraints of the arithmetic circuit. A verifiable arithmetic circuit is one of the inputs of the zk-snark's algorithm.
The Quadratic Arithmetic Program (QAP) is a variant of linear PCP, and the verifiable arithmetic circuit is converted into the QAP format. A QAP is a formula system that employs polynomials to represent the behavior of the arithmetic circuit. Utilizing QAP enhances the security of the zk-SNARK algorithm and improves its implementation efficiency [13].
Finally, there is the zk-SNARK stage, where the verifiable arithmetic circuit and QAP are used to generate a proof. Zk-SNARKs are powerful privacy protection protocols that can be utilized in digital payments, blockchain technology, and other fields. They can verify the authenticity of information while protecting the user's privacy. Despite its relatively complex working principle, zk-SNARK technology has found widespread application, bringing higher security and privacy protection to the digital world.
### _Circuit Language_
ZKP technology can address several fundamental issues in the modern digital world, including verifying a user's identity without compromising their private information and safeguarding private data from unauthorized exploitation. Within ZKP, arithmetic circuits play a vital role in describing and computing complex operations. These circuits consist of a series of gates that perform basic arithmetic operations, such as addition, multiplication, and division. By combining these basic operations, complex arithmetic circuits can be constructed to execute diverse computational tasks.
The circuit referred to here is a theoretical computational model, not an actual electronic circuit.
This is the formal arithmetic circuit definition in theoretical computer science [14].
**Definition II.1**.: A finite field \(\mathbb{F}\) is a field that contains a finite number of elements.
\[\mathbb{F}=\{0,\ldots,p-1\}\text{ for some prime }p>2\]
The operations \(+,\times,=\) on \(\mathbb{F}\) are carried out \((\bmod\ p)\).
**Definition II.2**.: A circuit is a triple \((M,L,G)\), where
* \(M\) is a set of values,
* \(L\) is a set of gate labels, each of which is a function from \(M^{i}\) to \(M\) for some non-negative integer \(i\) (where \(i\) represents the number of inputs to the gate), and
* \(G\) is a labelled directed acyclic graph with labels from \(L\).
**Definition II.3**.: An arithmetic circuit is a map \(C:\mathbb{F}^{n}\rightarrow\mathbb{F}\), where \(\mathbb{F}\) is a finite field.
1. It is a directed acyclic graph (DAG) where internal nodes are labeled \(+,-,\text{or}\times\) and inputs are labeled \(1,x_{1},\ldots,x_{n}\); the edges are wires or connections.
2. It defines an n-variable polynomial with an evaluation recipe.
3. \(|C|=\#\) gates in \(C\).
Fig. 1: The pipeline diagram for zk-SNARKs.
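As a small illustration of Definitions II.1 and II.3, the sketch below evaluates an arithmetic-circuit DAG gate by gate over a prime field. The gate encoding, the choice of prime, and the example polynomial \(x^{3}+x+5\) (the computation behind the R1CS example used later in the paper) are illustrative choices, not part of any particular library.

```python
# Minimal sketch: evaluating an arithmetic circuit over F_p, gate by gate.
# The prime below is one common choice (the BN254 scalar field); any prime works.
P = 21888242871839275222246405745257275088548364400416034343698204186575808495617

def evaluate(inputs, gates, p=P):
    """inputs: field elements; gates: (op, a, b) where a, b are earlier wire
    indices, or ('const', value, None) which pushes a constant wire."""
    wires = [x % p for x in inputs]
    for op, a, b in gates:
        if op == "const":
            wires.append(a % p)
        elif op == "add":
            wires.append((wires[a] + wires[b]) % p)
        elif op == "mul":
            wires.append((wires[a] * wires[b]) % p)
        else:
            raise ValueError("unknown gate " + op)
    return wires[-1]                     # last wire is the circuit output

# C(x) = x^3 + x + 5, with wire 0 = x
gates = [("mul", 0, 0),                  # wire 1 = x^2
         ("mul", 1, 0),                  # wire 2 = x^3
         ("add", 2, 0),                  # wire 3 = x^3 + x
         ("const", 5, None),             # wire 4 = 5
         ("add", 3, 4)]                  # wire 5 = x^3 + x + 5
print(evaluate([3], gates))              # 35
```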
This section mainly focuses on the Circom language, employed at the core step of zk-SNARK protocols for describing arithmetic circuits. Within the framework of ZKP systems, several commonly used arithmetic circuit description languages exist, including Arithmetica, libsnark DSL, and Circom. These languages are typically employed for building and verifying ZKP systems and can be used to describe various arithmetic circuits, such as linear constraint systems (LCS), bilinear pairings, and quadratic circuits.
Circom's core idea is to represent an arithmetic circuit as a constraint system, describing inputs, outputs, and the computation process as linear equations and inequalities. This approach allows developers to define complex computation processes and generate the corresponding R1CS constraint system. Circom provides a set of high-level abstract concepts, enabling developers to focus more on the algorithm without being overwhelmed by low-level implementation details.
Due to its comprehensive range of features, Circom has been widely used in cryptography, blockchain, and other security-critical applications. Its efficient and streamlined handling of complex computational structures allows developers to quickly implement various privacy-preserving protocols and zero-knowledge-proof techniques. Circom provides a simple, declarative way of defining constraint systems and describing real-world puzzles, which is of great importance for the protection of users' privacy and data security.
### _Rank-1 Constraint Systems (R1CS)_
R1CS, a common Arithmetic Circuit format that underlies real-world systems [2] and an important part of the zk-SNARK algorithm Groth16 [15], represents computations as a set of constraint conditions, namely linear equations and inequalities. Each equation has its own set of coefficients, while each variable represents an input or output value. These equations and inequalities describe the limiting conditions of the computation, implying that satisfying these conditions correctly calculates the corresponding output result for the given input sequence. R1CS includes a formal definition of constraint-based computation rules, which can be verified using a set of public parameters and a private input sequence. For a more detailed understanding of the formal definition of R1CS, refer to Vitalik's blog [13].
**Definition II.4**.: R1CS is a format for ZKP ACs. An R1CS is a conjunction of constraints, each of the form:
\[(\vec{a}\cdot\vec{x})\times(\vec{b}\cdot\vec{x})=(\vec{c}\cdot\vec{x})\]
where \(\vec{a},\vec{b}\) and \(\vec{c}\) are vectors of coefficients (elements of the prime field consisting of the integers modulo some prime), and \(\vec{x}\) is a vector of distinct "pseudo-variables". Each pseudo-variable is either a variable, representing an element of the field, or the special symbol 1, representing the field element 1. Here \(\cdot\) represents taking the dot product of two vectors, except that all additions and multiplications are done \(\pmod{p}\), and \(\times\) represents the product of two scalars \(\pmod{p}\). Using a pseudo-variable of 1 allows an additive constant to be represented in a dot product.
In Circom, each triplet of vectors in the R1CS represents a mathematical constraint. These vectors consist of coefficients of variables found at corresponding positions in the solution vector \(\vec{s}\). The solution vector \(\vec{s}\) includes assignments for all variables present in the R1CS equations.
For example, a satisfied R1CS is shown in Fig.2:
This constitutes a first-order constraint corresponding to a circuit multiplication gate. If we combine all constraints, we obtain a first-order constraint system.
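To make Definition II.4 concrete, the sketch below checks whether a candidate assignment satisfies an R1CS given by coefficient matrices \(A\), \(B\), \(C\). The matrices are the ones for \(x^{3}+x+5=\text{out}\) used later in the paper (taken from Vitalik's blog [13]); the assumed variable ordering \(\vec{s}=(1,x,\text{out},sym_{1},y,sym_{2})\), the satisfying assignment \(x=3\), \(\text{out}=35\), and the choice of prime are illustrative.

```python
# Minimal sketch: verifying (A_k . s) * (B_k . s) = (C_k . s) (mod p) per row k.
P = 21888242871839275222246405745257275088548364400416034343698204186575808495617

def dot(row, s, p=P):
    return sum(c * v for c, v in zip(row, s)) % p

def satisfies(A, B, C, s, p=P):
    return all(dot(a, s, p) * dot(b, s, p) % p == dot(c, s, p)
               for a, b, c in zip(A, B, C))

A = [[0, 1, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0], [0, 1, 0, 0, 1, 0], [5, 0, 0, 0, 0, 1]]
B = [[0, 1, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0]]
C = [[0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 1], [0, 0, 1, 0, 0, 0]]
s = [1, 3, 35, 9, 27, 30]      # x = 3, out = 35, sym_1 = 9, y = 27, sym_2 = 30
print(satisfies(A, B, C, s))   # True
```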
R1CS is widely used in practical applications as a powerful computational model. It serves as an integral component of the Groth16 algorithm, which is a popular variant of zk-SNARK algorithms. R1CS plays a crucial role in improving developers' understanding of computer science and cryptography. Additionally, it offers crucial support for various privacy protection measures.
In this paper, we propose the R1CS paradigm for constraint groups. It imposes constraints on the form and ordering of variable constraints.
**Definition II.5**.: An R1CS paradigm is an R1CS that satisfies the following requirements:
1. If a constraint in the R1CS paradigm contains multiplication between variables, it cannot have any other operators.
2. If a constraint in the R1CS paradigm does not contain multiplication between variables, it cannot contain intermediate variables generated by other linear constraints.
3. The ordering of constraints and variables (defined in **Definition II.4**) in the R1CS paradigm must be consistent with the ordering method (\(\times\) is in front of \(+\)) in this paper.
The specific adjustments required for converting a general constraint system into an R1CS paradigm will be explained through examples with concrete constraints.
Requirement 1 suggests that complex quadratic constraints in the R1CS should be split into simpler forms.
Fig. 2: A satisfied R1CS.
For instance,
\[a\times b+c+d=f\Longrightarrow a\times b=r,r+c+d=f\] \[5\times a\times b=c\Longrightarrow 5\times a=r,r\times b=c\]
Requirement 2 indicates that linear constraints in the R1CS system must be eliminated by removing intermediate variables defined by other linear constraints. For example,
\[a+b=c,c+d=e\Longrightarrow a+b+d=e\]
The specific sorting methods in requirement three will be discussed in later sections outlining the algorithm's steps.
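A minimal sketch of Requirement 2 is given below: an intermediate variable defined by one linear constraint is substituted away in another. Constraints are encoded as coefficient dictionaries meaning \(\sum_{v}c_{v}\,v=0\); the encoding and names are illustrative, and a real implementation would work with field elements modulo the prime rather than plain Python numbers.

```python
# Minimal sketch: eliminating a linear intermediate variable (Requirement 2).
def substitute(target, inter, definition):
    """Replace variable `inter` in `target` using `definition` (a linear
    constraint defining inter, i.e. definition[inter] != 0)."""
    if inter not in target:
        return dict(target)
    scale = -target[inter] / definition[inter]
    merged = {v: c for v, c in target.items() if v != inter}
    for v, c in definition.items():
        if v != inter:
            merged[v] = merged.get(v, 0) + scale * c
    return {v: c for v, c in merged.items() if c != 0}

# a + b = c defines c;  c + d = e uses it  ->  a + b + d = e
definition = {"a": 1, "b": 1, "c": -1}
target = {"c": 1, "d": 1, "e": -1}
print(substitute(target, "c", definition))
# {'d': 1, 'e': -1, 'a': 1.0, 'b': 1.0}, i.e. a + b + d = e
```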
### _Data Flow Graph_
In a bipartite-directed graph, known as a data flow graph, there are two types of nodes: links and actors. Actors are utilized to represent various operations, while links serve as the means by which data is received by actors. Additionally, arcs allow links to transmit values to actors. The formal definition of this concept can be found in Dennis' paper [16].
**Definition II.6**.: A data flow graph is a bipartite labeled graph where the two types of nodes are called actors and links.
\[G=\langle A\cup L,E\rangle \tag{1}\]
where
\[A=a_{1},a_{2},\ldots,a_{n} \text{is the set of actors}\] \[L=l_{1},l_{2},\ldots,l_{m} \text{is the set of links}\] \[E\subseteq(A\times L)\cup(L\times A) \text{is the set of edges}.\]
A more detailed description can be found in [17].
### _Weighted Pagerank Algorithm_
In this paper, we adopt the weighted PageRank algorithm to compute the weight of each node in the data flow graph [18].
The PageRank algorithm is a method used for computing the ranking of web pages in search engine results. It was initially proposed by Larry Page and Sergey Brin, co-founders of Google, in 1998 and has since become one of the most essential algorithms in the field of search engines [19].
The algorithm assesses online web pages to determine their weight values, which it then utilizes to rank search results. PageRank is based on the notion that the weight of a web page is influenced by both the quantity and quality of the other web pages that link to it.
The main steps of the Pagerank algorithm are as follows:
1. Building the graph structure: First, the web pages and links on the internet must be converted into a graph structure. In this structure, each web page corresponds to a node and each link to a directed edge that points to the linked web page.
2. Computing the initial scores of each page: In Pagerank, the initial score of each page is set to 1. This means that initially, each node has an equal score.
3. Iteratively computing the scores of each page: Each node's score is iteratively calculated based on its incoming links and averaged onto its outgoing links at each iteration.
4. Considering the number and quality of links: In addition to the relationships between nodes, Pagerank considers the number and quality of links pointing to a web page. Links from high-quality websites may carry more value than those from low-quality sites. Therefore, when computing scores, the algorithm weights links according to their number and quality.
5. Iterating until convergence: When the score of a node stabilizes, the algorithm stops iterating. This indicates that the final scores of all nodes have been determined and can be used to rank search results.
The Weighted PageRank algorithm differs from the standard PageRank algorithm in that it incorporates the weight of each link as a factor, resulting in a more precise evaluation of a webpage's importance. Considering the importance of pages, the original PageRank formula is modified as
\[PR(u)=(1-d)+d\sum_{v\in B(u)}PR(v)W^{in}_{(v,u)}W^{out}_{(v,u)} \tag{2}\]
In this equation, \(W^{in}_{(v,u)}\) and \(W^{out}_{(v,u)}\) are the weights of \(link(v,u)\), calculated based on the number of inlinks and outlinks of page \(u\) and the number of inlinks of all reference pages of page \(v\).
Fig. 4: The iteration of the PageRank algorithm.
Fig. 3: The initial state of the PageRank algorithm.
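A minimal sketch of the weighted PageRank iteration of Eq. (2) is shown below. The toy graph, the damping factor, and the convention used for the link weights \(W^{in}\) and \(W^{out}\) (in- and out-link counts normalized over the reference pages of \(v\)) follow the standard weighted PageRank formulation and are illustrative assumptions rather than the exact implementation used in this work.

```python
# Minimal sketch: weighted PageRank iteration, Eq. (2). Graph is illustrative.
def weighted_pagerank(out_edges, d=0.85, iters=100):
    nodes = list(out_edges)
    in_edges = {u: [v for v in nodes if u in out_edges[v]] for u in nodes}
    I = {u: len(in_edges[u]) for u in nodes}           # number of inlinks
    O = {u: max(len(out_edges[u]), 1) for u in nodes}  # number of outlinks
    pr = {u: 1.0 for u in nodes}
    for _ in range(iters):
        new = {}
        for u in nodes:
            total = 0.0
            for v in in_edges[u]:                      # v links to u
                ref = out_edges[v]                     # reference pages of v
                w_in = I[u] / max(sum(I[p] for p in ref), 1)
                w_out = O[u] / max(sum(O[p] for p in ref), 1)
                total += pr[v] * w_in * w_out
            new[u] = (1 - d) + d * total
        pr = new
    return pr

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(weighted_pagerank(graph))
```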
In this paper, we aim to use this algorithm to obtain more accurate weight values for each node in the data flow graph.
## Overview
In this section, we will introduce the procedure of normalization through the process of converting the R1CS introduced in Vitalik's blog [13] with our algorithm:
Constraint Set:
\[A=\begin{pmatrix}0&1&0&0&0&0\\ 0&0&0&1&0&0\\ 0&1&0&0&1&0\\ 5&0&0&0&0&1\end{pmatrix}B=\begin{pmatrix}0&1&0&0&0&0\\ 0&1&0&0&0&0\\ 1&0&0&0&0&0\\ 1&0&0&0&0&0\end{pmatrix}\] \[C=\begin{pmatrix}0&0&0&1&0&0\\ 0&0&0&0&1&0\\ 0&0&0&0&0&1\\ 0&0&1&0&0&0\end{pmatrix}\]
Firstly, the arithmetic tree generation process involves creating an arithmetic tree for each constraint within the input R1CS constraint group, which is subsequently merged. The resulting arithmetic tree comprises common subformulas stored in a _DAG_. The constructed data flow graph's structure is shown in figure 5, which illustrates how the arithmetic trees are combined to form the data flow graph.
Subsequently, a tile selection algorithm is implemented based on the data flow graph, which divides the graph into tiles. The division of the entire graph into tiles is illustrated in figure 6, depicting the overall procedure of the tile selection algorithm. The specifics of the tile selection process, including the form and selection logic, will be elaborated upon in subsequent chapters.
Next, the data flow graph is abstracted further with the selection of tiles as a reference. A new abstracted node in the data flow graph replaces linear constraints represented by tiles. The abstracted node can be represented as an affine mapping, which preserves the linear relationship between the variables, enabling faster computation of the intermediate values during the proof generation process. This abstraction procedure streamlines the proof generation process and reduces the computational cost of generating the proof.
We calculate the weight of each node with coefficients of the constraint. Then we calculate the weights of the selected individual tiles using the improved Weighted PageRank algorithm. The convergence process of the PageRank values of four nodes in the abstract graph is depicted in figure 8.
Finally, constraints in the R1CS paradigm are generated separately for each tile, and the constraints and variables are ranked by the node weights computed in the previous steps.
Now we convert the input R1CS to its paradigm:
\[A=\begin{pmatrix}0&1&0&0&0\\ 0&1&0&0&0\\ 5&1&0&1&-1\end{pmatrix}B=\begin{pmatrix}0&0&1&0&0\\ 0&1&0&0&0\\ 1&0&0&0&0\end{pmatrix}\] \[C=\begin{pmatrix}0&0&0&1&0\\ 0&0&1&0&0\\ 0&0&0&0&0\end{pmatrix}\]
## III Normalization Algorithm
In this section, we formally introduce various steps of the normalization generation algorithm and several data structures defined within the algorithm.
Fig. 5: The procedure of constructing the data flow graph.
Fig. 6: The procedure of tile selection.
Fig. 7: The procedure of constructing the data flow graph.
### _Construction of RNode Graph_
Our study presents a novel data structure, _RNode_, that represents variable relationships within an R1CS arithmetic circuit. It facilitates efficient processing by tracking the interconnections among variables, and is formally defined as follows:
**Definition III.1**.: An RNode is a node of two types in the data flow graph constructed in this normalization algorithm.
\[RNode =ConstNode\cup VarNode\] \[ConstNode =\{ConstValue,Operation,Father,Child\}\] \[VarNode =\{Operation,Father,Child\} \tag{3}\]
where
\[\forall c\text{ is a }\text{ConstNode}\cap c.Operation=Null,c.child=\emptyset\] \[\forall c\text{ is a }\text{ConstNode}\cap c.Operation\in\{Add,Mul\},c.father=0\]
RNodes can be categorized into two types based on the variables they represent. The first category represents the original variables in the solution vector of the R1CS and the intermediate variables produced during the construction of the arithmetic circuit. The second category represents the constants in the data flow graph. Each RNode includes both an operator and a computed result, which store the calculation method between its two parent nodes and represent the calculated result of the subtree rooted in itself.
The generation of the RNode Graph involves 3 main stages:
1. Transform each constraint into an equation of \(a*b=c\), as required by the R1CS constraints.
2. Convert each constraint in the original R1CS constraint into an equation.
3. Organize the resulting equations containing common sub-expressions into a DAG-structured expression tree.
The core logic of the RNode Graph generation algorithm follows the procedure for constructing the RNode Graph outlined above.
The RNode is a data structure used to store information about the variables in an R1CS during the construction of the RNode Graph. Unlike typical nodes in an expression tree, each RNode stores information about both the variable and operator involved in a given operation, allowing each operator's output to be considered an intermediate variable and making it more closely aligned with the properties of an R1CS constraint set.
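The sketch below illustrates one possible encoding of RNodes and of hash-consing common sub-expressions into a DAG while folding a coefficient row into a chain of Add/Mul nodes. Attribute names loosely follow Definition III.1; everything else (the cache keys, the caching strategy, and the example row) is an illustrative assumption, not the authors' implementation.

```python
# Minimal sketch: RNode structure and shared-sub-expression DAG construction.
class RNode:
    def __init__(self, op=None, fathers=(), const=None):
        self.operation = op          # None for leaves, "Add" or "Mul" otherwise
        self.fathers = fathers       # operand nodes ("father" in the paper)
        self.children = []           # nodes that consume this one
        self.const = const           # constant value for ConstNodes

class RNodeGraph:
    def __init__(self):
        self.cache = {}              # key -> node, shares common sub-expressions

    def node(self, key, **kw):
        if key not in self.cache:
            n = RNode(**kw)
            self.cache[key] = n
            for f in n.fathers:
                f.children.append(n)
        return self.cache[key]

    def linear(self, row, variables):
        """Fold one coefficient row into a chain of Add/Mul nodes."""
        terms = []
        for coeff, var in zip(row, variables):
            if coeff == 0:
                continue
            v = self.node(("var", var))
            if coeff != 1:
                c = self.node(("const", coeff), const=coeff)
                v = self.node(("mul", ("const", coeff), ("var", var)),
                              op="Mul", fathers=(c, v))
            terms.append(v)
        acc = terms[0]                       # assumes at least one nonzero term
        for t in terms[1:]:
            acc = self.node(("add", id(acc), id(t)), op="Add", fathers=(acc, t))
        return acc

g = RNodeGraph()
a_row = [5, 0, 0, 0, 0, 1]                   # 5*one + sym_2, cf. the example R1CS
root = g.linear(a_row, ["one", "x", "out", "sym_1", "y", "sym_2"])
print(root.operation, len(g.cache))          # Add 5
```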
In practice, the merging and splitting of constraints poses a challenge in determining the equivalence of R1CS constraint sets. However, our RNode graph generation algorithm has observed that this merging or splitting does not lead to substantial differences. When constraints are merged, one variable is subtracted from the original constraint set. For instance, the merging process can be exemplified by the following example.
\[\begin{pmatrix}1&1&0&0\\ 0&1&1&0\end{pmatrix}\cdot\begin{pmatrix}1&0&0&0\\ 1&0&0&0\end{pmatrix}=\begin{pmatrix}0&0&1&0\\ 0&0&0&1\end{pmatrix}\] \[\downarrow\] \[\begin{pmatrix}1&2&0&0\end{pmatrix}\cdot\begin{pmatrix}1&0&0&0 \end{pmatrix}=\begin{pmatrix}0&0&0&1\end{pmatrix}\] \[\downarrow\] \[\begin{pmatrix}1&2&0\end{pmatrix}\cdot\begin{pmatrix}1&0&0\end{pmatrix} =\begin{pmatrix}0&0&1\end{pmatrix}\]
However, the subtracted variable will be added back into the RNode Graph as an intermediate node in the sum-product expression during the construction of the RNode Graph. The reverse is also true.
Further abstraction of the RNode Graph is required to eliminate the difference observed in the graphs generated by equivalent R1CS due to the different variable ordering in the constraint set. The main difference is observed in the order of addition in constructing the expression of continuous addition. At this stage, we do not have sufficient information to determine the sequential execution order. During the algorithm execution flow, two variables are randomly selected and added together, resulting in a different structure in the graph.
### _Tile Selection_
Here, we categorize tiles into three types and give the formal definition of **tile**.
**Definition III.2**.: Tile is a tree-like subgraph of the RNode Graph, representing a constraint in R1CS.
1. Quadratic: Tiles with the form \(x*y=z\), where \(x\), \(y\), and \(z\) are variables.
2. MulLinear: Tiles whose root is obtained by multiplying its two parents, with at least one of the parents being a constant.
3. AddLinear: Tiles whose root is obtained by adding its two parents.
Tile is essentially a set of linear equations generated by applying certain constraints to variables in the R1CS. We can use tiles as building blocks to construct a normalized R1CS that is both correct and scalable.

Fig. 8: The convergence process of each node.
During tile selection, we partition the data flow graph from the previous step into tiles of the 3 types above. While AddLinear and MulLinear tiles are both generated by linear constraints and are essentially linear tiles, their logical processing differs significantly. Hence, we discuss them as two separate types.
1. We temporarily put aside the constraint merging step until we obtain more information about the tree in subsequent steps.
2. If there is a need to generate merged formulas later, it can be achieved simply by applying a fixed algorithm to the unmerged formulas.
3. The implementation of the tile selection algorithm is relatively simple.
Once the RNode Graph is generated, we select a node with no successors and use it as the root to partition a subgraph from the RNode Graph as a tile. We then remove this tile from the RNode Graph and repeat this loop until the entire graph is partitioned. Detailed code will be posted on GitHub.
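As a rough sketch of this partition loop (building on the illustrative `RNode` class above; the exact extent of a tile and the helper names are simplifying assumptions, not the published implementation), the selection can be written as:

```python
def classify(root: "RNode") -> str:
    """Tile type from Definition III.2, decided by the root's operation and operands."""
    if root.operation == "Mul":
        return "MulLinear" if any(f.is_const for f in root.fathers) else "Quadratic"
    return "AddLinear"

def collect_tile(root: "RNode", remaining: set) -> set:
    """One possible tile body: the root plus any chain of addition nodes feeding it
    that is still in the graph; leaf variables and constants stay shared."""
    tile, stack = {root}, list(root.fathers)
    while stack:
        n = stack.pop()
        if n in remaining and n not in tile and n.operation == "Add" == root.operation:
            tile.add(n)
            stack.extend(n.fathers)
    return tile

def select_tiles(nodes) -> list:
    """Repeatedly peel off a tile rooted at a node with no remaining successors."""
    tiles, remaining = [], {n for n in nodes if n.operation is not None}
    while remaining:
        root = next(n for n in remaining
                    if not any(c in remaining for c in n.children))
        tile = collect_tile(root, remaining)
        tiles.append((classify(root), tile))
        remaining -= tile
    return tiles
```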
The difference between data flow graphs generated by equivalent R1CS constraint systems lies in the order of node additions when processing linear tiles. However, the nodes added within a linear constraint, when considered as a set, are equivalent. This implies that the different addition order only affects the traversal order of the nodes. Therefore, if we consider the selected linear tiles as the products of the selected nodes and their respective coefficients, the chosen sets of linear tiles from equivalent R1CS constraint systems are the same. In other words, there is no distinction between the selected sets of tiles for equivalent R1CS constraint systems.
### _Graph Abstraction_
We further abstract the data flow graph based on the selected tiles to eliminate the differences between various equivalent R1CS constraint systems. Specifically, we abstract linear tiles using a new node. By doing so, we mask the difference in addition order within linear tiles in the RNode Graph and transform the relationship between external nodes and specific nodes within the linear tile into a relationship between external nodes and the tile to which the particular node belongs. In this abstracted data flow graph, the types of edges are as follows:
1. Non-linear tile abstract node to non-linear tile abstract node: The two vertices already existed in the original RNode graph. This edge type remains consistent with the original RNode graph.
2. Non-linear tile abstract node to linear tile abstract node: This edge type exists only if there are non-abstract nodes in the linear tile represented by the abstract node.
3. Linear tile abstract node to linear tile abstract node: This edge type exists only if the two abstract nodes represent linear tiles that share common non-abstract nodes.
### _Tile Weight Calculation_
In this step, we use the Weighted PageRank algorithm to calculate the scores of each vertex in the data flow graph.
The previous steps eliminated the differences in the data flow graphs generated by equivalent R1CS through the abstraction of linear tiles. In the following step, constraints are generated on a tile-by-tile basis, and a criterion for tile order is proposed to sort the generated constraints.
In the algorithm proposed in this paper, the Weighted PageRank algorithm is used to calculate the weights of each node in the abstracted data flow graph, which are then used as the basis for calculating the weights of the corresponding constraints for each tile. Compared to the traditional PageRank algorithm, this algorithm assigns weights to every edge in the graph and adjusts the iterative formula for node weights. In the Weighted PageRank algorithm, as mentioned in the former section, the formula for calculating node scores is defined as follows:
\[PR(u)=(1-d)+d\sum_{v\in B(u)}PR(v)W^{in}_{(v,u)}W^{out}_{(v,u)} \tag{4}\]
In this algorithm, the primary purpose of using the Weighted PageRank algorithm is to reduce the symmetry of the abstracted data flow graph. The structure of the graph is significantly simplified in the previous step through the simplification of linear tiles. However, some structures in the graph still contain symmetric nodes. If general algorithms are used to calculate the weights of nodes, these symmetric nodes may be assigned the same weight, which can lead to problems in subsequent constraint generation and sorting. To address this issue, the Weighted PageRank algorithm is employed to calculate weights for different nodes. This helps increase the asymmetry of the graph and minimize the occurrence of nodes with the same score.
In this algorithm, the iterative formula for scores in the Weighted PageRank algorithm is further adjusted. The node weights retained in the original data flow graph are set to 1, while for nodes abstracted from linear constraints, their weights are calculated through a series of steps.
First, for a linear constraint:
\[\sum_{i=1}^{n}a_{i}b_{i}=c \tag{5}\]
Convert it to:
\[\sum_{i=1}^{n}a_{i}b_{i}-c=0 \tag{6}\]
Finally, the variance of the normalized coefficients is utilized as the weight of the abstract node representing the linear constraint.
\[W=\frac{\sum_{i=1}^{n}\left(a_{i}-\frac{\sum_{i=1}^{n}a_{i}-1}{n+1}\right)^{2}+ \left(-1-\frac{\sum_{i=1}^{n}a_{i}-1}{n+1}\right)^{2}}{\left(\frac{\sum_{i=1}^{ n}a_{i}-1}{n+1}\right)^{2}} \tag{7}\]
In this algorithm, the iterative formula for the node score in the Weighted PageRank algorithm is given by:
\[PR(u)=(1-d)+d\sum_{v\in B(u)}PR(v)W^{u}W^{v} \tag{8}\]
The scores for each node are calculated to determine the ranking of each constraint in the R1CS form.
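The sketch below illustrates this adjusted iteration on a tiny abstracted graph; the damping factor `d = 0.85`, the iteration cap, and the toy weights are assumptions made for illustration and are not prescribed by the algorithm description above.

```python
def weighted_pagerank(in_neighbors, node_weight, d=0.85, iters=200, tol=1e-10):
    """Iterate PR(u) = (1 - d) + d * sum_{v in B(u)} PR(v) * W_u * W_v, cf. Eq. (8).

    in_neighbors : dict mapping each node u to the list B(u) of nodes linking to u
    node_weight  : dict mapping each node to its weight W (1 for nodes kept from the
                   original graph, the variance-based weight of Eq. (7) for abstracted
                   linear tiles)
    """
    pr = {u: 1.0 for u in in_neighbors}
    for _ in range(iters):
        new_pr = {u: (1 - d) + d * sum(pr[v] * node_weight[u] * node_weight[v]
                                       for v in in_neighbors[u])
                  for u in in_neighbors}
        converged = max(abs(new_pr[u] - pr[u]) for u in pr) < tol
        pr = new_pr
        if converged:
            break
    return pr

# toy abstracted graph: two retained nodes and one abstracted linear tile
graph = {"x1": [], "x2": ["x1"], "lin_tile": ["x1", "x2"]}
weights = {"x1": 1.0, "x2": 1.0, "lin_tile": 0.4}
print(weighted_pagerank(graph, weights))
```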
### _Constraint Generation_
In this step, we generate constraints in descending order of weight, with tiles as the unit, starting with quadratic constraints and followed by linear constraints.
In this phase, a portion of the variable ordering is determined through the quadratic tiles. In the tripartite matrix group of the R1CS paradigm, the three row vectors corresponding to a quadratic constraint each have only one non-zero coefficient. However, in matrices A and B, the two row vectors corresponding to the same constraint are interchangeable in the constraint representation, consistent with the commutativity of multiplication. This results in two equivalent expressions for the same quadratic constraint. Figure 9 illustrates an example of equivalent terms for a quadratic constraint.
In the abstract data flow graph, all vertices representing variables appearing in quadratic constraints are retained, making it possible to determine the choice of non-zero coefficients in the corresponding row vectors of matrices A and B in the tripartite matrix group of the R1CS paradigm based on the weights of each variable. The variables with higher weights are assigned to matrix A and given smaller indices in the variable mapping.
The sorting rules for the ordering of variables that appear in quadratic constraints can be summarized as follows:
1. Sort variables based on the highest weight value among all quadratic constraints in which they appear, such that variables with higher weight values have smaller indices in the variable mapping.
2. For variables that appear in the same constraint and have the same highest weight value, sort them based on their scores in the Weighted PageRank algorithm for the nodes in the data flow graph corresponding to the variables. Variables with higher node scores will have smaller indices in the variable mapping and be assigned to the corresponding row vector in matrix A of the tripartite matrix group.
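A compact sketch of these two rules is given below; representing each quadratic constraint as a dictionary with its two multiplied variables and its tile weight is an assumption made purely for illustration.

```python
def order_quadratic_variables(quad_constraints, pagerank):
    """Index the variables multiplied in quadratic constraints.

    quad_constraints : list of dicts {"x": var, "y": var, "weight": tile_weight}
    pagerank         : dict of Weighted-PageRank scores per variable

    Rule 1: a higher maximal tile weight over the constraints a variable appears in
            gives a smaller index.
    Rule 2: ties within a constraint are broken by the PageRank score of the
            corresponding data-flow-graph node.
    """
    best_weight = {}
    for c in quad_constraints:
        for var in (c["x"], c["y"]):
            best_weight[var] = max(best_weight.get(var, 0.0), c["weight"])
    ordered = sorted(best_weight,
                     key=lambda v: (-best_weight[v], -pagerank.get(v, 0.0)))
    return {var: idx for idx, var in enumerate(ordered)}  # variable -> index in the mapping
```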
### _Adjustment of Linear Constraints_
At this stage, the partition and ordering of constraints within a constraint group have been established, and the ordering of variables that appeared in quadratic constraints has also been determined. However, it is necessary to adjust the order of newly introduced variables in linear tiles in this step. As the example below demonstrates, introducing multiple new variables within a linear tile can result in disorderly sorting. This is because, in the previous actions, the specific structures of linear tile constraints were abstracted to eliminate differences in the RNode Graph.
\[A =\begin{pmatrix}0&1&0&0&0&0\\ 0&1&0&0&0&0\\ 5&1&0&1&-1&1\end{pmatrix}B=\begin{pmatrix}0&0&1&0&0&0\\ 0&1&0&0&0&0\\ 1&0&0&0&0&0\end{pmatrix}\] \[C =\begin{pmatrix}0&0&0&1&0&0\\ 0&0&1&0&0&0\\ 0&0&0&0&0&0\end{pmatrix}\]
\[A =\begin{pmatrix}0&1&0&0&0&0\\ 0&1&0&0&0&0\\ 5&1&0&1&1&-1\end{pmatrix}B=\begin{pmatrix}0&0&1&0&0&0\\ 0&1&0&0&0&0\\ 1&0&0&0&0&0\end{pmatrix}\] \[C =\begin{pmatrix}0&0&0&1&0&0\\ 0&0&1&0&0&0\\ 0&0&0&0&0&0\end{pmatrix}\]
Therefore, we introduce a new method to order the newly introduced variables in linear tiles. For each new variable introduced in a linear tile, its weight is calculated as
\[weight=\sum_{other\ linear\ tiles}|field*weight\ of\ linear\ tile| \tag{9}\]
The new variables are sorted based on their weights. If the weights are the same, the coefficients of the variable in its linear tile are considered for comparison. The appearance of new variables in other linear tiles, to some extent, reflects their importance in the entire constraint group. Additionally, suppose certain new variables only appear in their constraints. In that case, their weights will be zero, and their ordering will only affect the constraints generated by their linear tile without changing the ordering of other constraints. Therefore, they can be sorted in descending order based on their coefficients alone.
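The weight of Eq. (9) and the resulting ordering can be sketched as follows; here the "field" of Eq. (9) is read as the coefficient of the variable in the other linear tiles, and the dictionary layout of tiles is again an illustrative assumption.

```python
def new_variable_weight(var, home_tile, linear_tiles, tile_weight):
    """Eq. (9): sum over the *other* linear tiles of |coefficient * tile weight|.

    linear_tiles : dict tile_id -> {variable: coefficient}
    tile_weight  : dict tile_id -> weight of that linear tile
    home_tile    : id of the tile in which `var` was introduced
    """
    return sum(abs(coeffs[var] * tile_weight[tid])
               for tid, coeffs in linear_tiles.items()
               if tid != home_tile and var in coeffs)

def order_new_variables(new_vars, home_tile, linear_tiles, tile_weight):
    """Sort new variables by descending weight; ties are broken by the coefficient
    of the variable inside its own linear tile."""
    return sorted(new_vars,
                  key=lambda v: (new_variable_weight(v, home_tile, linear_tiles, tile_weight),
                                 linear_tiles[home_tile].get(v, 0.0)),
                  reverse=True)
```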
## IV Evaluation
In this section, we introduced the self-designed benchmark used in this paper. We evaluated the effectiveness of the paradigm generation algorithm by analyzing the test results and the intermediate outputs.
Fig. 9: Two equivalent expressions of a quadratic constraint.
### _Benchmark Design_
To evaluate the proposed algorithm in this paper, we implemented the entire process of paradigm generation explained in the former section using Python to verify its results.
Due to the lack of related research, this field has no comprehensive benchmark. Therefore, we summarized some rules for generating equivalent R1CS constraint groups based on the logic the mainstream Circom compiler uses to create R1CS and designed a more comprehensive benchmark based on these rules. The benchmark includes the following main categories, according to the situation each reflects:
1. Replacement of variable order in R1CS.
2. Transformation of constraint order in R1CS.
3. Introduction of multiple new variables in a single linear constraint in R1CS.
4. Introduction of new variables in multiple linear constraints in R1CS, with shared new variables.
5. Merging and splitting of constraints in R1CS.
The different categories in the benchmark correspond to the different reasons for generating equivalent R1CS. Each category contains 2-3 basic R1CS constraint groups. To comprehensively test the robustness and correctness of the algorithm, 5-6 equivalent R1CS constraint groups are generated for each R1CS based on the respective reasons. The equivalent constraint groups of each constraint group are paired and inputted into the algorithm to verify whether the algorithm can generate consistent and R1CS-compliant output results, as defined in Definition II.5, when processing different equivalent constraint groups.
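A minimal sketch of this pairwise check is shown below; `normalize` is a placeholder for the full normalization pipeline described in Section III, and the matrix-triple representation of a constraint group is assumed for illustration.

```python
from itertools import combinations

def check_normalizer(normalize, equivalent_families):
    """Run the normalizer on every pair inside each family of equivalent R1CS
    constraint groups and report the pairs whose paradigms differ.

    normalize           : callable taking (A, B, C) and returning the paradigm form,
                          which must support equality comparison
    equivalent_families : list of families; each family is a list of (A, B, C) triples
                          that are equivalent by construction
    """
    failures = []
    for family_id, family in enumerate(equivalent_families):
        for (i, g1), (j, g2) in combinations(enumerate(family), 2):
            if normalize(*g1) != normalize(*g2):
                failures.append((family_id, i, j))
    return failures
```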
The publicly available data set used in this study can be found at the following GitHub repository: [https://github.com/Ash1sc/R1CS_normalization_benchmark](https://github.com/Ash1sc/R1CS_normalization_benchmark). This repository contains the raw data that was utilized for testing purposes. It is important to note that the data set is licensed under the GNU General Public License version 3.0 (GPL-3.0). This license allows for the data set and scripts to be freely distributed, modified, and used, with the condition that any derived works are also licensed under the GPL-3.0 and that the original copyright and license information is retained. If you would like to learn more about the details of the GPL-3.0, please visit [https://www.gnu.org/licenses/gpl-3.0.en.html](https://www.gnu.org/licenses/gpl-3.0.en.html).
### _Result Evaluation_
Table I shows the result of the experiments.
Inspection of the outputs shows that the generated paradigms meet the requirements of the R1CS paradigm mentioned above and have the same semantics as the original R1CS constraint groups.
Through analysis of the intermediate outputs at each stage of the conversion process, it was found that for equivalent R1CS constraint groups generated by reordering constraints, the only difference in the resulting data flow graphs lies in the order in which nodes representing intermediate variables are created. This is due to different processing orders of each constraint during traversal, leading to different orders of introducing intermediate variables in each constraint.
For equivalent R1CS constraint groups generated by variable replacement, the differences lie in the order in which RNodes representing initial variables in the R1CS are created and in the order of addition in the summation chain structure caused by differences in variable order in linear constraints. However, these changes do not affect the selected tile set, and the same data flow graph is obtained after abstraction.
In the final step of the algorithm, we proposed a novel variable ordering method to solve the sorting confusion issue when multiple variables are introduced in a linear constraint. Experimental results demonstrate that our algorithm is capable of correctly identifying these variables and produces a variable mapping sequence that conforms to the definition.
The splitting and merging of linear constraints can lead to changes in the order of addition in the summation chain; however, the abstraction of the data flow graph resolves this issue. When equivalent R1CS constraint groups are created through the merging and splitting of constraints, the only discrepancy in the resulting data flow graphs concerns the vertices representing intermediate variables: these vertices can represent either existing variables or intermediate variables introduced during the data flow graph's creation. Nevertheless, the structure of the data flow graph remains unaffected.
## V Conclusion
In this paper, we propose an algorithm based on data flow analysis to construct a paradigm for R1CS. The correctness and equivalence of R1CS have long been challenging to study due to the diversity and flexibility of constraint construction methods. Our algorithm aims to eliminate the differences between equivalent R1CS constraint systems through a series of abstraction processes. We introduce ordering methods for |
2309.04512 | Elastic bounds for anisotropic layers | The complete set of bounds for the technical constants of an elastic layer,
plate or laminate is given. The bounds are valid in general, also for
completely anisotropic bodies. They are obtained transforming the polar bounds
previously found. These bounds complete the knowledge of classical elasticity
at least in the two-dimensional case and are useful in several situations,
e.g., for determining the correct feasibility domain in design problems or as
necessary conditions for accepting the results of laboratory tests on
anisotropic layers. | Paolo Vannucci | 2023-09-08T14:37:19Z | http://arxiv.org/abs/2309.04512v2 | # Elastic bounds for anisotropic layers
###### Abstract
The complete set of bounds for the technical constants of an elastic layer, plate or laminate is given. The bounds are valid in general, also for completely anisotropic bodies. They are obtained transforming the polar bounds previously found. These bounds complete the knowledge of classical elasticity at least in the two-dimensional case and are useful in several situations, e.g., for determining the correct feasibility domain in design problems or as necessary conditions for accepting the results of laboratory tests on anisotropic layers.
**Key words:** anisotropy, planar elasticity, elastic bounds, polar formalism
## 1 Introduction
The existence and determination of bounds for the moduli of a material is a well known topic in the theory of elasticity. As well known, such bounds are the necessary result of the physical condition imposing the positiveness of the work done by the applied forces on an elastic body for deforming this one.
This problem can be solved using either purely mathematical, cf. [1], or more directly mechanically inspired approaches, cf. [2]. For isotropic materials, this leads to the well known bounds for either the Lame's parameters
\[3\lambda+2\mu>0,\ \ \mu>0, \tag{1}\]
or for the more commonly used technical parameters, i.e. the Young's modulus and the Poisson's ratio,
\[E>0,\ \ -1<\nu<\frac{1}{2}. \tag{2}\]
For anisotropic materials, a basically mathematical approach, based upon a somewhat forgotten theorem, cf. [1], page 340, makes it possible to give a general form to the bounds of the components of the stiffness (or alternatively of the compliance) matrix \([C]\),
\[\{\sigma\}=[C]\{\varepsilon\}, \tag{3}\]
that traduces, through the Kelvin's notation, [3, 4], the components of the fourth-order elastic tensor \(\mathbb{E}\) into the components of a \(6\times 6\) symmetric matrix, according to the rule
\[[C]=\begin{bmatrix}C_{11}=\mathbb{E}_{1111}&C_{12}=\mathbb{E}_{1122}&C_{13}=\mathbb{E}_{1133}&C_{14}=\sqrt{2}\mathbb{E}_{1123}&C_{15}=\sqrt{2}\mathbb{E}_{1131}&C_{16}=\sqrt{2}\mathbb{E}_{1112}\\ &C_{22}=\mathbb{E}_{2222}&C_{23}=\mathbb{E}_{2233}&C_{24}=\sqrt{2}\mathbb{E}_{2223}&C_{25}=\sqrt{2}\mathbb{E}_{2231}&C_{26}=\sqrt{2}\mathbb{E}_{2212}\\ &&C_{33}=\mathbb{E}_{3333}&C_{34}=\sqrt{2}\mathbb{E}_{3323}&C_{35}=\sqrt{2}\mathbb{E}_{3331}&C_{36}=\sqrt{2}\mathbb{E}_{3312}\\ &&&C_{44}=2\mathbb{E}_{2323}&C_{45}=2\mathbb{E}_{2331}&C_{46}=2\mathbb{E}_{2312}\\ &&&&C_{55}=2\mathbb{E}_{3131}&C_{56}=2\mathbb{E}_{3112}\\ &&&&&C_{66}=2\mathbb{E}_{1212}\end{bmatrix}. \tag{4}\]
By this theorem, the positiveness of \([C]\) is obtained by imposing that the six leading principal minors of \([C]\) are all positive. This approach has the advantage of clearly fixing the number, six, of bounds to be written in the most general case; in the presence of some material symmetry, these bounds are fewer than six and can be written in an explicit form, cf. [5].
However, and rather surprisingly, for three-dimensional anisotropic bodies the determination of the bounds for the technical moduli is still an open problem; only partial results are known in the literature. In particular, cf. [6] or [7], the following conditions are normally given in the literature:
\[\begin{split}&\forall i,j\in\{1,2,3\},\ E_{i}>0,\ \ G_{ij}>0,\\ &\frac{1-2\nu_{12}}{E_{1}}+\frac{1-2\nu_{23}}{E_{2}}+\frac{1-2 \nu_{31}}{E_{3}}>0,\end{split} \tag{5}\]
Another, rougher, bound is given by Lekhnitskii, [6],
\[\nu_{12}+\nu_{23}+\nu_{31}<\frac{3}{2}. \tag{6}\]
These are the only bounds valid for any elastic body, regardless from its syngony, i.e. they are valid also for triclinic materials. Some other bounds are known but uniquely for bodies that are at least orthotropic and with the moduli measured in a reference frame whose axes correspond with some equivalent directions for the material:
\[\begin{split}& 1-\nu_{ij}\nu_{ji}>0\ \ \forall i,j\in\{1,2,3\},\\ & 1-\nu_{12}\nu_{21}-\nu_{23}\nu_{32}-\nu_{31}\nu_{13}-2\nu_{32} \nu_{21}\nu_{13}>0,\end{split} \tag{7}\]
conditions that can be transformed, respectively, to
\[\begin{split}&|\nu_{ij}|<\sqrt{\frac{E_{i}}{E_{j}}},\\ &\nu_{32}\nu_{21}\nu_{13}<\frac{1}{2}\left(1-\nu_{32}^{2}\frac{E_ {2}}{E_{3}}-\nu_{21}^{2}\frac{E_{1}}{E_{2}}-\nu_{13}^{2}\frac{E_{3}}{E_{1}} \right)<\frac{1}{2}.\end{split} \tag{8}\]
All these bounds on the technical constants are found imposing the positiveness of the strain energy for peculiar stress fields, e.g. pure extension or shear.
Unlike in the general three-dimensional case, in planar elasticity the problem has been completely solved in terms of a particular set of elastic moduli, the so-called _polar parameters_.
However, the corresponding bounds for the technical constants have never been given. This is the topic of this paper, organized as follows: in the next Section, the polar bounds are recalled, then the bounds for the technical constants in planar elasticity are obtained and finally some particular cases are discussed.
## 2 Polar bounds
The polar formalism was introduced in 1979 by G. Verchery, [8]; a complete account of the method can be found in [5, 9], here only some elements of this method, necessary to the developments, are recalled.
By the polar method, the Cartesian components at a direction \(\theta\) of a matrix, e.g. \([C]\), representing, in the Kelvin notation, a plane elastic tensor, e.g. \(\mathbb{E}\), are expressed as
\[\begin{split} C_{11}(\theta)&=T_{0}+2T_{1}+R_{0} \cos 4\left(\varPhi_{0}-\theta\right)+4R_{1}\cos 2\left(\varPhi_{1}-\theta \right),\\ C_{12}(\theta)&=-T_{0}+2T_{1}-R_{0}\cos 4\left( \varPhi_{0}-\theta\right),\\ C_{16}(\theta)&=\sqrt{2}\left[R_{0}\sin 4\left( \varPhi_{0}-\theta\right)+2R_{1}\sin 2\left(\varPhi_{1}-\theta\right)\right],\\ C_{22}(\theta)&=T_{0}+2T_{1}+R_{0}\cos 4\left( \varPhi_{0}-\theta\right)-4R_{1}\cos 2\left(\varPhi_{1}-\theta\right),\\ C_{26}(\theta)&=\sqrt{2}\left[-R_{0}\sin 4\left( \varPhi_{0}-\theta\right)+2R_{1}\sin 2\left(\varPhi_{1}-\theta\right)\right],\\ C_{66}(\theta)&=2\left[T_{0}-R_{0}\cos 4\left( \varPhi_{0}-\theta\right)\right].\end{split} \tag{9}\]
The moduli \(T_{0},T_{1},R_{0},R_{1}\) as well as the difference of the angles \(\varPhi_{0}-\varPhi_{1}\) are tensor invariants. The choice of one of the two polar angles fixes the frame; usually \(\varPhi_{1}=0\).
The converse of the previous equations are
\[\begin{split} T_{0}&=\frac{1}{8}(C_{11}-2C_{12}+2 C_{66}+C_{22}),\\ T_{1}&=\frac{1}{8}(C_{11}+2C_{12}+C_{22}),\\ R_{0}&=\frac{1}{8}\sqrt{(C_{11}-2C_{12}-2C_{66}+C_ {22})^{2}+8(C_{16}-C_{26})^{2}},\\ R_{1}&=\frac{1}{8}\sqrt{(C_{11}-C_{22})^{2}+2(C_{16 }+C_{26})^{2}},\\ \tan 4\varPhi_{0}&=\frac{2\sqrt{2}(C_{16}-C_{26})}{C _{11}-2C_{12}-2C_{66}+C_{22}},\\ \tan 2\varPhi_{1}&=\frac{2\sqrt{2}(C_{16}+C_{26})}{C _{11}-C_{22}}.\end{split} \tag{10}\]
The elastic symmetries are determined by the following conditions on the invariants:
* ordinary orthotropy: \(\varPhi_{0}-\varPhi_{1}=K\frac{\pi}{4},\ K\in\{0,1\}\);
* \(R_{0}\)-orthotropy: \(R_{0}=0\), [10];
* square symmetry: \(R_{1}=0\);
* isotropy: \(R_{0}=R_{1}=0\).
So we see that \(T_{0}\) and \(T_{1}\) are the _isotropy invariants_, while \(R_{0},R_{1}\) and \(\varPhi_{0}-\varPhi_{1}\) are the _anisotropy invariants_. The above relations are valid for any matrix of the elastic type, hence for the compliance matrix \([S]=[C]^{-1}\) too; we will indicate by \(t_{0},t_{1},r_{0},r_{1},\varphi_{0}\) and \(\varphi_{1}\) the polar parameters of \([S]\).
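For numerical work, eq. (10) translates directly into code; the following Python function (a mere illustration, not part of the derivation) computes the polar parameters from the Kelvin-notation Cartesian components, using `atan2` so that the polar angles are recovered in the correct quadrant.

```python
import math

def polar_from_cartesian(C11, C12, C16, C22, C26, C66):
    """Polar parameters of a plane elastic tensor from its Kelvin-notation
    Cartesian components, eq. (10); returns (T0, T1, R0, R1, Phi0, Phi1)."""
    T0 = (C11 - 2*C12 + 2*C66 + C22) / 8
    T1 = (C11 + 2*C12 + C22) / 8
    R0 = math.sqrt((C11 - 2*C12 - 2*C66 + C22)**2 + 8*(C16 - C26)**2) / 8
    R1 = math.sqrt((C11 - C22)**2 + 2*(C16 + C26)**2) / 8
    Phi0 = math.atan2(2*math.sqrt(2)*(C16 - C26), C11 - 2*C12 - 2*C66 + C22) / 4
    Phi1 = math.atan2(2*math.sqrt(2)*(C16 + C26), C11 - C22) / 2
    return T0, T1, R0, R1, Phi0, Phi1

# isotropy check: R0 = R1 = 0 when C11 = C22, C66 = C11 - C12, C16 = C26 = 0
print(polar_from_cartesian(10.0, 4.0, 0.0, 10.0, 0.0, 6.0))
```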
For the components of the vector, say \(\{L\}\), representing in the Kelvin formalism a second-rank symmetric tensor \(\mathbf{L}\), the polar formalism gives
\[\begin{split} L_{1}(\theta)&=T+R\cos 2(\varPhi- \theta),\\ L_{2}(\theta)&=T-R\cos 2(\varPhi-\theta),\\ L_{6}(\theta)&=\sqrt{2}R\sin 2(\varPhi-\theta), \end{split} \tag{11}\]
with \(T,R\) two invariants, representing respectively the _isotropic_ and the _anisotropic_ phases of \(\mathbf{L}\); \(\varPhi\) is an angle determined by the choice of the frame.
The polar bounds can be found in the following way, [5]: the strain energy density per unit volume,
\[V_{\varepsilon}=\frac{1}{2}\{\sigma\}^{\top}\{\varepsilon\}=\frac{1}{2}\{ \varepsilon\}^{\top}[C]\{\varepsilon\}, \tag{12}\]
can be written using the polar components of \([C]\) and \(\{\varepsilon\}\) through eqs. (9) and (11):
\[V_{\varepsilon}=4T_{1}\ t^{2}+8R_{1}\cos 2(\varPhi_{1}-\varphi)r\ t+2[T_{0}+R_ {0}\cos 4(\varPhi_{0}-\varphi)]r^{2}, \tag{13}\]
where \(t,r\) and \(\varphi\) are the polar parameters of \(\{\varepsilon\}\). This quantity can be rewritten as
\[V_{\varepsilon}=\{r,t\}\left[\begin{array}{cc}2[T_{0}+R_{0}\cos 4(\varPhi_{0} -\varphi)]&4R_{1}\cos 2(\varPhi_{1}-\varphi)\\ 4R_{1}\cos 2(\varPhi_{1}-\varphi)&4T_{1}\end{array}\right]\left\{\begin{array} []{c}r\\ t\end{array}\right\}. \tag{14}\]
The positivity of \(V_{\varepsilon}\)\(\forall\{r,t\}\), stating the physical condition of a positive work done by the applied forces, is ensured if and only if the matrix in the previous equation is positive definite; by the already mentioned theorem on the leading principal minors, [1], this happens if and only if the following two conditions are satisfied:
\[\left\{\begin{array}{ll}T_{0}+R_{0}\cos 4(\varPhi_{0}-\varphi)>0\\ T_{1}[T_{0}+R_{0}\cos 4(\varPhi_{0}-\varphi)]-2R_{1}^{2}\cos^{2}2(\varPhi_{1}-\varphi)>0\end{array}\right.\quad\forall\varphi. \tag{15}\]
These two conditions can be transformed to three other inequalities and it can be proved that one of them is redundant (the complete, technical, proof is omitted here, the reader is addressed to [11] or to [5]). In the end, one gets the bounds
\[\begin{split}& T_{0}-R_{0}>0,\\ & T_{1}\left(T_{0}^{2}-R_{0}^{2}\right)-2R_{1}^{2}[T_{0}-R_{0}\cos 4( \varPhi_{0}-\varPhi_{1})]>0.\end{split} \tag{16}\]
It is worth remarking that, being moduli of complex numbers, \(R_{0}\geq 0,R_{1}\geq 0\), which necessarily implies, through eqs. (15) and (16)\({}_{1}\), that \(T_{0}>0,T_{1}>0\).
The above bounds are general, i.e. valid for any type of layer, and are written in terms of polar invariants, so they are _intrinsic bounds_, i.e. frame independent. They can be applied as well to the polar parameters of \([C]\) or of \([S]\):
\[\begin{split}& t_{0}-r_{0}>0,\\ & t_{1}\left(t_{0}^{2}-r_{0}^{2}\right)-2r_{1}^{2}[t_{0}-r_{0} \cos 4(\varphi_{0}-\varphi_{1})]>0.\end{split} \tag{17}\]
These last bounds can be obtained following the same procedure but starting from the stress energy
\[V_{\sigma}=\frac{1}{2}\{\sigma\}^{\top}[S]\{\sigma\}. \tag{18}\]
Of course, the same bounds can be written also for any other elastic-type planar tensor, like, for instance, the extension and bending stiffness, and compliance, tensors of laminates, see e.g. [5, 7, 12].
## 3 The bounds for the technical constants
In order to find the bounds for the technical constants, first, eq. (10) is written for the components of \([S]\):
\[\begin{split} t_{0}&=\frac{1}{8}(S_{11}-2S_{12}+2S_ {66}+S_{22}),\\ t_{1}&=\frac{1}{8}(S_{11}+2S_{12}+S_{22}),\\ r_{0}&=\frac{1}{8}\sqrt{(S_{11}-2S_{12}-2S_{66}+S_ {22})^{2}+8(S_{16}-S_{26})^{2}},\\ r_{1}&=\frac{1}{8}\sqrt{(S_{11}-S_{22})^{2}+2(S_{1 6}+S_{26})^{2}},\\ \tan 4\varphi_{0}&=\frac{2\sqrt{2}(S_{16}-S_{26})}{S_ {11}-2S_{12}-2S_{66}+S_{22}},\\ \tan 2\varphi_{1}&=\frac{2\sqrt{2}(S_{16}+S_{26})}{S_ {11}-S_{22}}.\end{split} \tag{19}\]
Then, this result is used into eq. (17): the first condition becomes
\[\frac{1}{8}(S_{11}-2S_{12}+2S_{66}+S_{22})>\frac{1}{8}\sqrt{(S_{11}-2S_{12}-2 S_{66}+S_{22})^{2}+8(S_{16}-S_{26})^{2}}, \tag{20}\]
which gives the two conditions (a third one, stating that the argument of the square root at the second member must be positive, is redundant)
\[\begin{split} S_{11}-2S_{12}+2S_{66}+S_{22}>0,\\ S_{66}(S_{11}-2S_{12}+S_{22})>(S_{16}-S_{26})^{2}.\end{split} \tag{21}\]
To transform eq. (17)\({}_{2}\), it is worth introducing the following polar invariant, [5], page 145:
\[c_{1}=8r_{1}^{2}r_{0}\cos 4(\varphi_{0}-\varphi_{1}) \tag{22}\]
which gives
\[r_{0}\cos 4(\varphi_{0}-\varphi_{1})=\frac{c_{1}}{8r_{1}^{2}}. \tag{23}\]
The Cartesian expression of \(c_{1}\) is known:
\[\begin{split} c_{1}=&\frac{1}{64}\left[(S_{11}-S_{ 22})^{2}-2(S_{16}+S_{26})^{2}\right](S_{11}-2S_{12}-2S_{66}+S_{22})+\\ &+\frac{1}{8}(S_{11}-S_{22})\left(S_{16}^{2}+S_{26}^{2}\right). \end{split} \tag{24}\]
Finally, eq. (17)\({}_{2}\) becomes first
\[t_{1}\left(t_{0}^{2}-r_{0}^{2}\right)>2r_{1}^{2}-\frac{c_{1}}{4}, \tag{25}\]
then, after some standard passages,
\[2S_{12}S_{16}S_{26}+S_{11}S_{22}S_{66}-S_{22}S_{16}^{2}-S_{11}S_{26}^{2}-S_{66}S _{12}^{2}>0. \tag{26}\]
It is worth remarking that, actually,
\[2S_{12}S_{16}S_{26}+S_{11}S_{22}S_{66}-S_{22}S_{16}^{2}-S_{11}S_{26}^{2}-S_{66}S _{12}^{2}=\det[S]. \tag{27}\]
We get hence the three bounds for the Cartesian components of \([S]\)
\[\begin{split}& S_{11}-2S_{12}+2S_{66}+S_{22}>0,\\ & S_{66}(S_{11}-2S_{12}+S_{22})>(S_{16}-S_{26})^{2},\\ & 2S_{12}S_{16}S_{26}+S_{11}S_{22}S_{66}-S_{22}S_{16}^{2}-S_{11}S _{26}^{2}-S_{66}S_{12}^{2}>0.\end{split} \tag{28}\]
It is worth noting that this is not the only set of independent bounds that can be found for the \(S_{ij}\)s: applying to \([S]\) the already cited theorem on the leading principal minors, one should get three other bounds:
\[\begin{split}& S_{11}>0,\\ & S_{11}S_{22}-S_{12}^{2}>0,\\ & 2S_{12}S_{16}S_{26}+S_{11}S_{22}S_{66}-S_{22}S_{16}^{2}-S_{11}S _{26}^{2}-S_{66}S_{12}^{2}>0.\end{split} \tag{29}\]
The main difference between the last two sets of bounds is that conditions (28) uses exclusively invariant quantities, which is not the case for conditions (29). Of course, imposing the positivity of the strain energy one can get similar relations for the \(C_{ij}\)s.
The passage to the technical constants can be made recalling that, by definition,
\[\begin{split}& S_{11}=\frac{1}{E_{1}},\ \ S_{12}=-\frac{\nu_{12}}{E_{1}},\ \ S_{16}=\frac{\eta_{12,1}}{\sqrt{2}E_{1}},\\ & S_{22}=\frac{1}{E_{2}},\ \ S_{66}=\frac{1}{2G_{12}},\ \ S_{26}= \frac{\eta_{12,2}}{\sqrt{2}E_{2}},\end{split} \tag{30}\]
with \(E_{1},E_{2}\) the Young's moduli in the directions of the two frame axes, \(G_{12}\) the in-plane shear modulus, \(\nu_{12}\) the in-plane Poisson's ratio and \(\eta_{12,1},\eta_{12,2}\) two coefficients of mutual influence of the second type, [5, 7]. Alternatively, the coefficients of mutual influence of the first type \(\eta_{k,ij}\) can be used, they are linked to the \(\eta_{ij,k}\)s by the reciprocity relations
\[\frac{\eta_{ij,k}}{E_{k}}=\frac{\eta_{k,ij}}{G_{ij}},\ \ i,j,k\in\{1,2\},\ i\neq j. \tag{31}\]
Injecting eq. (30) into eq. (28) gives finally the bounds
\[\begin{split}&\frac{1+2\nu_{12}}{E_{1}}+\frac{1}{E_{2}}+\frac{1}{G _{12}}>0,\\ &\frac{1}{E_{1}^{2}E_{2}^{2}G_{12}}\left\{E_{1}E_{2}[E_{1}+E_{2}( 1+2\nu_{12})]-G_{12}(E_{2}\eta_{12,1}-E_{1}\eta_{12,2})^{2}\right\}>0,\\ &\frac{1}{E_{1}^{2}E_{2}^{2}G_{12}}\left\{E_{1}(E_{2}-G_{12}\eta _{12,2}^{2})-E_{2}[E_{2}\nu_{12}^{2}+G_{12}\eta_{12,1}(\eta_{12,1}+2\eta_{12, 2}\nu_{12})]\right\}>0.\end{split} \tag{32}\]
These three bounds are a set of necessary and sufficient conditions for the elastic energy density to be positive for each stress/strain state. They use invariant quantities, which is undoubtedly an advantage in anisotropic elasticity.
If, in place of eq. (28) we inject eq. (30) into eq. (29) we will get
\[\begin{split}&\frac{1}{E_{1}}>0,\\ &\frac{1}{E_{1}E_{2}}-\frac{\nu_{12}^{2}}{E_{1}^{2}}>0,\\ &\frac{1}{E_{1}^{2}E_{2}^{2}G_{12}}\left\{E_{1}(E_{2}-G_{12}\eta _{12,2}^{2})-E_{2}[E_{2}\nu_{12}^{2}+G_{12}\eta_{12,1}(\eta_{12,1}+2\eta_{12,2 }\nu_{12})]\right\}>0.\end{split} \tag{33}\]
The third condition remains the same, while the two first ones can be rewritten as
\[E_{1}>0,\ \ |\nu_{12}|<\sqrt{\frac{E_{1}}{E_{2}}}. \tag{34}\]
The first one is the well-known condition of positiveness of the Young's moduli, while the second one corresponds to the bound (8)\({}_{1}\). Also, these two bounds, or alternatively the fact that \(E_{1}>0\) for any direction, imply that it is \(E_{2}>0\) too. Finally, also \(G_{12}>0\), which can be proved, classically, with the mechanical procedure imposing a pure shear stress state or, using a purely mathematical approach, still using the theorem on leading principal minors once reordered the vector \(\{\sigma\}\) representing, in the Kelvin notation, the stress tensor \(\boldsymbol{\sigma}\) as \(\{\sigma\}=\{\sigma_{6},\sigma_{1},\sigma_{2}\}^{\top}\).
Though eqs. (32) are a minimal set of necessary and sufficient conditions ensuring the positivity of the elastic energy, it is perhaps better, from a practical point of view, to have at one's disposal more direct bounds, concerning, if possible, quantities that are easy to measure in laboratory tests. This can be done by transforming eqs. (32) and taking into account the positivity of the Young's and shear moduli. Some short passages lead to the set of conditions
\[\begin{split}& E_{1}>0,\\ & E_{2}>0,\\ & G_{12}>0,\\ & E_{1}E_{2}+G_{12}[E_{1}+E_{2}(1+2\nu_{12})]>0,\\ & E_{1}E_{2}[E_{1}+E_{2}(1+2\nu_{12})]-G_{12}(E_{2}\eta_{12,1}- E_{1}\eta_{12,2})^{2}>0,\\ & E_{1}(E_{2}-G_{12}\eta_{12,2}^{2})-E_{2}[E_{2}\nu_{12}^{2}+G_{ 12}\eta_{12,1}(\eta_{12,1}+2\eta_{12,2}\nu_{12})]>0.\end{split} \tag{35}\]
The above bounds, however, do not use invariant quantities. Though formed by a redundant number of bounds, this set of conditions is perhaps more interesting for practical applications, namely for checking the results of laboratory tests used for characterizing an anisotropic layer or plate. It is worth remarking that in this set of bounds the coefficients of mutual influence enter the problem; in all the bounds known in the literature, these coefficients were absent.
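As an illustration of how conditions (35) can serve as an acceptance check for measured constants, the short Python function below evaluates all six inequalities; the numerical values in the example are purely indicative and do not refer to a specific material.

```python
def admissible_layer(E1, E2, G12, nu12, eta121=0.0, eta122=0.0):
    """Check the bounds (35) on the technical constants of an anisotropic layer
    measured in a generic frame; eta121 and eta122 are the coefficients of
    mutual influence of the second type. Returns True when every inequality
    is strictly satisfied."""
    checks = [
        E1 > 0,
        E2 > 0,
        G12 > 0,
        E1*E2 + G12*(E1 + E2*(1 + 2*nu12)) > 0,
        E1*E2*(E1 + E2*(1 + 2*nu12)) - G12*(E2*eta121 - E1*eta122)**2 > 0,
        E1*(E2 - G12*eta122**2)
            - E2*(E2*nu12**2 + G12*eta121*(eta121 + 2*eta122*nu12)) > 0,
    ]
    return all(checks)

# indicative orthotropic layer in its orthotropy frame (eta121 = eta122 = 0)
print(admissible_layer(E1=45e3, E2=10e3, G12=5e3, nu12=0.3))  # expected: True
```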
## 4 Bounds for layers with material symmetries
Let us consider now how bounds (35) change when some kind of material symmetry is present.
**Isotropy.** In this case \(r_{0}=R_{0}=r_{1}=R_{1}=0\), so the polar bounds reduce simply to
\[t_{0}>0,\ \ t_{1}>0, \tag{36}\]
while for the Cartesian components we have
\[S_{11}=S_{22}=t_{0}+2t_{1},\ \ S_{12}=-t_{0}+2t_{1},\ \ S_{66}=2t_{0}=S_{11}-S_{12},\ \ S_{16}=S_{26}=0, \tag{37}\]
which gives
\[t_{0}=\frac{S_{11}-S_{12}}{4},\ \ t_{1}=\frac{S_{11}+S_{12}}{4}, \tag{38}\]
so the Cartesian bounds are simply
\[S_{11}-S_{12}>0,\ \ S_{11}+S_{12}>0. \tag{39}\]
Then, because for isotropy
\[E_{1}=E_{2}:=E,\ \ \nu_{12}:=\nu,\ \ G_{12}:=G=\frac{E}{2(1+\nu)},\ \ \eta_{12,1}=\eta_{12,2}=0, \tag{40}\]
bounds (35) reduce to
\[E>0,\ \ G>0,\ \ 2E^{2}>0,\ \ 2E^{3}(1+\nu)>0,\ \ E^{2}(1-\nu^{2})>0, \tag{41}\]
which of course give the three well known bounds for isotropic planar bodies
\[E>0,\ \ -1<\nu<1, \tag{42}\]
or
\[\lambda+\mu>0,\ \ \mu>0 \tag{43}\]
with the Lame's parameters.
**Ordinary orthotropy.** The polar condition for orthotropy is
\[\varphi_{0}-\varphi_{1}=k\frac{\pi}{4},\ \ k\in\{0,1\}. \tag{44}\]
The discussion of the mechanical and mathematical differences between the two types of ordinary orthotropy, \(k=0\) or \(k=1\), can be found in [5, 9]. However, for the orthotropic case,
\[r_{0}\cos 4(\varphi_{0}-\varphi_{1})=(-1)^{k}r_{0}, \tag{45}\]
but eq. (23) does not change, so bounds (35) remain the same: there is no difference for the two ordinarily orthotropic cases. It is worth remarking that bounds (35) are written in a generic
frame, not necessarily in the orthotropic one. If the axes coincide with the orthotropy directions, then \(\eta_{12,1}=\eta_{12,2}=0\) and bounds (35) become
\[\begin{split}& E_{1}>0,\\ & E_{2}>0,\\ & G_{12}>0,\\ & E_{1}+E_{2}(1+2\nu_{12})>0,\\ & E_{2}(E_{1}-E_{2}\nu_{12}^{2})>0,\end{split} \tag{46}\]
the condition (35)\({}_{4}\) becoming now redundant.
**Square-symmetry.** This special type of orthotropy is determined by the polar condition \(r_{1}=R_{1}=0\). The polar bounds (17) become
\[\begin{split}& t_{0}-r_{0}>0,\\ & t_{1}(t_{0}^{2}-r_{0}^{2})>0,\end{split} \tag{47}\]
which, being \(r_{0}>0\), reduce to
\[t_{0}-r_{0}>0,\ \ t_{1}>0. \tag{48}\]
For the Cartesian components we have, for any direction,
\[S_{11}=S_{22},\ \ S_{26}=-S_{16}, \tag{49}\]
so conditions (28) become
\[\begin{split}& S_{11}-S_{12}+S_{66}>0,\\ & S_{66}(S_{11}-S_{12})>2S_{16}^{2},\\ & S_{11}+S_{12}>0.\end{split} \tag{50}\]
For the technical constants it is
\[E_{1}=E_{2},\ \ \eta_{12,2}=-\eta_{12,1}\frac{E_{2}}{E_{1}}, \tag{51}\]
so bounds (35) become
\[\begin{split}& E_{1}>0,\\ & G_{12}>0,\\ & E_{1}+2G_{12}(1+\nu_{12})>0,\\ & E_{1}(1+\nu_{12})-2G_{12}\eta_{12,1}^{2}>0,\\ & 1-\nu_{12}>0.\end{split} \tag{52}\]
When written in one of the two symmetry frames, shifted of \(\pi/4\), because \(\eta_{12,1}=0\) we get
\[\begin{split}& E_{1}>0,\\ & G_{12}>0,\\ & E_{1}+2G_{12}(1+\nu_{12})>0,\\ &-1<\nu_{12}<1.\end{split} \tag{53}\]
\(R_{0}\)**-orthotropy.** This special orthotropy is characterized by \(R_{0}=0\), [10]. However, unlike the case of square symmetry, this does not imply that \(r_{0}=0\), but that
\[r_{0}=\frac{r_{1}^{2}}{t_{1}},\ \ k=0, \tag{54}\]
so that bounds (17) become
\[\begin{split}& t_{0}-\frac{r_{1}^{2}}{t_{1}}>0,\\ & t_{1}\left(t_{0}+\frac{r_{1}^{2}}{t_{1}}\right)-2r_{1}^{2}>0. \end{split} \tag{55}\]
Rather surprisingly, the bounds (28) on the \(S_{ij}\)s reduce to only two, the third one being a product of the first two (this can be checked by inserting eqs. (19)\({}_{1,2,4}\) into the previous equations):
\[\begin{split}& S_{11}+2S_{12}+S_{22}>0,\\ & 2(S_{11}S_{22}-S_{12}^{2})+S_{66}(S_{11}+2S_{12}+S_{22})-(S_{16} +S_{26})^{2}>0.\end{split} \tag{56}\]
Injecting eq. (30) into these conditions, the bounds for the technical constants are obtained as well:
\[\begin{split}& E_{1}>0,\\ & E_{2}>0,\\ & G_{12}>0,\\ & E_{1}+E_{2}(1-2\nu_{12})>0,\\ & E_{1}^{2}(E_{2}-G_{12}\eta_{12,2}^{2})-E_{2}^{2}G_{12}(\eta_{12,1}^{2}+4\nu_{12}^{2})+\\ &+E_{1}E_{2}[E_{2}(1-2\nu_{12})+2G_{12}(2-\eta_{12,1}\eta_{12,2} )]>0.\end{split} \tag{57}\]
\(r_{0}\)**-orthotropy.** This is the compliance dual of the previous case and it characterizes some special materials like paper, [13]. In such a case, the polar bounds (17) reduce to
\[\begin{split}& t_{0}>0,\\ & t_{0}t_{1}-2r_{1}^{2}>0.\end{split} \tag{58}\]
Concerning the components of \([S]\),
\[\begin{split}& S_{11}(\theta)=t_{0}{+}2t_{1}{+}4r_{1}\cos 2 \left(\varphi_{1}-\theta\right),\\ & S_{12}(\theta)=-t_{0}{+}2t_{1},\\ & S_{16}(\theta)=S_{26}(\theta)=2\sqrt{2}r_{1}\sin 2\left( \varphi_{1}-\theta\right),\\ & S_{22}(\theta)=t_{0}{+}2t_{1}{-}4r_{1}\cos 2\left(\varphi_{1}- \theta\right),\\ & S_{66}(\theta)=2t_{0}.\end{split} \tag{59}\]
It is worth noting that \(S_{12}\) and \(S_{66}\) are isotropic and that from the previous equations it is immediate to get
\[t_{0}=\frac{1}{2}S_{66},\ \ t_{1}=\frac{1}{2}\left(S_{12}+\frac{S_{66}}{2} \right),\ \ 2r_{1}^{2}=\frac{1}{32}\left[(S_{11}-S_{22})^{2}+8S_{16}^{2}\right], \tag{60}\]
so bounds (58) become
\[S_{66}>0,\ \ \ 4S_{66}(S_{66}+2S_{12})-(S_{11}-S_{22})^{2}-8S_{16}^{2}>0 \tag{61}\]
which for the technical parameters gives
\[E_{1}>0,\ \ E_{2}>0,\ \ G_{12}>0,\ \ E_{1}E_{2}^{2}(E_{1}-4\nu_{12}G_{12})-(E_{2}-E _{1})^{2}G_{12}^{2}-4\eta_{12,1}^{2}E_{2}^{2}G_{12}^{2}>0. \tag{62}\]
A much simpler form of these bounds can be obtained for a frame that coincides with the orthotropy axes (\(\varphi_{1}-\theta=0\)); in such a case, from eq. (59) we get \(S_{16}=0\) and
\[t_{0}=\frac{1}{4G_{12}},\ \ t_{1}=\frac{1}{2}\left(\frac{1}{4G_{12}}-\frac{ \nu_{12}}{E_{1}}\right),\ \ r_{1}=\frac{1}{4}\left(\frac{1+\nu_{12}}{E_{1}}-\frac{1}{2G_{12}}\right), \tag{63}\]
so that bounds (62) become
\[E_{2}>0,\ \ G_{12}>0,\ \ E_{1}>G_{12}(1+\nu_{12})^{2}. \tag{64}\]
## 5 Conclusion
The results found above fill the gap concerning the elastic bounds on the technical constants of an anisotropic layer or plate. It is important to point out that they are written in a generic frame and therefore involve, for the first time, also the coefficients of mutual influence of the second type (through the reciprocity relations, the use of the coefficients of the first type is straightforward). The interest of these bounds is first theoretical, then practical: the determination of the elastic parameters of a layer or of a plate is particularly difficult when the body is anisotropic. Having bounds at one's disposal, serving as a necessary check for the acceptance of experimental measurements, is rather important in practice. The same can be said for correctly defining the feasibility domain in design problems, e.g. concerning laminated plates made of composite, anisotropic layers.
|
2309.06305 | Sensitivity Analysis for Linear Estimators | We propose a novel sensitivity analysis framework for linear estimators with
identification failures that can be viewed as seeing the wrong outcome
distribution. Our approach measures the degree of identification failure
through the change in measure between the observed distribution and a
hypothetical target distribution that would identify the causal parameter of
interest. The framework yields a sensitivity analysis that generalizes existing
bounds for Average Potential Outcome (APO), Regression Discontinuity (RD), and
instrumental variables (IV) exclusion failure designs. Our partial
identification results extend results from the APO context to allow even
unbounded likelihood ratios. Our proposed sensitivity analysis consistently
estimates sharp bounds under plausible conditions and estimates valid bounds
under mild conditions. We find that our method performs well in simulations
even when targeting a discontinuous and nearly infinite bound. | Jacob Dorn, Luther Yap | 2023-09-12T15:16:23Z | http://arxiv.org/abs/2309.06305v3 | # Sensitivity Analysis for Linear Estimands
###### Abstract
We propose a novel sensitivity analysis framework for linear estimands when identification failure can be viewed as seeing the wrong distribution of outcomes. Our family of assumptions bounds the density ratio between the observed and true conditional outcome distribution. This framework links naturally to selection models, generalizes existing assumptions for the Regression Discontinuity (RD) and Inverse Propensity Weighting (IPW) estimand, and provides a novel nonparametric perspective on violations of identification assumptions for ordinary least squares (OLS). Our sharp partial identification results extend existing results for IPW to cover other estimands and assumptions that allow even unbounded likelihood ratios, yielding a simple and unified characterization of bounds under assumptions like the c-dependence assumption of Masten and Poirier (2018). The sharp bounds can be written as a simple closed form moment of the data, the nuisance functions estimated in the primary analysis, and the conditional outcome quantile function. We find our method does well in simulations even when targeting a discontinuous and nearly infinite bound.
## 1 Introduction
Many important estimators in economics are observationally weighted averages of outcomes. Ordinary least squares (OLS) weights outcomes \(Y\) by \(E[XX^{T}]^{-1}X\); regression discontinuity weights outcomes \(Y\) at a treatment discontinuity by their treatment assignment. These estimands have powerful interpretations under identifying restrictions like exogeneity or no manipulation. However, the identifying restrictions at the core of giving standard estimands meaningful interpretations often fail in empirical settings, and are often untestable. Even when these assumptions fail and treatment effects are confounded or agents sort across a treatment discontinuity, researchers often want to exploit the data to say something about the meaningful objects of interest. A large literature on sensitivity analysis has risen to the occasion (e.g., Masten and Poirier (2018); Gerard et al. (2020); Dorn and Guo (2022); Rambachan and Roth (2023)).
This paper provides a novel and tractable method for sensitivity analysis for linear estimators. The procedure in this paper nests Dorn and Guo (2022) and Masten and Poirier (2018) as special cases, and provides a new sensitivity analysis for difference-in-differences based on policy selection rather than functional form-dependent parallel trends failures (Rambachan and Roth, 2023). Our unified framework immediately generates partial identification tools and sensitivity analyses in new domains: we immediately obtain novel Conditional Average Treatment Effect (CATE) bounds for regression discontinuity (RD) when there is sorting by potential outcomes.
Our procedure for sensitivity analysis requires making an assumption on the ratio of densities between the observed distribution and a counterfactual ("true") distribution where the identifying restriction holds. With this assumption, we can characterize sharp bounds in terms of optimization problems. Similar assumptions have been made in the treatment effect literature and interpreted as a limit on treatment selection (Tan, 2006; Masten and Poirier, 2018; Zhao et al., 2019; Dorn and Guo, 2022)
A further feature of our setup is that these optimization problems can be solved explicitly, so the sharp bounds can be written in closed form as a moment of the observed data. This feature makes the estimation straightforward to implement once we are able to write sensitivity analysis problems in terms of densities. We show our result naturally and simply extends to sensitivity assumptions that feature unbounded likelihood ratios, which is novel relative to the approaches in the existing literature that is typically limited to finite likelihood ratio bounds (Frauen et al., 2023) or Manski-type infinite likelihood ratio bounds (Gerard et al., 2020).
In the RD literature, we conduct sensitivity analysis to violations to the no-manipulation assumption on either side of the cutoff. McCrary (2008) propose an influential test for manipulation. We study what researchers can learn from the RD design when McCrary's test fails. Gerard et al. (2020) proposed a procedure to bound the CATE for the subpopulation that does not manipulate: a Conditional Local Average Treatment Effect (CLATE). A conditional local average treatment effect is likely to be less interesting to researchers than the original CATE object. Consequently, there is an open question of whether it is possible to partially identify the CATE. The CATE requires incorporating both manipulators and non-manipulators, which is challenging since we never observe non-manipulators directly.1 This paper aims to fill this gap by showing how nontrivial bounds can be derived for the CATE. We partially identify the CATE by making a sensitivity assumption on the extent observations can manipulate given their potential outcomes. Our sensitivity assumption generalizes both Gerard et al. (2020)'s assumption and an exogeneity assumption that would allow researchers to interpret the RD estimand as a CATE and is always compatible with the identified manipulation from the McCrary test.
Footnote 1: We assume that manipulators only manipulate in one direction. For narrative clarity, we also assume that manipulators select in the direction of treatment.
We build on the existing Inverse Propensity Weighting (IPW) literature by offering a simple framework to calculate valid and sharp bounds that also accommodate unbounded likelihood ratios. A literature began by Tan (2006), sparked by Zhao et al. (2019), and sharpened by Dorn and Guo (2022), considers bounds on average treatment effect estimands under bounds on how much confounding can shift the odds of treatment, or equivalently the likelihood ratio between unobserved and observed outcomes. The IPW problem has then received much recent interest in the statistical literature (Bonvini et al., 2022; Tan, 2023; Huang et al., 2023; Zhang and Zhao, 2022; Soriano et al., 2021; Yin et al., 2022; Bruns-Smith and Zhou, 2023; Jesson et al., 2022). In an econometrics context, Masten and Poirier (2018) proposed similar bounds under an assumption bounding how much confounding can shift the probability of treatment. Frauen et al. (2023) analyze a generalization of Tan (2006)'s model that excludes large values of Masten and Poirier (2018)'s sensitivity parameter. We show that all values of Masten and Poirier (2018)'s sensitivity parameter fit within our even more general framework. A simple simulation under the c-dependence finds that a simple percentile bootstrap generates valid confidence intervals.
In the OLS literature, we provide a sensitivity assumption in terms of likelihood ratio bounds. Cinelli and Hazlett (2020) provide a sensitivity assumption in OLS exclusion based on \(R^{2}\) values and Rambachan and Roth (2023) provided a sensitivity assumption for Difference in Differences (DD) parallel trends based on slopes. These approaches often depend on the functional form of outcomes (e.g., Roth and Sant'Anna
(2023)). Our likelihood ratio measure is invariant to monotonic transformations and so uses assumptions that researchers can form a view on across functional forms.
## 2 Framework for Sensitivity Analysis
### General Setting
We observe data of an outcome \(Y\) and observable quantities \(R\) that do not include \(Y\). These \(R\) could refer to regressors, for instance. We are interested in linear estimands that take the form
\[E_{True}\left[\lambda(R)Y\right]=\int\lambda(R)YdP_{True} \tag{1}\]
where the integral is taken over some "true" distribution \(P_{True}\), whose support is the same as the observed outcome. The true distribution is defined as the distribution where the model is correctly specified and the observable quantities \(R\) are drawn from the observed distribution of \(R\). A prominent example is OLS. Suppose we are interested in the regression coefficient \(\beta\) of \(Y\) on \(R\) under the identifying restriction that \(E_{True}[Y-R^{\prime}\beta|R]=0\). Then, \(\lambda(R)=E[RR^{\prime}]^{-1}R\). Many other estimators commonplace in econometrics may also be written in this form: some examples explored in this paper include regression discontinuity designs, and inverse propensity weighting.
We observe data \(P_{Obs}\) of \((R,Y)\) that may fail to satisfy the identifying restrictions. In the OLS case, we may have endogeneity in the sense that \(E[R(Y-R^{\prime}\beta)]\) is non-zero. We are interested in a distribution where the conditional mean of \(Y\mid R\) is \(R^{\prime}\beta\), but endogeneity may lead to spurious conclusions about the parameter value.
We consider partial identification of meaningful parameters under bounded violations of the identifying restriction. We use a family of assumptions that can place meaningful bounds on our object of interest \(E_{True}\left[\lambda(R)Y\right]\) without claiming to exactly observe the true estimand. The rest of this section provides a simple and precise way to calculate sharp bounds on this object.
We can often view identification failures as a failure to weight outcomes correctly. We first observe that the object of interest can be written as an expectation over the observable distribution.
\[E_{True}\left[\lambda(R)Y\right]=\int\lambda(R)Y\frac{dP_{True}}{dP_{Obs}}dP _{Obs}=E\left[W\lambda(R)Y\right], \tag{2}\]
where \(W:=dP_{True}/dP_{Obs}\) is assumed to exist and be finite (though not necessarily bounded). An identical way to frame the problem is that the object of interest is \(E\left[\lambda_{True}(R)Y\right]\), where \(\lambda_{True}(R)=W\lambda(R)\). If the support of \(Y\mid R\) is the same under the observed and true distribution, then standard arguments imply \(E\left[W|R\right]=1\) almost surely.
We limit violations of the identifying restrictions in terms of the likelihood ratio between the true and observed conditional outcome distributions. In particular, we assume \(W\) satisfies \(E\left[W|R\right]=1\) and \(W\in[\underline{w}(R),\bar{w}(R)]\) for some likelihood ratio bounding functions satisfying \(0\leq\underline{w}(R)\leq 1\leq\bar{w}(R)\). These assumptions can be interpreted as the fact that the true distribution cannot be too different from the observed distribution, as \([\underline{w}(R),\bar{w}(R)]\) limit the extent of this difference. When \(\underline{w},\bar{w}=1\), we recover the standard empirical assumption that \(E_{True}[\lambda(R)Y]=E[\lambda(R)Y]\). As \(\underline{w}\) and \(\bar{w}\) get further from one, stronger identification failures are allowed.
The interpretation of our sensitivity assumption will be context-dependent. In the RD context, the
likelihood ratio bounds can equivalently be phrased as bounds on the degree to which manipulation can be selected on potential outcomes. In the IPW context, the likelihood ratio bounds can be interpreted as bounds on the degree to which treatment is selection on potential outcomes. In the OLS context, the bounds are more opaque for interpretation.
The sharp bounds are the set of estimands that can be achieved by distributions in our class.
**Definition 1**.: The sharp bounds in our general setting are the set of estimands \(E_{Q}[\lambda(R)Y]\) under distributions \(Q\) satisfying (i) the distribution of \(R\) is the same under \(Q\) and the observed \(P_{Obs}\), (ii) the support of \(Y\mid R\) is the same under \(Q\) and the observed \(P_{Obs}\), and (iii) \(\frac{dQ}{dP_{Obs}}\in[\underline{w}(R),\bar{w}(R)]\) with probability one.
To find the upper and lower bounds of \(E_{True}\left[\lambda(R)Y\right]\) under this setting, we solve
\[\sup_{W}E\left[W\lambda(R)Y\right]\quad\text{s.t.}\quad W\in\left[\underline{w}\left(R\right),\bar{w}(R)\right],\ E[W|R]=1 \tag{3}\]
This setup implies that the bound attained by Equation (3) must be sharp, because the upper bound can be arbitrarily well approximated by some \(W\) that is in the class of likelihood ratios that we allow in the model and cannot be exceeded by any \(W\) that is in the class of likelihood ratios we allow. The lower bound is written analogously by using inf in place of sup. This paper will focus on the supremum problem, as the infimum problem is entirely analogous. By solving the optimization problems, we obtain sharp bounds on the object of interest.
### Identification Result
The general problem in Equation (3) turns out to have a simple closed-form solution. Let \(Q_{\tau(R)}(\lambda(R)Y|R)\) denote the \(\tau(R)\)-th quantile of \(\lambda(R)Y\) given \(R\). When \(\bar{w}(R)\) is finite and \(P(\lambda(R)Y=Q_{\tau(R)}(\lambda(R)Y\mid R))=0\), the sharp bounds are attained by the weights:
\[W^{*}_{sup}=\begin{cases}\bar{w}(R)&if\quad\lambda(R)Y>Q_{\tau(R)}\left(\lambda (R)Y|R\right)\\ \underline{w}(R)&if\quad\lambda(R)Y\leq Q_{\tau(R)}\left(\lambda(R)Y|R\right) \end{cases},\tau(R)=\frac{\bar{w}(R)-1}{\bar{w}(R)-\underline{w}(R)} \tag{4}\]
\[W^{*}_{inf}=\begin{cases}\underline{w}(R)&if\quad\lambda(R)Y>Q_{\tau(R)}\left( \lambda(R)Y|R\right)\\ \bar{w}(R)&if\quad\lambda(R)Y\leq Q_{\tau(R)}\left(\lambda(R)Y|R\right)\end{cases}, \tau(R)=\frac{1-\underline{w}(R)}{\bar{w}(R)-\underline{w}(R)} \tag{5}\]
Applying this formula to the formula \(E_{True}[\lambda(R)Y]=E[W\lambda(R)Y]\) yields the following bound characterization.
**Theorem 1**.: _When \(\bar{w}(R)\) is finite and \(P(\lambda(R)Y=Q_{\tau(R)}(\lambda(R)Y\mid R))=0\), the problem in (3) is solved by (4), and the analogous infimum problem is solved by (5). Hence, the upper bound is:_
\[E\left[W^{*}_{sup}\lambda(R)Y\right]=E\left[\lambda(R)Y\right]+E\left[\lambda (R)Ya\left(\underline{w}(R),\bar{w}(R),\lambda(R)Y,Q_{\tau(R)}(\lambda(R)Y \mid R)\right)\right], \tag{6}\]
_where the adversarial reweighting effect is \(a(\underline{w},\bar{w},\lambda y,q)\equiv(\bar{w}-\underline{w})\,1\left\{ \lambda y>q\right\}-(1-\underline{w})\). The expression for the lower bound is analogous._
Theorem 1 tells us that the solution can be written in closed form, and the upper bound on the object of interest can be written simply as a moment. The infimum has a similar solution. These results allow estimation to be simple and fast. The solution is also highly intuitive: we have a quantile balancing object
and a corresponding \(\tau(R)\) to ensure that \(E[W|R]=1\) is satisfied. In the supremum problem, once \(\lambda(R)Y\) is above a quantile threshold, we want to place the largest weights possible on those observations, and for \(\lambda(R)Y\) below that threshold, we place the lowest weights possible. The expressions for \(a(.)\) are also immediate from the \(W^{*}\) solutions. In the supremum problem, when \(\lambda y>q\), \(a(.)=\bar{w}-1\); when \(\lambda y<q\), \(a(.)=\underline{w}-1\). We do the opposite for the infimum.
The proof proceeds by using a less constrained quantile balancing problem than (3) that is easier to solve. Once we have shown that \(W^{*}\) solves the less constrained problem, and it is feasible in (3), then the \(W^{*}\) must also be the solution to (3). Details are in Appendix A.2.
There may be situations where researchers are not willing to bound the likelihood ratio above. Since \(W\) corresponds to a ratio of densities, this one-sided phenomenon occurs when the support of \(P_{Obs}\) is strictly contained in the support of \(P_{True}\). In particular, we may be interested in upper bounds on the object of interest that take the following form:
\[\sup_{W}\,E\left[W\lambda(R)Y\right]\quad\text{s.t.}\quad W\in\left[\underline{w}\left(R\right),\infty\right),\ E[W|R]=1 \tag{7}\]
The characterization in Theorem 1 is insufficient. For example, if \(\bar{w}(R)\) is infinite and \(\lambda(R)Y\) is always positive, then \(\tau(R)=1\) and the formula from Equation (6) is \(E[\lambda(R)Y\underline{w}(R)]<E[\lambda(R)Y]\). Is there a way to nest the identification results in the finite-\(\bar{w}\) case and the infinite-\(\bar{w}\) case?
We show that there is a convenient characterization of sharp bounds that holds even if the likelihood ratio is unbounded. This characterization also slightly generalizes the environment of Theorem 1 to hold even when \(Y\) contains mass points.
To motivate the generalization, observe that it is innocuous to subtract \(E[Q_{\tau}a(\cdot)]=0\) from Equation (6) because \(E[a(\cdot)\mid R]=0\). While it is unnecessary when \(\bar{w}\) is finite and \(Y\) is continuously distributed, this modified characterization is useful when we allow \(\bar{w}\) to be infinite.
**Theorem 2**.: _Whether \(\bar{w}(R)\) is finite or infinite, the upper bound of (3) is given by_
\[E\left[\lambda(R)Y+\left(\lambda(R)Y-Q_{\tau(R)}\left(\lambda(R)Y|R\right) \right)a(\underline{w}(R),\bar{w}(R),\lambda(R)Y,Q_{\tau(R)}(\lambda(R)Y\mid R ))\right]. \tag{8}\]
_and an analogous expression holds for the lower bound._
Equation (8) is a slight but useful generalization of Equation (6). When the assumptions of Theorem 1 hold, \(E[a(\cdot)\mid R]=0\) and the expressions will be identical. Further, the solution \(W^{*}\) will be identical in both problems. Theorem 2 is more general in that, by subtracting \(Q_{\tau}\), we can now accommodate unbounded outcomes. When \(\lambda(R)Y\) has a point mass at \(Q_{\tau(R)}(\cdot)\), the adversarial reweighting effect \(a\) is not conditionally mean-zero and the characterization (6) is not valid. On that event where \(a\) is incorrectly defined, the generalized characterization (8) multiplies \(a\) by zero and so remains the sharp bound. A similar characterization is used in the supplement of Dorn and Guo (2022) in considerably less generality.
To intuit the potentially finite bounds when the likelihood ratio bound \(\bar{w}\) is infinite, the adversarial distribution can put unbounded weight on large outcomes, but it must put at least a likelihood ratio of \(\underline{w}(R)\) on observed outcomes. If \(\bar{w}(R)\) is infinite, then \(\tau(R)=1\) and we never observe \(\lambda(R)Y>Q_{1}\). Instead, the term \((\lambda(R)Y-Q_{1})a\) is always equal to \((Q_{1}-\lambda(R)Y)(1-\underline{w}(R))\). Hence, if \(\underline{w}(R)<1\) and \(Q_{1}\) is infinite, then the upper bound is always infinite. If \(\underline{w}(R)=1\) but \(\bar{w}(R)\) and \(Q_{1}\) are infinite, then the upper bound is defined awkwardly but the claim holds under the notation \(-\infty*0=0\).
## 3 Applications
We provide new identification results using our framework for three settings: regression discontinuity with one-sided manipulation, inverse propensity weighting with treatment selected on unobservables, and ordinary least squares with confounding. Our analysis is immediate after appropriate definition of variables. Our measure can naturally be interpreted as a selection model in the RD and IPW cases.
### Sharp Regression Discontinuity
We substantially generalize the state of the art sensitivity analysis for sharp RD using our framework and immediately obtain bounds on more standard causal estimands.
We are interested in treatment effects \(Y(1)-Y(0)\), but we only observe \(Y=DY(1)+(1-D)Y(0)\). We assume that observations are drawn independently from a full distribution over \((M,X(1),X(0),Y(1),Y(0),D)\), where \(M\) is a binary manipulation variable, \(X(m)\) is the potential running variable when \(M=m\), \(D\in\{0,1\}\) is the treatment status, and \(Y(d)\) is the potential outcome when \(D=d\). Observations who are manipulators (i.e., \(M=1\)) endogenously manipulate their \(X\) to sort into treatment. However, we face the fundamental problem of causal inference and only observe independent tuples of the coarsening random variables \((X,Y,D)\). \(X=X(M)\) is the running variable, \(Y=Y(D)\) the outcome, and \(D\) the treatment. We use the notation "\(X\approx c\)" to mean that we take the limit as \(\varepsilon\to 0\) of \(X\in[c-\varepsilon,c+\varepsilon]\). Further, \(X=c^{+}\) denotes \(X\in[c,c+\varepsilon]\) and \(X=c^{-}\) denotes \(X\in[c-\varepsilon,c]\).
We focus on sharp RD designs. The main assumption in sharp RD is the following:
**Assumption 1** (Sharp RD).: \(P(D=1|X>c)=1,P(D=1|X<c)=0\)__
Assumption 1 is that we study a sharp RD setup: treatment \(D\) is assigned for all observations on one side of a cutoff \(c\) but not on the other. It mimics assumption (RD) of Hahn et al. (2001). It is observationally testable and often obvious in applications: if \(X\) is the net vote share of an election candidate, then the candidate wins if and only if \(X>0\).
We assume that every observation can be partitioned into manipulators and non-manipulators. Non-manipulator (\(M=0\)) running variables are as good as random around the discontinuity.
**Assumption 2** (Non-manipulator exogeneity).: \((Y(1),Y(0))\perp\!\!\!\perp X\mid M=0,X\approx c\)_. \(F_{X|M=0}(x)\) is differentiable in \(x\) at \(c\) with a positive derivative._
In the standard RD setup, Assumption 2 is equivalent to assuming that the conditional independence assumption holds for all observations. In our more general context, this conditional independence is only required for the non-manipulators -- no such assumption is imposed on the manipulators so far. Differentiability and exogeneity implies that the change in non-manipulator average outcomes across the cutoff \(c\) is the non-manipulator average treatment effect. However, we cannot observe that average when \(\tau>0\) and there are manipulators.
We assume that manipulation occurs in one direction.
**Assumption 3** (One-sided manipulation).: \(F_{X|M=1}(c)=0\)_. \(F_{X|M=1}(x)\) is right-differentiable in \(x\) at \(c\)._
Assumption 3 requires manipulators to manipulate only in the direction of treatment. The notation that manipulators are treated is for convenience, since \(D\) and \(1-D\) can be interchanged to remain consistent with our assumption. In the classic RD setup with no manipulation across the boundary at a sufficiently fine
level, the conditional density \(F_{X|M=1}\) is zero to the right of the cutoff. Settings where manipulation could plausibly occur in either direction are an interesting direction for future analysis.
The setup so far is identical to Gerard et al. (2020) and allows for one-sided manipulation. Using their notation, of the people just above the cutoff, the proportion who have manipulated is:
\[\tau:=\mathbb{P}(M=1\mid X=c^{+})\]
\(\tau\) is point identified. To avoid ambiguity with Gerard et al.'s notation, we use "\(\tau(R)\)" to refer to percentiles in our identification results applied here. Estimands that we may be interested in include the conditional average treatment effect (CATE), the conditional local average treatment effect (CLATE), and the conditional average treatment effect on the treated (CATT).
\[\psi^{CATE} :=\mathbb{E}\left[Y(1)-Y(0)\mid X\approx c\right]\] \[\psi^{CLATE} :=\mathbb{E}\left[Y(1)-Y(0)\mid X\approx c,M=0\right]\] \[\psi^{CATT} :=\mathbb{E}\left[Y(1)-Y(0)\mid X\approx c,X>c\right]\]
We call \(\mathbb{E}\left[Y(1)-Y(0)\mid X\approx c,M=0\right]\) the CLATE because it is an average treatment effect at the cutoff among the population for whom the treatment is randomly assigned at the cutoff.
We write the causal estimands of interest in terms of conditional expectations of observed outcomes and potential outcomes as follows:
**Lemma 1**.: _Under Assumptions 1-3, our main estimands of interest have the following expressions:_
\[\psi_{CATE} =\frac{1}{2-\tau}\mathbb{E}\left[Y\mid X=c^{+}\right]+\frac{1- \tau}{1-2\tau}\mathbb{E}\left[Y(1)\mid M=0,X\approx c\right]\] \[\qquad-\frac{\tau}{2-\tau}\mathbb{E}\left[Y(0)\mid M=1,X\approx c \right]-(1-\frac{\tau}{2-\tau})\mathbb{E}\left[Y\mid X=c^{-}\right]\] \[\psi_{CATT} =\frac{\mathbb{E}\left[(2D-1)Y\mid X\approx c\right]-\frac{\tau} {2-\tau}\mathbb{E}_{\mathbb{P}}[Y(0)\mid X\approx c,M=1]}{\frac{\tau}{2-\tau} +(1-\frac{\tau}{2-\tau})/2}\] \[\psi_{CLATE} =\mathbb{E}_{\mathbb{P}}[Y(1)\mid X\approx c,M=0]-\mathbb{E}[Y \mid X=c^{-}]\]
Most of the quantities in the expressions above are observable from the data. The only unobservable objects are \(\mathbb{E}\left[Y(1)\mid M=0,X\approx c\right]\) and \(\mathbb{E}\left[Y(0)\mid M=1,X\approx c\right]\). We will show how these objects can be written as observable expectations of outcomes weighted by unobserved likelihood ratios and how our procedure can place bounds on them.
Lemma 2 shows that the relevant distributions can be written in terms of confounded probabilities of manipulation.
**Lemma 2**.: _Suppose \(\mathbb{Q}\) is a distribution over \((Y(1),Y(0),M,D(1),D(0))\) that satisfies the GRR assumptions and define \(\mathbb{Q}\)'s manipulation selection functions \(q_{1}(y_{1})\equiv\mathbb{Q}(M=1|Y(1)=y_{1})\) and \(q_{0}(y_{0})\equiv\mathbb{Q}(M=1|Y(0)=y_{0})\). Then these translate the observed outcome likelihoods to unobserved likelihoods as follows:_
\[d\mathbb{Q}(Y(1)\mid X\approx c,M=0) =\frac{1}{1-\tau}\frac{1-q_{1}(Y(1))}{1+q_{1}(Y(1))}d\mathbb{P}(Y (1)\mid X=c^{+})\] \[d\mathbb{Q}(Y(0)\mid X\approx c,M=1) =\frac{2(1-\tau)}{\tau}\frac{q_{0}(Y(0))}{1-q_{0}(Y(0))}d\mathbb{ P}(Y(0)\mid X=c^{-})\]
Due to Lemma 2,
\[E\left[Y(1)|X\approx c,M=0\right]=\frac{1}{1-\tau}E\left[Y\frac{1-q_{1}}{1+q_{1}} \frac{1\left\{X=c^{+}\right\}}{P\left(X=c^{+}\right)}\right] \tag{9}\]
\[E\left[Y(0)|X\approx c,M=1\right]=\frac{2\left(1-\tau\right)}{\tau}E\left[Y \frac{q_{0}}{1-q_{0}}\frac{1\left\{X=c^{-}\right\}}{P\left(X=c^{-}\right)}\right] \tag{10}\]
To place bounds on CLATE, we simply have to place bounds on \(E\left[Y(1)|X\approx c,M=0\right]\), which has the form above that is amenable to using Theorem 1. In particular,
**Proposition 1**.: _Suppose Assumptions 1-3 hold and the sensitivity assumption is_
\[\frac{\mathbb{P}(M=1\mid Y(1),X\approx c)}{\mathbb{P}(M=0\mid Y(1),X\approx c )}\bigg{/}\frac{\tau}{2(1-\tau)}\in[\Lambda_{1}^{-},\Lambda_{1}^{+}].\]
_Then the upper bound for \(E\left[Y(1)|X\approx c,M=0\right]\) is identical to solving (3) with_
\[\lambda_{True}(R) =\frac{1\left\{X=c^{+}\right\}}{P\left(X=c^{+}\right)}\frac{1-q_ {1}}{1+q_{1}}\frac{1}{1-\tau}\] \[\lambda(R) =\frac{1\left\{X=c^{+}\right\}}{P\left(X=c^{+}\right)},\qquad W =\frac{1-q_{1}}{1+q_{1}}\frac{1}{1-\tau}\] \[\underline{w}(R) =\frac{1}{1-\tau+\tau\Lambda_{1}^{+}},\qquad\bar{w}(R)=\frac{1}{ 1-\tau+\tau\Lambda_{1}^{-}}\]
The factor of two is necessary to write the restriction in terms of \(\tau\) and a parameter which corresponds to exogeneity at \(\Lambda^{-}=\Lambda^{+}=1\). Gerard et al. (2020) placed bounds on \(\psi^{CLATE}\) but not \(\psi^{CATE}\) when \(\Lambda^{+}=\infty\) and \(\Lambda^{-}=0\).
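For concreteness, a sketch of how Proposition 1 plugs into the general program is given below (hypothetical function and variable names; it reuses the `sharp_upper_bound` helper sketched earlier and treats \(\tau\) and \(\Lambda_{1}^{\pm}\) as known).

```python
import numpy as np

def clate_y1_upper_bound(y, above, tau, lam1_lo, lam1_hi):
    """Upper bound on E[Y(1) | X ~ c, M = 0] via Proposition 1.

    y        : outcomes for observations in a small window around the cutoff
    above    : boolean array, True for X = c^+ (just above the cutoff)
    tau      : estimated manipulator share just above the cutoff
    lam1_lo, lam1_hi : sensitivity parameters Lambda_1^- and Lambda_1^+
    """
    above = np.asarray(above, dtype=bool)
    lam_r = above / above.mean()                           # lambda(R) = 1{X=c+} / P(X=c+)
    w_lo = np.full(len(y), 1.0 / (1.0 - tau + tau * lam1_hi))   # weight bounds from Proposition 1
    w_hi = np.full(len(y), 1.0 / (1.0 - tau + tau * lam1_lo))
    r = above.astype(int)                                  # condition on the side of the cutoff
    return sharp_upper_bound(r, lam_r * np.asarray(y, dtype=float), w_lo, w_hi)
```

Observations below the cutoff enter with \(\lambda(R)=0\), so only the choice of weights on the \(c^{+}\) side affects the bound.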
Similarly, to place bounds on the CATT, we simply place bounds on \(E\left[Y(0)|X\approx c,M=1\right]\):
**Proposition 2**.: _Suppose Assumptions 1-3 hold and the sensitivity assumption is_
\[\frac{\mathbb{P}(M=1\mid Y(0),X\approx c)}{\mathbb{P}(M=0\mid Y(0),X\approx c )}\bigg{/}\frac{\tau}{2(1-\tau)}\in[\Lambda_{0}^{-},\Lambda_{0}^{+}].\]
_Then the upper bound for \(E\left[Y(0)|X\approx c,M=1\right]\) is identical to solving (3) with_
\[\lambda_{True}(R) =\frac{q_{0}}{1-q_{0}}\frac{1\left\{X=c^{-}\right\}}{P\left(X=c^ {-}\right)}\frac{2\left(1-\tau\right)}{\tau}\] \[\lambda(R) =\frac{1\left\{X=c^{-}\right\}}{P\left(X=c^{-}\right)},\qquad W= \frac{q_{0}}{1-q_{0}}\frac{2\left(1-\tau\right)}{\tau}\] \[\underline{w}(R) =\Lambda_{0}^{-},\qquad\bar{w}(R)=\Lambda_{0}^{+}\]
For the CATE, we want to place bounds on
\[\psi=\frac{1-\tau}{1-2\tau}E\left[Y(1)|M=0,X\approx c\right]-\frac{\tau}{2- \tau}E\left[Y(0)|M=1,X\approx c\right] \tag{11}\]
**Proposition 3**.: _Suppose the assumptions from Proposition 1 and 2 hold. Finding the upper bound for \(\psi\) is identical to solving (3) with_
\[\lambda_{True}(R)=\frac{1}{1-2\tau}\frac{1-q_{1}}{1+q_{1}}\frac{1\left\{X=c^ {+}\right\}}{P\left(X=c^{+}\right)}-\frac{2\left(1-\tau\right)}{2-\tau}\frac{ q_{0}}{1-q_{0}}\frac{1}{P\left(X=c^{-}\right)}\]
\[\lambda(R) =\frac{1\left\{X=c^{+}\right\}\left(2-\tau\right)\left(1-\tau\right) P\left(X=c^{-}\right)-\tau 1\left\{X=c^{-}\right\}\left(1-2\tau\right)P\left(X=c^{+}\right)}{\left(1-2 \tau\right)\left(2-\tau\right)P\left(X=c^{+}\right)P\left(X=c^{-}\right)}\] \[W =\frac{1-q_{1}}{1+q_{1}}1\left\{X=c^{+}\right\}\frac{1}{\left(1- \tau\right)}+2\left(1-\tau\right)\frac{q_{0}}{1-q_{0}}1\left\{X=c^{-}\right\} \frac{1}{\tau}\] \[\underline{w}(R) =1\left\{X=c^{+}\right\}\frac{1}{1-\tau+\tau\Lambda_{1}^{+}}+1 \left\{X=c^{-}\right\}\Lambda_{0}^{-},\qquad\bar{w}(R)=1\left\{X=c^{+}\right\} \frac{1}{1-\tau+\tau\Lambda_{1}^{-}}+1\left\{X=c^{-}\right\}\Lambda_{0}^{+}.\]
As before, sharp bounds can be obtained in closed form by applying Theorem 1. Our approach tightens bounds from Gerard et al. (2020), and allows meaningful bounds to be placed on CATT and CATE that were previously unachievable.
### Inverse Propensity Weighting
We substantially generalize existing IPW identification results using our framework.
We are interested in expected potential outcomes or treatment effects of a binary treatment \(Z\) with observed controls \(X\); the observable quantities are \(R=(X,Z)\). We assume that there is a full distribution \(P\) over covariates, potential outcomes \(Y(1)\) and \(Y(0)\), and treatment \(Z\) generating the tuple \((X,Y(1),Y(0),Z,U)\). Unconfoundedness conditional on potential outcomes is trivial: \(Y(1)\perp\!\!\!\perp Z\mid X,Y(1)\). However, we only observe the coarsened distribution over \((X,Y,Z)\) where \(Y=Y(Z)\).
We measure failures of unconfoundedness in terms of treatment selection. Unconfoundedness is the assumption that \(Y(0),Y(1)\perp Z|X\). Define the propensity for treatment given controls as \(e(X)=P(Z=1\mid X)\) and the unobserved propensity for treatment given controls and potential outcome as \(e_{z}(x,y)=P(Z=1\mid x,Y(z)=y)\). If unconfoundedness holds,
\[\frac{\left(e_{z}\left(x,y\right)\right)/\left(1-e_{z}\left(x,y\right)\right) }{e(X)/\left(1-e(X)\right)}=1.\]
If unconfoundedness fails, then \(\frac{\left(e_{z}\left(x,y\right)\right)/\left(1-e_{z}\left(x,y\right)\right) }{e(X)/\left(1-e(X)\right)}\) is not equal to one.
We derive meaningful bounds on causal objects of interest under limited violations of unconfoundedness. In particular, we consider functions \(l_{z}(X)\) and \(u_{z}(X)\) that bound the odds ratio shift induced by observing the potential outcomes. Formally, we assume that \(e_{z}(X,Y(z))\in(0,1)\) almost surely for both treatment assignments \(z\) and:
\[l_{z}(X)\leq\frac{e_{z}(X,Y(z))/\left(1-e_{z}(X,Y(z))\right)}{e(X)/\left(1-e(X )\right)}\leq u_{z}(X). \tag{12}\]
It is immediately clear that Equation (12) can equivalently be viewed in terms of how far \(e_{z}\) and \(e\) can differ, as proposed by Masten and Poirier (2018). However, this odds ratio-based parameterization adapted from Tan (2006) (where \(l_{z}=\Lambda^{-1}\) and \(u_{z}=\Lambda\) uniformly) can be viewed as a bound on the likelihood ratio between unobserved and observed potential outcomes and as a result is more convenient for empirical analysis.
We begin with the simpler case of bounding \(E[Y(1)]\). If \(e_{1}(X,Y(1))>0\) almost surely, then \(E[Y(1)]=E[YZ/e_{1}(X,Y(1))]\). Hence, bounds can be placed by using Theorem 1 applied to the objects defined in the proposition below.
**Proposition 4**.: _Finding the upper bound for \(E[Y(1)]\) under Equation (12) is identical to solving (3) with_
\[\lambda_{True}(R) =\frac{Z}{e_{1}(X,Y(1))},\qquad\lambda(R)=\frac{Z}{e(X)},\qquad W=\frac{e(X)}{e_{1}(X,Y(1))}\] \[\underline{w}(R) =\left(1-e(X)\right)\frac{1}{u_{1}(X)}+e(X),\qquad\bar{w}(R)=\left(1-e(X)\right)\frac{1}{l_{1}(X)}+e(X)\]
The result is intuitive. The \(w\) bounds would be \(1\) when \(l_{1}=u_{1}=1\). As \(e(X)\) gets closer to one, we see a greater share of \(Y(1)\) and the \(w\) bounds get closer to one since we are interested in the \(Y(1)\) only. We can add the other \(1-e(X)\) to varying degrees to fit the sensitivity assumption. An analogous approach can be used to obtain \(E[Y(0)]=E[Y(1-Z)/(1-e_{0}(X,Y(0)))]\).
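A corresponding sketch for Proposition 4 (hypothetical names; the propensities \(e(X)\) are taken as given, `groups` is a coarse discretization of \(R=(X,Z)\) used for the conditional quantiles, and the `sharp_upper_bound` helper from the earlier sketch is reused):

```python
import numpy as np

def y1_upper_bound_ipw(y, z, e_x, l1, u1, groups):
    """Upper bound on E[Y(1)] under the odds-ratio bounds (12), via Proposition 4.

    y, z    : outcomes and binary treatment indicators
    e_x     : observed propensity scores e(X)
    l1, u1  : scalars or arrays with the bounds l_1(X), u_1(X)
    groups  : discrete labels approximating conditioning on R = (X, Z)
    """
    y, z, e_x = (np.asarray(a, dtype=float) for a in (y, z, e_x))
    lam_r = z / e_x                               # lambda(R) = Z / e(X)
    w_lo = (1.0 - e_x) / u1 + e_x                 # weight bounds from Proposition 4
    w_hi = (1.0 - e_x) / l1 + e_x
    return sharp_upper_bound(groups, lam_r * y, w_lo, w_hi)
```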
Researchers may also be interested in the average treatment effect (ATE) \(E[Y(1)-Y(0)]\). There is no restriction in the model that \(e_{1}=e_{0}\), but we require both unobserved propensities to satisfy (12). We are allowed to use different latent objects for \(Z\) and \(1-Z\) because we do not ask for the same unobserved confounder \(U\) for different partitions of the data. Namely, we can use \(e_{1}=P(Z=1|X,Y(1))\) and \(e_{0}=P(Z=1|X,Y(0))\), and the sensitivity assumption holds for both \(e_{z}\). In the respective problems, it is integrating over the respective potential outcomes that matters. Our analysis proceeds similarly.
**Proposition 5**.: _For (12), finding the upper bound for \(E[Y(1)-Y(0)]\) is identical to solving (3) with_
\[\lambda_{True}(R) =\frac{Z}{e_{1}(X,Y(1))}-\frac{1-Z}{1-e_{0}(X,Y(0))},\qquad \lambda(R)=\frac{Z}{e(X)}-\frac{1-Z}{1-e(X)},\qquad W=\frac{Ze(X)}{e_{1}(X,Y(1))}+\frac{(1-Z)(1-e(X))}{1-e_{0}(X,Y(0))}\] \[\underline{w}(R) =Z\left(e(X)+\frac{1-e(X)}{u_{1}}\right)+(1-Z)\left(1-e(X)+e(X)l_{0}\right)\] \[\bar{w}(R) =Z\left(e(X)+\frac{1-e(X)}{l_{1}}\right)+(1-Z)\left(1-e(X)+e(X)u_{0}\right)\]
This analysis generalizes existing results. Our analysis in Proposition 4 extends the analysis of Frauen et al. (2023) to allow \(e_{1}\) arbitrarily small. Our analysis in Proposition 5 extends the analysis of Dorn and Guo (2022) to cover bounds besides Tan's marginal sensitivity model. Our framework also offers a more direct way of obtaining sharp bounds, because the ATE bounds can be solved in a single step once we know that the bounds \([\underline{w}(R),\bar{w}(R)]\) are attainable by the model.
The c-dependence sensitivity assumption of Masten and Poirier (2018) can be expressed as \(e_{z}(X,Y(z))\in[e(X)-c,e(X)+c]\cap[0,1]\). Then, using our existing notation, if \(e(X)\in(c,1-c)\), we have:
\[u_{z} =\frac{(e(X)+c)/(1-(e(X)+c))}{e(X)/(1-e(X))}\] \[l_{z} =\frac{(e(X)-c)/(1-(e(X)-c))}{e(X)/(1-e(X))}.\]
Then, bounds on the ATE can be obtained analogously using our framework.
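A small sketch of this translation (our own helper name; it assumes \(e(X)\in(c,1-c)\) as above):

```python
import numpy as np

def c_dependence_odds_bounds(e_x, c):
    """Odds-ratio bounds l_z(X), u_z(X) implied by e_z(X, Y(z)) in [e(X) - c, e(X) + c]."""
    e_x = np.asarray(e_x, dtype=float)            # requires e(X) in (c, 1 - c)
    odds = e_x / (1.0 - e_x)
    u_z = ((e_x + c) / (1.0 - (e_x + c))) / odds
    l_z = ((e_x - c) / (1.0 - (e_x - c))) / odds
    return l_z, u_z
```

The resulting arrays can then be inserted into the weight bounds of Proposition 5.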
### Ordinary Least Squares
We illustrate novel bounds for Ordinary Least Squares using our framework. We are interested in a linear combination of coefficients \(\beta\) from a hypothetical linear model:
\[Y=X\beta+u.\]
We are interested in \(\delta^{\prime}\beta\), where \(X\) includes an intercept but \(\delta\) puts no weight on the intercept term. We assume without loss of generality that \(E[u]=0\). However, there is confounding in the sense that \(E[Xu]\neq 0\). If we re-weighted the data by \(W=\frac{dP(u)}{dP^{\prime}(u|X)}\), we would obtain \(E[W(Y-X\beta)\mid X]=E[uW\mid X]=0\) and the reweighted least squares would recover the correct coefficients.
The sensitivity assumption is then on \(W=\frac{dP_{True}}{dP_{Obs}}\). For instance, we may have:
\[\underline{w}\leq W=\frac{dP_{True}}{dP_{Obs}}\leq\bar{w}\]
Using the notation of our general framework,
\[\lambda(R) =\delta^{\prime}E[X^{\prime}X]^{-1}X\] \[\lambda_{True}(R) =\delta^{\prime}E[X^{\prime}X]^{-1}X\frac{dP_{True}}{dP_{Obs}}\]
Then, Theorem 1 or Theorem 2 can be applied to obtain bounds on the object of interest.
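A sketch of the OLS mapping (hypothetical names; for simplicity the conditional quantile in Theorem 1 is replaced by an unconditional one, i.e. a single pooled group enforcing only \(E[W]=1\), which yields a valid but possibly conservative bound; the `sharp_upper_bound` helper from the earlier sketch is reused):

```python
import numpy as np

def ols_contrast_upper_bound(x, y, delta, w_lo, w_hi):
    """Upper bound on delta' beta when the sample may be reweighted by W in [w_lo, w_hi]."""
    x, y, delta = np.asarray(x, float), np.asarray(y, float), np.asarray(delta, float)
    xtx_inv = np.linalg.inv(x.T @ x / len(y))     # sample analog of E[X'X]^{-1}
    lam_r = x @ xtx_inv @ delta                   # lambda(R) = delta' E[X'X]^{-1} X
    pooled = np.zeros(len(y), dtype=int)          # one pooled group: only E[W] = 1 is imposed
    return sharp_upper_bound(pooled, lam_r * y,
                             np.full(len(y), w_lo), np.full(len(y), w_hi))
```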
This result has implications for any OLS-based design, including event studies and difference-in-differences (DD). The identifying assumption in DD and event studies is implied by the parallel trends assumption, and there may be concern that the parallel trends assumption is not empirically credible.
There are existing ways to do sensitivity analysis to the failure of parallel trends. For example, Rambachan and Roth (2023) places an explicit assumption on the extent that the slopes are not parallel. The approach we present here is calibrated in terms of likelihood ratios rather than model coefficients. As a result, our unobserved confounding measure is invariant to taking transformations like logarithms of outcomes but may be less interpretable for practitioners.
## 4 Implementation
We illustrate the procedure using a simulation in the c-dependence context of Section 3.2.
Our data-generating distribution is \(X\sim U[-\eta,\eta]\), \(Z\mid X\sim\mathrm{Bern}(1/(1+\exp(-X)))\), and \(Y\mid X,Z\sim\mathcal{N}((2+X)(Z-1),1)\), where \(\eta\) is chosen so that the support of \(e(X)\) is \([0.1,0.9]\).
We consider \(c=0,0.01,...,0.1\). When \(c>0.1\), the identified set is unbounded. The case \(c=0.1\) allows unbounded propensities, but we show in Corollary 1 (Page 15) that the identified set remains uniformly bounded for all \(c\leq 0.1\).
We obtain bounds numerically. In particular, we average the closed-form identified set for \(E[Y(1)-Y(0)\mid X]\) over one million draws of \(X\). With the true \(\psi^{+}\) and \(\psi^{-}\) essentially known, we can assess the validity of our estimation and inference procedures.
The bounds in the general program are estimated by plugging in the sample analog of Equation (8). We estimate \(\lambda\), \(\tau\), \(\underline{w}\), and \(\bar{w}\) by plugging in propensity estimates \(\hat{e}(X)\). The propensities are estimated using logistic regression of \(Z\) on \(X\). We estimate the quantile function using quantile regression on 101 grid points (\(\tau=0,0.01,...,1.00\)); the quantile regression regresses \(\hat{\lambda}Y\) on \(Z\) interacted with both \(X\) and \(\hat{\lambda}(X)\). For each observation, we take the quantile regression corresponding to the closest grid point to the estimated \(\hat{\tau}\).
Inference proceeds by a standard percentile bootstrap. For a given dataset, we redraw observations with replacement. For every bootstrap draw \(b\), we re-estimate propensities and weight bounds with propensities updated via one-step updating and estimate bootstrap upper and lower bounds \(\hat{\psi}_{b}^{+}\) and \(\hat{\psi}_{b}^{-}\). We do not re-estimate the quantile regression grid in bootstraps because, as we plan to show in the next version of this work, the quantile estimates have a second-order effect on the estimates. The 95% confidence interval for the identified set is then bounded by the 2.5th quantile of the \(\hat{\psi}_{b}^{-}\) draws and the 97.5th quantile of the \(\hat{\psi}_{b}^{+}\) draws. We also calculate two-sided 95% confidence intervals for the lower and upper bound analogously.
For a given sensitivity parameter \(c\), we run 1,000 simulations of the data with 2,000 observations. Within each simulation, we take 1,000 bootstrap draws and calculate the bounds for each bootstrap draw.
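A compressed sketch of the resampling step (hypothetical names; `estimate_bounds` stands in for the full plug-in estimator described above, and the one-step propensity update is omitted):

```python
import numpy as np

def percentile_bootstrap_ci(data, estimate_bounds, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the identified set.

    data            : array of observations (rows are resampled with replacement)
    estimate_bounds : hypothetical function returning (lower, upper) bound estimates
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    lowers, uppers = [], []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample observations with replacement
        lo_b, up_b = estimate_bounds(data[idx])
        lowers.append(lo_b)
        uppers.append(up_b)
    return np.quantile(lowers, alpha / 2), np.quantile(uppers, 1 - alpha / 2)
```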
We present our mean and median bound estimates in Figure 1. Our median bound estimates generally track the true bounds. Our mean bound estimates roughly track the identified set until \(c\) gets close enough to 0.1 that some simulations produce infinite estimated bounds (0.5% of simulations at \(c=0.07\)). As \(c\) gets close to 0.1 and the most extreme \(\tau(R)\) values get close to one, our median estimates become slightly too wide. We expect to show in future work that this reflects robustness of our characterization with respect to quantile errors, which are especially likely when applying our discrete grid to extreme \(\bar{w}\) values.
Our coverage results are reported in Table 1. Our coverage rates are generally within the 95% exact coverage interval of 93.6% to 96.3%. As \(c\) gets higher, the bounds become slightly more conservative but still remain valid. As \(c\) gets close to 0.1, there is an increasing chance of estimating at least one observation's propensity \(\hat{e}(X_{i})\not\in(c,1-c)\) and potentially generating an infinite estimated bound. When \(c=0.1\), the identified set rests on a knife edge between \([1.5,2.5]\) and \((-\infty,\infty)\). We find that in this case the confidence intervals cover the true (finite) identified set in 98.7% of simulations and are unbounded in 94.5% of simulations. (20.6% of bound estimates are unbounded at \(c=0.10\).)
## 5 Conclusion
This paper has proposed a novel sensitivity analysis framework for linear estimand identification failures. By placing bounds on the density ratio between the observed and true conditional outcome distributions, we obtain sharp and tractable analytic bounds. This framework generalizes existing sensitivity models in RD and IPW, generates a new sensitivity analysis for OLS, and provides new results for unbounded likelihood ratios. As a result of our general setting, we now have a procedure for sensitivity analysis of the CATE in RD that has previously remained an open issue; we have a simpler method of deriving bounds under c-dependence of Masten and Poirier (2018) in IPW; and we have a novel sensitivity framework in OLS that is based on likelihood ratios rather than functional form-dependent relationships.
|
2309.11679 | The Real Time Analysis framework of the Cherenkov Telescope Array's
Large-Sized Telescope | The Large-Sized Telescopes (LSTs) of the Cherenkov Telescope Array
Observatory (CTAO) will play a crucial role in the study of transient gamma-ray
sources, such as gamma-ray bursts and flaring active galactic nuclei. The low
energy threshold of LSTs makes them particularly well suited for the detection
of these phenomena. The ability to detect and analyze gamma-ray transients in
real-time is essential for quickly identifying and studying these rare and
fleeting events. In this conference, we will present recent advances in the
real-time analysis of data from the LST-1, the first prototype of LST located
in the Canary island of La Palma. We will discuss in particular the development
of new algorithms for event reconstruction and background rejection. These
advances will enable rapid identification and follow-up observation of
transient gamma-ray sources, making the LST-1 a powerful tool for the study of
the dynamic universe. The implementation of this framework in the future Array
Control and Data Acquisition System (ACADA) of CTAO will be discussed as well,
based on the experience with LST. | Sami Caroff, Pierre Aubert, Enrique Garcia, Gilles Maurin, Vincent Pollet, Thomas Vuillaume | 2023-09-20T23:21:28Z | http://arxiv.org/abs/2309.11679v1 | # The Real Time Analysis framework of the Cherenkov Telescope Array's Large-Sized Telescope
###### Abstract:
The Large-Sized Telescopes (LSTs) of the Cherenkov Telescope Array Observatory (CTAO) will play a crucial role in the study of transient gamma-ray sources, such as gamma-ray bursts and flaring active galactic nuclei. The low energy threshold of LSTs makes them particularly well suited for the detection of these phenomena. The ability to detect and analyze gamma-ray transients in real-time is essential for quickly identifying and studying these rare and fleeting events. In this conference, we will present recent advances in the real-time analysis of data from the LST-1, the first prototype of LST located in the Canary island of La Palma. We will discuss in particular the development of new algorithms for event reconstruction and background rejection. These advances will enable rapid identification and follow-up observation of transient gamma-ray sources, making the LST-1 a powerful tool for the study of the dynamic universe. The implementation of this framework in the future Array Control and Data Acquisition System (ACADA) of CTAO will be discussed as well, based on the experience with LST.
## 1 Introduction
Gamma-ray astronomy aims to study the cosmic particle accelerators of the universe through the analysis of their high-energy electromagnetic emissions. It covers various types of sources, from Galactic ones such as supernova remnants and pulsar wind nebulae to extragalactic ones like active galactic nuclei. Recently, the first detections of the afterglow radiation from gamma-ray bursts [1, 2, 3] were achieved at very high energies (VHE, \(E>100\) GeV), opening a new research area around transient phenomena. Moreover, recent observations of the electromagnetic counterparts of gravitational wave emitters have motivated a fast development of the field.
The Cherenkov Telescope Array Observatory (CTAO) represents the next generation of ground-based observatories. In October 2018, the first prototype of the Large-Sized Telescope, called LST-1, was installed and inaugurated at the Roque de los Muchachos Observatory (ORM). It has been actively collecting data since November 2019. The LST-1 boasts a 23-meter diameter reflector dish, a lightweight mechanical structure, and a rapid repositioning system. Its primary purpose is to detect low-energy gamma rays, above approximately 20 GeV, and to enable swift follow-up observations of transient events. Efficiently monitoring alerts and observing transient events is only possible with a robust and fast (real-time) analysis system, which processes data immediately after they are collected during observations and provides timely feedback to operators to initiate appropriate decisions and generate alerts. The analysis process can be divided into two sequential stages: first, the reconstruction, characterization, and selection of gamma rays from the recorded images, and second, the search for sources within the telescope's field of view.
This proceeding covers only the first part: the real-time analysis (RTA) of the LST-1, the prototype of the CTAO online reconstruction. Section 2 describes the global architecture and the technical options adopted, while section 3 presents the performance and results obtained on Markarian 421 data.
## 2 Data Analysis
The initial stage of the analysis process, known as reconstruction, focuses on processing the uncalibrated raw data (R0) to derive the parameters of each recorded event (DL3 data). These parameters include the energy, direction, and a gamma-ray-like classification score (called gammaness). The LST-1 telescope R0 data consists of a sequence of 40 images, each containing 1855 pixels, for every recorded event. In its nominal mode, the telescope operates at a rate of \(\sim\)10 kHz, generating a raw data flow of \(\sim\)3 GB/s, but the rate can go up to 15 kHz in case of extreme astrophysical events. Looking ahead to the future CTA observatory, the expected data flow will be approximately 17 GB/s for the northern site and 27 GB/s for the southern site.
The historical Hillas method [4] serves as the reference reconstruction approach, renowned for its simplicity and robustness. Its modern version comprises three consecutive stages, implemented in three corresponding software blocks:
* The **R0->DL1** stage includes data calibration, image aggregation and cleaning for each event. It also encompasses the extraction of Hillas and timing parameters (as defined in [5]). This stage significantly reduces the data volume from several GB/s to less than 150 MB/s.
Most of the output data consists of the final calibrated images used for monitoring the data acquisition (DAQ), while the Hillas parameters correspond to only a few MB/s.
* The **DL1->DL2** stage focuses on evaluating the energy, direction, and gammaness. The input and output data sizes are approximately the same, amounting to a few MB/s.
* The **DL2->DL3** stage corresponds to the selection of gamma-like events, further reducing the data flow to a few kB/s.
### Software Architecture
To ensure seamless data acquisition, the RTA is performed on dedicated servers, with communication between the DAQ system and the analysis servers established through the network. At least two servers are required to receive data from the LST-1. R0 data is streamed in the Protocol Buffer format1 handled by ZeroMQ2, an open-source universal messaging library.
Footnote 1: [https://developers.google.com/protocol-buffers/docs/overview](https://developers.google.com/protocol-buffers/docs/overview)
Footnote 2: [https://zeromq.org](https://zeromq.org)
Footnote 3: [https://slurm.schedmd.com/documentation.html](https://slurm.schedmd.com/documentation.html)
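A minimal sketch of the receiving side of such a stream using the pyzmq bindings is shown below; the socket pattern, endpoint, and message class are assumptions for illustration, not the actual LST-1 DAQ configuration.

```python
import zmq

context = zmq.Context()
socket = context.socket(zmq.PULL)           # the socket pattern is an assumption
socket.connect("tcp://daq-server:5555")     # hypothetical endpoint of the DAQ stream

while True:
    raw = socket.recv()                     # one serialized Protocol Buffer message
    # event = R0Event.FromString(raw)       # hypothetical generated protobuf class
    # ... hand the decoded event to an R0->DL1 worker thread
```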
The Slurm3 workload manager plays a vital role in resources and processes management. This open-source, fault-tolerant, and highly scalable cluster management and job scheduling system is specifically designed for both large and small Linux clusters.
On each server, four processes of R0->DL1, each with two threads, continuously run to process the data stream. Each thread handles an approximate event rate of 1.25 kHz, resulting in a total processing capacity of 10 kHz under nominal operation. Evaluation of the events produced by Car flash4 indicates that the maximum rate of 15 kHz (1.875 kHz per thread) provided by the LST-1 is supported by the analysis servers. A short buffer of 100 events, equivalent to 80 ms of data, is used.
Footnote 4: Car flash events are induced by the illumination from cars passing by on the road near the telescope.
Following the R0->DL1 computation, HDF5 files containing 20 000 DL1 events are generated. For each new file, the DL1->DL2->DL3 chain is immediately executed using the capabilities of the Slurm manager.
Footnote 5: [https://support.hdfgroup.org/HDF5/](https://support.hdfgroup.org/HDF5/)
### Optimization of data processing
The RTA includes several stages: image calibration, selection, integration, cleaning, and extraction of Hillas parameters. The data reduction and processing steps are computationally demanding; therefore, the C++ programming language was chosen for its fast execution and optimization capabilities.
The calibration step consists of converting the electronic charge into the corresponding number of photoelectrons detected by each camera pixel by subtracting a baseline (pedestal) and applying a conversion factor (gain). The calibration algorithm selects the appropriate gains and pedestals obtained with calibration runs. The compiler automatically vectorizes this algorithm for improved performance.
Pixel-wise charge integration involves summing the calibrated signals for each pixel using a specific time window. The compiler can optimize this process by effectively vectorizing the integration calculations.
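A vectorized sketch of these two steps is shown below; it is our own illustration with assumed array shapes, while the real implementation is the optimized C++ code described here.

```python
import numpy as np

def calibrate_and_integrate(waveforms, pedestal, gain, t0, width):
    """Convert raw samples to photoelectrons and integrate them over a time window.

    waveforms : (n_pixels, n_samples) raw samples of one event
    pedestal  : (n_pixels,) baselines to subtract
    gain      : (n_pixels,) conversion factors to photoelectrons
    t0, width : start index and length of the integration window
    """
    calibrated = (waveforms - pedestal[:, None]) * gain[:, None]   # vectorized over pixels
    return calibrated[:, t0:t0 + width].sum(axis=1)                # pixel-wise integrated charge
```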
Image cleaning is performed to select relevant pixels and remove those which triggered due to background noise. A two-threshold algorithm (thresholding on the pixel and pixel-neighbour charges) is applied using a temporary matrix to ensure efficient memory access for neighboring pixels. Computation is performed instead of branching, which improves computational time by enabling vectorization.
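A sketch of the two-threshold selection in array form (hypothetical names; `neighbors` is a boolean adjacency matrix of the camera pixels, and the thresholds are free parameters):

```python
import numpy as np

def two_threshold_cleaning(charge, neighbors, picture_thresh, boundary_thresh):
    """Keep pixels above the picture threshold, plus their neighbors above the boundary threshold.

    charge    : (n_pixels,) integrated charges
    neighbors : (n_pixels, n_pixels) boolean adjacency matrix of the camera geometry
    """
    core = charge > picture_thresh
    has_core_neighbor = (neighbors & core).any(axis=1)   # any neighboring pixel above the high threshold
    boundary = (charge > boundary_thresh) & has_core_neighbor
    return core | boundary
```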
Hillas and timing parameters are computed to characterize the shape of the cleaned images and extract relevant information. High-level intrinsic functions7 are used to optimize reduction operations and barycenter computations, resulting in faster execution compared to automatic optimization methods.
Footnote 7: [https://gitlab.in2p3.fr/CTA-LAPP/PHGENIX_LIBS/IntrinsGenerator/](https://gitlab.in2p3.fr/CTA-LAPP/PHGENIX_LIBS/IntrinsGenerator/)
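For illustration, the moment computation behind the Hillas parametrization can be sketched as follows (hypothetical names; the actual parameter definitions follow [5]):

```python
import numpy as np

def hillas_parameters(x, y, q):
    """First and second image moments of the cleaned image.

    x, y : pixel coordinates of the surviving pixels
    q    : their integrated charges
    """
    w = q / q.sum()
    cog_x, cog_y = np.sum(w * x), np.sum(w * y)            # charge barycenter
    dx, dy = x - cog_x, y - cog_y
    cov = np.array([[np.sum(w * dx**2), np.sum(w * dx * dy)],
                    [np.sum(w * dx * dy), np.sum(w * dy**2)]])
    eigvals, eigvecs = np.linalg.eigh(cov)                  # eigenvalues in ascending order
    length, width = np.sqrt(eigvals[1]), np.sqrt(eigvals[0])
    psi = np.arctan2(eigvecs[1, 1], eigvecs[0, 1])          # orientation of the major axis
    return cog_x, cog_y, length, width, psi
```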
Overall, these optimization techniques aim to reduce the computation time and enhance the performance of the data processing pipeline.
### High level parameters computation and DL3 production
The computation of the high-level parameters is not a bottleneck in terms of computing time; therefore, the offline analysis chain of the LST-1 is used in order to simplify the maintainability of the software. This offline analysis chain is described in [5] and only a brief description will be provided in this proceeding. This analysis pipeline works with a DL1 format (hereafter called DL1-lstchain) that differs from the CTAO standard one; a conversion step is therefore required in the pipeline.
The high-level parameters disp, energy, and gammaness are obtained using random forests trained on Monte Carlo simulations of gamma-ray events generated with a zenith angle of 20\({}^{\circ}\). Results similar to those of the offline-trained random forests [5] are obtained: the time gradient is the most important feature for the disp reconstruction, followed by the psi and length Hillas parameters, while the length is the most important feature for the energy. For the gammaness classification, the feature ranking is more evenly distributed.
The last part of the analysis consists of a gammaness selection and a conversion to a FITS format compliant with gammapy, a high-level analysis software [6].
## 3 Performance
### Computing time
As explained in section 2, the software should support an event rate up to 15 kHz. Trigger rate peaks are exploited to test this requirement. An example is shown in Figure 1: it can be observed that the DL1 reconstruction was not affected by an event rate peak of 14 kHz.
The typical time for the production of the different analysis steps is estimated using an observation run with an average rate of \(\sim\) 8 kHz:
* R0->DL1 : 30 s. Given that the effective rate per thread before cleaning is 1 kHz, and that 20k events need to be accumulated to create a DL1 file, the analysis time can be approximated as 10 seconds, while the remaining 20 seconds are spent accumulating events. The offline chain for the DL1 production runs at 0.03 s/event, while the RTA achieves a computing speed of 0.5 ms/event.
* DL1->DL1 lstchain : 12 s are used for this step, which will disappear when the lstchain data format aligns with CTAO's in future releases.
* DL1 lstchain->DL2->DL3 : 14 + 10 s.
In total, 66 seconds are needed to produce a DL3 from 20k R0 events. It is important to note that this time scales linearly with the number of events per file.
### Reconstruction
The reconstruction performance is tested using real-time data obtained from observations of Markarian 421. These data consist of 14 observations selected according to the weather on site, the variability of the event rate at the DL3 level and a zenith angle lower than 30\({}^{\circ}\), amounting to 4.2 hours of observations. These data were reconstructed with both the RTA and the lstchain pipeline. We used the same Monte Carlo simulation to train the random forests used by both pipelines.
The first step consists of optimizing the gammaness threshold. This was done separately on the two chains, by performing a full multiple OFF analysis of a single run of the dataset that is then removed for the rest of the analysis. The size of the ON region is set to 0.2\({}^{\circ}\) and the exclusion region defined as 0.35\({}^{\circ}\). The same setup is used throughout this proceeding. A maximum significance of 15.3\(\sigma\) is found with a gammaness threshold of 0.75 for the RTA, and 31.2\(\sigma\) with 0.7 for the offline analysis. The effective area, energy bias and resolution, computed using the gamma Monte Carlo simulations, are displayed in figure 2. The energy resolution and bias are similar between the two analysis chains above 400 GeV, and a degradation is observed at the lowest energies for the RTA when compared to the lstchain reconstruction. The relative value of the effective area is linked to the leakage of events up to 0.2\({}^{\circ}\) and is thus related to a worse angular resolution for the RTA analysis chain.
A multiple off analysis is performed on the whole dataset, and the resulting theta square plot is presented in figure 3. A significance of 43.3\(\sigma\) for the RTA and 83.1\(\sigma\) for the offline analysis are found.
Figure 1: Rate of events received by the R0->DL1 process compared to the rate of events in DL1. The observed difference in normalisation is due to the cleaning step, which rejects some of the events.
Figure 3: Theta square plot of Markarian 421 data compared between the RTA (left) and the lstchain (right).
Figure 2: Effective area, energy resolution and bias as a function of energy, computed from the gamma Monte Carlo simulations.
The spectrum of the source is reconstructed with the lstchain pipeline, which has been validated for this purpose, and used in order to derive the sensitivity of both chains. Differential sensitivity is defined as the minimal flux needed to obtain a 5\(\sigma\) detection in 50 hours of observations. The results are presented in figure 4. The sensitivity of the RTA pipeline is roughly two times worse than the lstchain one above 0.4 TeV, and the difference grows at lower energies.
Besides the differential sensitivity, an important RTA use case is to detect a source in a time period shorter than a run (20 minutes). To explore this possibility, we provide the integral sensitivity versus time, assuming a Crab Nebula-like spectrum, based on the results obtained on the test sample. This is presented in Figure 4. The RTA is able to detect an integrated flux of 0.3 Crab units in a single run.
## 4 Conclusions and outlook
In this proceeding, we presented the software architecture and performance of the RTA of the LST-1. Production of DL3 files from 20k events takes 66 seconds, and a flux equivalent to the one of the Crab Nebula above 20 GeV can be detected in 177 seconds. The differential sensitivity of the RTA is two times worse than the offline analysis chain above 400 GeV.

Figure 4: Top left: Differential sensitivity for 50 hours of the RTA compared with lstchain as a function of true energy. Top right: Sensitivity ratio between the RTA and lstchain as a function of true energy. Bottom: Integral sensitivity, from 20 GeV to 10 TeV, of the RTA compared to a typical flux of 1, 0.5 and 0.1 Crab, as a function of observation time. The time to detect the typical fluxes with the RTA is respectively 177, 517 and 8580 seconds. For the offline analysis, we have respectively 50, 139 and 2140 seconds.
The differences with lstchain are mainly due to two factors. The first one is the timing parameter computation, which is particularly useful for the angular resolution and is performed using a simple linear regression in the RTA for computing-time reasons. Moreover, the calibration performed is simplified to gain and pedestal calibration, while a full calibration chain also includes the so-called DRS4 calibration [8].
The RTA will be provided to the Array Control and Data Acquisition System (ACADA) [9] of CTAO for its first release, with no differences from the version presented in this proceeding apart from the input calibrated stream. The CTAO speed requirement of 1000 events/CPU/Tel is already fulfilled. The adaptation of this software to the full array will require modifications mostly at the DL1 level, to merge triggered events, and to the DL2 production to take into account the stereoscopic reconstruction. Nevertheless, the scalability and modularity of this software were designed to fulfill this goal in the coming years for the construction of the LST2-4 telescopes and the next releases of ACADA.
## Acknowledgements
The acknowledgements for CTAO and LST can be found here and here.
|
2309.16116 | On well-posed boundary conditions and energy stable finite volume method
for the linear shallow water wave equation | We derive and analyse well-posed boundary conditions for the linear shallow
water wave equation. The analysis is based on the energy method and it
identifies the number, location and form of the boundary conditions so that the
initial boundary value problem is well-posed. A finite volume method is
developed based on the summation-by-parts framework with the boundary
conditions implemented weakly using penalties. Stability is proven by deriving
a discrete energy estimate analogous to the continuous estimate. The continuous
and discrete analysis covers all flow regimes. Numerical experiments are
presented verifying the analysis. | Rudi Prihandoko, Kenneth Duru, Stephen Roberts, Christopher Zoppou | 2023-09-28T02:47:11Z | http://arxiv.org/abs/2309.16116v1 | On well-posed boundary conditions and energy stable finite volume method for the linear shallow water wave equation
###### Abstract
We derive and analyse well-posed boundary conditions for the linear shallow water wave equation. The analysis is based on the energy method and it identifies the number, location and form of the boundary conditions so that the initial boundary value problem is well-posed. A finite volume method is developed based on the summation-by-parts framework with the boundary conditions implemented weakly using penalties. Stability is proven by deriving a discrete energy estimate analogous to the continuous estimate. The continuous and discrete analysis covers all flow regimes. Numerical experiments are presented verifying the analysis.
###### Contents
* 1 Introduction
* 2 Continuous analysis
* 3 Numerical scheme
* 3.1 The finite volume method
* 3.2 Numerical boundary treatment and stability
* 4 Numerical experiments
* 5 Conclusion
## 1 Introduction
Numerical models that solve the shallow water wave equations (SWWE) have become a common tool for modeling environmental problems. This system of nonlinear hyperbolic partial differential equations (PDE) represents the conservation of mass and momentum of unsteady free surface flow subject to gravitational forces. The SWWE assume that the fluid is inviscid and incompressible and that the wavelength of the wave is much greater than its height. Typically these waves are associated with flows caused, for example, by tsunamis, storm surges and floods in riverine systems. The SWWE are also a fundamental component for predicting a range of aquatic processes, including sediment transport and the transport of pollutants. All these processes can have a significant impact on the environment, vulnerable communities and infrastructure. Therefore, making accurate predictions using the SWWE is crucial for urban, rural and environmental planners.
For practical problems, the SWWE have been solved numerically using finite difference methods [8], finite volume methods [13], discontinuous Galerkin methods [12] and the method of characteristics [2]. Although the shallow water wave equations are in common use, a rigorous theoretical investigation of boundary conditions necessary for their solution is still an area of active research [3].
In this paper, we investigate well-posed boundary conditions for the linearized SWWE using the energy method [4, 5] and develop a provably stable numerical method for the model. Following Ghader and Nordstrom [3], our analysis identifies the type, location and number of boundary conditions that are required to yield a well-posed initial boundary value problem (IBVP). More importantly, we formulate the boundary conditions so that they can be readily implemented in a stable manner for numerical approximations that obey the summation-by-parts (SBP) principle [6]. We demonstrate this by deriving a stable finite volume method using the SBP framework and imposing the boundary conditions weakly using the Simultaneous Approximation Term (SAT) method [1]. This SBP-SAT approach enables us to prove that the numerical scheme satisfies the discrete counterparts of energy estimates required for well-posedness of the IBVP, resulting in a provably stable and conservative numerical scheme.
The continuous and discrete analysis covers all flow regimes, namely subcritical, critical and super-critical flows. Numerical experiments are performed to verify the theoretical analysis of the continuous and discrete models.
## 2 Continuous analysis
The one dimensional SWWE are
\[\frac{\partial h}{\partial t}+\frac{\partial(uh)}{\partial x}=0,\quad\frac{ \partial(uh)}{\partial t}+\frac{\partial(u^{2}h+\frac{1}{2}gh^{2})}{\partial x}=0, \tag{1}\]
where \(x\in\mathbb{R}\) is spatial variable, \(t\geq 0\) is time, \(h(x,t)>0\) and \(u(x,t)\) are the water depth and the depth averaged fluid velocity respectively, \(g>0\) is the gravitational acceleration.
To make our analysis tractable we linearise the SWWE by substituting \(h=H+\widetilde{h}\) and \(u=U+\widetilde{u}\) into (1), where \(\widetilde{h}\) and \(\widetilde{u}\) denote perturbations of the constant water depth \(H>0\) and fluid velocity \(U\) respectively.
After simplifying, the linearised SWWEs are
\[\frac{\partial h}{\partial t}+U\frac{\partial h}{\partial x}+H\frac{\partial u }{\partial x}=0,\quad\frac{\partial u}{\partial t}+g\frac{\partial h}{\partial x }+U\frac{\partial u}{\partial x}=0, \tag{2}\]
where we have dropped the tilde on the perturbed variables.
Introducing the unknown vector field \(\mathbf{q}=\begin{bmatrix}h,&u\end{bmatrix}^{\top}\), the linear equation (2) can be rewritten in a more compact form as
\[\frac{\partial\mathbf{q}}{\partial t}=D\mathbf{q},\quad D=-M\frac{\partial}{ \partial x},\quad M=\begin{bmatrix}U&H\\ g&U\end{bmatrix}. \tag{3}\]
We will consider (3) in a bounded domain and augment it with initial and boundary conditions. Let our domain be \(\Omega=[0,1]\) and \(\Gamma=\{0,1\}\) be the boundary points. We consider the IBVP
\[\frac{\partial\mathbf{q}}{\partial t} =D\mathbf{q},\ x\in\Omega,\ t\geq 0, \tag{4a}\] \[\mathbf{q}(x,0) =\mathbf{f}(x),\ x\in\Omega,\] (4b) \[\mathcal{B}\mathbf{q} =\mathbf{b}(t),\ x\in\Gamma,\ t\geq 0, \tag{4c}\]
where \(\mathcal{B}\) is a linear boundary operator, \(\mathbf{b}\) is the boundary data and \(\mathbf{f}\in L^{2}(\Omega)\) is the initial condition. One objective of this study is to investigate the choice of \(\mathcal{B}\) which ensures that the IBVP (4) is well-posed. To simplify the coming analysis, we will consider zero boundary data \(\mathbf{b}=0\), but the results can be extended to nontrivial boundary data \(\mathbf{b}\neq 0\). Furthermore, numerical experiments performed later in this paper confirm that the analysis is valid for nonzero boundary data.
Let \(\mathbf{p}\) and \(\mathbf{q}\) be real valued functions, and define the weighted scalar product and the norm
\[\left(\mathbf{p},\mathbf{q}\right)_{W}=\int_{\Omega}\mathbf{p}^{\top}W\mathbf{q }\,\mathrm{d}x,\qquad\|\mathbf{q}\|_{W}^{2}=(\mathbf{q},\mathbf{q})_{W}, \tag{5}\]
where \(W=W^{\top}\) and \(\mathbf{q}^{\top}W\mathbf{q}>0\) for all non-zero \(\mathbf{q}\in\mathbb{R}^{2}\). If \(W=I\) we get the standard \(L_{2}\) scalar product, and we omit the subscript \(W\).
**Definition 1**.: The IBVP (4) is well-posed if a unique solution \(\mathbf{q}\) satisfies
\[\|\mathbf{q}(\cdot,t)\|_{W}\leq\kappa e^{\nu t}\|\mathbf{f}\|_{W},\quad\| \mathbf{f}\|_{W}<\infty, \tag{6}\]
for some constants \(\kappa>0\) and \(\nu\in\mathbb{R}\) independent of \(\mathbf{f}\).
The well-posedness of the IBVP (4) can be related to the boundedness of the differential operator \(D\). We introduce the function space
\[\mathbb{V}=\{\mathbf{p}|\quad\mathbf{p}(x)\in\mathbb{R}^{2},\quad\|\mathbf{p} \|_{W}<\infty,\quad 0\leq x\leq 1,\quad\{\mathcal{B}\mathbf{p}=0,\ x\in\Gamma\}\}. \tag{7}\]
The following two definitions are useful.
**Definition 2**.: The operator \(D\) is said to be **semi-bounded** in the function space \(\mathbb{V}\) if it satisfies
\[(\mathbf{q},D\mathbf{q})_{W}\leq\nu\|\mathbf{q}\|_{W}^{2},\quad\nu\in\mathbb{ R}. \tag{8}\]
**Definition 3**.: The differential operator \(D\) is **maximally semi-bounded** if it is semi-bounded in the function space \(\mathbb{V}\) but not semi-bounded in any space with fewer boundary conditions.
It is well-known that the maximal semi-boundedness of the differential operator \(D\) is a necessary and sufficient condition for the well-posedness of the IBVP (4) [5]. Thus to ensure that the IBVP (4) is well-posed, we need: a) the differential operator \(D\) to be semi-bounded; and b) the minimal number of boundary conditions such that \(D\) is maximally semi-bounded.
To begin, we will show that the differential operator \(D\) is semi-bounded in a certain weighted \(L_{2}\) scalar product.
**Lemma 4**.: _Consider the differential operator \(D\) with the constant coefficients matrix \(M\) given in (3) and the weighted \(L_{2}\) scalar product defined in (5), where \(W=W^{\top}\) and \(\boldsymbol{q}^{\top}W\boldsymbol{q}>0\) for all non-zero \(\boldsymbol{q}\in\mathbb{R}^{2}\). If the matrix product \(\widetilde{M}=WM\) is symmetric, \(\widetilde{M}=\widetilde{M}^{T}\), and \(\left.\left(\boldsymbol{q}^{\top}\widetilde{M}\boldsymbol{q}\right)\right|_{0 }^{1}\geq 0\), then \(D\) is semi-bounded._
**Proof:** We consider \(\left(\mathbf{q},D\mathbf{q}\right)_{W}\) and use integration by parts to obtain
\[\left(\mathbf{q},\mathbf{Dq}\right)_{W}=-\int_{\Omega}\mathbf{q}^{\top}\widetilde {M}\frac{\partial\mathbf{q}}{\partial x}\,\mathrm{d}x=-\frac{1}{2}\int_{\Omega }\frac{\partial}{\partial x}\left(\mathbf{q}^{\top}\widetilde{M}\mathbf{q} \right)\,\mathrm{d}x=-\,\frac{1}{2}\left(\mathbf{q}^{\top}\widetilde{M} \mathbf{q}\right)\Big{|}_{0}^{1}.\]
Thus if the boundary term \(\left.\left(\mathbf{q}^{\top}\widetilde{M}\mathbf{q}\right)\right|_{0}^{1}\geq 0\) then \(\left(\mathbf{q},D\mathbf{q}\right)_{W}\leq 0\). In particular, the estimate \(\left(\mathbf{q},D\mathbf{q}\right)_{W}\leq 0\) satisfies Definition 2 with \(\nu=0\). \(\spadesuit\)
The next step will be to derive boundary operators \(\left\{\mathcal{B}\mathbf{p}=0,\ x\in\Gamma\right\}\) with minimal number of boundary conditions such that the boundary term is never negative, \(\left.\left(\mathbf{q}^{\top}\widetilde{M}\mathbf{q}\right)\right|_{0}^{1}\geq 0\). We will now choose the weight matrix \(W\) such that the weighted \(L_{2}\)-norm is related to the mechanical energy in the medium. Note in particular, if
\[W=\begin{bmatrix}g&0\\ 0&H\end{bmatrix}, \tag{9}\]
then the weighted \(L_{2}\)-norm is related to the mechanical energy \(E\), that is
\[\frac{1}{2}\|\mathbf{q}\|_{W}^{2}=E:=\int_{\Omega}\frac{1}{2}(gh^{2}+Hu^{2})\, \mathrm{d}x>0,\quad\forall\mathbf{q}\in\mathbb{R}^{2}\backslash\{\mathbf{0}\}. \tag{10}\]
We introduce the boundary term
\[BT:=-\frac{1}{2gH}\,\left(\mathbf{q}^{\top}\widetilde{M}\mathbf{q}\right) \Big{|}_{0}^{1}=\frac{U}{H}\left(\frac{1}{2}h^{2}\big{|}_{1}^{0}\right)+\left( uh\big{|}_{1}^{0}\right)+\frac{U}{g}\left(\frac{1}{2}u^{2}\big{|}_{1}^{0} \right). \tag{11}\]
By using the eigen-decomposition of the symmetric matrix \(\widetilde{M}\) the boundary term can be re-written as
\[BT=\frac{1}{2}\,\left(\lambda_{1}w_{1}^{2}+\lambda_{2}w_{2}^{2}\right)\big{|} _{x=0}-\frac{1}{2}\,\left(\lambda_{1}w_{1}^{2}+\lambda_{2}w_{2}^{2}\right) \big{|}_{x=1}\,, \tag{12}\]
where
\[\left[w_{1},\ \ w_{2}\right]^{\top}=S^{\top}\mathbf{q},\qquad S=\begin{bmatrix} \frac{1}{c}\left(\lambda_{1}-\frac{U}{g}\right)&\frac{1}{d}\left(\lambda_{2}- \frac{U}{g}\right)\\ \frac{1}{c}&\frac{1}{d}\end{bmatrix}, \tag{13}\]
and \(c=\sqrt{\left(\lambda_{1}-\frac{U}{g}\right)^{2}+1},\quad d=\sqrt{\left( \lambda_{2}-\frac{U}{g}\right)^{2}+1}.\) Here, \(S\) is a matrix of orthonormal eigenvectors and so \(S^{\top}S=I\). The eigenvalues, \(\lambda_{1},\lambda_{2}\), are real and given by
\[\lambda_{1}=\frac{1}{2gH}\left(U(g+H)+\sqrt{U^{2}(g+H)^{2}+4gH(gH-U^{2})} \right), \tag{14}\]
\[\lambda_{2}=\frac{1}{2gH}\left(U(g+H)-\sqrt{U^{2}(g+H)^{2}+4gH(gH-U^{2})}\right). \tag{15}\]
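These expressions can be checked numerically. The following sketch (with illustrative values of \(g\), \(H\) and \(U\), and assuming the decomposition is taken of the scaled matrix \(\widetilde{M}/(gH)\), consistent with the \(1/(2gH)\) prefactor in (11)) verifies (14)-(15) and the orthonormality of \(S\).

```python
import numpy as np

g, H, U = 9.81, 1.0, 1.0                      # illustrative sub-critical values (U**2 < g*H)

A = np.array([[U / H, 1.0],
              [1.0, U / g]])                  # scaled symmetric matrix  W M / (g H)

disc = np.sqrt(U**2 * (g + H)**2 + 4 * g * H * (g * H - U**2))
lam1 = (U * (g + H) + disc) / (2 * g * H)     # Equation (14)
lam2 = (U * (g + H) - disc) / (2 * g * H)     # Equation (15)

eigvals, S = np.linalg.eigh(A)                # numerical eigen-decomposition
print(np.allclose(np.sort(eigvals), np.sort([lam1, lam2])))   # True: eigenvalues match (14)-(15)
print(np.allclose(S.T @ S, np.eye(2)))                        # True: eigenvectors are orthonormal
```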
The number of boundary conditions will depend on the signs of the eigenvalues \(\lambda_{1}\), \(\lambda_{2}\), which in turn depend on the magnitude of the flow \(U\) and the sign of \(gH-U^{2}\). The term \((gH-U^{2})\) plays an important role in the change of sign of the eigenvalues. That is, \((gH-U^{2})>0\) implies \(\lambda_{1}>0\) and \(\lambda_{2}<0\); \(gH-U^{2}<0\) implies both of the eigenvalues take the sign of \(U\); and \((gH-U^{2})=0\) implies that one of the eigenvalues equals zero, that is \(\lambda_{1}>0\), \(\lambda_{2}=0\) if \(U>0\) and \(\lambda_{1}=0\), \(\lambda_{2}<0\) if \(U<0\). We can also distinguish between positive \(U>0\) and negative \(U<0\). When \(U>0\), \(x=0\) is an inflow boundary and \(x=1\) is an outflow boundary. The situation reverses when \(U<0\), that is, \(x=0\) becomes an outflow boundary and \(x=1\) is an inflow boundary.
**Sub-critical flow.** The flow is sub-critical when \(U^{2}<gH\) which implies \(\lambda_{1}>0\) and \(\lambda_{2}<0\). We need one boundary condition at \(x=0\) and one boundary condition at \(x=1\). Therefore, for sub-critical flow regime we always need an inflow boundary condition and an outflow boundary condition for any \(U\). We formulate the boundary conditions
\[\{\mathcal{B}\mathbf{p}=\mathbf{b},\ x\in\Gamma\}\equiv\{w_{1}-\gamma_{0}w_{ 2}=b_{1}(t),\ x=0;\ w_{2}-\gamma_{1}w_{1}=b_{2}(t),\ x=1\}, \tag{16}\]
where \(\gamma_{0},\gamma_{1}\in\mathbb{R}\) are boundary reflection coefficients. The following Lemma constraints the parameters \(\gamma_{0},\gamma_{1}\).
**Lemma 5.**_Consider the boundary term \(BT\) defined in (12) and the boundary condition (16) with \(\boldsymbol{b}=0\) for sub-critical flows \(U^{2}<gH\) with \(\lambda_{1}>0\) and \(\lambda_{2}<0\). If \(0\leq\gamma_{0}^{2}\leq-\lambda_{2}/\lambda_{1}\) and \(0\leq\gamma_{1}^{2}\leq-\lambda_{1}/\lambda_{2}\), then the boundary term is never positive, \(BT\leq 0\)._
**Proof:** Let \(w_{1}=\gamma_{0}w_{2}\) at \(x=0\) and \(w_{2}=\gamma_{1}w_{1}\) at \(x=1\), and consider
\[\left(\lambda_{1}w_{1}^{2}+\lambda_{2}w_{2}^{2}\right)\bigr{|}_{0}-\left. \left(\lambda_{1}w_{1}^{2}+\lambda_{2}w_{2}^{2}\right)\bigr{|}_{1}=\left.w_{2} ^{2}\left(\lambda_{1}\gamma_{0}^{2}+\lambda_{2}\right)\right|_{0}-\left.w_{1} ^{2}\left(\lambda_{1}+\lambda_{2}\gamma_{1}^{2}\right)\right|_{1}.\]
Thus if \(0\leq\gamma_{0}^{2}\leq-\lambda_{2}/\lambda_{1}\) and \(0\leq\gamma_{1}^{2}\leq-\lambda_{1}/\lambda_{2}\) then \((\lambda_{1}\gamma_{0}^{2}+\lambda_{2})\leq 0\) and \((\lambda_{1}+\lambda_{2}\gamma_{1}^{2})\geq 0\), and we have
\[BT=\frac{1}{2}\left(\left.w_{2}^{2}\left(\lambda_{1}\gamma_{0}^{2}+\lambda_{2} \right)\right|_{0}-\left.w_{1}^{2}\left(\lambda_{1}+\lambda_{2}\gamma_{1}^{2} \right)\right|_{1}\right)\leq 0.\]
\(\spadesuit\)
**Super-critical flow.** When \(U^{2}>gH\) the flow is super-critical, then \(\lambda_{1}\) and \(\lambda_{2}\) both take the sign of the average flow velocity \(U\). That is if \(U>0\) then \(\lambda_{1}>0\), \(\lambda_{2}>0\) and if \(U<0\) then \(\lambda_{1}<0\), \(\lambda_{2}<0\). Thus when \(U>0\)
we need two boundary conditions at \(x=0\) and no boundary conditions at \(x=1\). Similarly, when \(U<0\) we need two boundary conditions at \(x=1\) and no boundary conditions at \(x=0\). Therefore, for super-critical flows there are no outflow boundary conditions for any \(U\). We formulate the boundary conditions
\[\{\mathcal{B}\mathbf{q}=\mathbf{b},\ x\in\Gamma\} \equiv\{w_{1}=b_{1}(t),\ w_{2}=b_{2}(t),\ x=0;\ \text{if}\ U>0\}, \tag{17a}\] \[\{\mathcal{B}\mathbf{q}=\mathbf{b},\ x\in\Gamma\} \equiv\{w_{1}=b_{1}(t),\ w_{2}=b_{2}(t),\ x=1;\ \text{if}\ U<0\}. \tag{17b}\]
**Lemma 6**.: _Consider the boundary term \(BT\) defined in (12) and the boundary condition (17) with \(\boldsymbol{b}=0\) for super-critical flows \(U^{2}>gH\), we have \(BT\leq 0\)._
Proof:Let \(U>0\) with \(\lambda_{1}>0\), \(\lambda_{2}>0\) if \(w_{1}=0,\ w_{2}=0\), at \(x=0\), then
\[BT=\frac{1}{2}\left(\left.(\lambda_{1}w_{1}^{2}+\lambda_{2}w_{2}^{2})\right| _{0}-\left.(\lambda_{1}w_{1}^{2}+\lambda_{2}w_{2}^{2})\right|_{1}\right)=- \frac{1}{2}\left.(\lambda_{1}w_{1}^{2}+\lambda_{2}w_{2}^{2})\right|_{1}\leq 0.\]
If \(U<0\) with \(\lambda_{1}<0\), \(\lambda_{2}<0\) and \(w_{1}=0,\ w_{2}=0\), at \(x=1\), then we have
\[BT=\frac{1}{2}\left(\left.(\lambda_{1}w_{1}^{2}+\lambda_{2}w_{2}^{2})\right|_{ 0}-\left.(\lambda_{1}w_{1}^{2}+\lambda_{2}w_{2}^{2})\right|_{1}\right)=\frac{1 }{2}\left.(\lambda_{1}w^{2}+\lambda_{2}w_{2}^{2})\right|_{0}\leq 0.\]
\(\spadesuit\)
**Critical flow.** The flow is critical when \(U^{2}=gH\). Note that this case is degenerate, since there is only one nonzero eigenvalue, that is \(U>0\) implies \(\lambda_{1}>0\), \(\lambda_{2}=0\) and \(U<0\) implies \(\lambda_{1}=0\), \(\lambda_{2}<0\). However, it can also be treated by prescribing only one boundary condition for the system. The location of the boundary condition will be determined by the sign of \(U\), similar to the super-critical flow regime. We prescribe the boundary conditions
\[\{\mathcal{B}\mathbf{q}=\mathbf{b},\ x\in\Gamma\} \equiv\{w_{1}=b_{1}(t),\ x=0;\ \text{if}\ U>0\ \text{and}\ \lambda_{2}=0\}, \tag{18a}\] \[\{\mathcal{B}\mathbf{q}=0,\ x\in\Gamma\} \equiv\{w_{2}=b_{2}(t),\ x=1;\ \text{if}\ U<0\ \text{and}\ \lambda_{1}=0\}. \tag{18b}\]
**Lemma 7**.: _Consider the boundary term \(BT\) defined in (12) and the boundary condition (18) with \(\boldsymbol{b}=0\) for critical flows \(U^{2}=gH\), we have \(BT\leq 0\)._
Proof:Let \(U>0\) with \(\lambda_{1}>0\), \(\lambda_{2}=0\) if \(w_{1}=0\), at \(x=0\),
\[BT=\frac{1}{2}\left(\left.(\lambda_{1}w_{1}^{2}+\lambda_{2}w_{2}^{2})\right|_{ 0}-\left.(\lambda_{1}w_{1}^{2}+\lambda_{2}w_{2}^{2})\right|_{1}\right)=-\frac{ 1}{2}\left.\lambda_{1}w_{1}^{2}\right|_{1}\leq 0.\]
If \(U<0\) with \(\lambda_{1}=0\), \(\lambda_{2}<0\) and \(w_{2}=0\), at \(x=1\) we also have
\[BT=\frac{1}{2}\left(\left.(\lambda_{1}w_{1}^{2}+\lambda_{2}w_{2}^{2})\right|_{ 0}-\left.(\lambda_{1}w_{1}^{2}+\lambda_{2}w_{2}^{2})\right|_{1}\right)=\frac{1 }{2}\left.\lambda_{2}w_{2}^{2}\right|_{0}\leq 0.\]
We will conclude this section with the theorem which proves the well-posedness of the IBVP (4).
**Theorem 8**.: _Consider the IBVP (4) where the boundary operator \(\mathcal{B}\boldsymbol{q}=0\) is define by (16) with \(\gamma_{0}^{2}\leq-\lambda_{2}/\lambda_{1}\) and \(\gamma_{1}^{2}\leq-\lambda_{1}/\lambda_{2}\) for sub-critical flows, \(U^{2}<gH\), by (17) for the super-critical flow regime, \(U^{2}>gH\), and by (18) for critical flows, \(U^{2}=gH\), we have the energy estimate_
\[\frac{1}{2}\frac{d}{dt}\|\boldsymbol{q}\|_{W}^{2}=gH\times\mathrm{BT}\leq 0. \tag{19}\]
Proof.: We use the energy method, that is, from the left we multiply (4a) with \(\boldsymbol{q}^{\top}W\) and integrate over the domain. As above integration-by-parts gives
\[\frac{1}{2}\frac{d}{dt}\|\boldsymbol{q}\|_{W}^{2}=\left(\boldsymbol{q},\frac{ \partial\boldsymbol{q}}{\partial t}\right)_{W}=\left(\boldsymbol{q},\mathbf{D }\boldsymbol{q}\right)_{W}=gH\times\mathrm{BT}.\]
Using Lemmas 5-7 for each flow regime gives \(\mathrm{BT}\leq 0\), which completes the proof. \(\spadesuit\)
This energy estimate (19) is what a stable numerical method should emulate.
## 3 Numerical scheme
We will now derive a stable finite volume method for the IBVP (4) encapsulated in the SBP framework. We will prove numerical stability by deriving discrete energy estimates analogous to Theorem 8.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Regime & Type of Boundary & Number of Boundary condition \\ \hline sub-critical & inflow & 1 \\ & outflow & 1 \\ \hline critical & inflow & 1 \\ & outflow & 0 \\ \hline super-critical & inflow & 2 \\ & outflow & 0 \\ \hline \end{tabular}
\end{table}
Table 1: The number and location of boundary condition in all regime. The boundary at \(x=0\) (\(x=1\)) is inflow (outflow) boundary if \(U>0\) and outflow (inflow) boundary if \(U<0\).
### The finite volume method
To begin, the domain, \(\Omega=[0,1]\), is subdivided into \(N+1\) computational nodes having \(x_{i}=x_{i-1}+\Delta x_{i}\), for \(i=1,2,\cdots N\), with \(x_{0}=0\), \(\Delta x_{i}>0\) and \(\sum_{i=1}^{N}\Delta x_{i}=1\). We consider the control cell \(I_{i}=[x_{i-\frac{1}{2}},x_{i+\frac{1}{2}}]\) for each interior node \(1\leq i\leq N-1\), and for the boundary nodes \(\{x_{0},x_{N}\}\) the control cells are \(I_{0}=[x_{0},x_{1/2}]\) and \(I_{N}=[x_{N-1/2},x_{N}]\), see Figure 1. Note that \(|I_{i}|=\Delta x_{i}/2+\Delta x_{i+1}/2\) for the interior nodes \(1\leq i\leq N-1\), and for the boundary nodes \(i\in\{0,N\}\) we have \(|I_{0}|=\Delta x_{1}/2\) and \(|I_{N}|=\Delta x_{N}/2\). The control cells \(I_{i}\) are connected and do not overlap, and \(\sum_{i=0}^{N}|I_{i}|=\sum_{i=1}^{N}\Delta x_{i}=1\).
Consider the integral form of (4a) over the control cells \(I_{i}\)
\[\frac{d}{\mathrm{d}t}\int_{I_{0}}\mathbf{q}(x,t)\,\mathrm{d}x+M \mathbf{q}(x_{\frac{1}{2}},t)-M\mathbf{q}(x_{0},t)=0, \tag{20a}\] \[\frac{d}{\mathrm{d}t}\int_{I_{i}}\mathbf{q}(x,t)\,\mathrm{d}x+M \mathbf{q}(x_{i+\frac{1}{2}},t)-M\mathbf{q}(x_{i-\frac{1}{2}},t)=0,\quad 1\leq i \leq N-1,\] (20b) \[\frac{d}{\mathrm{d}t}\int_{I_{N}}\mathbf{q}(x,t)\,\mathrm{d}x+M \mathbf{q}(x_{N},t)-M\mathbf{q}(x_{N-1/2},t)=0. \tag{20c}\]
Introduce the cell-average
\[\bar{\mathbf{q}}_{i}=\frac{1}{|I_{i}|}\int_{I_{i}}\mathbf{q}(x,t)\mathrm{d}x, \tag{21}\]
and approximate the PDE flux \(M\mathbf{q}\) with the local Lax-Friedrich flux
\[M\mathbf{q}(x_{i+\frac{1}{2}},t)\approx\frac{M\bar{\mathbf{q}}_{i+1}+M\bar{ \mathbf{q}}_{i}}{2}-\frac{\alpha}{2}\left(\bar{\mathbf{q}}_{i+1}-\bar{ \mathbf{q}}_{i}\right),\quad\alpha\geq 0, \tag{22}\]
and \(M\mathbf{q}(x_{0},t)\approx M\bar{\mathbf{q}}_{0},\quad M\mathbf{q}(x_{N},t) \approx M\bar{\mathbf{q}}_{N}.\) The evolution of the cell-average is governed by the semi-discrete system
\[|I_{0}|\frac{d\bar{\mathbf{q}}_{0}}{dt}+M\frac{\bar{\mathbf{q}}_{ 1}-\bar{\mathbf{q}}_{0}}{2}-\frac{\alpha}{2}(\bar{\mathbf{q}}_{1}-\bar{ \mathbf{q}}_{0})=0, \tag{23a}\] \[|I_{i}|\frac{d\bar{\mathbf{q}}_{i}}{dt}+M\frac{\bar{\mathbf{q}}_{ i+1}-\bar{\mathbf{q}}_{i-1}}{2}-\frac{\alpha}{2}(\bar{\mathbf{q}}_{i+1}-2\bar{ \mathbf{q}}_{i}+\bar{\mathbf{q}}_{i-1})=0,\ 1\leq i\leq N-1,\] (23b) \[|I_{N}|\frac{d\bar{\mathbf{q}}_{N}}{dt}+M\frac{\bar{\mathbf{q}}_{ N}-\bar{\mathbf{q}}_{N-1}}{2}-\frac{\alpha}{2}(\bar{\mathbf{q}}_{N-1}-\bar{ \mathbf{q}}_{N})=0. \tag{23c}\]
Figure 1: Finite volume nodes \(x_{i}\) and control cells \(I_{i}\).
### Numerical scheme
Introducing the discrete solution vector \(\bar{\mathbf{q}}=[\bar{\mathbf{q}}_{0},\bar{\mathbf{q}}_{1},\cdots,\bar{\mathbf{q} }_{N}]^{\top}\) and rewriting (23) in a more compact form, we have
\[\left(I\otimes P\right)\frac{d\bar{\mathbf{q}}}{dt}+\left(M\otimes Q\right) \bar{\mathbf{q}}-\frac{\alpha}{2}\left(I\otimes A\right)\bar{\mathbf{q}}=0, \tag{24}\]
where \(\otimes\) denotes the Kronecker product and
\[Q=\begin{pmatrix}-\frac{1}{2}&\frac{1}{2}&0&\cdots&0&0&0\\ -\frac{1}{2}&0&\frac{1}{2}&\cdots&0&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ 0&0&0&\cdots&-\frac{1}{2}&0&\frac{1}{2}\\ 0&0&0&\cdots&0&-\frac{1}{2}&\frac{1}{2}\end{pmatrix},\ A=\begin{pmatrix}-1&1 &0&\cdots&0&0&0\\ 1&-2&1&\cdots&0&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\ 0&0&0&\cdots&1&-2&1\\ 0&0&0&\cdots&0&1&-1\end{pmatrix},\]
and \(P=\mathrm{diag}\left([|I_{0}|,|I_{1}|,\cdots,|I_{N}|]\right)\). The matrix \(Q\) is related to the spatial derivative operator, \(A\) is a numerical dissipation operator, and \(\alpha\geq 0\) controls the amount of numerical dissipation applied. Note that \(A\) is symmetric and negative semi-definite, that is \(A=A^{\top}\) and \(\mathbf{q}^{\top}A\mathbf{q}\leq 0\) for all \(\mathbf{q}\in\mathbb{R}^{N+1}\). The important stability property of the semi-discrete approximation (24) is that the associated discrete derivative operator satisfies the SBP property. To see this, we rewrite equation (24) as
\[\frac{d\bar{\mathbf{q}}}{dt}+\left(M\otimes D_{x}\right)\bar{\mathbf{q}}- \frac{\alpha}{2}\left(I\otimes P^{-1}A\right)\bar{\mathbf{q}}=0, \tag{25}\]
where \(I\) is the \(2\times 2\) identity matrix and
\[D_{x}=P^{-1}Q,\quad Q+Q^{\top}=\mathrm{diag}\left([-1,0,\ldots,0,1]\right). \tag{26}\]
The relation (26) is the so-called SBP property [6, 4] for the first derivative \(d/dx\), which can be useful in proving numerical stability of the discrete approximation (24). Note that we have not enforced any boundary condition yet, the boundary condition (4c) will be implemented weakly using penalties.
### Numerical boundary treatment and stability
We will now implement the boundary conditions and prove numerical stability. The boundary conditions are implemented using the SAT method, similar terms used as in [1], by appending the boundary operators (16)-(18) to the right hand-side of (24) with penalty weights, we have
\[\left(I\otimes\mathbf{P}\right)\frac{d\bar{\mathbf{q}}}{dt}+\left(M\otimes \mathbf{Q}\right)\bar{\mathbf{q}}-\frac{\alpha}{2}\left(I\otimes\mathbf{A} \right)\bar{\mathbf{q}}=\mathrm{SAT}. \tag{27}\]
\[\frac{1}{2}\frac{d}{dt}\|\bar{\mathbf{q}}\|_{WP}^{2}+\frac{\alpha}{2}\bar{\mathbf{ q}}^{T}\left(W\otimes A\right)\bar{\mathbf{q}}=\frac{1}{2}gH\times\mathrm{BT}_{num},\]
where
\[\mathrm{BT}_{num} =\] \[-\]
Thus, if \(\tau_{01}=\lambda_{1},\quad\tau_{02}=\gamma_{0}\lambda_{1};\qquad\tau_{N2}=- \lambda_{2},\quad\tau_{N1}=-\gamma_{1}\lambda_{2}\), then we have
\[\mathrm{BT}_{num}=\]
Since \(\lambda_{1}>0\), \(\lambda_{2}<0\) and
\[(\lambda_{2}+\lambda_{1}\gamma_{0}^{2})\leq 0\iff\gamma_{0}^{2}\leq- \lambda_{2}/\lambda_{1};\qquad(\lambda_{1}+\lambda_{2}\gamma_{1}^{2})\geq 0 \iff\gamma_{1}^{2}\leq-\lambda_{1}/\lambda_{2},\]
then we must have \(\mathrm{BT}_{num}\leq 0\). Note that for \(\alpha\geq 0\) then \(\frac{\alpha}{2}\bar{\mathbf{q}}^{T}\left(W\otimes A\right)\bar{\mathbf{q}} \leq 0\), and we have
\[\frac{1}{2}\frac{d}{dt}\|\bar{\mathbf{q}}\|_{WP}^{2}=\frac{\alpha}{2}\bar{ \mathbf{q}}^{T}\left(W\otimes A\right)\bar{\mathbf{q}}+\frac{1}{2}gH\times \mathrm{BT}_{num}\leq 0.\]
\(\spadesuit\)
The next theorem will prove the stability of the semi-discrete approximation (27) for super-critical flows.
**Theorem 10**.: _Consider the semi-discrete finite volume approximation (27) with the SAT (29) and \(\textbf{b}=0\) for super-critical flows. If the penalty parameters are chosen such that \(\tau_{01}\geq\lambda_{1},\quad\tau_{02}\geq\lambda_{2};\qquad\tau_{N1}\geq- \lambda_{1},\quad\tau_{N2}\geq-\lambda_{2},\) then_
\[\frac{1}{2}\frac{d}{dt}\|\bar{\textbf{q}}\|_{WP}^{2}\leq 0,\quad\forall\ t \geq 0.\]
**Proof:** As above the energy method with the SBP property (26) and the eigen-decomposition of \(\widetilde{M}\) yields
\[\frac{1}{2}\frac{d}{dt}\|\bar{\mathbf{q}}\|_{WP}^{2}-\frac{\alpha}{2}\bar{ \mathbf{q}}^{T}\left(W\otimes A\right)\bar{\mathbf{q}}=\frac{1}{2}gH\times \mathrm{BT}_{num},\]
where
\[\mathrm{BT}_{num} =\] \[\mathrm{BT}_{num} =\]
Therefore, if \(\tau_{01}\geq\lambda_{1},\tau_{02}\geq\lambda_{2};\tau_{N1}\geq-\lambda_{1}, \tau_{N2}\geq-\lambda_{2}\), then we have \(\mathrm{BT}_{num}\leq 0\). Noting that \(\alpha\geq 0\) and as previous, we have \(\frac{\alpha}{2}\bar{\mathbf{q}}^{T}\left(W\otimes A\right)\bar{\mathbf{q}}\leq 0\) which gives us
\[\frac{1}{2}\frac{d}{dt}\|\bar{\mathbf{q}}\|_{WP}^{2}=\frac{\alpha}{2}\bar{ \mathbf{q}}^{T}\left(W\otimes A\right)\bar{\mathbf{q}}+\frac{1}{2}gH\times \mathrm{BT}_{num}\leq 0.\]
\(\spadesuit\)
Finally, we will prove the stability of the semi-discrete approximation (27) for critical flows.
**Theorem 11**.: _Consider the semi-discrete finite volume approximation (27) with the SAT (29) and \(\boldsymbol{b}=0\) for critical flows. If the penalty parameters are chosen such that \(\tau_{01}\geq\lambda_{1},\quad\tau_{02}=0;\quad\tau_{N1}=0,\quad\tau_{N2}\geq- \lambda_{2},\) then_
\[\frac{1}{2}\frac{d}{dt}\|\bar{\boldsymbol{q}}\|_{WP}^{2}\leq 0,\quad\forall\ t \geq 0.\]
**Proof:** The zero penalties ensure consistency of the SAT, that is \(\tau_{02}=0\) and \(\tau_{N1}=0\) give
\[\text{SAT}=-\frac{1}{2}\left(W^{-1}SW\otimes\mathbf{I}\right) \begin{bmatrix}\tau_{01}H\mathbf{e}_{0}\bar{w}_{1}\\ 0\end{bmatrix},\qquad U>0,\] \[\text{SAT}=-\frac{1}{2}\left(W^{-1}SW\otimes\mathbf{I}\right) \begin{bmatrix}0\\ \tau_{N2}g\mathbf{e}_{N}\bar{w}_{2}\end{bmatrix},\qquad U<0.\]
Again the energy method with the SBP property (26) and the eigen-decomposition of \(\widetilde{M}\) yield
\[\frac{1}{2}\frac{d}{dt}\|\bar{\mathbf{q}}\|_{WP}^{2}-\frac{\alpha}{2}\bar{ \mathbf{q}}^{T}\left(W\otimes A\right)\bar{\mathbf{q}}=\frac{1}{2}gH\times \text{BT}_{num},\]
where
\[\text{BT}_{num} =\left.\left(\lambda_{1}-\tau_{01}\right)\bar{w}_{1}^{2}\right| _{i=0}-\left.\lambda_{1}\bar{w}_{1}^{2}\right|_{i=N},\ U>0,\ \lambda_{1}>0,\ \lambda_{2}=0\] \[\text{BT}_{num} =\left.\lambda_{2}\bar{w}_{2}^{2}\right|_{i=0}-\left.\left( \lambda_{2}+\tau_{N2}\right)\bar{w}_{2}^{2}\right|_{i=N},\ U<0,\ \lambda_{1}=0,\ \lambda_{2}<0.\]
Therefore, if \(\tau_{01}\geq\lambda_{1}\), and \(\tau_{N2}\geq-\lambda_{2}\), then we have \(\text{BT}_{num}\leq 0\). Using the fact that \(\alpha\geq 0\) and \(\frac{\alpha}{2}\bar{\mathbf{q}}^{T}\left(W\otimes A\right)\bar{\mathbf{q}}\leq 0\) again gives
\[\frac{1}{2}\frac{d}{dt}\|\bar{\mathbf{q}}\|_{WP}^{2}=\frac{\alpha}{2}\bar{ \mathbf{q}}^{T}\left(W\otimes A\right)\bar{\mathbf{q}}+\frac{1}{2}gH\times \text{BT}_{num}\leq 0.\]
\(\spadesuit\)
## 4 Numerical experiments
In this section, we perform numerical experiments to verify the analysis undertaken in the previous sections. Similar to the theoretical analysis, the numerical experiments cover the three flow regimes, namely sub-critical, critical and super-critical flow regimes. We used \(H=1\) m, \(g=9.8\) m/s\({}^{2}\), and \(U\in\{\frac{1}{2}\sqrt{gH},\sqrt{gH},2\sqrt{gH}\}\), which correspond to the three different flow regimes. The interval of interest is \([0,L]\) with \(L>0\). Note that \(U>0\)
so that \(x=0\) is the in-flow boundary and \(x=L\) is the outflow boundary. The locations and the number of boundary conditions required are given in Table 1, and the explicit forms of the boundary conditions considered here are given in Table 2 where \(g_{1}(t)\) and \(g_{2}(t)\) are the boundary data.
The semi-discrete system (27) is integrated in time using the classical fourth-order accurate explicit Runge-Kutta method with the time step
\[\Delta t=\operatorname{Cr}\frac{\Delta x}{|U|+\sqrt{gH}},\quad\operatorname{ Cr}=0.25,\]
where \(\operatorname{Cr}\) is the Courant-Friedrichs-Lewy number, \(\Delta x=L/N\) is the uniform cell width and \(N\) is the number of finite volume cells. We will consider a centered numerical flux with \(\alpha=0\) and the local Lax-Friedrich's numerical flux (22) with \(\alpha>0\), and verify numerical accuracy. Note that the semi-discrete approximation is energy stable for all \(\alpha\geq 0\).
Non-homogeneous boundary data.We consider zero initial conditions, that is \(u(x,0)=0\) and \(h(x,0)=0\), and send a wave into the domain through the in-flow boundary at \(x=0\). We will consider specifically \(g_{1}(t)\neq 0\) and \(g_{2}(t)=0\), for the boundary conditions given in Table 2, so that the corresponding IBVP has the exact solution
\[h(x,t)=g_{1}\left(t-\frac{x}{U+\sqrt{gH}}\right),\quad u(x,t)=\frac{1}{\sqrt{H /g}}g_{1}\left(t-\frac{x}{U+\sqrt{gH}}\right).\]
We will consider a smooth boundary data given by
\[g_{1}(t)=\begin{cases}(\sin(\pi t))^{4}&\text{ if }0\leq t\leq 1,\\ 0&\text{ otherwise,}\end{cases}\qquad g_{2}(t)=0,\quad\forall\ t\geq 0, \tag{31}\]
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline Regime & \(U\) & Boundaries & Boundary conditions \\ \hline sub-critical & \(<\sqrt{gH}\) & \(x=0\) & \(\frac{1}{2}(h+\sqrt{H/g}\,u)=g_{1}\) \\ & & \(x=L\) & \(\frac{1}{2}(h-\sqrt{H/g}\,u)=g_{2}\) \\ \hline critical & \(=\sqrt{gH}\) & \(x=0\) & \(\frac{1}{2}(h+\sqrt{H/g}\,u)=g_{1}\) \\ \hline super-critical & \(>\sqrt{gH}\) & \(x=0\) & \(\frac{1}{2}(h+\sqrt{H/g}\,u)=g_{1}\) \\ & & & \(\frac{1}{2}(h-\sqrt{H/g}\,u)=g_{2}\) \\ \hline \end{tabular}
\end{table}
Table 2: Transmissive boundary conditions in all regimes with \(U>0\).
and a non-smooth boundary data given by
\[g_{1}(t)=\begin{cases}1&\text{ if }0<t\leq 1,\\ 0&\text{ otherwise},\end{cases}\qquad g_{2}(t)=0,\quad\forall\ t\geq 0. \tag{32}\]
The boundary data for the boundary conditions in Table 2 can be rewritten as \(b_{1}(t)\) and/or \(b_{2}(t)\) and in terms of \(w_{1}\) and \(w_{2}\) for the given boundary.
In the sub-critical case, by utilizing the identity (13), the boundary condition can be rewritten in the form (16) with
\[\gamma_{0}=-\frac{\frac{1}{d}\left(\!\sqrt{\frac{g}{H}}\left(\lambda_{2}- \frac{U}{g}\right)-1\right)}{\frac{1}{c}\left(\!\sqrt{\frac{g}{H}}\left( \lambda_{1}-\frac{U}{g}\right)-1\right)},\quad\gamma_{1}=-\frac{\frac{1}{d} \left(\!\sqrt{\frac{g}{H}}\left(\lambda_{2}-\frac{U}{g}\right)+1\right)}{ \frac{1}{c}\left(\!\sqrt{\frac{g}{H}}\left(\lambda_{1}-\frac{U}{g}\right)+1 \right)}.\]
We can show that \(\gamma_{0}\) and \(\gamma_{1}\) satisfy the condition of Lemma 5, that is \(\gamma_{0}^{2}\leq-\lambda_{2}/\lambda_{1}\) and \(\gamma_{1}^{2}\leq-\lambda_{1}/\lambda_{2}\) for all \(|U|<\sqrt{gH}\).
For the critical flow regime, we have \(U^{2}=gH\) and \(\lambda_{2}=0\). Only one boundary condition is imposed at the inflow, \(\bar{w}_{1}=b_{1}(t)\). This condition can be rewritten to match the condition in Theorem 11 by using the identity (13) and the fact that \(U^{2}=gH\).
For the super-critical flow regime, two boundary conditions are imposed at the inflow boundary. That is \(\bar{w}_{1}=b_{1}(t)\), \(\bar{w}_{2}=b_{2}(t)\equiv 0\) as the boundary condition at \(x=0\). This conditions is equivalent to (17a).
The boundary data will generate a pulse from the left boundary at \(x=0\), which will propagate through the domain and leave the domain through \(x=L\).
Figure 2 shows the snapshot of the sub-critical flow solutions at \(t=3.02\) s for both smooth and non-smooth boundary data, with \(\alpha=0\) and \(\alpha=0.15\times(U+\sqrt{gH})>0\). In the plots, we have scaled the horizontal axis by the wave speed \((U+\sqrt{gH})\) so that the solution is spatially invariant for all flow regimes. Note that for the smooth pulse the numerical solution matches the exact solution excellently well for \(\alpha=0\) and \(\alpha=0.15\times(U+\sqrt{gH})>0\). Although with \(\alpha=0.15\times(U+\sqrt{gH})>0\) the peak of the numerical slightly dissipated. For the non-smooth pulse, when \(\alpha=0\), the propagation speed of pulse is well approximated by the numerical solution. However, there are numerical oscillations generated by the propagating discontinuities. When \(\alpha=0.15\times(U+\sqrt{gH})>0\) the numerical solution is non-oscillatory, but the discontinuous edges of the solutions are smeared.
The evolution of the numerical solutions and the exact solutions, at all flow regimes are shown in Figure 3 for the smooth pulse and in Figure 4 for the non-smooth pulse. The pulses enter the domain through the inflow boundary at \(x=0\) and leave the domain through the out-flow at
Numerical experiments
Figure 2: The snapshots of the numerical and exact solutions with \(\Delta x=L\times 2^{-11}\) m at time \(t=3.02\) s for a sub-critical flow regime with smooth and non-smooth boundary data. For the smooth boundary data the numerical solution matches the exact solution well for \(\alpha=0\) and \(\alpha=0.15\times(U+\sqrt{gH})>0\). Note, however, with \(\alpha=0.15\times(U+\sqrt{gH})>0\) the peak of the numerical solution is slightly dissipated. For the non-smooth boundary data, when \(\alpha=0\), the propagation speed of pulse is well approximated by the numerical solution. However, there are numerical oscillations generated by the propagating discontinuities. When \(\alpha=0.15\times(U+\sqrt{gH})>0\) the numerical solution is non-oscillatory, but propagating the discontinuities are smoothed out.
\(L=(U+\sqrt{gH})\times 5\). Note that because of the re-scaling of the \(x\)-axis to \(x/(U+\sqrt{gH})\), the solutions are invariant for all three flow regimes.
Convergence test.Here, we verify the convergence properties of the numerical method. We will use the method of manufactured solution [10]. That
Figure 3: The evolution of the numerical solutions and the exact solutions, at all the three flow regimes with smooth boundary data, \(\Delta x=L\times 2^{-11}\) m and \(\alpha=0\). The solutions enter the domain through the in-flow boundary at \(x=0\) and leave the domain through the out-flow at \(x=L=(U+\sqrt{gH})\times 5\). Note that because of the re-scaling of the \(x\)-axis to \(x/(U+\sqrt{gH})\), the solutions are invariant for the all three flow regimes.
is, we force the system to have the exact smooth solution
\[h(x,t)=\cos(2\pi t)\sin(6\pi x),\qquad u(x,t)=\sin(2\pi t)\cos(4\pi x). \tag{33}\]
The initial conditions \(h(x,0)\)\(u(x,0)\) and the boundary data \(g_{1}(t)\) and \(g_{2}(t)\) are chosen to match the analytical solution (33). We compute the nu
Figure 4: The evolution of the numerical solutions and the exact solutions, at all three flow regimes with non-smooth boundary data, \(\Delta x=L\times 2^{-11}\) m and \(\alpha=0.15\times(U+\sqrt{gH})>0\). The discontinuous solutions enter the domain through the in-flow boundary at \(x=0\) and leave the domain through the out-flow at \(x=L=(U+\sqrt{gH})\times 5\). Note that because of the re-scaling of the \(x\)-axis to \(x/(U+\sqrt{gH})\), the solutions are invariant for all the three flow regimes.
merical solution on a sequence of increasing number of finite volume cells, \(N=64,128,256,512,1024,2048\). The \(L_{2}\)-error and convergence rates of the error are shown in Figure 5 and also presented in Table 3. We have performed numerical experiments with no dissipation \(\alpha=0\) and with numerical dissipation set on \(\alpha=0.05\). From Table 3 we see that the method is second order accurate \(O(\Delta x^{2})\) when \(\alpha=0\) and first order accurate \(O(\Delta x)\) when \(\alpha>0\). These are in agreement with the theory.
## 5 Conclusion
Well-posed boundary conditions are crucial for accurate numerical solutions of IBVPs. In this study we have analysed well-posed boundary conditions for the linear SWWE in 1D. The analysis is based on the energy method and prescribes the number, location and form of the boundary conditions so that the IBVP is well-posed. A summary of the result are shown in the Table 1, and covers all flow regimes. We formulate the boundary conditions such that they can readily implemented in a stable manner using the SBP-SAT method. We propose a finite volume method formulated in SBP framework and implement the boundary conditions weakly using SAT. Stable penalty parameters and prove of numerical stability derived via discrete energy estimates analogous to the continuous estimate. Numerical experiments are performed to verify the analysis. The error rates comply with the methods that we use. Our continuous and numerical analysis covers all flow regimes, and can be extended to the nonlinear problem. The next step in our study will extend the 1D theory and results to 2D, and implement our scheme in open source software [11, 9] for efficient and accurate simulations of the nonlinear shallow water equations.
AcknowledgementsThis research is conducted as part of doctoral study funded by Indonesian Endowment Fund for Education (LPDP).
## References
* [1] Mark H Carpenter, David Gottlieb, and Saul Abarbanel. "Time-stable boundary conditions for finite-difference schemes solving hyperbolic systems: methodology and application to high-order compact schemes". In: _Journal of Computational Physics_ 111.2 (1994), pp. 220-236 (cit. on pp. 2, 10).
\begin{table}
\end{table}
Table 3: The error and convergence of the error at final time \(t=0.1\) using manufactured solution for all flow regimes.
* [2] JA Cunge, FM Holly, and A Verwey. "Practical Aspects of Computational River Hydraulics, Pitman Adv". In: _Pub. Program_ (1980) (cit. on p. 2).
* [3] Sarmad Ghader and Jan Nordstrom. "Revisiting well-posed boundary conditions for the shallow water equations". In: _Dynamics of Atmospheres and Oceans_ 66 (2014), pp. 1-9. issn: 0377-0265. doi: [https://doi.org/10.1016/j.dynatmoc.2014.01.002](https://doi.org/10.1016/j.dynatmoc.2014.01.002). (Cit. on p. 2).
* [4] Bertil Gustafsson. _High order difference methods for time dependent PDE_. Vol. 38. Springer Science & Business Media, 2007 (cit. on pp. 2, 10).
* [5] Bertil Gustafsson, Heinz-Otto Kreiss, and Joseph Oliger. _Time dependent problems and difference methods_. Vol. 24. John Wiley & Sons, 1995 (cit. on pp. 2, 4).
* [6] H-O Kreiss and G Scherer. "Finite element and finite difference methods for hyperbolic partial differential equations". In: _Mathematical aspects of finite elements in partial differential equations_. Elsevier, 1974, pp. 195-212 (cit. on pp. 2, 10).
* [7] Tomas Lundquist and Jan Nordstrom. "The SBP-SAT technique for initial value problems". In: _Journal of Computational Physics_ 270 (Apr. 2014), pp. 88-104. doi: 10.1016/j.jcp.2014.03.048.
* [8] Khalid Mahmood, Vujica M Yevjevich, and William Albert Miller. _Unsteady flow in open channels_. Vol. 2. Water Resources Publications, 1975 (cit. on p. 2).
* [9] O Nielsen et al. "Hydrodynamic modelling of coastal inundation". In: _MODSIM 2005 International Congress on Modelling and Simulation_ (2005), pp. 518-523 (cit. on p. 19).
* [10] Patrick J. Roache. "Code Verification by the Method of Manufactured Solutions". In: _Journal of Fluids Engineering_ 124.1 (Nov. 12, 2001), pp. 4-10. issn: 0098-2202. doi: 10.1115/1.1436090. (Cit. on p. 17).
* [11] Stephen Roberts, Gareth Davies, and Ole Nielsen. _ANUGA Github Repository_. Version 3.1.9. June 2022. url: [https://github.com/anuga-community/anuga_core](https://github.com/anuga-community/anuga_core) (cit. on p. 19).
* [12] Andrew R. Winters and Gregor J. Gassner. "A comparison of two entropy stable discontinuous Galerkin spectral element approximations for the shallow water equations with non-constant topography". In: _Journal of Computational Physics_ 301 (2015), pp. 357-376. issn: 0021-9991. doi: [https://doi.org/10.1016/j.jcp.2015.08.034](https://doi.org/10.1016/j.jcp.2015.08.034). (Cit. on p. 2).
* [13] Christopher Zoppou and Stephen Roberts. "Explicit schemes for dam-break simulations". In: _Journal of Hydraulic Engineering_ 129.1 (2003), pp. 11-34 (cit. on p. 2).
## Author addresses
* [1]**Rudi Prihandoko**, Mathematical Science Institute, Australian National University, Australian Capital Territory 2600, Australia. mailto:[email protected] orcid:'0000-0001-6376-7952'
* [2]**Kenneth Duru**, Mathematical Science Institute, Australian National University, Australian Capital Territory 2600, Australia.
* [3]**Stephen Roberts**, Mathematical Science Institute, Australian National University, Australian Capital Territory 2600, Australia.
* [4]**Christopher Zoppou**, Mathematical Science Institute, Australian National University, Australian Capital Territory 2600, Australia. |
2309.10567 | Multimodal Modeling For Spoken Language Identification | Spoken language identification refers to the task of automatically predicting
the spoken language in a given utterance. Conventionally, it is modeled as a
speech-based language identification task. Prior techniques have been
constrained to a single modality; however in the case of video data there is a
wealth of other metadata that may be beneficial for this task. In this work, we
propose MuSeLI, a Multimodal Spoken Language Identification method, which
delves into the use of various metadata sources to enhance language
identification. Our study reveals that metadata such as video title,
description and geographic location provide substantial information to identify
the spoken language of the multimedia recording. We conduct experiments using
two diverse public datasets of YouTube videos, and obtain state-of-the-art
results on the language identification task. We additionally conduct an
ablation study that describes the distinct contribution of each modality for
language recognition. | Shikhar Bharadwaj, Min Ma, Shikhar Vashishth, Ankur Bapna, Sriram Ganapathy, Vera Axelrod, Siddharth Dalmia, Wei Han, Yu Zhang, Daan van Esch, Sandy Ritchie, Partha Talukdar, Jason Riesa | 2023-09-19T12:21:39Z | http://arxiv.org/abs/2309.10567v1 | # Multimodal Modeling for Spoken Language Identification
###### Abstract
Spoken language identification refers to the task of automatically predicting the spoken language in a given utterance. Conventionally, it is modeled as a speech-based language identification task. Prior techniques have been constrained to a single modality; however in the case of video data there is a wealth of other metadata that may be beneficial for this task. In this work, we propose MuSeLI, a **M**ultimodal **S**poken **L**anguage **I**e**e**tification method, which delves into the use of various metadata sources to enhance language identification. Our study reveals that metadata such as video title, description and geographic location provide substantial information to identify the spoken language of the multimedia recording. We conduct experiments using two diverse public datasets of YouTube videos, and obtain state-of-the-art results on the language identification task. We additionally conduct an ablation study that describes the distinct contribution of each modality for language recognition.
Shikhar Bharadwaj\({}^{*}\), Min Ma\({}^{*}\), Shikhar Vashishth\({}^{*}\), Ankur Bapna, Sriram Ganapathy, Vera Axelrod, Siddharth Dalmia, Wei Han, Yu Zhang, Daan van Esch, Sandy Ritchie, Partha Talukdar\({}^{\dagger}\), Jason Riesa\({}^{\dagger}\)+Google
{shikharop, minm, shikharv, partha, riesa}@google.com multimodal modeling, language identification, low-resource languages
Footnote †: Equal Advising Contributions.
Footnote †: thanks: Equal Advising Contributions.
## 1 Introduction
Spoken language identification (LangID) is the task of automatically recognizing the language of a given multimedia recording. This task serves as a foundational step in the initial stages of multimodal information extraction and analysis. Precise LangID can aid content recognition, language modeling, and other downstream tasks such as automatic speech recognition and speech intent understanding [1, 2].
For multimedia recordings in the wild, such as videos from YouTube, LangID is more challenging due to the presence of multiple speakers, diverse accents and dialects, background non-speech content and noise [3]. One of the earliest attempts to evaluate this setting is the 2017 NIST language recognition evaluation (LRE) [4], where the audio from video [5] was consistently observed to be more challenging [3]. Video annotation for speech technologies (VAST) [5] is another common corpus for video LangID.
Most prior efforts in this domain have focused on extracting the spoken content of videos followed by modeling of language classes inherent in the speech data. PPRLM [6] created an avenue to generate textual information for spoken langID by using multiple phoneme recognition systems to transcribe unlabeled speech. While this research direction remained popular in the past decades, its dependencies on separately trained phoneme recognition systems pose a challenge for spoken langID of low-resource languages, which suffer from limited availability of supervised speech-phoneme data.
Recently, there has been a growing interest in exploring joint modeling techniques for both speech and text data, aiming to construct a shared encoding space for representations. Unified speech-text models, such as mSLAM [7] and Maestro [8] have enabled derivation of speech-text representations that improve downstream tasks such as automatic speech recognition (ASR). Text injection for enhancing speech representation learning has also been explored for low-resource ASR tasks [9]. Other related efforts include text-induced losses for speech model pre-training by Tan et al. [10] as well as a student-teacher framework [11] for text-based supervision in speech representation learning. These endeavors underscore the advantages of having a common embedding space for both speech and text.
In this paper, we present a multimodal framework designed to enhance spoken language recognition by harnessing a wide range of metadata associated with multimedia recordings. We term it **M**ultimodal **S**poken **L**anguage **I**e**tentification (**MuSeLI**). In addition to the audio data, multimedia recordings include supplementary metadata such as title, description, geographic location of uploaded videos, etc. These metadata can provide important context for the content embedded in the video recording, and can be especially useful for distinguishing acoustically similar languages. We show that the effective use of such information can improve the language recognition performance significantly. Our contributions include:
* We propose a multimodal framework that facilitates the incorporation of diverse metadata associated with a multimedia recording for spoken LangID. It does not depend on separately trained text LangID models, nor on text LangID labels.
* To the best of our knowledge, this study is the first attempt to demonstrate that, despite being noisy, video title, description, and geographic location can improve spoken LangID performance.
* Our proposed method achieves state-of-the-art performance on public benchmarks. It is also shown to be effective in distinguishing acoustically similar and low-resource languages.
## 2 Related Works
**Text LangID** - Previous works used n-gram based techniques [12, 13] for this task. Recently, Caswell et al. [14] have explored text LangID in the context of web-crawl corpora. The authors trained LangID models for classifying 1,629 languages and explored a variety of methods to mitigate classification errors. When it comes to YouTube (YT) video title and description, there is no gold text langID available, and langID of text in low-resourced languages remains challenging. In this work, we use _unlabeled_ text in input, and encourage MuSeLI to automatically learn how to address the mismatch between language of text and language of audio.
**Speech LangID** - With the renaissance of deep learning, X-vector [15] has became status duo for spoken langID. Time-delay neural networks [15], residual networks [16], squeeze and excitation models [17], and attentive pooling with conformer models [18] have been investigated for more efficient neural networks. The LASR [19] and MASR [20] methods add additional objectives to pre-training for learning language specific representations.
**Multimodal LangID** - Multimodal models such as mSLAM [7] and Maestro [8] were primarily investigated for speech recognition task, and they do not consider textual information of videos. A limited number of methods have explored multimodal modeling for language recognition, but for music content analysis [21, 22].
## 3 Method
In this paper, we propose to learn multimodal representation of speech and text inputs with a unified multimodal framework. A comprehensive overview of our proposed multimodal language recognition system, MuSeLI, is shown in Figure 1. MuSeLI is based on mSLAM [7], which processes speech and text by modality-specific encoders, followed by a multimodal encoder. mSLAM is pre-trained on unsupervised speech and text data using contrastive and masked language modeling objectives [23]. It also utilizes paired speech-text data through CTC loss [24] to learn speech-text alignment. In this work, we enhance an existing pre-trained mSLAM model by incorporating LASR [19] pre-training. LASR utilizes language-related metadata to enhance the discriminative capabilities of a speech model with respect to different languages.
**Multimodal Embeddings** - In spoken language recognition, a given multimedia recording \(\mathbf{v}\) comprises of a raw audio waveform \(\mathbf{\mathcal{X}}\) and associated metadata information \(\{\phi_{1},\phi_{2},...\}\), where \(\phi_{j}\) corresponds to distinct metadata attributes pertaining to \(\mathbf{v}\). The input audio data \(\mathbf{\mathcal{X}}\) undergoes processing through the speech encoder, which consists of multiple CNN layers followed by a stack of conformer layers [25], to produce latent audio representation \(\mathbf{L}\). All metadata information is concatenated to produce a combined text sequence
\[\mathbf{\mathcal{T}}=[\phi_{1}\texttt{[SEP]}\,\phi_{2}\texttt{[SEP]}...], \tag{1}\]
where [SEP] is a separator tag that allows model to discern between different metadata types. In this work, we utilize three types of metadata: (1) _title_, which is a single sentence summary of the entire recording, (2) _description_, which provides a detailed explanation of the content, and (3) _upload location_, which indicates the region and country the recording was uploaded from. While these signals may exhibit noise and lack a direct connection to the identity of the spoken language, we hypothesize that they may lead to enhanced performance on the task. The metadata text sequence \(\mathbf{\mathcal{T}}\) is input to the text encoder, which consists of a token embedding layer to generate the latent representation \(\mathbf{T}\) for the metadata. Finally, the concatenated speech and metadata embeddings \([\mathbf{L};\mathbf{T}]\) are passed to the multimodal encoder to produce a unified representation \(\mathbf{H}\) for the entire multimedia recording.
**Weighted Layer Representation** - The multimodal encoder consists of a series of conformer layers. Hsu et al. [26] demonstrated that the representations generated by the final layer may not be optimal for all tasks. Hence, we take a weighted combination of representations from all layers where weights are kept learnable and are trained using backpropagation, i.e.,
\[\mathbf{H}=\sum_{k}\alpha_{k}\mathbf{H}_{k}, \tag{2}\]
where \(\mathbf{H}_{k}\) denotes the representation from \(k^{\text{th}}\) conformer layer of the multimodal encoder and \(\alpha_{k}\) is a learnable parameter corresponding to each layer. The weighted representation provides flexibility to the model for weighing different layers of the encoder stack and eliminates the need to carefully choose the layer.
**Attentive Pooling** - To facilitate the merging of audio and text information, we employ an attention-based pooling, where the pooling is performed on the sequence dimension. This layer assigns distinct weights to the hidden sequences from the audio and text components, thereby capturing the significance of each modality effectively. We use a learnable query vector \(\mathbf{Q}\), with \(\mathbf{H}\) as the key and value sequences respectively in the multi-head attention [27]. The final pooled vector \(\mathbf{p}\) is computed as,
\[\mathbf{p}=\mathrm{MultiHead}(\mathbf{Q},\mathbf{H},\mathbf{H}). \tag{3}\]
Figure 1: Overview of MuSeLI, a framework to encode both speech and text modalities, allows to leverage different pre-trained models to initialize speech encoder, text encoder and shared multimodal encoder. Pooling and Softmax layers are added during fine-tuning and randomly initialized. Please see 3 for more details.
Finally, \(\mathbf{p}\) is passed through a soft-max layer for generating class probabilities. We optimize the model on cross-entropy loss over the language classes.
## 4 Experimental Setup
**Datasets** We experiment on the follwoing public datasets derived from YouTube (YT):
* **Dhwani-YT1** We experiment on the publicly available YT portion of the Dhwani dataset [28]. Dhwani-YT contains \(4\)k hours of audio from \(1.9\)k YT channels. This dataset spans over \(22\) south Asian languages, which covers \(4\) language families and \(14\) writing scripts. Footnote 1: [https://github.com/AI4Bharat/IndicWav2Vec](https://github.com/AI4Bharat/IndicWav2Vec) last accessed on 14th September, 2023.
* **VoxLingua107**[17] is a language identification dataset composed of \(6.6\)k hours of audio from approximately \(64\)k videos. The training dataset spans over \(107\) languages, while the evaluation set consists of \(1\),\(609\) samples from \(33\) languages.
**Baselines** In our experiments, we use the \(600\)M mSLAM model, which has undergone pre-training with large volume of raw speech and text data, in addition to paired speech-text datasets [7]. We introduce a modified version of mSLAM, referred to as mSLAM-YT, which is pre-trained using YouTube-based datasets employed in Google-USM [29]. Additionally, we create another variant of mSLAM by utilizing LASR pre-training on publicly available datasets [19], which leverages language metadata to make speech models language-aware through a contrastive objective.
**Evaluation Metrics:** In order to assess the effectiveness of different LangID models, we conduct comparisons based on accuracy, macro-F1 score, precision, and False Positive Rates (FPR). As shown in [14], FPR is a valuable metric for evaluating the efficacy of a LangID system, specifically for low-resource languages.
**Implementation Details** We adopt most of the hyper-parameters from previous works [19, 20, 30]. We use a batch size of \(128\) and trim the text sequence to \(400\) tokens for the multimodal model. The speech sequence is trimmed to \(1.6\)k frames. On the VoxLingua107 and Dhwani dataset we fine-tune for \(26\)k and \(30\)k steps respectively. We use the Adam optimizer with a linear rate schedule.
## 5 Results
### Performance Comparisons
As shown in Table 1, MuSeLI outperforms Speech-only LangID on both datasets, in all evaluation metrics, regardless of the choice of pre-trained models. Specifically, by leveraging multimodal signals, MuSeLI improved accuracy from \(93.0\)% to \(96.5\)% on Voxlingua, and from \(66.1\)% to \(72.7\)% on Dhwani-YT. MuSeLI achieves state-of-the-art performances (cf. Table 2), even without including VoxLingua training data in its pre-training (while [32] and [33] did). The previous best performance in the spoken LangID task was achieved by AmberNet [34], a model suited for practical deployment due to its small size. While in this paper, we propose MuSeLI to model both speech and text modalities in a unified framework, and its larger model capacity would be a better fit to learn a generally useful representation for multiple speech tasks.
ilarity and geographical proximity of the two language pairs, title and description again played a significant role: Urdu uses Perso-Arabic script while Hindi uses Devanagari. Further, grammatical differences may have helped to discriminate Spanish from Catalan.
### Effect of different Metadata and Pooling on Performance
From the various ablations in Table 3, we can observe the impact each metadata has on the LangID task. The upload location is a prominent indicator of the language of the multimedia recording. However, only adding the title and description can also boost LangID accuracy by a fair margin. Interestingly, we note that using a metadata only model (containing title, description and upload location) without any speech signal does performs competitively on the Dhwani-YT dataset compared to the baseline Speech-only LangID system.
Results in Table 3 also indicate that attentive pooling (Equation 3) is better than mean pooling in aggregating information over multiple modalities, since it learns to attend to the indicative parts.
### Estimating Importance of Different Encoder Layers
The attentive pooling outlined in Equation 3, can be applied over outputs from any layer (\(\mathbf{H}_{k}\)) of the conformer stack. We fine-tune our best performing mSLAM variant up to the \(k\)th layer and plot the results in Figure 3. We observe that the intermediate layers are better than using the last layer for finetuning on the LangID task. However, we also note that fine-tuning and evaluating all layers of the model is expensive. On the other hand, our proposed weighted representation scheme (Equation 2) performs comparatively similar to the best layer representation, while being computationally efficient. Our best layer selection led to highest accuracy of 97.6% on Voxlingua107 dataset.
### Robustness to Utterance Duration
To investigate the sensitivity of LangID models to utterance duration, we calculated the accuracy over different utterance durations on the Voxlingua107 corpus. As illustrated in Figure 4, both the speech-only and MuSeLI models generally perform better on longer input utterances. More importantly, MuSeLI is robust to all the utterance duration conditions. In particular, the largest gain for the use of meta data is seen on the most challenging condition of short duration utterances (\(1\)-\(5\) seconds). This is expected since meta-information is consistent for the utterances derived from the same video, regardless of the utterance duration. This analysis highlights that multimodal signals are substantially important when audio signal information is sparse.
## 6 Conclusion
We introduce a general multimodal modeling framework, and explore its effectiveness for spoken langID of videos by experimenting on various unlabeled textual metadata information besides speech. Our proposed method MuSeLI shows substantial improvements over the speech-only baselines across multiple datasets and different baseline models (\(10\)% relative improvement on Dhwani-YT and \(4\)% on Voxlingua107). We conduct comparative studies to show how textual meta-information helps to disentangle similar and low-resourced languages. We also highlight the benefits of utilizing the metadata in short duration audio recordings.
Figure 4: Including meta-information improves accuracy of spoken language identification, across all utterance duration ranges. Please see Section 5.4 for details.
Figure 3: LangID performance on using different layer representations for fine-tuning. Please see Section 5.3 for details.
\begin{table}
\begin{tabular}{l c c} \hline \hline
**LangID Variant** & **Dhwani** & **VoxLingua** \\ \hline Metadata-only & 68.3 & 77.0 \\ Speech-only & 66.1 & 93.0 \\ + Title and Description & 68.3 & 93.3 \\ + Upload Location & & \\ w/ Mean Pooling & 72.2 & 96.1 \\ w/ Attentive Pooling & 72.7 & 96.5 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation study with language recognition accuracy (%) for MuSeLI with different metadata and pooling types. Please see Section 5.2 for details.
Figure 2: Languages with the least amount of fine-tuning data show most improvement on Dhwani-YT. Please see Section 5.1 for details. |
2309.15190 | On the Use of the Mellin Transform to Generate Families of Power,
Hyperpower, Lambert and Dirichlet Type Series and Some Consequences | This note is concerned with series of the forms $\sum f(a^n)$ and $\sum
f(n^{-a})$ where f(a) possesses a Mellin transform and $a > 1$ or $a<0$
respectively. Integral representations are derived and used to transform these
series in several ways yielding a selection of interesting integral evaluations
involving Riemann's function $\zeta(s)$, limits and series representations
containing hyperpowers. A number of examples of such sums are provided, each of
which is investigated for possible new structure. In one case, we obtain a
generalization of Riemann's classic relationship among the Zeta, Gamma and
Jacobi Theta functions. | Larry Glasser, Michael Milgram | 2023-09-26T18:49:45Z | http://arxiv.org/abs/2309.15190v2 | On the Use of the Mellin Transform to Generate Families of Power, Hyperpower, Lambert and Dirichlet Type Series and Some Consequences
###### Abstract
This note is concerned with series of the forms \(\sum f(a^{n})\) and \(\sum f(n^{-a})\) where f(a) possesses a Mellin transform and \(a>1\) or \(a<0\) respectively. Integral representations are derived and used to transform these series in several ways yielding a selection of interesting integral evaluations involving Riemann's function \(\zeta(s)\), limits and series representations containing hyperpowers. A number of examples of such sums are provided, each of which is investigated for possible new structure. In one case, we obtain a generalization of Riemann's classic relationship among the Zeta, Gamma and Jacobi Theta functions.
MSC: 44A05; 44A20; 33B99; 40-08
## 1 Introduction
Although series of the form
\[S=\sum_{n=0}^{\infty}f(a^{n}) \tag{1.1}\]
have been extensively studied as \(q-\)extensions of classical functions, specific examples only appear sporadically in mathematical tables. Some examples appear in the extensive tables of Prudnikov et.al. [1, sections 5.4.11 -16], as well as Hansen [2] sections 11-13 and 17.9. General relationships between representative sums of the form studied here and similar products can also be found in [3, Section 17].
In this note we assume that \(f(x)\) possesses a Mellin transform
\[F(s)=\int_{0}^{\infty}x^{s-1}f(x)dx,\qquad\Re(s)>0 \tag{1.2}\]
and employ this property to obtain interesting identities, one of which can be connected to \(q\)-identities. Throughout, \(j\,,n,k,N\) are non-negative integers, \(b\in\Re\) and \(b>0\), all other variables are complex unless specified otherwise, \(\gamma\) is the Euler-Mascheroni constant and \(\psi_{q}(x)\) is the \(q\)-extension of the digamma function. The symbol \(:=\) refers to symbolic replacement and we represent a frequently appearing sum in terms of a Jacobi-type analogue theta function
\[\omega(b,s)\equiv\sum_{j=1}^{\infty}\e^{-b\,j^{s}}\,. \tag{1.3}\]
Basically, we utilize Lebesgue's dominated convergence theorem
\[S=\sum_{n=0}^{\infty}\int_{c-i\infty}^{c+i\infty}\frac{ds}{2\pi i}a^{-ns}F(s)= \int_{c-i\infty}^{c+i\infty}\frac{ds}{2\pi i}\sum_{b=0}^{\infty}a^{-ns}F(s) \tag{1.4}\]
for some \(c>0\) and \(a>1\), ensuring the convergence of the series. Therefore, we have
**Theorem 1**.: _If \(f\) possesses a Mellin transform \(F\) and \(Re\left(a\right)>1\), then_
\[\sum_{n=0}^{\infty}f(a^{n})=\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\frac {F(s)}{1-a^{-s}}ds. \tag{1.5}\]
Pursuing this principle of treating the free Mellin transform variable \(a\) as the independent variable in a power series, we also consider its utilization in the form of a Dirichlet series; specifically, if
\[f(a)=\frac{1}{2\,\pi i}\int_{c-i\infty}^{c+i\infty}a^{-v}\,F(v)\,dv \tag{1.6}\]
where \(f(a)\) and \(F(v)\) are related by an inverse Mellin transform, then,
**Theorem 2**.: _If \(\Re(s)<0\) and the sum converges, we have_
\[\sum_{j=1}^{\infty}f\big{(}j^{-s}\big{)}=\,\frac{1}{2\,\pi i}\int_{c-i\infty} ^{c+i\infty}\zeta(-s\,v)\,F(v)\,dv\,. \tag{1.7}\]
**Remark:** In another context [4], instead of summing or exponentiating, the variable \(a\) was treated as a complex variable \(a\equiv r\,\exp i\theta\), and the Mellin transforms were studied as a function of \(\theta\).
In the following Section 2, we present some examples based on Theorem 1. In the subsequent Section 3, we present some examples based on Theorem 2. From the first of these Sections, a few of the more interesting identities are listed below:
* see (2.13) \[\sum_{n=0}^{\infty}\frac{\mathrm{e}^{-a^{n}}}{\sinh(a^{n})}= -\frac{\ln(\pi)}{\ln(a)}-\frac{1}{2}+\frac{a}{a-1}+\frac{\gamma}{ \ln(a)}+\sum_{k=1}^{\infty}\,\big{(}a^{k}-\coth\big{(}a^{-k}\big{)}\big{)}\] \[+\frac{4}{\ln(a)}\,\Re\!\left(\sum_{k=1}^{\infty}\,\zeta\!\left( \frac{2\,i\,\pi\,k}{\ln(a)}\right)\Gamma\!\left(\frac{2\,i\,\pi\,k}{\ln(a)} \right)2^{-\frac{2\,i\,\pi\,k}{\ln(a)}}\right)\,;\]
* see (2.26) \[\sum_{k=1}^{\infty}\,\frac{1}{1-a^{2\,k+1}}=\frac{1}{a-1}-\frac{\psi_{1/a^{2}} (1)}{2\,\ln(a)}+\frac{\psi_{1/a}(1)}{\ln(a)}+\frac{\ln\!\left(\frac{a-1}{a+1} \right)}{2\,\ln(a)}\,;\]
* see (2.47) \[\sum_{j=1}^{\infty}\Re\!\left(\Gamma\!\left(\frac{2\,i\,\pi\,j}{\ln(b)} \right)\right)=\frac{\ln(b)}{2}\sum_{j=0}^{\infty}\!\left(\mathrm{e}^{-1/b^{j }}-1+\mathrm{e}^{-b^{j}}\right)\!+\!\left(\frac{1}{4}-\frac{1}{2\,e}\right)\ln (b)\!+\!\frac{\gamma}{2}\,.\] From the next Section 3, interesting identities include
* see (3.35) \[\zeta(-b\,s)\,\Gamma(b)=\int_{0}^{\infty}x^{b-1}\!\sum_{j=1}^{ \infty}\,\mathrm{e}^{-x\,j^{-s}}dx,\quad s<0,\] reducing to Riemann's classic identities [5, Eqs. (2.4.1) and (2.6.2)] when \(s=-1\) and \(s=-2\) respectively;
* see (3.43) \[\lim_{n\to\infty}\!\left(\sum_{j=1}^{\infty}\,\frac{1}{\mathrm{e}^{j^{1/n}}-1} -\zeta(n)\,\Gamma(n+1)\right)=\frac{1}{2-2\,\mathrm{e}}\,.\]
## 2 Examples based on Theorem 1
### Example 2.1
From [6, Eq. 6.6(2)] with \(f(x)=1/\sinh(ax),\ F(s)=2\,a^{-s}\,(1-2^{-s})\,\Gamma(s)\,\zeta(s)\),
we have the Mellin transform
\[\int_{0}^{\infty}\frac{x^{s-1}}{\sinh(a\,x)}dx=2\,a^{-s}\,\big{(}1-2^{-s}\big{)} \,\Gamma(s)\,\zeta(s) \tag{2.1}\]
yielding, after differentiating with respect to the variable \(a\),
\[\int_{0}^{\infty}\frac{x^{s}\,\cosh(a\,x)}{\cosh(2\,a\,x)-1}dx=a^{-s-1}\,\big{(} 1-2^{-s}\big{)}\,\Gamma(s+1)\,\zeta(s)\,,\quad Re\;s>1. \tag{2.2}\]
According to (1.5), after summation, we find the inverse Mellin transform with \(c>1\)
\[\frac{1}{2\,\pi\,i}\int_{c-i\,\infty}^{c+i\,\infty}\frac{b^{-s-1}\,(1-2^{-s}) \,\Gamma\,(s+1)\,\zeta\,(s)}{1-a^{-s}}ds=\sum_{j=0}^{\infty}\,\frac{a^{j}\, \cosh\!\big{(}a^{j}\,b\big{)}}{\cosh(2\,a^{j}\,b)-1} \tag{2.3}\]
giving the (possibly new) series
\[\sum_{j=0}^{\infty}2^{j}\frac{\cosh(2^{j}\,b)}{\sinh^{2}(2^{j}\,b)}=\frac{2}{ b^{2}}+2\!\!\sum_{k=1}^{\infty}\frac{b^{2\,k-2}\,\zeta(1-2\,k)}{\Gamma(2\,k-1)}= \frac{1}{2}\,\frac{1}{\sinh^{2}(b/2)}, \tag{2.4}\]
by evaluating the residues (see Appendix A) as the contour is moved to negative infinity when \(a=2\).
Cases like this, where the quantity \(1-a^{-s}\) cancels from the denominator of (4) are scarce. However since the factor \(1-a^{-s}=1-e^{-s\ln a}\) vanishes for complex \(s\), when the contour is moved into the left-half \(s-\)plane, further interesting residue sums over the imaginary poles \(2\pi ki/\ln a\), \(k=0,\pm 1,\pm 2\cdots\) can be found, especially when \(F(s)\) is meromorphic as will now be demonstrated.
### Example 2.2
Consider next [6, Eq. 6.3(7)]
\[f(x)=\frac{1}{e^{x}-1},\quad F(s)=\Gamma(s)\zeta(s)\,\quad\Re(s)>1. \tag{2.5}\]
Thus,
\[\sum_{n=0}^{\infty}\frac{1}{e^{a^{n}}-1}=\frac{1}{2\pi i}\int_{c-i\infty}^{c+ i\infty}\frac{\Gamma(s)\zeta(s)}{1-a^{-s}}ds\,,\quad c>1\,. \tag{2.6}\]
The integrand has a double pole at \(s=0\), simple poles at s=-k, \(k\) odd and \(s=2k\pi i/\ln a\), \(k\in\mathcal{N}\). Closing the contour to the left and summing the
appropriate residues yields
\[\sum_{n=0}^{\infty}\frac{1}{e^{a^{n}}-1}=\frac{\gamma-\ln(2\pi\sqrt{a})}{\ln(a^{2} )}+\frac{a}{a-1}\]
\[+\sum_{k=0}^{\infty}\frac{\zeta(-2k-1)}{(2k+1)!(a^{2k+1}-1)}+\frac{2}{\ln a}{ \cal R}\sum_{k=1}^{\infty}\Gamma\left(\frac{2\pi ik}{\ln a}\right)\zeta\left( \frac{2\pi ik}{\ln a}\right). \tag{2.7}\]
However, since
\[\sum_{k=0}^{\infty}\frac{\zeta(-2k-1)}{(2k+1)!(a^{2k+1}-1)}=-\frac{1}{2}\sum_{k =1}^{\infty}\left[\coth\left(a^{-k}/2\right)-2a^{k}\right], \tag{2.8}\]
(see Appendix B), then
\[{\cal R}\sum_{k=1}^{\infty}\Gamma\left(\frac{2\pi ik}{\ln a}\right) \zeta\left(\frac{2\pi ik}{\ln a}\right)=-\frac{a\ln a}{2(a-1)}- \frac{\gamma-\ln(2\pi\sqrt{a})}{4}\] \[+\frac{\ln a}{2}\sum_{k=0}^{\infty}\frac{1}{e^{a^{k}}-1}+\frac{ \ln a}{4}\sum_{k=1}^{\infty}\,\left(\coth\left(a^{-k}/2\right)-2a^{k}\right)\,. \tag{2.9}\]
Similarly [6, Eq. 6.3(6)], which, by analogy to (2.5), provides the Mellin transform pair
\[f(x)=\frac{1}{e^{x}+1},\quad F(s)=\left(1-2^{(1-s)}\right)\Gamma(s)\,\zeta(s) \,,\quad\Re(s)>0 \tag{2.10}\]
leads to
\[\sum_{n=0}^{\infty}\frac{1}{e^{a^{n}}+1}= -\frac{1}{2\,\ln(a)}\ln\!\left(\frac{2}{\pi\,\sqrt{a}}\right)- \frac{1}{2}\sum_{k=1}^{\infty}\,\left(\coth\!\left(a^{-k}/2\right)-2\,\coth\! \left(a^{-k}\right)\right)\] \[-\frac{\gamma}{2\,\ln(a)}-\frac{4}{\ln(a)}\,\Re\sum_{k=1}^{ \infty}\,\left(2^{-\frac{2\,i\,\pi\,k}{\ln(a)}}-\frac{1}{2}\right)\Gamma \!\left(\frac{2\,i\,\pi\,k}{\ln\left(a\right)}\right)\zeta\!\left(\frac{2\,i \,\pi\,k}{\ln\left(a\right)}\right)\,. \tag{2.11}\]
By adding (2.11) and (2.2) we obtain
\[\sum_{n=0}^{\infty}\,\frac{1}{\sinh(a^{n})}= -\!\sum_{k=1}^{\infty}\!\left(\coth\!\left(a^{-k}/2\right)-\coth \!\left(a^{-k}\right)-a^{k}\right)-\frac{a}{1-a}-\frac{\ln(2)}{\ln(a)}\] \[+\frac{4}{\ln(a)}\,\Re\sum_{k=1}^{\infty}\!\left(1-2^{-\frac{2\,i \,\pi\,k}{\ln(a)}}\right)\Gamma\!\left(\frac{2\,i\,\pi\,k}{\ln\left(a\right)} \right)\zeta\!\left(\frac{2\,i\,\pi\,k}{\ln\left(a\right)}\right)\,, \tag{2.12}\]
and by subtracting we find
\[\sum_{n=0}^{\infty}\,\frac{\mathrm{e}^{-a^{n}}}{\sinh(a^{n})}= -\frac{\ln(\pi)}{\ln(a)}-\frac{1}{2}+\frac{a}{a-1}+\frac{\gamma}{ \ln(a)}+\sum_{k=1}^{\infty}\,\big{(}a^{k}-\coth\big{(}a^{-k}\big{)}\big{)}\] \[+\frac{4}{\ln(a)}\,\Re\sum_{k=1}^{\infty}\,\zeta\bigg{(}\frac{2\,i \,\pi\,k}{\ln\,(a)}\bigg{)}\,\Gamma\bigg{(}\frac{2\,i\,\pi\,k}{\ln\,(a)}\bigg{)} \,2^{-\frac{2\,i\,\pi\,k}{\ln(a)}}\,. \tag{2.13}\]
Note that in taking the limit of (2.12) as \(a\to 2\), the first series on the right-hand side is telescoping while the second series vanishes since \(1-2^{-2\,i\,\pi\,k/\ln(2)}=0\), yielding
\[\sum_{n=0}^{\infty}\!\frac{1}{\sinh(2^{n})}=\coth\!\left(\frac{1}{2}\right)-1\,, \tag{2.14}\]
an identity that could also be obtained by evaluating the listed identity [2, Eq. 25.1.1]
\[\sum_{n=0}^{N}\,\frac{1}{\sin(2^{n}\,x)}=\cot\!\left(\frac{x}{2}\right)-\cot \!\left(2^{N}\,x\right) \tag{2.15}\]
after setting \(x=i\) with \(N\to\infty\). See also [7, Eq. 1.121.2] and [8]. It is notable that the left-hand side of (2.9) is expressible in elementary terms when \(a=2\).
### Example 2.3
In a more elementary vein, let us take
\[f(a)=(a^{2}+1)^{-1},\ \ F(s)=(\pi/2)/\sin(\pi s/2),a>1,\ \ \ 0<\Re(s)<2, \tag{2.16}\]
yielding the identity
\[\sum_{k=0}^{\infty}\,\frac{1}{a^{2k}+1}=\,-\frac{i}{4}\int_{1-i\,\infty}^{1+i \,\infty}\frac{1}{\sin\left(\pi\,s/2\right)(1-a^{-s})}ds\,. \tag{2.17}\]
Except for the case \(s=0\), the residues from the zeroes of the denominator term \(1/(1-a^{-s})\) in the integrand of (2.17) cancel, and closing the contour by transiting the poles \(s=-2k,\ \ k=1,2,3\cdots\) and the double pole \(s=0\), we obtain the known [9, Eq. (2.1a)] Lambert series identity (originally attributed to Ramanujan)
\[\sum_{k=1}^{\infty}\,\frac{1}{a^{2\,k}+1}=\sum_{k=1}^{\infty}\frac{\left(-1 \right)^{1+k}}{a^{2\,k}-1},\ \ \ a>1 \tag{2.18}\]
which, by letting \(a=\exp(x)\), is equivalent to
\[\sum_{k=1}^{\infty}(-1)^{k+1}\left(1/\tanh(kx)-1\right)=\sum_{k=1}^{\infty} \left(1-\tanh(kx)\right)\ \ \ (x>0) \tag{2.19}\]
because
\[\sum_{k=1}^{\infty}\,\left(\frac{\mathrm{e}^{k\,x}}{2\,\cosh(k\,x)}-1\right)=\sum _{k=1}^{\infty}\,(-1)^{1+k}\left(1-\frac{\mathrm{e}^{k\,x}}{2\,\sinh(k\,x)}\right) \tag{2.20}\]
using (2.18). The identity (2.18) provides an interesting connection to the \(q-\)digamma function, by considering the odd and even terms of each sum independently:
\[\sum_{k=1}^{\infty}\,\frac{1}{1-a^{2\,k+1}}=\,-\sum_{k=1}^{\infty}\,\frac{1}{1 +a^{k}}+\sum_{k=1}^{\infty}\,\frac{1}{1-a^{2\,k}}-\frac{1}{1-a} \tag{2.21}\]
and
\[\sum_{k=1}^{\infty}\,\frac{1}{1-a^{2\,k+1}}=\sum_{k=1}^{\infty}\,\frac{1}{1-a^ {k}}-\sum_{k=1}^{\infty}\,\frac{1}{1-a^{2\,k}}-\frac{1}{1-a}\,. \tag{2.22}\]
So, by equating the right-hand sides of each, we obtain
\[\sum_{k=1}^{\infty}\,\frac{1}{1+a^{k}}=\frac{1}{\ln(a)}\left(\ln(1+1/a)-\psi_ {1/a}(1)+\psi_{1/a^{2}}(1)\right) \tag{2.23}\]
employing the identity [10, Eq. (4)]
\[\sum_{k=1}^{\infty}\,\frac{1}{1-a^{k}}=\frac{\psi_{1/a}(1)+\ln(a-1)}{\ln(a)}- 1,\qquad a>1, \tag{2.24}\]
where the \(q-\)digamma function is defined by [10, Eq. (2)]
\[\psi_{q}(z)=-\ln(1-q)+\ln(q)\sum_{k=0}^{\infty}\,\frac{q^{k+z}}{1-q^{k+z}}\,, \qquad|q|<1\,. \tag{2.25}\]
Either of (2.21) or (2.22) further yields
\[\sum_{k=1}^{\infty}\,\frac{1}{1-a^{2\,k+1}}=\frac{1}{a-1}-\frac{\psi_{1/a^{2 }}(1)}{2\,\ln(a)}+\frac{\psi_{1/a}(1)}{\ln(a)}+\frac{\ln\!\left(\frac{a-1}{a +1}\right)}{2\,\ln(a)}\,, \tag{2.26}\]
the alternating version of which is the well-known identity [9, Eq. (5.1)]
\[4\!\!\sum_{k=0}^{\infty}\,\frac{\left(-1\right)^{k}\,x^{2\,k+1}}{1-x^{2\,k+1 }}=\vartheta_{3}(0,x)^{2}-1\qquad 0<x<1 \tag{2.27}\]
where \(\vartheta_{3}(0,x)\) is the Jacobi theta function. Fundamentally, (2.23) reduces to the elementary identity
\[\sum_{k=1}^{\infty}\,\frac{1}{1+a^{k}}=\,-\!\!\sum_{k=1}^{\infty}\,\frac{1}{ 1-a^{k}}+2\!\!\sum_{k=1}^{\infty}\,\frac{1}{1-a^{2\,k}}\,. \tag{2.28}\]
### Example 2.4
We start by noting the identity
\[(a^{m}+e^{-t})^{-1}-(a^{m}+e^{t})^{-1}=\frac{2\sinh t}{(a^{2m}+2a^{m}\cosh t+1)}, \tag{2.29}\]
and, with (1.5) in mind, utilize
\[f(a)=1/(a+x),\ \ \ \ \ F(s)=\pi\,a^{s-1}/\sin(\pi\,s) \tag{2.30}\]
to obtain
\[\sum_{n=0}^{\infty}\,\frac{1}{a^{n}+{\rm e}^{t}} =\frac{\left(2\,t+\ln(a)\right){\rm e}^{-t}}{2\,\ln(a)}-\sum_{k=1} ^{\infty}\,\frac{\left(-1\right)^{k}\,{\rm e}^{-(k+1)t}}{a^{k}-1}\] \[+\frac{2\,\pi}{\ln(a)}\Im\sum_{k=1}^{\infty}\,\exp\left(\frac{ \left(2\,i\,\pi\,k-\ln\left(a\right)\right)t}{\ln\left(a\right)}\right)\,{ \rm csch}\!\left(\frac{2\,k\,\pi^{2}}{\ln\left(a\right)}\right) \tag{2.31}\]
by evaluating the residues as before. Let \(t:=-t\), subtract, and with \(a:=a^{m}\), after comparing with (2.29) we have
\[\sum_{n=0}^{\infty}\,\frac{1}{a^{2\,m\,n}+2\,a^{m\,n}\,\cosh(t)+1 }=\frac{1}{2}\,-\frac{2\,\pi\,\coth(t)}{m\,\ln(a)}{\sum_{k=1}^{ \infty}\,\frac{\sin\left(\frac{2\,\pi\,k\,t}{m\,\ln(a)}\right)}{\sinh\left( \frac{2\,k\,\pi^{2}}{m\,\ln(a)}\right)}}\\ -\frac{1}{\sinh(t)}{\sum_{k=1}^{\infty}}\frac{\left(-1\right)^{k} \,\sinh\left(\left(k+1\right)t\right)}{a^{m\,k}-1}-\frac{t\,\coth(t)}{m\,\ln( a)}\,. \tag{2.32}\]
By evaluating (2.32) in the limit \(t\to 0\) and setting \(a^{m}:=b\), we find
\[\sum_{n=0}^{\infty}\,\frac{1}{\left(b^{n}+1\right)^{2}}=\frac{1}{2}-\frac{1}{ \ln(b)}-\frac{4\,\pi^{2}}{\ln^{2}(b)}{\sum_{k=1}^{\infty}\,k\,{\rm csch}\! \left(\frac{2\,k\,\pi^{2}}{\ln\left(b\right)}\right)}+{\sum_{k=2}^{ \infty}\,\frac{k\,\left(-1\right)^{k}}{b^{k-1}-1}}\,. \tag{2.33}\]
However, by expanding the denominator and transposing the resulting series (e.g. (B.2)), it is easy to write
\[\sum_{k=2}^{\infty}\,\frac{k\,\left(-1\right)^{k}}{b^{k-1}-1}=\sum_{n=1}^{ \infty}\,\frac{1+2\,b^{n}}{\left(1+b^{n}\right)^{2}},\ \ \ b>1, \tag{2.34}\]
so that (2.33) reduces to a transformation between similar generalized Lambert series ([11] )
\[\sum_{n=1}^{\infty}\,\frac{b^{n}}{\left(1+b^{n}\right)^{2}}-\frac{1}{2\,\ln(b )}=-\frac{1}{8}+\frac{4\,\pi^{2}}{\ln(b)^{2}}\,\sum_{j=0}^{\infty}\,\frac{q^{ 1+2j}}{\left(q^{1+2j}-1\right)^{2}}\ \ \ b>1, \tag{2.35}\]
where
\[q={\rm e}^{2\,\pi^{2}/\ln(b)}\,. \tag{2.36}\]
**Remarks:**
* Because the sum on the right-hand side of (2.35) effectively vanishes exponentially as \(b\to 1\), we obtain \[\lim_{b\to 1}\!\left(\sum_{j=1}^{\infty}\frac{b^{j}}{\left(1+b^{j}\right)^{2}}- \frac{1}{2\,\ln(b)}\right)=-\frac{1}{8}\,;\] (2.37)
* By setting \(b=\exp(2\pi\,a),\ a>0\), we arrive at \[\sum_{j=1}^{\infty}\,\operatorname{sech}^{2}(j\,\pi\,a)-\frac{1}{a^{2}}\,\sum _{j=0}^{\infty}\operatorname{csch}^{2}\left(\left(j+1/2\right)\frac{\pi}{a} \right)=\frac{1}{\pi\,a}-\frac{1}{2}\,,\] (2.38) a known result when \(a=1\), if one notes that the misprint \(\Gamma(1/4)^{2}\), listed by both Hansen [2, Eq. (43.8.12)] and Erdelyi et.al. [12, Eq. 5.3.7(13)], should be \(\Gamma(1/4)^{4}\).
* In the case that \(a=\exp(t/m)\), with \(t>0\), (2.32) becomes \[\sum_{n=0}^{\infty}\frac{1}{\operatorname{e}^{2\,n\,t}+2\operatorname{e}^{n\,t }\,\cosh(t)+1}=\frac{1}{2}-\frac{\cosh(t)}{\sinh(t)}-\frac{1}{\sinh(t)}\!\sum _{k=1}^{\infty}\,\frac{\left(-1\right)^{k}\,\sinh\left(\left(k+1\right)t \right)}{\operatorname{e}^{t\,k}-1}\,.\] (2.39)
### Example 2.5
Here we consider the Mellin transform pair \(f(x)=e^{-bx}\) and \(F(s)=b^{-s}\Gamma(s)\) giving
\[\sum_{j=0}^{\infty}\operatorname{e}^{-b^{j}}=\frac{1}{2\,\pi\,i}\int_{c-i\, \infty}^{c+i\,\infty}\frac{\Gamma\left(s\right)}{1-b^{-s}}ds \tag{2.40}\]
with \(\Re(b)>1\) and \(c>0\). Shifting the contour such that \(-1<c<0\) produces
\[\sum_{j=0}^{\infty}\operatorname{e}^{-b^{j}}=\frac{1}{2}-\frac{\gamma}{\ln(b )}+\frac{1}{2\,\pi\,i}\int_{c-i\,\infty}^{c+i\,\infty}\frac{\Gamma\left(s \right)}{1-b^{-s}}ds+\frac{2}{\ln(b)}\!\sum_{j=1}^{\infty}\,\Re\!\left(\Gamma \!\left(\frac{2\,i\,\pi\,j}{\ln\left(b\right)}\right)\right) \tag{2.41}\]
by taking into account the residues of the poles at \(s=0\) and \(s=2\pi\,i\,j/\ln(b)\). Further shifts of the contour \(N\) units to the left following an obvious change of variables, yields
\[\sum_{j=0}^{\infty}\operatorname{e}^{-b^{j}}= \frac{1}{2}-\frac{\gamma}{\ln(b)}+\frac{2}{\ln(b)}\!\sum_{j=1}^{ \infty}\,\Re\left(\Gamma\!\left(\frac{2\,i\,\pi\,j}{\ln\left(b\right)}\right)\right)\] \[+\left[\frac{1}{2\,\pi}\int_{-\infty}^{\infty}\frac{\Gamma\left(c _{N}+i\,v\right)}{1-b^{-i\,v-c_{N}}}dv+\sum_{k=1}^{N}\,\frac{\left(-1\right)^{ k}}{\Gamma(1+k)\left(1-b^{k}\right)}\right] \tag{2.42}\]
where again \(b>1\) and \(-N-1<c_{N}<-N\). Since the terms enclosed in brackets (\([..]\)) contain the only \(N\) dependence, if we consider the case that \(N\to\infty\), the sum of the enclosed terms must remain constant, and since the sum clearly converges, so must the integral. Since the integral does not vary over the range \(-N-1<c_{N}<-N\), this allows us to choose \(c_{N}=-N-1/2\) and note first that
\[\Gamma\biggl{(}-N-\frac{1}{2}+i\,v\biggr{)}=\frac{\Gamma\bigl{(}-\frac{1}{2}+i \,v\bigr{)}}{\prod\limits_{j=0}^{N-1}\bigl{(}-N+j-\frac{1}{2}+i\,v\bigr{)}} \tag{2.43}\]
and second that
\[\lim_{N\to\infty}\frac{1}{\prod\limits_{j=0}^{N-1}\ \bigl{(}-N+j-\frac{1}{2} \bigr{)}}\sim\biggl{(}\frac{e}{N}\biggr{)}^{N}/N \tag{2.44}\]
and therefore if the contour is moved such that \(c_{N}\to\,-\infty\), the integral vanishes, leaving
\[\sum_{j=0}^{\infty}\mathrm{e}^{-b^{j}}=\frac{1}{2}-\frac{\gamma}{\ln(b)}-\sum _{k=1}^{\infty}\,\frac{\left(-1\right)^{k}}{\Gamma(1+k)\left(b^{k}-1\right)} +\frac{2}{\ln(b)}\!\sum_{j=1}^{\infty}\Re\left(\Gamma\biggl{(}\frac{2\,i\,\pi \,j}{\ln\left(b\right)}\biggr{)}\right)\,. \tag{2.45}\]
Further, since \(b>1\), we can expand the denominator term in the first sum on the right-hand side of (2.45), interchange the two sums and eventually identify
\[\sum_{k=1}^{\infty}\,\frac{\left(-1\right)^{k}}{\Gamma(1+k)\left(b^{k}-1 \right)}=\sum_{j=1}^{\infty}\left(\mathrm{e}^{-1/b^{j}}-1\right)\,, \tag{2.46}\]
in which case we find
\[\sum_{j=1}^{\infty}\Re\biggl{(}\Gamma\biggl{(}\frac{2\,i\,\pi\,j}{\ln(b)} \biggr{)}\biggr{)}=\frac{\ln(b)}{2}\sum_{j=0}^{\infty}\Bigl{(}\mathrm{e}^{-1/ b^{j}}-1+\mathrm{e}^{-b^{j}}\Bigr{)}+\biggl{(}\frac{1}{4}-\frac{1}{2\,e} \biggr{)}\ln(b)+\frac{\gamma}{2}\,, \tag{2.47}\]
an identity that could also be rewritten as
\[\sum_{j=1}^{\infty}\Re\biggl{(}\Gamma\biggl{(}\frac{2\,i\,\pi\,j}{b}\biggr{)} \biggr{)}=\frac{b}{2}\!\sum_{j=0}^{\infty}\,\left(\mathrm{e}^{-1/\mathrm{e}^{ j\,b}}-1+\mathrm{e}^{-\mathrm{e}^{j\,b}}\right)+\biggl{(}\frac{1}{4}-\frac{1}{2e} \biggr{)}\,b+\frac{\gamma}{2} \tag{2.48}\]
by setting \(b:=\exp(b)\).
## 3 Examples based on Theorem 2
### Example 3.1
Continuing from the previous Section, consider the transform pair (2.5)
\[F(v) =\Gamma(v)\zeta(v) \tag{3.1}\] \[f(a) =1/(\exp(a)-1) \tag{3.2}\]
leading to the identity
\[\frac{1}{2\,\pi}\int_{-\infty}^{\infty}\zeta(-s\,(c+i\,v))\,\Gamma(c+i\,v)\, \zeta(c+i\,v)\,dv=\sum_{j=1}^{\infty}\,\frac{1}{\exp(j^{-s})-1} \tag{3.3}\]
after applying (1.7), where both sides converge if \(s\in\Re\,,\ s<0,\ c>1\\) and \(c>-1/s\). By shifting the contour left, variations arise by evaluating the appropriate residues as follows:
\[\frac{1}{2\,\pi}\int_{-\infty}^{\infty}\zeta(-s\,(c+i\,v))\, \Gamma(c+i\,v)\,\zeta(c+i\,v)\,dv-\sum_{j=1}^{\infty}\,\frac{1}{\exp{(j^{-s}) -1}}\] \[=\left\{\begin{array}{ll}0&\mbox{$c>1$ and $c>-1/s$,}\\ \Gamma(-1/s)\,\zeta(-1/s)\,/s&\mbox{$c>1$ and $c<-1/s$,}\\ -\zeta(-s)&\mbox{$0<c<1$ and $c>-1/s$,}\\ \Gamma(-1/s)\,\zeta(-1/s)\,/s-\zeta(-s)&\mbox{$0<c<1$ and $c<-1/s$,}\\ \Gamma(-1/s)\,\zeta(-1/s)/s-\zeta(-s)-\sum\limits_{j=0}^{N}\,\frac{(-1)^{j}\, \zeta(s\,j)\,\zeta(-j)}{\Gamma(1+j)}&\mbox{$-N-1<c<-N$,}\end{array}\right. \tag{3.4}\]
where \(N=0,1,...\).
We also now consider the transform pair (2.10) leading to
\[\frac{1}{2\,\pi}\int_{-\infty}^{\infty}\zeta(-s\,(c+i\,v))\,\Gamma(c+i\,v)\, \zeta(c+i\,v)\,\big{(}1-2^{1-i\,v-c}\big{)}\,dv=\sum_{j=1}^{\infty}\,\frac{1} {\exp(j^{-s})+1} \tag{3.5}\]
where we require \(s\in\Re\), \(s<0\) and \(c>-1/s\). Again, if the contour is shifted left, we find that various residues must be incorporated depending on the relative values of \(s\) and \(c\). Specifically
\[\frac{1}{2\,\pi}\int_{-\infty}^{\infty}\zeta(-s\,(c+i\,v\,\,))\,\Gamma(c+i\,v )\,\zeta(c+i\,v)\,\big{(}2^{1-v\,i-c}-1\big{)}\,dv+\sum_{j=1}^{\infty}\,\frac{1} {\,\mathrm{e}^{j^{-s}}+1}\] \[=\left\{\begin{array}{ll}0&\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad c>-1/s,\\ \Gamma(-1/s)\,\zeta(-1/s)\,\big{(}-1+2^{1+1/s}\big{)}\,/s&\qquad\qquad c>0\, \,\text{and}\,\,c<-1/s,\\ \Gamma(-1/s)\,\zeta(-1/s)\,\big{(}-1+2^{1+1/s}\big{)}\,/s&\\ -\sum\limits_{j=1}^{N}\,\frac{\big{(}2^{2\,j}-1\big{)}}{\,\mathrm{I}(2\,j+1) }\,\zeta((2\,j-1)\,s)\,B_{2\,j}-1/4&\qquad-N-1<c<-N\,,\end{array}\right. \tag{3.6}\]
where \(B_{2j}\) are Bernoulli numbers (see (A.2)). In the case of equality, half the residue at that point must be included.
#### 3.1.1 The case \(s=-1\)
By taking the appropriate limit in (3.4), let \(s\to-1\), which, with \(c=1/2\), gives
\[\frac{1}{2\,\pi}\int_{-\infty}^{\infty}\zeta\bigg{(}\frac{1}{2}+i\,v\bigg{)}^ {2}\,\,\Gamma\bigg{(}\frac{1}{2}+i\,v\bigg{)}\,dv=\sum_{j=1}^{\infty}\,\frac{ 1}{\,\mathrm{e}^{j}-1}-\gamma\,, \tag{3.7}\]
which can be rewritten as
\[\frac{1}{\sqrt{\pi}}\int_{0}^{\infty}\frac{\left|\zeta\,\big{(}\frac{1}{2}+i \,v\big{)}\,\right|^{2}\,\cos\big{(}2\,\alpha\,\big{(}\frac{1}{2}+i\,v\big{)} +\theta\,\big{(}\frac{1}{2}+i\,v\big{)}\big{)}}{\sqrt{\cosh\left(\pi\,v\right) }}dv=\sum_{j=1}^{\infty}\,\frac{1}{\,\mathrm{e}^{j}-1}-\gamma \tag{3.8}\]
where
\[\zeta\left(\frac{1}{2}+i\,v\right)\equiv e^{i\,\alpha(1/2+iv)}|\zeta\left( \frac{1}{2}+i\,v\right)| \tag{3.9}\]
and
\[\Gamma\left(\frac{1}{2}+i\,v\right)\equiv e^{i\,\theta(1/2+iv)}|\Gamma\left( \frac{1}{2}+i\,v\right)| \tag{3.10}\]
by writing both in polar form and employing the identity [13, Eq. 5.4.4]
\[\left|\Gamma\left(\frac{1}{2}+i\,v\right)\right|=\sqrt{\pi/\cosh\pi\,v}. \tag{3.11}\]
From the identity ([14, Eq. (6.15)])
\[\alpha(1/2+i\,v)=\frac{v\,\ln(2\,\pi)}{2}-\frac{\theta\big{(}\frac{1}{2}+i\,v \big{)}}{2}-\frac{9\,\pi}{8}+\frac{\arctan(\mathrm{e}^{\pi\,v})}{2}\,, \tag{3.12}\]
(3.8) then identifies
\[\int_{0}^{\infty}\!\frac{\left|\zeta\big{(}\frac{1}{2}+i\,v\big{)} \right|^{2}}{\cosh(\pi\,v)}\left(\cos(v\,\ln(2\,\pi))\cosh\!\left(\frac{\pi\,v} {2}\right)-\sin(v\,\ln(2\,\pi))\sinh\!\left(\frac{\pi\,v}{2}\right)\right)dv\] \[\qquad=\sqrt{\pi}\left(\sum_{j=1}^{\infty}\frac{1}{\,\mathrm{e}^{ j-1}}-\gamma\right) \tag{3.13}\]
after applying elementary trigonometric identities and simplification. We now consider (3.6) with the same limit \(s\to-1\), and, comparing with (3.7), arrive at
\[\int_{-\infty}^{\infty}\zeta\!\left(\frac{1}{2}+i\,v\right)^{2}\,\Gamma\!\left( \frac{1}{2}+i\,v\right)2^{-i\,v}dv=\sqrt{2}\,\pi\left(2\,\sum_{j=1}^{\infty} \,\frac{1}{\,\mathrm{e}^{2\,j}-1}-\gamma+\ln(2)\right)\,. \tag{3.14}\]
Applying the same identities as above, yields the equivalent form
\[\int_{-\infty}^{\infty}\frac{\left|\zeta\big{(}\frac{1}{2}+i\,v \big{)}\right|^{2}}{\cosh(\pi\,v)}\left(\sinh\!\left(\frac{\pi\,v}{2}\right) \sin(v\,\ln(\pi))-\cosh\!\left(\frac{\pi\,v}{2}\right)\cos(v\,\ln(\pi))\right)dv\] \[\qquad=-\sqrt{2\,\pi}\left(2\,\sum_{j=1}^{\infty}\,\frac{1}{\, \mathrm{e}^{2\,j}-1}-\gamma+\ln(2)\right)\,, \tag{3.15}\]
a companion to (3.13).
#### 3.1.2 \(s=-2n\)
* Case \(c=0^{-}\) Consider the case \(s=-2n\) where \(n=1,2,\dots\). In that eventuality, all poles in (3.4) corresponding to \(c<0\) vanish as does the finite sum with \(N\geq 1\), so the original contour can be moved with impunity as far to the left (where it does NOT vanish) as one wishes. Of more interest, with \(c=0^{-}\), since the pole at \(v=0\) is imaginary, by adding \(\pi/4=\) half the residue at \(v=0\), (3.4) becomes \[\int_{-\infty}^{\infty}\Re(\zeta(2\,i\,v\,n) \,\Gamma(i\,v)\,\zeta(i\,v))\,dv\] \[=2\,\pi\!\sum_{j=1}^{\infty}\,\frac{1}{\,\mathrm{e}^{j^{2\,n}-1 }}-\frac{\pi}{n}\,\zeta\!\left(\frac{1}{2\,n}\right)\Gamma\!\left(\frac{1}{2 \,n}\right)-2\,\zeta(2\,n)\,\pi-\frac{\pi}{4}\] (3.16)
and the sum of of (3.4) and (3.6) becomes
\[\int_{-\infty}^{\infty}\Re\Big{(}2^{-i\,v} \zeta(2\,i\,v\,n)\,\Gamma(i\,v)\,\zeta(i\,v)\Big{)}\,dv\] \[=2\,\pi\!\!\sum_{j=1}^{\infty}\frac{1}{\,\mathrm{e}^{2j^{2\,n}}-1 }-\frac{\pi\,2^{-1/2\,n}}{n}\,\zeta\!\left(\frac{1}{2\,n}\right)\Gamma\!\left( \frac{1}{2\,n}\right)-\zeta(2\,n)\,\pi-\frac{\pi}{4}\,. \tag{3.17}\]
If we now define the Riemann function
\[\Upsilon(s,b)\equiv\zeta(s)\,b^{-s/2}\,\Gamma(s/2) \tag{3.18}\]
which is well-known to satisfy
\[\Upsilon(s,\pi)=\Upsilon(1-s,\pi) \tag{3.19}\]
due to the reflection property of \(\Gamma(s/2)\) and the functional equation of \(\zeta(s)\), with \(n=1\), (3.17) can be rewritten
\[\int_{-\infty}^{\infty}\Re(\Upsilon(2\,i\,v,2)\,\zeta(i\,v))\,dv=-\frac{\zeta \big{(}\frac{1}{2}\big{)}\,\pi^{\frac{3}{2}}\,\sqrt{2}}{2}-\frac{\pi^{3}}{6}+2 \,\pi\!\!\sum_{j=1}^{\infty}\,\frac{1}{\,\mathrm{e}^{2j^{2}}-1}-\frac{\pi}{4}. \tag{3.20}\]
From (3.18) and(3.19) we have the more general form of (3.19)
\[\Upsilon(s,b)=\Upsilon(1-s,b)\left(\frac{b}{\pi}\right)^{(1/2-s)} \tag{3.21}\]
so that taking (3.21) into account yields a very different form of (3.17), that being
\[\int_{-\infty}^{\infty}\Re\Bigg{(}\zeta(1-2\,i\,v)\left(\frac{ \pi^{2}}{2}\right)^{i\,v}\,\Gamma\!\left(\frac{1}{2}-i\,v\right)\zeta(i\,v) \Bigg{)}\,dv\] \[=\pi^{\frac{3}{2}}\left(-\zeta\!\left(\frac{1}{2}\right)\sqrt{ \frac{\pi}{2}}-\frac{\pi^{2}}{6}+2\!\sum_{j=1}^{\infty}\,\frac{1}{\,\mathrm{ e}^{2\,j^{2}}-1}-\frac{1}{4}\right)\,. \tag{3.22}\]
**Remark:** Since \(\left|\Gamma(i\,v)\,\right|^{2}=\pi/(v\,\sinh(\pi\,v))\), all integrals are convergent.
In the case that \(n\to\infty\), (3.16) and (3.17) respectively become
\[\lim_{n\to\infty}\!\int_{-\infty}^{\infty}\Re(\zeta(2\,i\,v\,n)\,\Gamma(i\,v )\,\zeta(i\,v))\,dv=\frac{\pi\left(13-5\,\mathrm{e}\right)}{4\,\mathrm{e}-4} \tag{3.23}\]
and
\[\lim_{n\to\infty}\int_{-\infty}^{\infty}\Re\big{(}2^{-i\,v}\,\zeta(2\,i\,v\,n )\,\Gamma(i\,v)\,\zeta(i\,v)\big{)}\,dv=\frac{\pi\left(9-\mathrm{e}^{2}\right) }{4\,(\mathrm{e}^{2}-1)}\,. \tag{3.24}\]
* Case \(c=1/(2n)\) Other interesting cases arise when \(c=1/(2n)=-1/s\) when the poles of the integrand appear only in the imaginary part and we must use the corresponding half residues in (3.4) and (3.6). Specifically, and of interest, the following are obtained by combining the two cases appropriately, to yield \[\int_{-\infty}^{\infty}\!\!\Re\bigg{(}\zeta(1+2\,i\,v\,n)\,\Gamma \bigg{(}\frac{1}{2\,n}+i\,v\bigg{)}\,\zeta\bigg{(}\frac{1}{2\,n}+i\,v\bigg{)} \bigg{)}\,dv\] \[\qquad=2\,\pi\!\sum_{j=1}^{\infty}\,\frac{1}{\,\mathrm{e}^{j^{2\, n}}-1}-\pi\,\zeta\bigg{(}\frac{1}{2\,n}\bigg{)}\,\Gamma\bigg{(}1+\frac{1}{2\,n} \bigg{)}-2\,\pi\,\zeta(2\,n)\] (3.25) and \[\int_{-\infty}^{\infty}\Re\bigg{(}\Big{(}2^{-1/(2\,n)-i\,v}-1 \Big{)}\,\zeta(1+2\,i\,v\,n)\,\zeta\bigg{(}\frac{1}{2\,n}+i\,v\bigg{)}\,\Gamma \bigg{(}\frac{1}{2\,n}+i\,v\bigg{)}\bigg{)}\,dv=\pi\,\zeta(2\,n)\] \[-2\,\pi\left(\sum_{j=1}^{\infty}\,\frac{1}{\,\mathrm{e}^{j^{2\, n}}-1}-\sum_{j=1}^{\infty}\,\frac{1}{\,\mathrm{e}^{2j^{2\,n}}-1}\right)+\pi \left(1-2^{-\frac{1}{2\,n}}\right)\zeta\bigg{(}\frac{1}{2\,n}\bigg{)}\,\Gamma \bigg{(}1+\frac{1}{2\,n}\bigg{)}\.\] (3.26) In contrast to (3.25), in the case that \(n\to\infty\), the integrand (3.26) converges at \(v=0\), and we find \[\lim_{n\to\infty}\!\int_{-\infty}^{\infty}\Re\big{(}\big{(}2^{-i\,v}-1\big{)} \,\zeta(1+2\,i\,v\,n)\,\zeta(i\,v)\,\Gamma(i\,v)\big{)}\,dv=\pi\,\frac{\big{(} \mathrm{e}^{2}-2\,\mathrm{e}-1\big{)}}{\big{(}\mathrm{e}-1\big{)}\,\big{(} \mathrm{e}+1\big{)}}.\] (3.27)
* Case \(c=1/n\) and \(c=1/(4n)\) Other special cases abound, among which we consider \(c=1/n\) and \(c=1/(4n)\) to respectively yield \[\int_{-\infty}^{\infty}\!\!\Re\bigg{(}\zeta(2+2\,i\,v\,n)\, \Gamma\bigg{(}\frac{1}{n}+i\,v\bigg{)}\,\zeta\bigg{(}\frac{1}{n}+i\,v\bigg{)} \bigg{)}\,dv\] \[\qquad=2\,\pi\!\sum_{j=1}^{\infty}\,\frac{1}{\,\mathrm{e}^{j^{2\, n}}-1}-2\,\pi\,\zeta(2\,n)\] (3.28) \[\int_{-\infty}^{\infty}\!\!\Re\bigg{(}\zeta\bigg{(}\frac{1}{2}+2 \,i\,v\,n\bigg{)}\,\Gamma\bigg{(}\frac{1}{4\,n}+i\,v\bigg{)}\,\zeta\bigg{(} \frac{1}{4\,n}+i\,v\bigg{)}\bigg{)}\,dv\] \[\qquad=2\,\pi\!\sum_{j=1}^{\infty}\,\frac{1}{\,\mathrm{e}^{j^{2\, n}}-1}-\frac{\pi}{n}\,\zeta\bigg{(}\frac{1}{2\,n}\bigg{)}\,\Gamma\bigg{(} \frac{1}{2\,n}\bigg{)}-2\pi\,\zeta(2\,n)\,.\] (3.29)
#### 3.1.3 Other values of \(s\)
For other values of \(s\), interesting cases also arise. For example, if \(s=-n\) and \(c=1/(2n)\) we arrive at
\[\int_{-\infty}^{\infty}\!\Re\bigg{(}\zeta\bigg{(}\frac{1}{2}+i\,v\, n\bigg{)}\,\Gamma\bigg{(}\frac{1}{2\,n}+i\,v\bigg{)}\,\zeta\bigg{(}\frac{1}{2\,n} +i\,v\bigg{)}\bigg{)}\,dv\] \[\qquad=2\,\pi\!\!\sum_{j=1}^{\infty}\,\frac{1}{\,\mathrm{e}^{j^{ n}}-1}-2\pi\,\zeta\bigg{(}\frac{1}{n}\bigg{)}\,\,\Gamma\bigg{(}1+\frac{1}{n} \bigg{)}-2\,\pi\,\zeta(n)\,, \tag{3.30}\]
reducing to (3.7) if \(n=1\) and (3.29) if \(n:=2n\). If \(s=-1/n\) with \(c=1/2\) we find
\[\int_{-\infty}^{\infty}\!\Re\bigg{(}\zeta\bigg{(}\frac{1}{2\,n}+ \frac{i\,v}{n}\bigg{)}\,\Gamma\bigg{(}\frac{1}{2}+i\,v\bigg{)}\,\zeta\bigg{(} \frac{1}{2}+i\,v\bigg{)}\bigg{)}\,dv\] \[\qquad=2\,\pi\!\!\sum_{j=1}^{\infty}\,\frac{1}{\,\mathrm{e}^{j^{ 1/n}}-1}-2\,\pi\,\zeta(n)\,\Gamma(n+1)-2\,\pi\,\zeta\bigg{(}\frac{1}{n}\bigg{)} \tag{3.31}\]
and, as \(n\to\infty\) the first factor in the integrand reduces to \(\zeta(0)=-1/2\) because most of the integrand originates near \(v=0\) due to (3.11), giving
\[\int_{-\infty}^{\infty}\!\Re\bigg{(}\Gamma\bigg{(}\frac{1}{2}+i\, v\bigg{)}\,\zeta\bigg{(}\frac{1}{2}+i\,v\bigg{)}\bigg{)}\,dv\] \[\qquad=4\,\pi\!\lim_{n\to\infty}\!\left(\zeta(n)\,\Gamma(n+1)+ \zeta\bigg{(}\frac{1}{n}\bigg{)}-\sum_{j=1}^{\infty}\,\frac{1}{\,\mathrm{e}^{ j^{1/n}}-1}\right)\,, \tag{3.32}\]
an identity whose right-hand side confounds numerical verification - however see (3.43) below.
### Example 3.2
Here we again consider the Mellin transform pair \(f(x)=e^{-bx}\) and \(F(s)=b^{-s}\Gamma(s)\) as in Section 2.5, giving, with \(b>0\),
\[\frac{1}{2\,\pi\,i}\int_{c-i\,\infty}^{c+i\,\infty}\zeta(-s\,v)\,b^{-v}\, \Gamma(v)\,dv=\sum_{j=1}^{\infty}\,\mathrm{e}^{-b\,j^{-s}}=\omega(b,-s) \tag{3.33}\]
valid for \(-1<s<0\) and \(c>-1/s\). After a change of variables, (3.33) can also be written as
\[\frac{1}{2\,\pi}\int_{-\infty}^{\infty}\zeta(-s\,(c+i\,v))\,b^{-c-i\,v}\, \Gamma(c+i\,v)\,dv=\sum_{j=1}^{\infty}\,\mathrm{e}^{-b\,j^{-s}}\,. \tag{3.34}\]
Furthermore, since (3.33) is an inverse Mellin transform, by inverting, **iff**\(s<0\)**and**\(b>0\), we find
\[\zeta(-b\,s)\,\Gamma(b)=\int_{0}^{\infty}x^{b-1}{\sum_{j=1}^{\infty}\,{\rm e}^{-x \,j^{-s}}}dx,\ \ s<0, \tag{3.35}\]
reducing to the classic results [5, Eq. (2.4.1)] and [5, Eq. (2.6.2)] if \(s=-1\) (see (3.69)) and \(s=-2\) respectively.
#### 3.2.1 Case \(0<c<-1/s\)
In the case that \(0<c<-1/s\), which allows \(s<-1\), we must include a residue term, so (3.34) becomes
\[\frac{1}{2\,\pi}\int_{-\infty}^{\infty}\zeta(-s\,(c+i\,v))\,b^{-c-i\,v}\, \Gamma(c+i\,v)\,dv=\omega(b,-s)-b^{1/s}\,\Gamma(1-1/s)\,, \tag{3.36}\]
and further, if \(c<0\), by moving the contour \(N+1\) units to the left, (3.34) becomes
\[\int_{-\infty}^{\infty}\zeta(-s\,(c_{N}+i\,v))\,b^{-c_{N}-i\,v}\, \Gamma(c_{N}+i\,v)\,dv\] \[=2\,\pi\left(\omega(b,-s)-\sum_{j=0}^{N}\,\frac{\zeta(s\,j)\,(-b) ^{j}}{\Gamma(1+j)}-\,b^{1/s}\Gamma(1-1/s)\right), \tag{3.37}\]
where \(c\) has been replaced by \(c_{N}\) such that \(-N-1<c_{N}<-N\), \(N\in{\cal Z}\) and always \(s<0\).
**Case \(s=-1\)**
In the case that \(s=-1\), we find
\[\int_{-\infty}^{\infty}\zeta(c+i\,v)\,b^{-c-i\,v}\,\Gamma(c+i\,v)\,dv=2\,\pi \left(\frac{1}{{\rm e}^{b}-1}-\frac{X}{b}\right) \tag{3.38}\]
where \(X=0\) if \(c>1\) and \(X=1\) if \(0<c<1\). In the case that \(c=1\), the singularity of the integrand only occurs in the imaginary part, and, with \(X=1/2\) we obtain
\[\int_{-\infty}^{\infty}\Re\big{(}\zeta(1+i\,v)\,b^{-i\,v}\,\Gamma(1+i\,v)\big{)} \,dv=2\,\pi\,b\left(\frac{1}{{\rm e}^{b}-1}-\frac{1}{2\,b}\right)\,. \tag{3.39}\]
Similarly, in the case that \(c=0\), we find, in exactly the same way
\[\int_{0}^{\infty}\Re\big{(}\zeta(i\,s\,v)\,b^{i\,v}\,\Gamma(-i\,v)\big{)}\,dv= \pi\,\omega(b,-s)-\pi\,b^{1/s}\,\Gamma\bigg{(}1-\frac{1}{s}\bigg{)}+\frac{\pi }{4}\,, \tag{3.40}\]
so that, if \(s=-1\) we obtain
\[\int_{-\infty}^{\infty}\zeta(i\,v)\,b^{-i\,v}\,\Gamma(i\,v)\,dv=2\,\pi\left(\frac {1}{{\rm e}^{b}-1}-\frac{1}{b}+1\right)\,, \tag{3.41}\]
and, if \(c=1/2\), \(b=1\) we find
\[\int_{-\infty}^{\infty}\zeta\biggl{(}\frac{1}{2}+i\,v\biggr{)}\,\Gamma\biggl{(} \frac{1}{2}+i\,v\biggr{)}\,dv=2\,\pi\left(\frac{1}{{\rm e}-1}-1\right)\,. \tag{3.42}\]
**Remark:** Comparison of the right-hand sides of (3.32) and (3.42) identifies
\[\lim_{n\to\infty}\left(\sum_{j=1}^{\infty}\frac{1}{{\rm e}^{j^{1/n}}-1}-\zeta( n)\,\Gamma(n+1)\right)=\frac{1}{2-2\,{\rm e}} \tag{3.43}\]
or, alternatively, for large values of \(n\),
\[\sum_{j=1}^{\infty}\frac{1}{{\rm e}^{j^{1/n}}-1}\sim\sqrt{2\,\pi}\,n^{n+\frac{ 1}{2}}\,{\rm e}^{-n}\,. \tag{3.44}\]
This result can, with some difficulty, be tested numerically - see Figure 1.
**Case \(s=-2\) and yet another proof of the Poisson-Jacobi transform.**
Consider the case \(s=-2\) with \(0<c<1/2\) yielding the identity
\[\int_{-\infty}^{\infty}\zeta(2\,c-2\,i\,v)\,b^{-c+i\,v}\,\Gamma(c-i\,v)\,dv=2 \,\pi\sum_{j=1}^{\infty}\,{\rm e}^{-b\,j^{2}}-\frac{\pi^{\frac{3}{2}}}{\sqrt{b}} \tag{3.45}\]
and, from (3.18) and (3.21) the left-hand side of (3.45) satisfies
\[\int_{-\infty}^{\infty}\zeta (2\,c-2\,i\,v)\,b^{-c+i\,v}\,\Gamma(c-i\,v)\,dv\] \[=b^{-c}\,\pi^{-\frac{1}{2}+2\,c}\int_{-\infty}^{\infty}\zeta(1-2 \,c+2\,i\,v)\,\Gamma\biggl{(}\frac{1}{2}-c+i\,v\biggr{)}\left(\frac{\pi^{2}}{ b}\right)^{-i\,v}dv\,. \tag{3.46}\]
Now, consider a reflection of integration variables, replacements \(c:=1/2-c\) and \(b:=\pi^{2}/b\), all the while retaining \(0<c<1/2\), and find that (3.45) becomes
\[\int_{-\infty}^{\infty}\zeta(1-2\,c+2\,i\,v)\,\Gamma\biggl{(} \frac{1}{2}-c+i\,v\biggr{)}\left(\frac{\pi^{2}}{b}\right)^{-i\,v}dv\] \[=2\,\pi^{2-2\,c}\,b^{c-\frac{1}{2}}\!\!\sum_{j=1}^{\infty}\,{\rm e }^{-\pi^{2}\,j^{2}/b}-\pi^{3/2-2\,c}\,b^{c}\,. \tag{3.47}\]
Comparison of (3.45), (3.46) and (3.47) eventually leads to
\[\sum_{j=1}^{\infty}\,{\rm e}^{-\pi\,b\,j^{2}}-\sqrt{\frac{1}{b}}\,\sum_{j=1}^{ \infty}\,{\rm e}^{-\pi\,j^{2}/b}=\frac{1}{2}\sqrt{\frac{1}{b}}-\frac{1}{2}, \tag{3.48}\]
equivalent to the well-known Poisson-Jacobi transform [15, page 124]
\[\theta_{3}(0,b)=\sqrt{\frac{1}{b}}\,\theta_{3}\bigg{(}0,\frac{1}{b}\bigg{)}. \tag{3.49}\]
#### 3.2.2 Case \(s=-2n\)
In the case that \(s=-2n\), the infinite sum in (3.37) vanishes except for the term corresponding to \(j=0\), leading to
\[\int_{-\infty}^{\infty}\Re\big{(}\zeta(2\,n\,(c+i\,v))\,b^{-i\,v-c}\,\Gamma(i \,v+c)\big{)}\,dv=\pi\,\bigg{(}\,X+2\,\omega(b,2n)-2\,Y\,b^{-1/(2n)}\,\Gamma \bigg{(}1+\frac{1}{2\,n}\bigg{)}\bigg{)} \tag{3.50}\]
where \(\{X=1,\ 1/2,\ 0\}\) if \(\{c<0,\ c=0,\ c>0\}\) and \(\{Y=1,\ 1/2,\ 0\}\) if
\(\{c<1/(2n),\ c=1/(2n),\ c>1/(2n)\}\) respectively. A family of interesting integrals arises if we let \(c=p/n\), where \(p\in\Re\), yielding the following:
Figure 1: Approximations to the left-hand side of (3.43) for increasing values of \(n\) compared to the numerical value of the right-hand side. For \(n>15\) an extraordinarily large number of digits was required.
* if \(p<0\) \[\int_{-\infty}^{\infty}\zeta(2\,p+2\,i\,v\,n)\,b^{-i\,v}\,\Gamma\Big{(}\frac{p}{n }+i\,v\Big{)}\,dv=\pi\,b^{p/n}\left(1+2\omega(b,2n)-2\,b^{-1/(2\,n)}\,\Gamma \bigg{(}1+\frac{1}{2\,n}\bigg{)}\right)\,;\] (3.51)
* if \(p>1/2\) \[\int_{-\infty}^{\infty}\Re\Big{(}\zeta(2\,p+2\,i\,v\,n)\,b^{-i\,v}\,\Gamma \Big{(}\frac{p}{n}+i\,v\Big{)}\Big{)}\,dv=2\,\pi\,b^{p/n}\omega(b,2n)\,;\] (3.52)
* if \(p=1/2\) \[\int_{-\infty}^{\infty}\Re\Big{(}\zeta(1+2\,i\,v\,n)\,b^{-i\,v}\,\Gamma \bigg{(}\frac{1}{2\,n}+i\,v\bigg{)}\Big{)}\,dv=\pi\left(2\,b^{1/(2\,n)}\omega (b,2n)-\Gamma\bigg{(}1+\frac{1}{2\,n}\bigg{)}\right)\] (3.53)
* if \(0<p<1/2\) \[\int_{-\infty}^{\infty}\Re\Big{(}\zeta(2\,p+2\,i\,v\,n)\,b^{-i\,v}\,\Gamma \Big{(}\frac{p}{n}+i\,v\Big{)}\Big{)}\,dv=2\,\pi\,b^{p/n}\left(\omega(b,2n)-b^ {-1/(2\,n)}\,\Gamma\bigg{(}1+\frac{1}{2\,n}\bigg{)}\right)\] (3.54)
* if \(p=0\) \[\int_{-\infty}^{\infty}\Re\big{(}\zeta(2\,i\,v\,n)\,b^{-i\,v}\,\Gamma(i\,v) \big{)}\,dv=\pi\left(\frac{1}{2}+2\omega(b,2n)-2\,b^{-1/(2\,n)}\,\Gamma\bigg{(} 1+\frac{1}{2\,n}\bigg{)}\right)\,.\] (3.55)
**Remarks:**
* The case (3.54) covers the interior of the critical strip.
* The above resolves a special case discussed in [16, page 4].
* In any of the above, if \(n=1\), we have the known [17, Eq. (3)] identity \(\omega(\pi,2)=\frac{\pi^{1/4}}{2\Gamma(3/4)}-\frac{1}{2}\).
* The case (3.51) reduces to the known result [18, Eq. (3.9)] if we set \(b=\pi\) and \(n=1\).
* Setting \(\Gamma(p/n+i\,v)\approx\Gamma(i\,v)\) will obtain valid numerical approximations for the case of large \(n\), but any attempt to equate them at the limit \(n\to\infty\) is incorrect, because that degenerates into the case \(p=0\) and \(p\) and \(n\) are independent variables.
#### 3.2.3 Case s=-1/n and \(c<0\)
Here we consider the case \(s=-1/n\), with \(c<0\), allowing us to choose \(-n-1<c<-n\), so from (3.37) and without loss of generality, let \(c=-n-1/2\), leading to
\[\int_{-\infty}^{\infty}\Re\bigg{(}\zeta\bigg{(}-1-\frac{1}{2\,n}+ \frac{i\,v}{n}\bigg{)}\,b^{n+\frac{1}{2}-i\,v}\,\Gamma\bigg{(}-n-\frac{1}{2}+i \,v\bigg{)}\bigg{)}\,dv+2\,\pi\!\!\sum_{j=0}^{n}\,\frac{\zeta\big{(}-\frac{j}{n }\big{)}\,(-b)^{j}}{\Gamma(1+j)}\] \[\qquad=2\,\pi\!\!\sum_{j=1}^{\infty}\,\mathrm{e}^{-b\,j^{1/n}}-2 \,\pi\,b^{-n}\,\Gamma(1+n). \tag{3.56}\]
If we now consider the limiting case \(n\to\infty\), it is easy to discover that the integration term vanishes by writing
\[\Gamma\bigg{(}-n-\frac{1}{2}+i\,v\bigg{)}=-\frac{\pi\,(-1)^{n}}{\prod\limits_ {j=0}^{n}\,\big{(}\tfrac{1}{2}+n-j-i\,v\big{)}\,\Gamma\big{(}\tfrac{1}{2}-i\, v\big{)}\cosh(\pi\,v)}, \tag{3.57}\]
and the left-side summation term becomes
\[\lim_{n\to\infty}\,\sum_{j=0}^{n}\,\frac{\zeta\big{(}-\frac{j}{n}\big{)}\,(-b )^{j}}{\Gamma(1+j)}=-\frac{1}{2\,\mathrm{e}^{b}} \tag{3.58}\]
because as \(n\to\infty\), \(\zeta\big{(}-\frac{j}{n}\big{)}\approx\zeta(0)\). The identity (3.58) can be numerically verified for a large range of the variable \(b>0\). Therefore, we find
\[\lim_{n\to\infty}\,\left(\sum_{j=1}^{\infty}\,\mathrm{e}^{-b\,j^{1/n}}-b^{-n} \,\Gamma(1+n)\right)=-\frac{1}{2\,\mathrm{e}^{b}}\,, \tag{3.59}\]
or equivalently, as \(n\to\infty\),
\[\sum_{j=1}^{\infty}\,\mathrm{e}^{-b\,j^{\frac{1}{n}}}\sim\sqrt{2\,\pi\,n}\, \Big{(}\frac{n}{\mathrm{e}\,b}\Big{)}^{n}. \tag{3.60}\]
The identity (3.59) can, as with (3.43), be tested numerically - see Figure 2 and the remark following (3.62) below. See also [16, Eq. 2.5].
#### 3.2.4 Case \(s=-2n\) and \(c<0\)
In this case, again we choose \(c=-n/2\) so (3.37) becomes
\[b^{n+\frac{1}{2}}\int_{-\infty}^{\infty}\zeta\big{(}2\,i\,n\,v-2\,n^ {2}-n\big{)}\,b^{-i\,v}\,\Gamma\bigg{(}-n-\frac{1}{2}+i\,v\bigg{)}\,dv\] \[\qquad\qquad=\pi+2\,\pi\left(\sum_{j=1}^{\infty}\,{\rm e}^{-b\,j^ {2\,n}}-b^{-\frac{1}{2\,n}}\,\Gamma\bigg{(}1+\frac{1}{2\,n}\bigg{)}\right) \tag{3.61}\]
because only the term indexed by \(j=0\) in the infinite sum included in (3.37) does not vanish. We now consider the limiting case \(n=0\), leading to
\[\frac{1}{2\,\pi}\,\int_{-\infty}^{\infty}b^{-i\,v}\,\Gamma\bigg{(}-\frac{1}{2}+ i\,v\bigg{)}\,dv=b^{-1/2}\,\left(1/{\rm e}^{b}-1\right) \tag{3.62}\]
by identifying \(2n:=1/n\) in (3.59).
**Remark:** By a simple change of integration variables in (3.62), wrapping the contour about the negative real axis and evaluating the residues so enclosed we arrive at an equivalent representation of (3.62)
Figure 2: Approximations to the left-hand side of (3.59) for increasing values of \(n\) compared to the numerical value of the right-hand side for three values of the parameter \(b\).
\[\frac{1}{2\,\pi\,i}\int_{-\,i\,\infty}^{i\,\infty}b^{-t}\,\Gamma\!\left(-\frac{1}{ 2}+t\right)dt=b^{-1/2}\sum_{n=1}^{\infty}\,\frac{b^{n}\left(-1\right)^{n}}{ \Gamma\left(1+n\right)}, \tag{3.63}\]
recognizing that the equivalence of the right-hand sides of (3.62) and (3.63) is an elementary identity, yielding a validity check of (3.59) since it is the basis for (3.62).
#### 3.2.5 Case \(s=-(2n+1)\) and \(c<0\)
As before, without loss of generality, we choose \(c=-N/2,\ N>0\) in (3.37), whose general form becomes
\[\int_{-\infty}^{\infty}\zeta(\left(n+1/2\right)\left(2\,i\,v-2\,N -1\right))\,b^{N+\frac{1}{2}-i\,v}\,\Gamma(-N-1/2+i\,v)\,dv\] \[+2\,\pi\!\sum_{j=0}^{N}\,\frac{\zeta\,\left(\left(-2\,n-1\right) j\right)\left(-b\right)^{j}}{\Gamma(1+j)}=\,2\,\pi\!\sum_{j=1}^{\infty}\,{\rm e }^{-b\,j^{2\,n+1}}-2\,\pi\,b^{-1/(2\,n+1)}\,\Gamma\!\left(\frac{2\,n+2}{2\,n+ 1}\right) \tag{3.64}\]
Notice that if the index \(j\) in the left-hand sum is odd, the next term in the series vanishes when \(j\) is even and the left-hand side of (3.64) does not change. Therefore the integral is invariant when \(N\to N+1\) if \(N>0\) is odd. That is
\[\int_{-\infty}^{\infty}\zeta\!\left(\left(n+\frac{1}{2}\right) \left(2\,i\,v-4\,N+1\right)\right)b^{-i\,v}\,\Gamma\!\left(-2\,N+\frac{1}{2}+i \,v\right)dv\] \[= b\int_{-\infty}^{\infty}\zeta\!\left(\left(n+\frac{1}{2}\right) \left(2\,i\,v-4\,N-1\right)\right)b^{-i\,v}\,\Gamma\!\left(-2\,N-\frac{1}{2}+i \,v\right)dv \tag{3.65}\]
There are two interesting cases here, the first corresponding to \(n=0\) - see subsection (3.2.1) - when \(N\to\infty\) so that
\[\lim_{N\to\infty}\,\int_{-\infty}^{\infty}\zeta\!\left(-N-\frac{1 }{2}+i\,v\right)b^{N+\frac{1}{2}-i\,v}\,\Gamma\!\left(-N-\frac{1}{2}+i\,v \right)dv \tag{3.66}\] \[\qquad\qquad+2\,\pi\!\sum_{j=0}^{\infty}\,\frac{\zeta(-j)\left(- b\right)^{j}}{\Gamma(1+j)}=2\,\pi\!\sum_{j=1}^{\infty}\,{\rm e}^{-b\,j}-\frac{2\, \pi}{b} \tag{3.67}\]
But, from [13, Eqs. (24.2.1) and (25.6.3)]
\[\sum_{j=0}^{\infty}\,\frac{\zeta(-j)\left(-b\right)^{j}}{\Gamma(1+j)}=\frac{1 }{b}\!\sum_{j=1}^{\infty}\,\frac{B_{j}\,b^{j}}{\Gamma\left(1+j\right)}=\frac{ 1}{e^{b}-1}-\frac{1}{b} \tag{3.68}\]
and the elementary relation
\[\sum_{j=1}^{\infty}{\rm e}^{-b\,j}=\frac{1}{{\rm e}^{b}-1} \tag{3.69}\]
we find
\[\lim_{N\to\infty}\,\,\int_{-\infty}^{\infty}\zeta\biggl{(}-N-\frac{1}{2}+i\,v \biggr{)}\,b^{N+\frac{1}{2}-i\,v}\,\Gamma\biggl{(}-N-\frac{1}{2}+i\,v\biggr{)}\, dv=0\,. \tag{3.70}\]
## 4 Summary
It has been shown by way of a limited number of examples that summing over the free variable introduced by the inverse Mellin transform yields a number of interesting identities, each of which can be studied and pursued on their own. Some of these are possibly new and at least one (i.e. (3.35)) generalizes two classic identities due to Riemann.
Of particular ongoing interest is the fact that the modified inverse Mellin transform studied here yields a contour integral that can then be transformed in such a way as to generate infinite series and integrals that can in turn be modified to produce unexpected identities involving hyperpowers. It is suggested that further study along the lines presented here is warranted. A cursory scan of [6, Table 6] finds a plethora of Mellin transform pairs involving the fundamental functions of classical analysis and the most common hypergeometric functions (e.g. [19]), each of which possesses known transformations that could be invoked to generate new identities in the same manner as has been done here. For the ambitious reader, here is a suggestion for further emulation:
**If \(f(x)=\sin(x)\) and \(F(s)=\sin(\pi\,s/2)\,\Gamma(s)\), from Theorem 2 we obtain**
\[\sum_{k=1}^{\infty}\,\sin\bigl{(}k^{-s}\bigr{)}=\sum_{k=0}^{\infty}\,\frac{( -1)^{k}\,\,\zeta((2\,k+1)\,s)}{\Gamma(2\,k+2)}\quad\Re(s)>1, \tag{4.1}\]
reducing to
\[\sum_{k=1}^{\infty}\,\,\biggl{(}\sin\biggl{(}\frac{1}{k}\biggr{)}-\frac{1}{k }\biggr{)}=\sum_{k=1}^{\infty}\,\frac{(-1)^{k}\,\,\zeta(2\,k+1)}{\Gamma(2\,k+ 2)} \tag{4.2}\]
if \(s=1\). The possibilities are endless.
## Appendix A Proof of (2.4)
By evaluating the residues in (2.4) as the contour is moved leftwards, we arrive at the following sum and its representation
\[\sum_{k=1}^{\infty}\,\frac{b^{2\,k-2}\,\zeta(1-2\,k)}{\Gamma(2\,k-1)}=\,- \frac{1}{2}\!\!\sum_{k=1}^{\infty}\,\frac{b^{2\,k-2}\,B_{2\,k}}{k\,\Gamma(2\,k -1)}\,,\] (A.1)
where [13, Eq. 25.6.3]
\[\zeta(-k)=\,(-1)^{k}\,B_{1+k}/(1+k)\] (A.2)
has been used, \(B_{k}\) are Bernoulli numbers and we note that \(\zeta(-2k)=0\). Following the application of [13, Eq. 24.7.4]
\[B_{2\,k}=(-1)^{k}\,\,\pi\int_{0}^{\infty}\frac{t^{2\,k}}{\sinh^{2}(\pi\,t)}dt,\] (A.3)
we now invert the sum and integral (both convergent) and, since
\[\sum_{k=1}^{\infty}\,\frac{\left(-1\right)^{k}\left(b\,t\right)^{2\,k}}{k\, \Gamma(2\,k-1)}=\,-2\,\sin(b\,t)\,b\,t-2\,\cos(b\,t)+2\] (A.4)
courtesy of [20, Maple], we are now left with the following integrals
\[\int_{0}^{\infty}\frac{1-\cos\,\left(b\,t\right)}{\sinh(\pi\,t)^{2}}\,dt= \frac{b\,\coth\!\left(\frac{b}{2}\right)-2}{2\,\pi}\] (A.5)
and
\[\int_{0}^{\infty}\frac{\sin(b\,t)\,t}{\sinh(\pi\,t)^{2}}dt=\frac{b-\sinh(b)}{ 2\,\pi\,(1-\cosh(b))}\] (A.6)
both courtesy of [21, Mathematica]. Putting it all together yields (2.4).
## Appendix B Proof of (2.8)
From (2.8) we are interested in the summation
\[\sum_{k=0}^{\infty}\,\frac{\zeta(-2\,k-1)}{\Gamma(2+2\,k)\left(a^{2\,k+1}-1 \right)}=\,-\sum_{k=0}^{\infty}\,\frac{B_{2+2\,k}}{\Gamma(3+2\,k)\left(-1+a^{ 2\,k+1}\right)}\] (B.1)
employing (A.2). Noting the identity (A.3) and the well-known expansion
\[\frac{1}{1-a^{-2\,k-1}}=\sum_{j=0}^{\infty}\,\left(\frac{1}{a^{2\,k+1}} \right)^{j},\,\,\,\,a>1\] (B.2)
eventually leads us to consider
\[\sum_{k=0}^{\infty}\,\frac{\left(-1\right)^{k}\left(t/a\right)^{2+2\,k}}{a^{j \,(2\,k+1)}\,\Gamma(3+2\,k)}=a^{j}\left(1-\cos\!\left(\frac{t}{a^{j+1}}\right)\right)\] (B.3)
after interchanging the order of summation. Now applying the identity (A.5) will eventually yield (2.8). |
2309.13287 | Conjunctive Queries on Probabilistic Graphs: The Limits of
Approximability | Query evaluation over probabilistic databases is a notoriously intractable
problem -- not only in combined complexity, but for many natural queries in
data complexity as well. This motivates the study of probabilistic query
evaluation through the lens of approximation algorithms, and particularly of
combined FPRASes, whose runtime is polynomial in both the query and instance
size. In this paper, we focus on tuple-independent probabilistic databases over
binary signatures, which can be equivalently viewed as probabilistic graphs. We
study in which cases we can devise combined FPRASes for probabilistic query
evaluation in this setting.
We settle the complexity of this problem for a variety of query and instance
classes, by proving both approximability and (conditional) inapproximability
results. This allows us to deduce many corollaries of possible independent
interest. For example, we show how the results of Arenas et al. on counting
fixed-length strings accepted by an NFA imply the existence of an FPRAS for the
two-terminal network reliability problem on directed acyclic graphs: this was
an open problem until now. We also show that one cannot extend a recent result
of van Bremen and Meel that gives a combined FPRAS for self-join-free
conjunctive queries of bounded hypertree width on probabilistic databases:
neither the bounded-hypertree-width condition nor the self-join-freeness
hypothesis can be relaxed. Finally, we complement all our inapproximability
results with unconditional lower bounds, showing that DNNF provenance circuits
must have at least moderately exponential size in combined complexity. | Antoine Amarilli, Timothy van Bremen, Kuldeep S. Meel | 2023-09-23T07:02:50Z | http://arxiv.org/abs/2309.13287v4 | # Conjunctive Queries on Probabilistic Graphs:
###### Abstract
Query evaluation over probabilistic databases is a notoriously intractable problem--not only in combined complexity, but for many natural queries in data complexity as well [7, 14]. This motivates the study of probabilistic query evaluation through the lens of approximation algorithms, and particularly of _combined FPRASes_, whose runtime is polynomial in both the query and instance size. In this paper, we focus on tuple-independent probabilistic databases over binary signatures, which can be equivalently viewed as _probabilistic graphs_. We study in which cases we can devise combined FPRASes for probabilistic query evaluation in this setting.
We settle the complexity of this problem for a variety of query and instance classes, by proving both approximability and (conditional) inapproximability results. This allows us to deduce many corollaries of possible independent interest. For example, we show how the results of [8] on counting fixed-length strings accepted by an NFA imply the existence of an FPRAS for the two-terminal network reliability problem on directed acyclic graphs: this was an open problem until now [36]. We also show that one cannot extend a recent result [33] that gives a combined FPRAS for self-join-free conjunctive queries of bounded hypertree width on probabilistic databases: neither the bounded-hypertree-width condition nor the self-join-freeness hypothesis can be relaxed. Finally, we complement all our inapproximability results with unconditional lower bounds, showing that DNNF provenance circuits must have at least moderately exponential size in combined complexity.
Probabilistic query evaluation; tuple-independent databases; approximation The Limits of Approximability 2012 ac
PQE problem is highly intractable, even in data complexity for many natural queries (_e.g._, a path query of length three), and hence also in combined complexity.
Faced with this intractability, a natural approach is to study _approximate PQE_: we relax the requirement of computing the exact probability that the query holds, and settle for an approximate answer. This approach has been studied in data complexity [31]: for any fixed union of conjunctive queries (UCQ), we can always tractably approximate the answer to PQE, additively (simply by Monte Carlo sampling), or multiplicatively (using the Karp-Luby approximation algorithm on a disjunctive-normal-form representation of the query provenance). However, these approaches are not tractable in combined complexity, and moreover the latter approach exhibits a "slicewise polynomial" runtime of the form \(O(|I|^{|Q|})\)--rather than, say, \(O(2^{|Q|}\mathsf{poly}(|I|))\)--which seriously limits its practical utility. Thus, our goal is to obtain a _combined FPRAS_ for PQE: by this we mean a fully polynomial-time randomized approximation scheme, giving a multiplicative approximation of the probability, whose runtime is polynomial in the query and TID (and in the desired precision). This approach has been recently proposed by van Bremen and Meel [33], who show a combined FPRAS for CQs when assuming that the query is self-join-free and has bounded hypertree width; their work leaves open the question of which other cases admit combined FPRASes.
Main Results.In this paper, following the work of Amarilli, Monet and Senellart [7] for exact PQE, we investigate the combined complexity of _approximate_ PQE in the setting of _probabilistic graphs_. In other words, we study _probabilistic graph homomorphism_, which is the equivalent analogue of CQ evaluation: given a (deterministic) query graph \(G\), and given a instance graph \(H\) with edges annotated with independent probabilities (like a TID), we wish to approximate the probability that a randomly selected subgraph \(H^{\prime}\subseteq H\) admits a homomorphism from \(G\). This setting is incomparable to that of [33], because it allows for self-joins and for queries of unbounded width, but assumes that relations are binary. For a variety of graph classes from [7], we show either (i) the existence of a combined FPRAS, or (ii) the non-existence of such an FPRAS, subject to standard complexity-theoretic assumptions. We summarize our results in Table 1, respectively for graphs that are _labelled_ (_i.e._, the signature features several binary relations), or _unlabelled_ (_i.e._, only one binary relation).
**Result 1.1** (Sections 3 and 4).: _The results in Table 1 hold._
In summary, our results mostly show that, on connected probabilistic graphs, the general intractability of combined PQE carries over in many settings to the approximate PQE problem. There is, however, an important exception: the PQE problem for one-way path queries on _directed acyclic graphs_ (DAGs) admits a combined FPRAS, implying the same for downwards trees on DAGs in the unlabelled setting. We discuss more in detail below how this result is proved and some of its consequences. Another case is left open: in the unlabelled setting, we do not settle the approximability of combined PQE for one-way path queries (or equivalently downwards trees queries) on connected graphs. For all other cases, either exact combined PQE was already shown to be tractable in the exact setting [7], or we strengthen the #P-hardness of exact PQE from [7] by showing that combined FPRASes conditionally do not exist. Note that our intractability results are always shown in _combined complexity_--in data complexity, PQE is always multiplicatively approximable via the Karp-Luby algorithm [31].
As an important consequence, our techniques yield connections between approximate PQE and _intensional_ approaches to the PQE problem. Recall that the intensional approach was introduced by Jha and Suciu [20] in the setting of exact evaluation, and when measuring data complexity. They show that many tractable queries for PQE also admit tractable
provenance representations. More precisely, for these queries \(Q\), there is a polynomial-time algorithm that takes as input any database instance and computes a representation of the Boolean provenance of \(Q\) in a form which admits tractable model counting (_e.g._, OBDD, d-DNNF, etc.). This intensional approach contrasts with _extensional_ approaches (like [14]) which exploit the structure of the query directly: comparing both approaches is still open [26].
In line with this intensional approach, we complement our conditional hardness results on approximate PQE with _unconditional_ lower bounds on the _combined_ size of tractable representations of query provenance. Namely, we show a moderately exponential lower bound on DNNF provenance representations for all our non-approximable query-instance class pairs:
[Section 5, informal] Let \(\langle\mathcal{G},\mathcal{H}\rangle\) be a conditionally non-approximable query-instance class pair studied in this paper. For any \(\epsilon>0\), there is an infinite family \(G_{1},G_{2},\ldots\) of \(\mathcal{G}\) queries and an infinite family \(H_{1},H_{2},\ldots\) of \(\mathcal{H}\) instances such that, for any \(i>0\), any DNNF circuit representing the provenance \(\mathsf{Prov}^{G}_{H_{i}}\) has size at least \(2^{\Omega((|G_{i}|+|H_{i}|)^{-\epsilon})}\).
The class of DNNF circuits is arguably the most succinct circuit class in knowledge compilation that still has desirable properties [15, 16]. Such circuits subsume in particular the class of _structured DNNFs_, for which tractable approximation algorithms were recently proposed [9]. Thus, these bounds help to better understand the limitations of intensional approaches.
Consequences.Our results and techniques have several interesting consequences of potential independent interest. First, they imply that we cannot relax the hypotheses of the result of van Bremen and Meel mentioned earlier [33]. They show the following result on combined FPRASes for PQE in the more general context of probabilistic databases:
[Theorem 1 of [33]] Let \(Q\) be a self-join-free conjunctive query of bounded hypertree width, and \(H\) a tuple-independent database instance. Then there exists a combined FPRAS for computing the probability of \(Q\) on \(H\), _i.e._, an FPRAS whose runtime is \(\mathsf{poly}(|Q|,|H|,\epsilon^{-1})\), where \(\epsilon\) is the multiplicative error.
It was left open in [33] whether intractability held without these assumptions on the query. Hardness is immediate if we do not bound the width of queries and allow arbitrary self-join-free CQs, as combined query evaluation is then NP-hard already in the non-probabilistic setting. However, it is less clear whether the self-join-freeness condition can be lifted. Our results give a negative answer, already in a severely restricted setting:
[Corollaries 6.1 and 6.2] Assuming \(\mathsf{RP}\neq\mathsf{NP}\), neither the bounded hypertree width nor self-join-free condition in Theorem 1 can be relaxed: even on a fixed signature consisting of a single binary relation, there is no FPRAS to approximate the probability of an input treewidth-1 CQ on an input treewidth-1 TID instance.
A second consequence implied by our techniques concerns the _two-terminal network reliability problem_ on directed acyclic graphs (DAGs). Roughly speaking, given a directed graph \(G=(V,E)\) with independent edge reliability probabilities \(\pi:E\to[0,1]\), and two distinguished vertices \(s,t\in V\), the two-terminal network reliability problem asks for the probability that there is a path from \(s\) to \(t\). The problem is known to be \(\mathsf{\#P}\)-hard even on DAGs [28, Table 2]. The existence of an FPRAS for the two-terminal network reliability problem is a long-standing open question [21], and the case of DAGs was explicitly left open by Zenklusen and Laumanns [36]. Our results allow us to answer in the affirmative:
**Result 1.5** (Theorem 6.3).: _There exists an FPRAS for the two-terminal network reliability problem over DAGs._
This result and our approximability results follow from the observation that path queries on directed acyclic graphs admit a compact representation of their Boolean provenance as _non-deterministic ordered binary decision diagrams_ (nOBDDs). We are then able to use a recent result by Arenas et al. [8, Corollary 4.5] giving an FPRAS for counting the satisfying assignments of an nOBDD, adapted to the weighted setting.
Paper Structure.In Section 2, we review some of the technical background. We then present our main results on approximability, divided into the labelled and unlabelled case, in Sections 3 and 4 respectively. Next, in Section 5, we show lower bounds on DNNF provenance circuit sizes. In Section 6, we show some consequences for previous work [33], as well as for the two-terminal network reliability problem. We conclude in Section 7.
## 2 Preliminaries
We provide some technical background below, much of which closes follows that in [4] and [7].
Graphs and Graph Homomorphisms.Let \(\sigma\) be an non-empty finite set of labels. When \(|\sigma|>1\), we say that we are in the _labelled setting_, and when \(|\sigma|=1\), the _unlabelled setting_. In this paper, we study only _directed_ graphs with edge labels from \(\sigma\). A graph \(G\) over \(\sigma\) is a tuple \((V,E,\lambda)\) with finite non-empty vertex set \(V\), edge set \(E\subseteq V^{2}\), and \(\lambda\colon E\to\sigma\) a labelling function mapping each edge to a single label (we may omit \(\lambda\) in the unlabelled setting). The _size_\(|G|\) of \(G\) is its number of edges. We write \(x\xrightarrow{R}y\) for an edge \(e=(x,y)\in E\) with label \(\lambda(e)=R\), and \(x\xrightarrow{}y\) for \((x,y)\in E\) (no matter the edge label). A graph \(H=(V^{\prime},E^{\prime},\lambda^{\prime})\) is a _subgraph_ of \(G\), written \(H\subseteq G\), if \(V=V^{\prime}\), \(E^{\prime}\subseteq E\), and \(\lambda^{\prime}\) is the restriction of \(\lambda\) to \(E\).
A _graph homomorphism_\(h\) from a graph \(G=(V_{G},E_{G},\lambda_{G})\) to a graph \(H=(V_{H},E_{H},\lambda_{H})\) is a function \(h:V_{G}\to V_{H}\) such that, for all \((u,v)\in E_{G}\), we have \((h(u),h(v))\in E_{H}\) and \(\lambda_{H}((h(u),h(v))=\lambda_{G}((u,v))\). We write \(G\rightsquigarrow H\) to say that such a homomorphism exists.
Probabilistic Graphs and Probabilistic Graph Homomorphism.A _probabilistic graph_ is a pair \((H,\pi)\), where \(H\) is a graph with edge labels from \(\sigma\), and \(\pi:E\to[0,1]\) is a probability labelling on the edges. Note that edges \(e\) in \(H\) are annotated both by their probability value \(\pi(e)\) and their \(\sigma\)-label \(\lambda(e)\). Intuitively, \(\pi\) gives us a succinct specification of a probability distribution over the \(2^{|H|}\) possible subgraphs of \(H\), by independently including each edge \(e\) with probability \(\pi(e)\). Formally, the distribution induced by \(\pi\) on the subgraphs \(H^{\prime}\subseteq H\) is defined by \(\Pr_{\pi}(H^{\prime})=\prod_{e\in E^{\prime}}\pi(e)\prod_{e\in E\setminus E^{ \prime}}(1-\pi(e))\).
In this paper, we study the _probabilistic graph homomorphism_ problem \(\mathsf{PHom}\) for a fixed set of labels \(\sigma\): given a graph \(G\) called the _query graph_ and a probabilistic graph \((H,\pi)\) called the _instance graph_, both using labels from \(\sigma\), we must compute the probability \(\Pr_{\pi}(G\rightsquigarrow H)\) that a subgraph of \(H\), sampled according to the distribution induced by \(\pi\), admits a homomorphism from \(G\). That is, we must compute \(\Pr_{\pi}(G\rightsquigarrow H):=\sum_{H^{\prime}\subseteq H\text{ s.t. }G\rightsquigarrow H^{\prime}}\Pr_{\pi}(H^{\prime})\).
We study \(\mathsf{PHom}\) in _combined complexity_, i.e., when both the query graph \(G\) and instance graph \((H,\pi)\) are given as input. Further, we study \(\mathsf{PHom}\) when we restrict \(G\) and \(H\) to be taken from specific _graph classes_, i.e., infinite families of (non-probabilistic) graphs, denoted respectively \(\mathcal{G}\) and \(\mathcal{H}\). (Note that \(\mathcal{H}\) does not restrict the probability labelling \(\pi\).) To distinguish the _labelled_ and _unlabelled_ setting, we denote by \(\mathsf{PHom}_{\mathsf{L}}(\mathcal{G},\mathcal{H})\) the problem of computing \(\Pr_{\pi}(G\rightsquigarrow H)\) for \(G\in\mathcal{G}\) and \((H,\pi)\) with \(H\in\mathcal{H}\) when the fixed set of allowed
labels in \(\mathcal{G}\) and \(\mathcal{H}\) has cardinality \(|\sigma|>1\), and likewise write \(\mathsf{PHom}_{\!\!\nu}(\mathcal{G},\mathcal{H})\) when \(\mathcal{G}\) and \(\mathcal{H}\) are classes of unlabelled graphs. We focus on _approximation algorithms_: fixing classes \(\mathcal{G}\) and \(\mathcal{H}\), a _fully polynomial-time randomized approximation scheme_ (FPRAS) for \(\mathsf{PHom}_{\!\!\nu}(\mathcal{G},\mathcal{H})\) (in the labelled setting) or \(\mathsf{PHom}_{\!\!\nu}(\mathcal{G},\mathcal{H})\) (in the unlabelled setting) is a randomized algorithm that runs in time \(\mathsf{poly}(|G|,|H|,\epsilon^{-1})\) on inputs \(G\in\mathcal{G}\), \((H,\pi)\) for \(H\in\mathcal{H}\), and \(\epsilon>0\). The algorithm must return, with probability at least \(3/4\), a _multiplicative approximation_ of the probability \(\Pr_{\pi}(G\rightsquigarrow H)\), i.e., a value between \((1-\epsilon)\Pr_{\pi}(G\rightsquigarrow H)\) and \((1+\epsilon)\Pr_{\pi}(G\rightsquigarrow H)\).
Graph Classes.We study \(\mathsf{PHom}\) on the following graph classes, which are defined on a graph \(G\) with edge labels from \(\sigma\), and are either labelled or unlabelled depending on \(\sigma\):
* \(G\) is a _one-way path_ (1WP) if it is of the form \(a_{1}\xrightarrow{R_{1}}\ldots\xrightarrow{R_{m-1}}a_{m}\) for some \(m\), with all \(a_{1},\ldots,a_{m}\) being pairwise distinct, and with \(R_{i}\in\sigma\) for \(1\leq i<m\).
* \(G\) is a _two-way path_ (2WP) if it is of the form \(a_{1}\ -\ \ldots\ -\ a_{m}\) for some \(m\), with pairwise distinct \(a_{1},\ldots,a_{m}\), and each \(-\) being \(\xrightarrow{R_{i}}\) or \(\xleftarrow{R_{i}}\) (but not both) for some label \(R_{i}\in\sigma\).
* \(G\) is a _downwards tree_ (DWT) if it is a rooted unranked tree (each node can have an arbitrary number of children), with all edges pointing from parent to child in the tree.
* \(G\) is a _polytree_ (PT) if its underlying undirected graph is a rooted unranked tree, without restrictions on the edge directions.
* \(G\) is a _DAG_ (DAG) if it is connected (i.e., the underlying undirected graph is connected) and (directed) acyclic.
* \(G\) is _connected_ (Conn) if it is an arbitrary connected graph.
These refine the classes of connected queries considered in [7], by adding the DAG class. Note that both 2WP and DWT generalize 1WP and are incomparable; PT generalizes both 2WP and DWT; DAG generalizes PT; Conn generalizes DAG (see Figure 2 of [7]).
Boolean Provenance.We use the notion of _Boolean provenance_, or simply _provenance_ [19, 3, 29], to show both upper and lower bounds. Let \(G=(V_{G},E_{G},\lambda_{G})\) and \(H=(V_{H},E_{H},\lambda_{H})\) be graphs. For \(a_{i}\in V_{G}\) and \(b_{i}\in V_{H}\), denote by \(\mathsf{Prov}_{H}^{G}(a_{1}:=b_{1},\ldots,a_{n}:=b_{n})\) the function mapping every valuation \(\nu\) of \(E_{H}\) to \(1\) (true) or \(0\) (false), depending on whether there is a homomorphism \(h\) from \(G\) to the subgraph \(\{e\in H\ |\ \nu(e)=1\}\subseteq H\) such that \(h(a_{i})=b_{i}\) for all \(1\leq i\leq n\). When no constraints on the homomorphism are given, we simply write \(\mathsf{Prov}_{H}^{G}\), and call this function the _provenance_ of \(G\) on \(H\). For our lower bounds, we will often seek to represent Boolean formulas as the provenance of queries on graphs:
Given a Boolean formula \(\phi\) on variables \(x_{1},\ldots,x_{n}\), two graphs \(G\) and \(H\), and an \(n\)-tuple \((e_{1},\ldots,e_{n})\) of pairwise distinct edges of \(H\), we say that \(\mathsf{Prov}_{H}^{G}\) represents \(\phi\) on \((e_{1},\ldots,e_{n})\) if the following is true: renaming the variables \(e_{1},\ldots,e_{n}\) of \(\mathsf{Prov}_{H}^{G}\) to \(x_{1},\ldots,x_{n}\), and fixing the other variables to be \(1\), then we obtain exactly \(\phi\).
Circuits and Knowledge Compilation.We consider representations of Boolean functions in terms of _non-deterministic (ordered) binary decision diagrams_, as well as _decomposable circuits_, which we define below.
A _non-deterministic binary decision diagram_ (nBDD) on a set of variables \(V=\{v_{1},\ldots,v_{n}\}\) is a rooted DAG \(D\) whose nodes carry a label in \(V\sqcup\{0,1,\lor\}\) and whose edges can carry an optional label in \(\{0,1\}\), subject to the following requirements:
1. there are exactly two leaves (called _sinks_), one labelled by \(1\) (the \(1\)_-sink_), and the other by \(0\) (the \(0\)_-sink_);
2. internal nodes are labelled either by \(\vee\) (called an _\(\vee\)-node_) or by a variable of \(V\) (called a _decision node_); and
3. each decision node has exactly two outgoing edges, labelled \(0\) and \(1\); the outgoing edges of \(\vee\)-nodes are unlabelled.
The size \(|D|\) of \(D\) is its number of edges. Let \(\nu\) be a valuation of \(V\), and let \(\pi\) be a path in \(D\) going from the root to one of the sinks. We say that \(\pi\) is _compatible_ with \(\nu\) if for every decision node \(n\) of the path, letting \(v\in V\) be the variable labelling \(n\), then \(\pi\) passes through the outgoing edge of \(n\) labelled with \(\nu(v)\). In particular, no constraints are imposed at \(\vee\)-nodes; thus, we may have that multiple paths are compatible with a single valuation. The nBDD \(D\) _represents_ a Boolean function, also written \(D\) by abuse of notation, which is defined as follows: for each valuation \(\nu\) of \(V\), we set \(D(\nu):=1\) if there exists a path \(\pi\) from the root to the \(1\)-sink of \(D\) that is compatible with \(\nu\), and set \(D(\nu):=0\) otherwise. Given an nBDD \(D\) over variables \(V\), we denote by \(\mathsf{Mods}(D)\) the set of valuations \(\nu\) such that \(D(\nu)=1\), and by \(\mathsf{MC}(D)\) the number \(|\mathsf{Mods}(D)|\) of such valuations. Further, given a rational probability function \(w:V\to[0,1]\) on the variables of \(V\), define \(\mathsf{WMC}(D,w)\) to be the probability that a random valuation \(\nu\) satisfies \(D\) when each variable \(x\) is independently set to \(1\) with probability \(w(x)\), that is, \(\mathsf{WMC}(D,w)=\sum_{\nu\in\mathsf{Mods}(D)}\prod_{x\in V\text{ s.t. }\nu(x)=1}w(x)\prod_{x\in V\text{ s.t. }\nu(x)=0}\left(1-w(x)\right)\).
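To make these semantics concrete, here is a small Python sketch (an ad-hoc encoding of our own, purely illustrative) that evaluates an nBDD on a valuation and computes \(\mathsf{WMC}\) by brute-force enumeration of valuations; \(\mathsf{MC}(D)\) is the special case where every weight is \(1/2\), up to a factor \(2^{|V|}\).

```python
from itertools import product

# A node is ("sink", 0 or 1), ("or", [children]), or ("decision", var, lo, hi),
# where lo/hi are the successors reached when the tested variable is 0/1.
def accepts(nodes, root, nu):
    """D(nu): is some root-to-1-sink path compatible with the valuation nu?"""
    node = nodes[root]
    if node[0] == "sink":
        return node[1] == 1
    if node[0] == "or":
        return any(accepts(nodes, child, nu) for child in node[1])
    _, var, lo, hi = node
    return accepts(nodes, hi if nu[var] else lo, nu)

def wmc_bruteforce(nodes, root, w):
    """WMC(D, w) by enumerating all valuations (exponential; reference only)."""
    variables = sorted({n[1] for n in nodes.values() if n[0] == "decision"})
    total = 0.0
    for bits in product([0, 1], repeat=len(variables)):
        nu = dict(zip(variables, bits))
        if accepts(nodes, root, nu):
            weight = 1.0
            for v in variables:
                weight *= w[v] if nu[v] else 1.0 - w[v]
            total += weight
    return total
```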
In this paper, we primarily focus on a subclass of nBDDs called _non-deterministic ordered binary decision diagrams_ (nOBDDs). An nOBDD \(D\) is an nBDD for which there exists a strict total order \(\prec\) on the variables \(V\) such that, for any two decision nodes \(n\neq n^{\prime}\) such that there is a path from \(n\) to \(n^{\prime}\), then, letting \(v\) and \(v^{\prime}\) be the variables that respectively label \(n\) and \(n^{\prime}\), we have \(v\prec v^{\prime}\). This implies that, along any path going from the root to a sink, the sequence of variables will be ordered according to \(\prec\), with each variable occurring at most once. We use nOBDDs because they admit tractable approximate counting of their satisfying assignments, as we discuss later.
We also show lower bounds on a class of _circuits_, called _decomposable negation normal form_ (DNNF) circuits. A _circuit_ on a set of variables \(V\) is a directed acyclic graph \(C=(G,W)\), where \(G\) is a set of _gates_, where \(W\subseteq G\times G\) is a set of edges called _wires_, and where we distinguish an _output gate_\(g_{0}\in G\). The _inputs_ of a gate \(g\in G\) are the gates \(g^{\prime}\) such that there is a wire \((g^{\prime},g)\) in \(W\). The gates can be labelled with variables of \(V\) (called a _variable gate_), or with the Boolean operators \(\vee\), \(\wedge\), and \(\neg\). We require that gates labelled with variables have no inputs, and that gates labelled with \(\neg\) have exactly one input. A circuit \(C\) defines a Boolean function on \(V\), also written \(C\) by abuse of notation. Formally, given a valuation \(\nu\) of \(V\), we define inductively the _evaluation_\(\nu^{\prime}\) of the gates of \(C\) by setting \(\nu^{\prime}(g):=\nu(v)\) for a variable-gate \(g\) labelled with variable \(v\), and setting \(\nu^{\prime}(g)\) for other gates to be the result of applying the Boolean operators of \(g\) to \(\nu^{\prime}(g_{1}),\ldots,\nu^{\prime}(g_{n})\) for the inputs \(g_{1},\ldots,g_{n}\) of \(g\). We then define \(C(\nu)\) to be \(\nu^{\prime}(g_{0})\) where \(g_{0}\) is the output gate of \(C\).
The circuit is in _negation normal form_ if negations are only applied to variables, i.e., for every \(\neg\)-gate, its input is a variable gate. The circuit is _decomposable_ if the \(\wedge\)-gates always apply to inputs that depend on disjoint variables: formally, there is no \(\wedge\)-gate \(g\) with two distinct inputs \(g_{1}\) and \(g_{2}\), such that some variable \(v\) labels two variable gates \(g^{\prime}_{1}\) and \(g^{\prime}_{2}\) with \(g^{\prime}_{1}\) having a directed path to \(g_{1}\) and \(g^{\prime}_{2}\) having a directed path to \(g_{2}\). A _DNNF_ is a circuit which is both decomposable and in negation normal form. Note that we can translate nOBDDs in linear time to DNNFs, more specifically to _structured DNNFs_[4, Proposition 3.8].
Approximate Weighted Counting for nOBDDs.Recently, Arenas et al. [9] showed the following result on approximate counting of satisfying assignments of an nOBDD.
**Theorem 2.2** (Corollary 4.5 of [8]).: _Let \(D\) be an nOBDD. Then there exists an FPRAS for computing \(\mathsf{MC}(D)\)._
For our upper bounds, we need a slight strengthening of this result to apply to _weighted model counting_ (WMC) in order to handle probabilities. This can be achieved by translating the approach used in [33, Section 5.1] to the nOBDD setting. We thus show (see Appendix A):
**Theorem 2.3**.: _Let \(D\) be an nOBDD, and \(w:\mathsf{vars}(D)\to[0,1]\) be a rational probability function defined on the variables appearing in \(D\). Then there exists an FPRAS for computing \(\mathsf{WMC}(D,w)\), running in time polynomial in \(|D|\) and the representation of \(w\)._
## 3 Results in the Labelled Setting
We now move on to the presentation of our results. We start with the _labelled_ setting of probabilistic graph homomorphism in which the fixed signature \(\sigma\) of the query and instance graph contains more than one label (\(|\sigma|>1\)). Our results are summarized in Table 1(a).
1WP on DAG.We start by showing the tractability of approximation for \(\mathsf{PHom}_{\mathsf{L}}(\mathsf{1WP},\mathsf{DAG})\), which also implies tractability of approximation for \(\mathsf{PHom}_{\mathsf{L}}(\mathsf{1WP},\mathsf{PT})\), since \(\mathsf{PT}\subseteq\mathsf{DAG}\).
**Proposition 3.1**.: \(\mathsf{PHom}_{\mathsf{L}}(\mathsf{1WP},\mathsf{DAG})\) _is \(\#\mathsf{P}\)-hard already in data complexity, but it admits an FPRAS._
For \(\#\mathsf{P}\)-hardness, the result already holds in the unlabelled setting, so it will be shown in Section 4 (see Proposition 4.1). Hence, we focus on the upper bound. We rely on the notion of a _topological ordering_ of the edges of a directed acyclic graph \(H=(V,E)\): it is simply a strict total order \((E,\prec)\) with the property that for every consecutive pair of edges \(e_{1}=(a_{1},a_{2})\) and \(e_{2}=(a_{2},a_{3})\), we have that \(e_{1}\prec e_{2}\). Let us fix such an ordering.
Proof of Proposition 3.1.: We will show that every \(\mathsf{1WP}\) query on a DAG instance admits an nOBDD representation of its provenance, which we can compute in combined polynomial time. We can then apply Theorem 2.3, from which the result follows. Let \(G=a_{1}\xrightarrow{R_{1}}\ldots\xrightarrow{R_{m}}a_{m+1}\) be the input path query, and \(H\) the instance graph. We make the following claim:
\(\rhd\) Claim 3.2.For every \(v\in H\), we can compute in time \(O(|G|\times|H|)\) an nOBDD representing \(\mathsf{Prov}_{H}^{G}(a_{1}:=v)\) which is ordered by the topological ordering \(\prec\) fixed above.
Proof.: Writing \(H=(V,E)\), we build an nBDD \(D\) consisting of the two sinks and of the following nodes:
* \(|V|\times|G|\)\(\vee\)-nodes written \(n_{u,i}\) for \(u\in V\) and \(1\leq i\leq m\); and
* \(|E|\times|G|\) decision nodes written \(d_{e,i}\) for \(e\in E\) and \(1\leq i\leq m\) which test the edge \(e\).
Each \(\vee\)-node \(n_{u,i}\) for \(u\in V\) and \(1\leq i\leq m\) has outgoing edges to each \(d_{e,i}\) for every edge \(e\) emanating from \(u\) which is labelled \(R_{i}\). For each decision node \(d_{e,i}\), letting \(w\) be the target of edge \(e\), then \(d_{e,i}\) has an outgoing \(0\)-edge to the \(0\)-sink and an outgoing \(1\)-edge to either \(n_{w,i+1}\) if \(i<m\) or to the \(1\)-sink if \(i=m\). The root of the nBDD is the node \(n_{v,1}\).
This construction clearly respects the time bound. To check correctness of the resulting nBDD, it is immediate to observe that, for any path from the root to a sink, the sequence of decision nodes traversed is of the form \(d_{e_{1},1},\ldots,d_{e_{k},k}\) where the \(e_{1},\ldots,e_{k}\) form a path of consecutive edges starting at \(v\) and successively labelled \(R_{1},\ldots,R_{k}\). This implies that the nBDD is in fact an nOBDD ordered by \(\prec\). Further, such a path reaches the \(1\)-sink iff \(k=m\) and all decisions are positive, which implies that whenever the nOBDD accepts a subgraph \(H^{\prime}\) of \(H\) then indeed \(H^{\prime}\) contains a match of \(G\) mapping \(a_{1}\) to \(v\). For the converse direction, we observe that, for any subgraph \(H^{\prime}\) of \(H\) containing a match of \(G\) mapping \(a_{1}\) to \(v\), then, letting \(e_{1},\ldots,e_{m}\) be the successive edges traversed in the match of \(G\), there is a path from the root of \(D\) to the \(1\)-sink which tests these edges in order. This establishes correctness and concludes the proof of the claim.
Now observe that \(\mathsf{Prov}_{H}^{G}=\mathsf{Prov}_{H}^{G}(a_{1}:=v_{1})\ \vee\ \cdots\ \vee\ \mathsf{Prov}_{H}^{G}(a_{1}:=v_{n})\), where \(v_{1},\ldots,v_{n}\) are precisely the vertices of \(H\). Thus, it suffices to simply take the disjunction of each nOBDD obtained using the process above across every vertex in \(H\), which yields in linear time the desired nOBDD. From here we can apply Theorem 2.3, concluding the proof.
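The construction just described is mechanical, and can be sketched as follows (same hypothetical node encoding as in the earlier sketch; we assume \(m\geq 1\), and some generated nodes may be unreachable, which is harmless).

```python
def onewp_provenance_nobdd(path_labels, H_edges):
    """nOBDD for Prov_H^G, where G is a 1WP with labels path_labels = [R_1,...,R_m]
    and H is a DAG given as a dict edge -> label.  The variables of the nOBDD are
    the edges of H.  Returns (nodes, root) in the ("sink"/"or"/"decision") encoding."""
    m = len(path_labels)
    vertices = sorted({x for e in H_edges for x in e})
    nodes = {"s0": ("sink", 0), "s1": ("sink", 1)}
    for e in H_edges:                               # decision nodes d_{e,i}: test edge e
        for i in range(1, m + 1):
            hi = ("n", e[1], i + 1) if i < m else "s1"
            nodes[("d", e, i)] = ("decision", e, "s0", hi)
    for u in vertices:                              # or-nodes n_{u,i}: pick an R_i-edge out of u
        for i in range(1, m + 1):
            succ = [("d", e, i) for e, lab in H_edges.items()
                    if e[0] == u and lab == path_labels[i - 1]]
            nodes[("n", u, i)] = ("or", succ)
    # Prov_H^G is the disjunction over all possible images of a_1
    nodes["root"] = ("or", [("n", u, 1) for u in vertices])
    return nodes, "root"
```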
1WP on arbitrary graphs.We show, however, that tractability of approximation does _not_ continue to hold when relaxing the instance class from DAG to arbitrary connected graphs. This also implies that more expressive classes of query graphs, such as \(\mathsf{2WP}\), \(\mathsf{DWT}\), and \(\mathsf{PT}\), cannot be tractable to approximate on Conn instances either.
**Proposition 3.3**.: \(\mathsf{PHom}_{\mathsf{L}}(\mathsf{1WP},\mathsf{Conn})\) _does not admit an FPRAS unless \(\mathsf{RP}=\mathsf{NP}\)._
Proof.: Our result hinges on the following claim:
**Claim 3.4**.: Let \(d>1\) be a constant. Given a monotone \(2\)-CNF formula \(\phi\) on \(n\) variables where each variable occurs in at most \(d\) clauses, we can build in time \(O(|\phi|)\) a \(\mathsf{1WP}\)\(G_{\phi}\) and Conn graph \(H_{\phi}\) containing edges \((e_{1},\ldots,e_{n})\) such that \(\mathsf{Prov}_{H_{\phi}}^{G_{\phi}}\) represents \(\phi\) on \((e_{1},\ldots,e_{n})\).
Proof.: Let \(\phi=\bigwedge_{1\leq i\leq m}(X_{f_{1}(i)}\lor X_{f_{2}(i)})\) be the input CNF instance over the variables \(\{X_{1},\ldots,X_{n}\}\). As we are in the labelled setting, let \(U\) and \(R\) be two distinct labels from the signature. Define the \(\mathsf{1WP}\) query graph \(G_{\phi}\) to be \(\stackrel{{ U}}{{\rightarrow}}\left(\stackrel{{ R}}{{\rightarrow}}^{d+2}\stackrel{{ U}}{{\rightarrow}}\right)^{m}\). The instance Conn graph \(H_{\phi}\) is defined in the following way:
* For all \(1\leq i\leq n\), add an edge \(a_{i}\stackrel{{ R}}{{\rightarrow}}b_{i}\).
* Add an edge \(c_{0}\stackrel{{ U}}{{\rightarrow}}d_{0}\) and for each clause \(1\leq j\leq m\), an edge \(c_{j}\stackrel{{ U}}{{\rightarrow}}d_{j}\).
* For each clause \(1\leq j\leq m\) and variable \(X_{i}\) occurring in that clause, let \(p\) be the number of this occurrence of \(X_{i}\) in the formula (_i.e._, the occurrence of \(X_{i}\) in the \(j\)-th clause is the \(p\)-th occurrence of \(X_{i}\)), with \(1\leq p\leq d\) by assumption on \(\phi\). Then add a path of length \(p\) of \(R\)-edges from \(d_{j-1}\) to \(a_{i}\) and a path of length \((d+1)-p\) of \(R\)-edges from \(b_{i}\) to \(c_{j}\).
The construction of \(G_{\phi}\) and \(H_{\phi}\) is in \(O(|\phi|)\). Furthermore, notice the following (\(\star\)). For any \(1\leq i\leq n\), the edge \(e=a_{i}\stackrel{{ R}}{{\rightarrow}}b_{i}\) has at most \(d\) incoming \(R\)-paths and \(d\) outgoing \(R\)-paths; the outgoing paths have pairwise distinct _length_ (_i.e._, the number of edges until the next edge is a \(U\)-edge), and likewise for the incoming paths. What is more, each incoming \(R\)-path of length \(p\) corresponds to an outgoing path of length \((d+1)-p\) and together they connect some \(d_{j-1}\) to some \(c_{j}\) via the edge \(e\), where the \(j\)-th clause contains variable \(X_{i}\).
Now, define \((e_{1},\ldots,e_{n})\) to be precisely the edges of the form \(a_{i}\stackrel{{ R}}{{\rightarrow}}b_{i}\) for every \(1\leq i\leq n\). Intuitively, the presence or absence of each of these edges corresponds to the valuation of each variable in \(\phi\). We claim that \(\mathsf{Prov}_{H_{\phi}}^{G_{\phi}}\) represents \(\phi\) on \((e_{1},\ldots,e_{n})\). It will suffice to show that there is a bijection between the satisfying valuations of \(\phi\), and the subgraphs of \(H_{\phi}\) that both (i) contain all the edges not in \((e_{1},\ldots,e_{n})\), as these are fixed to \(1\), and (ii) admit a homomorphism from \(G_{\phi}\).
Indeed, consider the bijection defined in the obvious way: keep the edge \(a_{i}\stackrel{{ R}}{{\rightarrow}}b_{i}\) iff \(X_{i}\) is assigned to true in the valuation. First suppose that some valuation of \(\{X_{1},\ldots,X_{n}\}\) satisfies \(\phi\). Then, for each clause \(1\leq j\leq m\), there is a variable in the clause which evaluates to true. We build a match of \(G_{\phi}\) on the corresponding possible world of \(H_{\phi}\) by mapping the \(j\)-th \(U\)-edge to \(c_{j}\stackrel{{ U}}{{\rightarrow}}d_{j}\) for all \(0\leq j\leq m\), and mapping the \(R\)-paths for each \(1\leq j\leq m\) by picking a variable \(X_{i}\) witnessing that the clause is satisfied and going via the path of length \(1+(p)+((d+1)-p)=d+2\) that uses the edge \(a_{i}\stackrel{{ R}}{{\rightarrow}}b_{i}\), which is present by assumption.
Conversely, assume that we have a match of \(G_{\phi}\) on a possible world of \(H_{\phi}\). We show that the corresponding valuation satisfies \(\phi\). Consider the edge \(c_{j}\stackrel{{ U}}{{\rightarrow}}d_{j}\) to which the first \(U\)-edge is mapped. The \(R\)-path that follows must be mapped to a path from \(d_{j}\) to some \(a_{i}\), and then take the edge \(a_{i}\stackrel{{ R}}{{\rightarrow}}b_{i}\), whose presence witnesses that the corresponding variable \(X_{i}\) is true. But importantly, in order for the path to have length precisely \(d+2\) before reaching the next \(U\)-edge, it must be the case that the length of the path before and after the edge \(a_{i}\stackrel{{ R}}{{\rightarrow}}b_{i}\) sums up to \(d+1\). As a result of (\(\star\)), this is only possible by taking a path that leads to \(c_{j+1}\stackrel{{ U}}{{\rightarrow}}d_{j+1}\), and so we know that variable \(X_{i}\) occurs in the \(j\)-th clause so that clause is satisfied. Repeating the argument shows that all clauses from the \(j\)-th onwards are satisfied, and as we have \(m+1\)\(U\)-edges in the graph \(H_{\phi}\) and \(m+1\)\(U\)-edges in the graph \(G_{\phi}\) we know that in fact we must have mapped the first \(U\)-edge to the first \(U\)-edge (_i.e._, \(j=0\)), and all clauses are satisfied. \(\lhd\)
By [30, Theorem 2], counting the independent sets of a graph of maximal degree \(6\) admits an FPRAS only if \(\mathsf{RP}=\mathsf{NP}\). It is not hard to see that this problem is equivalent to counting satisfying assignments of a monotone \(2\)-CNF formula in which a variable can appear in up to \(6\) clauses (see, for example, [24, Proposition 1.1]). Thus, we can apply Claim 3.4 above for the class of formulas in which \(d=6\) to obtain (deterministic) graphs \(G_{\phi}\) and \(H_{\phi}\), and then build a probabilistic graph \(H^{\prime}_{\phi}\) identical to \(H_{\phi}\), in which the edges \((e_{1},\ldots,e_{n})\) are assigned probability \(0.5\) and all other edges probability \(1\), giving the desired reduction.
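For readers who want to experiment, the gadget of Claim 3.4 can be generated mechanically; the following sketch (vertex names and helper functions are ours, not from the paper) builds the query and the instance from a clause list.

```python
def cnf_to_1wp_conn(clauses, n, d):
    """Claim 3.4 gadget for a monotone 2-CNF with every variable in <= d clauses.
    clauses: list of pairs (i1, i2) of variable indices in 1..n.
    Returns (query, H, var_edges): the 1WP  U (R^{d+2} U)^m  as a label sequence,
    the Conn instance as a dict edge -> label, and the edge encoding each X_i."""
    m = len(clauses)
    query = ["U"] + (["R"] * (d + 2) + ["U"]) * m
    H, var_edges = {}, {}
    for i in range(1, n + 1):
        H[(("a", i), ("b", i))] = "R"
        var_edges[i] = (("a", i), ("b", i))
    for j in range(m + 1):
        H[(("c", j), ("d", j))] = "U"
    occurrences = {i: 0 for i in range(1, n + 1)}
    for j, clause in enumerate(clauses, start=1):
        for i in clause:
            occurrences[i] += 1
            p = occurrences[i]                 # p-th occurrence of X_i, with p <= d
            _r_path(H, ("d", j - 1), ("a", i), p, ("in", i, j))
            _r_path(H, ("b", i), ("c", j), d + 1 - p, ("out", i, j))
    return query, H, var_edges

def _r_path(H, src, dst, length, tag):
    """Add a directed path of `length` R-labelled edges from src to dst (fresh middles)."""
    prev = src
    for k in range(1, length):
        mid = ("m", tag, k)
        H[(prev, mid)] = "R"
        prev = mid
    H[(prev, dst)] = "R"
```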
DWT on DWT.Having classified the cases of one-way path queries (\(\mathsf{1WP}\)) on all instances classes considered, we turn to more expressive queries. The next two query classes to consider are two-way path queries (\(\mathsf{2WP}\)) and downwards trees queries (\(\mathsf{DWT}\)). For these query classes, exact computation on \(\mathsf{2WP}\) instances is tractable by [7], so the first case to classify is that of \(\mathsf{DWT}\) instances. Exact computation is intractable in this case by [7], and we show here that, unfortunately, approximation is intractable as well, so that the border for exact tractability coincides with that for approximate tractability. We first focus on \(\mathsf{DWT}\) queries:
**Proposition 3.5**.: \(\mathsf{PHom_{L}}(\mathsf{DWT},\mathsf{DWT})\) _does not admit an FPRAS unless \(\mathsf{RP}=\mathsf{NP}\)._
Proof.: Our result hinges on the following, whose proof adapts [25, Proposition 2.4.3]:
* Claim 3.6. Given a monotone \(2\)-CNF formula \(\phi\) on \(n\) variables, we can build in time \(O(|\phi|\log|\phi|)\)\(\mathsf{DWT}\) graphs \(G_{\phi}\) and \(H_{\phi}\), with the latter containing edges \((e_{1},\ldots,e_{n})\) such that \(\mathsf{Prov}_{H_{\phi}}^{G_{\phi}}\) represents \(\phi\) on \((e_{1},\ldots,e_{n})\).
Proof.: Let \(\phi=\bigwedge_{1\leq i\leq m}(X_{f_{1}(i)}\lor X_{f_{2}(i)})\) be the input CNF instance over the variables \(\{X_{1},\ldots,X_{n}\}\). We let \(L=\lceil\log_{2}m\rceil\) be the number of bits needed to write clause numbers in binary. As we are in the labelled setting, let \(0\) and \(1\) be two distinct labels from the signature. Construct the query graph \(G_{\phi}\) as follows:
* For all \(1\leq i\leq m\), add an edge \(z\xrightarrow{0}x_{i}\).
* For each \(1\leq i\leq m\), letting \(b_{1}\cdots b_{L}\) be the clause number \(i\) written in binary, add a path of \(L\) edges \(x_{i}\xrightarrow{b_{1}}y_{i,1}\xrightarrow{b_{2}}\cdots\xrightarrow{b_{L-1}}y_{i,L-1}\xrightarrow{b_{L}}y_{i,L}\).

Now, construct the \(\mathsf{DWT}\) instance \(H_{\phi}\) as follows:
* For all \(1\leq i\leq n\), add the edges \(a\xrightarrow{0}c_{i}\).
* For all \(1\leq i\leq n\) and \(1\leq j\leq m\) such that \(X_{i}\) occurs in the \(j\)-th clause of \(\phi\) (_i.e._, \(i=f_{1}(j)\) or \(i=f_{2}(j)\)), letting \(b_{1}\cdots b_{L}\) be the clause number \(j\) written in binary, add a path of \(L\) edges \(c_{i}\xrightarrow{b_{1}}d_{i,j,1}\xrightarrow{b_{2}}\cdots\xrightarrow{b_{L-1}}d_{i,j,L-1}\xrightarrow{b_{L}}d_{i,j,L}\).
It is clear that \(G_{\phi}\in\mathsf{DWT}\), \(H_{\phi}\in\mathsf{DWT}\), and that both graphs can be built in time \(O(|\phi|\log|\phi|)\). Now, define \((e_{1},\ldots,e_{n})\) to be the edges of the form \(a\xrightarrow{0}c_{i}\) for every \(1\leq i\leq n\).
We claim that \(\mathsf{Prov}_{H_{\phi}}^{G_{\phi}}\) represents \(\phi\) on \((e_{1},\ldots,e_{n})\). It suffices to show that there is a bijection between the satisfying valuations \(\nu\) of \(\phi\), and the subgraphs of \(H_{\phi}\) that both (i) contain all the edges not in \((e_{1},\ldots,e_{n})\), as these are fixed to \(1\), and (ii) admit a homomorphism from \(G_{\phi}\). Indeed, consider the bijection defined in the obvious way: keep the edge \(a\xrightarrow{0}c_{i}\) iff \(X_{i}\) is assigned to true in the valuation. First, if there is a homomorphism from \(G_{\phi}\) to such a subgraph, then the root \(z\) of the query must be mapped to \(a\) (since this is the only element with outgoing paths of length \(L+1\) as prescribed by the query), and then it is clear that the image of any such homomorphism must take the form of a subgraph that contains, for each clause number \(1\leq i\leq m\), a path of length \(L\) representing this clause number. This witnesses that the valuation \(\nu\) makes a variable true which satisfies clause \(i\). Hence, \(\nu\) is a satisfying assignment of \(\phi\). Conversely, for every satisfying assignment \(\nu\), considering the corresponding subgraph of \(H_{\phi}\), we can construct a homomorphism mapping the edges of \(G_{\phi}\) to the edges of \(H_{\phi}\), by mapping the path of every clause to a path connected to a variable that witnesses that this clause is satisfied by \(\nu\).
The result then follows by an argument analogous to the one in Proposition 3.3.
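As with Claim 3.4, the gadget of Claim 3.6 is easy to generate; the sketch below (our own vertex names; clause numbers are encoded by their zero-based index, a harmless variant of the construction above) builds both DWT graphs from a clause list.

```python
def cnf_to_dwt_pair(clauses, n):
    """Claim 3.6 gadget.  clauses: list of pairs (i1, i2) of variable indices in 1..n.
    Returns (G, H, var_edges) as dicts edge -> label, with labels '0' / '1'."""
    m = len(clauses)
    L = max(1, (m - 1).bit_length())          # bits needed for clause numbers

    def code(j):
        return format(j - 1, "b").zfill(L)    # clause j encoded in L bits (zero-based)

    G, H, var_edges = {}, {}, {}
    for j in range(1, m + 1):                 # query: one branch per clause
        G[("z", ("x", j))] = "0"
        prev = ("x", j)
        for k, b in enumerate(code(j), start=1):
            G[(prev, ("y", j, k))] = b
            prev = ("y", j, k)
    for i in range(1, n + 1):                 # instance: one branch per variable
        var_edges[i] = ("a", ("c", i))
        H[var_edges[i]] = "0"
        for j in range(1, m + 1):
            if i not in clauses[j - 1]:
                continue                      # only clauses containing X_i
            prev = ("c", i)
            for k, b in enumerate(code(j), start=1):
                H[(prev, ("v", i, j, k))] = b
                prev = ("v", i, j, k)
    return G, H, var_edges
```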
2WP on DWT.We then move to \(\mathsf{2WP}\) queries:
**Proposition 3.7**.: \(\mathsf{PHom_{L}}(\mathsf{2WP},\mathsf{DWT})\) _does not admit an FPRAS unless \(\mathsf{RP}=\mathsf{NP}\)._
This result follows from a general reduction technique from \(\mathsf{DWT}\) queries on \(\mathsf{DWT}\) instances to \(\mathsf{2WP}\) queries on \(\mathsf{DWT}\) instances, which allows us to conclude using the result already shown on \(\mathsf{DWT}\) queries (Proposition 3.5). We note that this technique could also have been used to simplify the proofs of hardness of exact computation in [7] and [2]. We claim:
**Lemma 3.8**.: _For any \(\mathsf{DWT}\) query \(G\), we can compute in time \(O(|G|)\) a \(\mathsf{2WP}\) query \(G^{\prime}\) which is equivalent to \(G\) on \(\mathsf{DWT}\) instances: for any \(\mathsf{DWT}\)\(H\), there is a homomorphism from \(G\) to \(H\) iff there is a homomorphism from \(G^{\prime}\) to \(H\)._
For lack of space, we give only the construction of \(G^{\prime}\) here, and defer the full proof of the correctness of this construction to Appendix B.
Proof.: Let \(G\) be a \(\mathsf{DWT}\) query. We build \(G^{\prime}\) following a tree traversal of \(G\). More precisely, we define the translation inductively as follows. If \(G\) is the trivial query with no edges, then we let the translation of \(G\) be the trivial query with no edges. Otherwise, let \(x\) be the root of \(G\), let \(x\xrightarrow{R_{1}}y_{1},\ldots,x\xrightarrow{R_{n}}y_{n}\) be the successive children, and call \(G_{1},\ldots,G_{n}\) the \(\mathsf{DWT}\) subqueries of \(G\) respectively rooted at \(y_{1},\ldots,y_{n}\). We define the translation of \(G\) to be \(\xrightarrow{R_{1}}G_{1}^{\prime}\xleftarrow{R_{1}}\cdots\xrightarrow{R_{n}}G_{n}^{\prime}\xleftarrow{R_{n}}\), where \(G_{1}^{\prime},\ldots,G_{n}^{\prime}\) are the respective translations of \(G_{1}\), \(\ldots\), \(G_{n}\): each child edge is traversed forwards, followed by the translation of the corresponding subtree, and then traversed backwards. This translation is in linear time, and the translated query has twice as many edges as the original query.
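The translation is a straightforward tree traversal; the following sketch (our own encoding of DWT queries as nested lists) makes the induction explicit.

```python
def dwt_to_2wp(children):
    """Lemma 3.8 translation, for a DWT given as a nested list structure:
    a tree is a list of (label, subtree) pairs, one per child edge of the root.
    Returns the 2WP as a list of (label, direction) pairs read along the path,
    '>' for a forward edge and '<' for a backward edge."""
    path = []
    for label, subtree in children:
        path.append((label, ">"))       # descend into the child ...
        path.extend(dwt_to_2wp(subtree))
        path.append((label, "<"))       # ... then climb back to its parent
    return path

# Example: a root with an R-child that itself has an S-child.
print(dwt_to_2wp([("R", [("S", [])])]))  # [('R','>'), ('S','>'), ('S','<'), ('R','<')]
```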
Lemma 3.8 allows us to deduce Proposition 3.7 from Proposition 3.5, as it allows us to reduce in linear time (in combined complexity) the evaluation of a \(\mathsf{DWT}\) query on a \(\mathsf{DWT}\) probabilistic instance to the evaluation of an equivalent \(\mathsf{2WP}\) query on the same instance. This establishes that any approximation algorithm for \(\mathsf{2WP}\) queries on \(\mathsf{DWT}\) instances would give an approximation for \(\mathsf{DWT}\) queries on \(\mathsf{DWT}\) instances, which by Proposition 3.5 is conditionally impossible.
These results complete Table 1, concluding the classification of the complexity of \(\mathsf{PHom}\) in the labelled setting: all cases that were intractable for exact computation are also hard to approximate, with the notable exception of \(\mathsf{1WP}\) queries on DAG instances.
## 4 Results in the Unlabelled Setting
We now turn to the _unlabelled_ setting of probabilistic graph homomorphism, where the signature \(\sigma\) has only one label (\(|\sigma|=1\)). Our results are summarized in Table 1(b): we settle all cases except \(\mathsf{PHom}_{\mathsf{U}}(\mathsf{1WP},\mathsf{Conn})\) and \(\mathsf{PHom}_{\mathsf{U}}(\mathsf{DWT},\mathsf{Conn})\), which remain open.
2WP on PT.In contrast to \(1\mathsf{WP}\) queries, which are exactly tractable on \(\mathsf{PT}\) instances and admit an FPRAS on \(\mathsf{DAG}\) instances, \(2\mathsf{WP}\) queries have no FPRAS already on \(\mathsf{PT}\) instances:
**Proposition 4.3**.: _\(\mathsf{PHom}_{\mathsf{U}}(\mathsf{2WP},\mathsf{PT})\) does not admit an FPRAS unless \(\mathsf{RP}=\mathsf{NP}\)._
of \(f^{\prime}\): indeed, it cannot be mapped backwards on the last edge of the preceding path \(\to^{k+3}\) because \(k+3>1\) and \(i+1>1\) so the next edges \(\to^{i+1}\) would then have no image. Then the next directed path \(\to^{i+1}\) of \(e^{\prime}\) is mapped in \(f^{\prime}\), necessarily forward because we fail if we map the first edge backwards: this implies that there at least as many edges going in that direction in \(f^{\prime}\) as there are in \(e^{\prime}\), i.e., \(i\leq j\). Now, the last path \(\gets^{k+2}\) of \(e^{\prime}\) cannot be mapped backwards because \(k+2>i+1\), so we must map it forwards in \(f^{\prime}\): for this to be possible, we must have reached the end of the directed path \(\to^{j+1}\) in \(f^{\prime}\), so that we have \(j=i\). We are now done reading \(e^{\prime}\) and \(f^{\prime}\), so we have indeed mapped \(y\) to \(b\). This, along with \(i=j\), establishes that the claim is true, and concludes the proof.
We can thus prove Claim 4.4, starting from Claim 3.6 and translating it first via Lemma 3.8 and then via Lemma 4.5. Using the same argument as in Proposition 3.3, we conclude the proof of Proposition 4.3.
## 5 DNNF Lower Bounds
In this section, we investigate how to represent the provenance of the query-instance pairs that we consider. More specifically, we study whether there exist polynomially-sized representations in tractable circuit classes of Boolean provenance functions \(\mathsf{Prov}_{H}^{G}\), for \(G\in\mathcal{G}\) and \(H\in\mathcal{H}\) in the graph classes studied in this paper. Certainly, for every graph class \(\mathcal{G}\) and \(\mathcal{H}\), the (conditional) non-existence of an FPRAS for \(\mathsf{PHom}(\mathcal{G},\mathcal{H})\) implies that, conditionally, we cannot compute nOBDD representations of provenance in polynomial time combined complexity--as otherwise we could obtain an FPRAS via Theorem 2.3. In fact, beyond nOBDDs, it follows from [9, Theorem 6.3] that, conditionally, we cannot tractably compute provenance representations even in the more general class of _structured DNNFs_. Indeed, as for nOBDDs, fixed edges in the reductions can be handled by conditioning [27, Proposition 4].
However, even in settings where there is conditionally no combined FPRAS, it could be the case that there are polynomial-_sized_ tractable circuits that are difficult to compute, or that we can tractably compute circuits in a more general formalism such as _unstructured_ DNNF circuits. The goal of this section is to give a negative answer to these two questions, for all of the non-approximable query-instance class pairs studied in Sections 3 and 4.
Specifically, we show moderately exponential lower bounds on the size of DNNF circuits for infinite families of graphs taken from these classes. Remember that DNNF is arguably the most general knowledge compilation circuit class that still enjoys some tractable properties [16]. Hence, these lower bounds imply that no tractable provenance representation exists in other tractable subclasses of DNNFs, e.g., structured DNNFs [27], or Decision-DNNFs [10]. We also emphasize that, unlike the intractability results of Sections 3 and 4 which assumed \(\mathsf{RP}\neq\mathsf{NP}\), all of the DNNF lower bounds given here are unconditional.
We first show a _strongly_ exponential lower bound for labelled \(\mathsf{1WP}\) on \(\mathsf{Conn}\) instances:
**Proposition 5.1**.: _There is an infinite family \(G_{1},G_{2},\ldots\) of labelled \(\mathsf{1WP}\) queries and an infinite family \(H_{1},H_{2},\ldots\) of labelled \(\mathsf{Conn}\) instances such that, for any \(i>0\), any DNNF circuit representing the Boolean function \(\mathsf{Prov}_{H_{i}}^{G_{i}}\) has size \(2^{\Omega(|G_{i}|+|H_{i}|)}\)._
Proof.: By _treewidth_ of a monotone 2-CNF formula, we mean the treewidth of the graph on the variables whose edges correspond to clauses in the expected way; and by _degree_ we mean the maximal number of clauses in which any variable occurs. Let us consider an infinite family \(\phi_{1},\phi_{2},\ldots\) of monotone 2-CNF formulas of constant degree \(d=3\) whose treewidth is linear in their size: this exists by [17, Proposition 1, Theorem 5]. We accordingly know by [4, Corollary 8.5] that any DNNF computing \(\phi_{i}\) must have size \(2^{\Omega(|\phi_{i}|)}\) for all \(i>1\). Using
Claim 3.4, we obtain infinite families \(G_{1},G_{2},\ldots\) of \(1\mathsf{WP}\) and \(H_{1},H_{2},\ldots\) of \(\mathsf{Conn}\) graphs such that \(\mathsf{Prov}_{H_{i}}^{G_{i}}\) represents \(\phi_{i}\) on some choice of edges, and we have \(|G_{i}|+|H_{i}|=O(|\phi_{i}|)\) for all \(i>0\) (from the running time bound). Now, any representation of \(\mathsf{Prov}_{H_{i}}^{G_{i}}\) as a DNNF can be translated in linear time to a representation of \(\phi_{i}\) as a DNNF of the same size, simply by renaming the edges \((e_{1},\ldots,e_{n})\) to the right variables, and replacing all other variables by the constant \(1\). This means that the lower bound on the size of DNNFs computing \(\phi_{i}\) also applies to DNNFs representing \(\mathsf{Prov}_{H_{i}}^{G_{i}}\), _i.e._, they must have size at least \(2^{\Omega(|\phi_{i}|)}\), hence \(2^{\Omega(|G_{i}|+|H_{i}|)}\) as we claimed.
We now present lower bounds for the remaining non-approximable query-instance class pairs, which are not exponential but rather _moderately_ exponential. This is because our encoding of CNFs into these classes (specifically, Claim 3.6, and its images by Lemma 3.8 and Lemma 4.5) do not give a linear, but rather linearithmic bound. We leave to future work the question of proving strongly exponential lower bounds for these classes, like we did in Proposition 5.1.
**Proposition 5.2**.: _For any \(\epsilon>0\), there is an infinite family \(G_{1},G_{2},\ldots\) of labelled \(\mathsf{DWT}\) queries and an infinite family \(H_{1},H_{2},\ldots\) of labelled \(\mathsf{DWT}\) instances such that, for any \(i>0\), any DNNF circuit representing the Boolean function \(\mathsf{Prov}_{H_{i}}^{G_{i}}\) has size at least \(2^{\Omega\left((|G_{i}|+|H_{i}|)^{1-\epsilon}\right)}\)._
Proof.: The proof is identical to that of Proposition 5.1, except that we apply Claim 3.6: for all \(i>0\), \(|G_{i}|+|H_{i}|=O(|\phi_{i}|\log|\phi_{i}|)\). We perform a change of variables: if we write \(y=|\phi_{i}|\log|\phi_{i}|\), then we can show that \(|\phi_{i}|=e^{W(y)}\), where \(W\) denotes the Lambert \(W\) function [12]; equivalently \(|\phi_{i}|=y/W(y)\) as the \(W\) function satisfies \(W(z)e^{W(z)}=z\) for all \(z>0\). Thus, the lower bound of \(2^{\Omega(|\phi_{i}|)}\) on DNNF representations of \(\phi_{i}\) implies that any DNNF for \(\mathsf{Prov}_{H_{i}}^{G_{i}}\) has size at least \(2^{\Omega\left(\frac{|G_{i}|+|H_{i}|}{W(|G_{i}|+|H_{i}|)}\right)}\). In particular, as \(W\) grows more slowly than \(n^{\epsilon}\) for any \(\epsilon>0\), this gives a bound of \(2^{\Omega\left((|G_{i}|+|H_{i}|)^{1-\epsilon}\right)}\) for sufficiently large \(\phi_{i}\).
The proofs of the following two claims are analogous to that of Proposition 5.2, but using Lemma 3.8 (for the first result) and Claim 4.4 (for the second result):
**Proposition 5.3**.: _For any \(\epsilon>0\), there is an infinite family \(G_{1},G_{2},\ldots\) of labelled \(2\mathsf{WP}\) queries and an infinite family \(H_{1},H_{2},\ldots\) of labelled \(\mathsf{DWT}\) instances such that, for any \(i>0\), any DNNF circuit representing the Boolean function \(\mathsf{Prov}_{H_{i}}^{G_{i}}\) has size at least \(2^{\Omega\left((|G_{i}|+|H_{i}|)^{1-\epsilon}\right)}\)._
**Proposition 5.4**.: _For any \(\epsilon>0\), there is an infinite family \(G_{1},G_{2},\ldots\) of unlabelled \(2\mathsf{WP}\) queries and an infinite family \(H_{1},H_{2},\ldots\) of unlabelled \(\mathsf{PT}\) instances such that, for any \(i>0\), any DNNF circuit representing the Boolean function \(\mathsf{Prov}_{H_{i}}^{G_{i}}\) has size at least \(2^{\Omega\left((|G_{i}|+|H_{i}|)^{1-\epsilon}\right)}\)._
We finish by remarking that all of the lower bounds above apply to acyclic query classes (_i.e._, queries of treewidth \(1\)), for which non-probabilistic query evaluation is well-known to be linear in combined complexity [35]. Thus, these results give an interesting example of query classes for which query evaluation is in linear-time combined complexity, but computing even a DNNF representation of query provenance is (moderately) exponential.
## 6 Consequences
In this section, we consider some corollaries and extensions to the results above.
Optimality of a Previous Result.Recall from the introduction that, as was shown in [33], PQE for self-join-free conjunctive queries of bounded hypertree width admits a combined FPRAS (in the general setting of probabilistic databases, rather than probabilistic graphs): [Theorem 1 of [33]] _Let \(Q\) be a self-join-free conjunctive query of bounded hypertree width, and \(H\) a tuple-independent database instance. Then there exists a combined FPRAS for computing the probability of \(Q\) on \(H\), i.e., an FPRAS whose runtime is \(\mathsf{poly}(|Q|,|H|,\epsilon^{-1})\), where \(\epsilon\) is the multiplicative error._
Can a stronger result be achieved? Our Proposition 4.3 immediately implies the following:
Assuming \(\mathsf{RP}\neq\mathsf{NP}\), even on a fixed signature consisting of a single binary relation there is no FPRAS to approximate the probability of an input treewidth-1 CQ on an input treewidth-1 TID instance.
Hence, tractability no longer holds with self-joins. So, as unbounded hypertree width queries are intractable in combined complexity even for _deterministic_ query evaluation, we have:
The result of [33, Theorem 1] recalled above is optimal in the following sense: relaxing either the self-join-free or bounded-hypertree-width condition on the query implies the non-existence of a combined FPRAS, unless \(\mathsf{RP}=\mathsf{NP}\).
Network Reliability.The two-terminal network reliability problem asks the following: given a graph with probabilistic edges and with source and target vertices \(s\) and \(t\), compute the probability that \(s\) and \(t\) remain connected, assuming independence across edges. Valiant showed that this problem is \(\mathsf{\#P}\)-complete [32, Theorem 1], and Provan and Ball showed that this holds already on directed acyclic graphs [28, Table 1]. Hardness also holds for the related problem of _all-terminal reliability_[28, Table 1], which asks for the probability that the probabilistic graph remains connected as a whole. Given the inherent \(\mathsf{\#P}\)-hardness of these problems, subsequent research has focused on developing tractable approximations.
Although significant progress has been made on FPRASes for all-terminal (un)reliability [18, 22], designing an FPRAS for two-terminal reliability has remained open. This question was even open for the restricted case of directed acyclic graphs; indeed, it was explicitly posed as an open problem by Zenklusen and Laumanns [36]. We now point out that the nOBDD construction of Proposition 3.1 implies an FPRAS for two-terminal reliability on DAGs, again by leveraging the approximate counting result of Arenas et al. [8]:

**Theorem 6.3**.: _There exists an FPRAS for the two-terminal network reliability problem over directed acyclic graphs._
Proof.: Given as input an unlabelled probabilistic DAG instance \(H=(V,E)\) and two distinguished source and target vertices \(s,t\in V\), construct the labelled DAG instance \(H^{\prime}=(V,E,\lambda)\) as follows. All vertices and edges are identical to those of \(H\), but every edge of the form \((s,x)\) with \(x\neq t\) emanating from \(s\) is assigned label \(\lambda((s,x))=R_{s}\), every edge \((x,t)\) with \(x\neq s\) directed towards \(t\) is assigned label \(\lambda((x,t))=R_{t}\), and every other edge \((x,y)\) is assigned the label \(\lambda((x,y))=R\). In the case that \((s,t)\in E\), this edge is assigned its own dedicated label \(\lambda((s,t))=R_{st}\).
Now, by the construction in the proof of Proposition 3.1, we can build an nOBDD for each of the following \(|E|\) different labelled \(\mathsf{1WP}\) queries: \(\xrightarrow{R_{st}}\), \(\xrightarrow{R_{s}}\xrightarrow{R_{t}}\), \(\xrightarrow{R_{s}}\xrightarrow{R}\xrightarrow{R_{t}}\), \(\ldots\), \(\xrightarrow{R_{s}}\left(\xrightarrow{R}\right)^{|E|-2}\xrightarrow{R_{t}}\). All of these nOBDDs use the same ordering (given by a topological ordering of the edges of \(H\)), so we may take their disjunction to obtain an nOBDD \(D\) in linear time, whose satisfying valuations are exactly the valuations of the edges of \(H\) under which \(s\) and \(t\) are connected by a directed path. From here we conclude by applying Theorem 2.3.
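Concretely, the labelling and the family of \(\mathsf{1WP}\) queries used above can be produced as follows (a sketch with illustrative label names; the nOBDD and the approximate counter are then obtained exactly as in Proposition 3.1 and Theorem 2.3).

```python
def two_terminal_reduction(H_edges, s, t):
    """Labelling step of the reduction above (sketch).
    H_edges: iterable of directed edges (u, v) of a DAG; s, t: the terminals.
    Returns (labelled, queries): the labelled instance as a dict edge -> label,
    and the 1WP queries (as label sequences) whose union of matches is exactly
    the event 's and t are connected by a directed path'."""
    labelled = {}
    for (u, v) in H_edges:
        if (u, v) == (s, t):
            labelled[(u, v)] = "Rst"     # dedicated label for the direct edge
        elif u == s:
            labelled[(u, v)] = "Rs"
        elif v == t:
            labelled[(u, v)] = "Rt"
        else:
            labelled[(u, v)] = "R"
    E = len(labelled)
    queries = [["Rst"]] + [["Rs"] + ["R"] * k + ["Rt"] for k in range(E - 1)]
    return labelled, queries
```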
## 7 Conclusions and Future Work
We studied the existence and non-existence of _combined approximation algorithms_ for the PQE problem, as well as the existence of polynomially-sized tractable circuit representations of provenance, under the lens of combined complexity.
We see several potential directions for future work. First, it would be interesting to see if the results in Proposition 3.1 and Theorem 6.3 can be extended beyond DAG instances: graph classes of bounded _DAG-width_ [11] could be a possible candidate here. We also leave open the problem of filling in the two remaining gaps in Table 1. Namely, we would like to obtain either an FPRAS or hardness of approximation result for the equivalent problems \(\mathsf{PHom}_{\mathsf{U}}(\mathsf{1WP},\mathsf{Conn})\) and \(\mathsf{PHom}_{\mathsf{U}}(\mathsf{DWT},\mathsf{Conn})\). It is also natural to ask whether our results can be lifted from graph signatures to arbitrary relational signatures, or whether they apply in the _unweighted_ setting where all edges are required to have the same probability [6, 1, 23]. Another question is whether we can classify the combined complexity of approximate PQE for _disconnected_ queries, as was done in [7] in the case of exact computation, for queries that feature disjunction such as UCQs (already in the exact case [7]), or for more general query classes, _e.g._, with recursion [5].
|
2309.12199 | Holonomic $\mathcal{D}$-modules of arithmetic type and middle
convolution | The aim of the present paper is to study arithmetic properties of
$\mathcal{D}$-modules on an algebraic variety over the field of algebraic
numbers. We first provide a framework for extending a class of $G$-connections
(resp., globally nilpotent connections; resp., almost everywhere nilpotent
connections) to holonomic $\mathcal{D}$-modules. It is shown that the derived
category of $\mathcal{D}$-modules in each of such extended classes carries a
Grothendieck six-functor formalism. This fact leads us to obtain the stability
of the middle convolution for $G$-connections with respect to the global
inverse radii. As a consequence of our study of middle convolution, we prove
equivalences between various arithmetic properties on rigid Fuchsian systems.
This result gives, for such systems of differential equations, an affirmative
answer to a conjecture described in a paper written by Y. Andr\'{e} and F.
Baldassarri. | Yasuhiro Wakabayashi | 2023-09-21T16:06:03Z | http://arxiv.org/abs/2309.12199v1 | # Holonomic \(\mathcal{D}\)-modules of arithmetic type
###### Abstract.
The aim of the present paper is to study arithmetic properties of \(\mathcal{D}\)-modules on an algebraic variety over the field of algebraic numbers. We first provide a framework for extending a class of \(G\)-connections (resp., globally nilpotent connections; resp., almost everywhere nilpotent connections) to holonomic \(\mathcal{D}\)-modules. It is shown that the derived category of \(\mathcal{D}\)-modules in each of such extended classes carries a Grothendieck six-functor formalism. This fact leads us to obtain the stability of the middle convolution for \(G\)-connections with respect to the global inverse radii. As a consequence of our study of middle convolution, we prove equivalences between various arithmetic properties on rigid Fuchsian systems. This result gives, for such systems of differential equations, an affirmative answer to a conjecture described in a paper written by Y. Andre and F. Baldassarri.
2020 _Mathematical Subject Classification_: Primary 14F10, Secondary 11G35; Key words: \(\mathcal{D}\)-module, \(G\)-connection, globally nilpotent connection, rigid connection, \(p\)-curvature, radius of convergence
4.4 Global inverse radius of a holonomic \(\mathcal{D}_{\mathbb{A}^{1}}\)-modules
* 5 Middle convolution on holonomic \(\mathcal{D}\)-modules of arithmetic types
* 5.1 Middle convolution
* 5.2 Estimate of global inverse radii I
* 5.3 Estimate of global inverse radii II
* 6 Equivalence among various arithmetic properties on rigid flat bundles
* 6.1 Katz's middle convolution algorithm
* 6.2 Equivalence for rigid Fuchsian systems
## 1. Introduction
### What is a \(G\)-connection?
In the present paper, we investigate \(G\)-connections, i.e., certain connections on vector bundles satisfying a condition of moderate growth, that are fundamental in understanding the diophantine properties of Siegel's \(G\)-functions.
Let \(K\) be a number field, and denote by \(K(t)\) the rational function field in one variable \(t\) over \(K\). Each \(n\times n\) matrix \(A\in M_{n}(K(t))\) (where \(n\in\mathbb{Z}_{>0}\)) with entries in \(K(t)\) associates a system of linear differential equations
\[f:\frac{d}{dt}\vec{y}=A\vec{y},\qquad\vec{y}=\begin{pmatrix}y_{1}\\ y_{2}\\ \vdots\\ y_{n}\end{pmatrix}. \tag{1}\]
Solutions to this system may be identified with horizontal elements (i.e., elements in the kernel) of the connection given by
\[\nabla:=\frac{d}{dt}-A:K(t)^{n}\to K(t)^{n}. \tag{2}\]
For each \(s\in\mathbb{Z}_{\geq 0}\), we define \(A_{[s]}\) as the matrix such that, if \(\vec{y}\) is a solution of the system \(f\), then the equality \(\left(\frac{d}{dt}\right)^{s}\vec{y}=A_{[s]}\vec{y}\) holds. Hence, \(A_{[0]}\) coincides with the identity matrix \(I_{n}\) and \(A_{[s]}\) (\(s=1,2,\cdots\)) satisfy the recurrence relation:
\[A_{[s+1]}=\frac{d}{dt}A_{[s]}+A\cdot A_{[s]}. \tag{3}\]
Given a finite place \(v\) of \(K\), we denote by \(|-|_{v}\) the normalized non-archimedean absolute value corresponding to \(v\). The Gauss norm \(|-|_{\operatorname{Gauss},v}\) on \(K(t)\) determined by \(|-|_{v}\) (cf. (66)) extends to the norm on \(M_{n}(K(t))\), which we also denote by \(|\!|-|\!|_{\operatorname{Gauss},v}\). Then, the **global inverse radius** of \(\nabla\) is defined as
\[\rho(\nabla):=\sum_{v}\log\left(\max\left\{1,\limsup_{s\to\infty}\left\|\frac {A_{[s]}}{s!}\right\|_{\operatorname{Gauss},v}^{\frac{1}{s}}\right\}\right) \in\mathbb{R}_{\geq 0}\sqcup\{\infty\} \tag{4}\]
(cf. (70) for a general definition), where the sum in the right-hand side runs over the set of finite places \(v\) of \(K\). We say that \(\nabla\) is a \(G\)**-connection** (or, a \(G\)**-operator**) if \(\rho(\nabla)<\infty\).
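As a standard illustration (a routine example, not taken from the surrounding text), consider the rank one case \(n=1\) with \(A=c/t\) for some \(c\in\mathbb{Q}\), i.e., the equation satisfied by \(t^{c}\). The recurrence (3) gives

\[A_{[s]}=\frac{c(c-1)\cdots(c-s+1)}{t^{s}},\qquad\text{hence}\qquad\frac{A_{[s]}}{s!}=\binom{c}{s}\cdot\frac{1}{t^{s}}.\]

Thus \(\left\|A_{[s]}/s!\right\|_{\operatorname{Gauss},v}=\left|\binom{c}{s}\right|_{v}\) for every finite place \(v\); this is \(\leq 1\) whenever \(c\) is a \(v\)-adic integer, and at the finitely many remaining places it grows at most geometrically in \(s\). Consequently \(\rho(\nabla)<\infty\), i.e., \(\nabla=\frac{d}{dt}-\frac{c}{t}\) is a \(G\)-connection.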
Since the value \(\rho(\nabla)\) is invariant under any base-change over \(K\), the notion of a \(G\)-connection can be formulated for connections on \(\overline{\mathbb{Q}}(t)\)-vector spaces (where \(\overline{\mathbb{Q}}\) denotes the field of algebraic
numbers) in a well-defined manner; moreover, we can generalize that notion to connections on a smooth algebraic variety over \(\overline{\mathbb{Q}}\). The class of \(G\)-connections has various basic examples. In fact, it contains differential operators of minimal order annihilating \(G\)-functions, e.g., algebraic functions over \(\mathbb{Q}(t)\) regular at the origin, the polylogarithm functions, and some hypergeometric series with rational parameters. Also, \(G\)-connections have specific properties, and they are believed to come from geometry (Bombieri-Dwork's conjecture). The study of \(G\)-connections with geometric treatments was developed from the 1970s onwards through the works of Galochkin, Chudnovsky, Andre, Dwork, Baldassarri and other mathematicians (cf. e.g., [And], [AnBa], [DGS]).
On the other hand, there are several other classes of connections characterized by important arithmetic conditions, e.g., _globally nilpotent connections_ and _almost everywhere (a.e.) nilpotent connections_ (cf. Definition 4.4).
### First result: Generalization to holonomic \(\mathcal{D}\)-modules
The primary purpose of the present paper is to generalize \(G\)-connections, as well as globally nilpotent or a.e. nilpotent connections, to holonomic \(\mathcal{D}\)-modules in order to discuss various functors (including the middle convolution functor) between the derived categories of \(\mathcal{D}\)-modules from an arithmetic point of view.
Given a smooth algebraic variety \(X\) over \(\overline{\mathbb{Q}}\), we denote by \(D^{b}_{h}(\mathcal{D}_{X})\) the derived category of bounded chain complexes of \(\mathcal{D}_{X}\)-modules having holonomic cohomology. If \(f:X\to Y\) is a morphism of smooth varieties over \(\overline{\mathbb{Q}}\), then we can push-forward \(\mathcal{D}_{X}\)-modules, as well as pull-back \(\mathcal{D}_{Y}\)-modules, along that morphism. There are also some other natural functors between categories of \(\mathcal{D}\)-modules which together make up a version of the so-called "six-functor formalism" of Grothendieck.
In the present paper, we introduce the full subcategory
\[D^{b}_{h}(\mathcal{D}_{X})_{G}\ \left(\text{resp., }\ D^{b}_{h}(\mathcal{D}_{X})_{ \text{nilp}};\text{resp., }D^{b}_{h}(\mathcal{D}_{X})_{\text{aen}}\right) \tag{5}\]
(cf. (79)) of \(D^{b}_{h}(\mathcal{D}_{X})\) consisting of bounded chain complexes such that \(X\) can be stratified by locally closed subschemes on each of which the cohomology sheaves are \(G\)-connections (resp., globally nilpotent connections; resp., a.e. nilpotent connections).
Here, recall a result by Andre and Baldassarri (cf. [AnBa, Main Theorem]), asserting that the cohomology sheaves of the push-forward of a \(G\)-connection define _generically_\(G\)-connections. Our formulation in terms of \(\mathcal{D}\)-modules has the advantage that the result by Andre-Baldassarri can be simply formulated as the stability of the subcategories \(D^{b}_{h}(\mathcal{D}_{X})_{G}\) with respect to the push-forward functor. Together with other kinds of functors, we obtain the following assertion, which is the main result of the first part.
**Theorem A** (cf. Theorems 3.10 and 4.6).: _Let \(f:X\to Y\) be a morphism of smooth algebraic varieties over \(\overline{\mathbb{Q}}\). Then, there is a six-functor formalism_
\[\int_{f}:D^{b}_{h}(X)_{G}\to D^{b}_{h}(Y)_{G},\] \[f^{\dagger}:D^{b}_{h}(Y)_{G}\to D^{b}_{h}(X)_{G},\] \[\int_{f!}:D^{b}_{h}(X)_{G}\to D^{b}_{h}(Y)_{G}, \tag{6}\]
\[f^{!}:D^{b}_{h}(Y)_{G}\to D^{b}_{h}(X)_{G},\] \[\mathbb{D}:D^{b}_{h}(X)^{\mathrm{op}}_{G}\to D^{b}_{h}(X)_{G},\] \[\otimes^{L}_{\mathcal{O}_{X}}:D^{b}_{h}(X)_{G}\times D^{b}_{h}(X)_{ G}\to D^{b}_{h}(X)_{G}\]
_satisfying all the usual adjointness properties that one has in the theory of the derived category of \(\mathcal{D}\)-modules. Also, the same assertion holds for \(D^{b}_{h}(\mathcal{D}_{X})_{\mathrm{nilp}}\) and \(D^{b}_{h}(\mathcal{D}_{X})_{\mathrm{aen}}\)._
### Second result: Middle convolution on \(G\)-connections
The second part of the present paper discusses the middle convolution functors on the derived categories under consideration. The middle convolution, which is introduced by Katz (cf. [10]), is an operation for local systems on (an open subscheme of) the affine line and plays a fundamental role in the theory of rigid local systems. By the Riemann-Hilbert correspondence, it can be formulated as an operation on flat bundles, which moreover carries a chain complex of \(\mathcal{D}\)-modules to another one. The middle convolution depends on a parameter \(\lambda\), and we denote by
\[\mathrm{mc}_{\lambda}(\mathscr{F}^{\bullet}) \tag{7}\]
(cf. (91)) the result of that operation applied to a chain complex \(\mathscr{F}^{\bullet}\in D^{b}_{h}(\mathcal{D}_{\mathbb{A}^{1}})\), where \(\mathbb{A}^{1}\) denotes the affine line over \(\overline{\mathbb{Q}}\).
As a corollary of Theorem A, it is shown that the assignment \(\mathscr{F}^{\bullet}\mapsto\mathrm{mc}_{\lambda}(\mathscr{F}^{\bullet})\) preserves the arithmetic properties mentioned above. In [11], Dettweiler and Reiter provided an explicit description of that operation on a Fuchsian system and then studied how its \(p\)-curvature (for a prime \(p\)) changes under the convolution process. The results of that work enables us to measure the complexity of the middle convolution for globally nilpotent or a.e. nilpotent connections from an arithmetic point of view (cf. [11]).
On the other hand, to investigate \(G\)-connections, we estimate the global inverse radii \(\rho(-)\) of connections (cf. (70), (82)) by applying some basic generalities on \(\mathcal{D}\)-modules together with the work of Andre and Baldassarri (cf. [12, Theorem 3.1.2]). Our results are summarized as follows.
**Theorem B** (cf. Theorems 5.2 and 5.4).: _Let \(\lambda\) be an element of \(\mathbb{Q}\setminus\mathbb{Z}\)._
1. _Let_ \(\Box\in\{G,\mathrm{nilp},\mathrm{aen}\}\)_. Then, the endofunctor_ \(\mathrm{mc}_{\lambda}\) _on_ \(D^{b}_{h}(\mathcal{D}_{\mathbb{A}^{1}})\) _given by assigning_ \(\mathscr{F}^{\bullet}\mapsto\mathrm{mc}_{\lambda}(\mathscr{F}^{\bullet})\) _restricts to an endofunctor_ (8) \[\mathrm{mc}_{\lambda}:D^{b}_{h}(\mathcal{D}_{\mathbb{A}^{1}})_{\Box}\to D^{b}_ {h}(\mathcal{D}_{\mathbb{A}^{1}})_{\Box}\] _on_ \(D^{b}_{h}(\mathcal{D}_{\mathbb{A}^{1}})_{\Box}\)_._
2. _Let_ \(\mathscr{F}\) _be a flat bundle on a nonempty open subscheme_ \(U\) _of_ \(\mathbb{A}^{1}\) _of rank_ \(n\in\mathbb{Z}_{>0}\)_. Denote by_ \(\iota\) _the natural open immersion_ \(U\hookrightarrow\mathbb{A}^{1}\)_, and suppose that_ \(\mathscr{F}\) _is of type_ \(G\)_. Then, the following inequalities of global inverse radii hold:_ (9) \[\left|\rho(\mathrm{mc}_{\lambda}(\int_{\iota}\!\mathscr{F}))-\rho(\mathrm{mc}_{\lambda}(\iota_{!*}\mathscr{F}))\right|\leq H(\lambda),\ \ \ \ \rho(\mathrm{mc}_{\lambda}(\int_{\iota}\!\mathscr{F}))\leq(n^{2}+1)\left(\rho(\mathscr{F})+H(\lambda)\right),\] _where_ \(H(\lambda)\) _denotes the positive real number defined in (_96_) and_ \(\iota_{!*}\mathscr{F}\) _denotes the minimal extension of_ \(\mathscr{F}\) _(cf. (_61_))._
### Third result: Equivalence for rigid \(G\)-connections
At the end of the present paper, we discuss an application of our study of middle convolution. Recall the following conjecture, which was described in a paper by Andre and Baldassarri; it asserts an equivalence among the classes of \(G\)-connections, globally nilpotent connection, and a.e. nilpotent connections.
**Conjecture 1.1** (cf. [AnBa], SS 1.4).: _Let \(X\) be a smooth algebraic variety over \(\overline{\mathbb{Q}}\) and \(\mathscr{F}\) a flat bundle on \(X\). Then, the following three conditions are equivalent to each other:_
1. \(\mathscr{F}\) _is a_ \(G\)_-connection;_
2. \(\mathscr{F}\) _is globally nilpotent;_
3. \(\mathscr{F}\) _is almost everywhere nilpotent._
To consider this problem, we focus on _rigid_\(G\)-connections defined on an open subscheme \(U\) of the projective line \(\mathbb{P}^{1}\) over \(\overline{\mathbb{Q}}\). (A connection is rigid if it is determined up to isomorphism by the conjugacy classes of its local monodromies; see SS 6.1.) Katz showed that any rigid irreducible local system on \(U\) can be obtained from a rank one connection by applying iteratively a suitable sequence of middle convolutions and scalar multiplications (cf. [Kat4], [Ari2]). By applying this fact and the stabilities of the arithmetic classes of \(\mathcal{D}\)-modules proved in Theorem B, we give an affirmative answer to the above conjecture for rigid flat bundles, as described below. In particular, this provides a characterization of \(G\)-connections in terms of \(p\)-curvature, i.e., without using any limits nor infinite sums.
**Theorem C** (cf. Theorem 6.2).: _Conjecture 1.1 is true when \(X\) is a nonempty open subscheme \(U\) of the projective line \(\mathbb{P}^{1}\) over \(\overline{\mathbb{Q}}\) and \(\mathscr{F}\) is a rigid flat bundle on \(U\)._
## 2. Holonomic \(\mathcal{D}\)-modules of type \(\mathcal{A}\)
In this section, we discuss a formal way of generalizing flat bundles satisfying suitable conditions to chain complexes of holonomic \(\mathcal{D}\)-modules. Given a subcategory of the stack of flat bundles (i.e., an "\(\mathcal{A}\)" introduced in § 2.3), we define the derived category of chain complexes of \(\mathcal{D}\)-modules such that the underlying variety can be stratified by locally closed subschemes on each of which the cohomology sheaves belong to that subcategory. We also prove several basic properties of chain complexes in that derived category from the homological point of view.
By an _algebraic variety_ over a field \(k_{0}\), we mean a geometrically connected scheme of finite type over \(k_{0}\). We shall fix an algebraically closed field \(k\) of characteristic \(0\).
### Smooth stratifications
Let \(X\) be a nonempty connected reduced scheme of finite type over \(k\).
**Definition 2.1**.:
1. A **stratification** on \(X\) is a collection \(\mathfrak{X}:=\{X_{j}\}_{j=0}^{m+1}\) (where \(m\in\mathbb{Z}_{\geq 0}\)) forming a decreasing sequence of reduced closed subschemes (10) \[X=X_{0}\supsetneq X_{1}\supsetneq X_{2}\supsetneq\dots\supsetneq X_{m} \supsetneq X_{m+1}=\emptyset\] of \(X\). If we are given a stratification on \(X\) indicated, say, by \(\mathfrak{X}:=\{X_{j}\}_{j=0}^{m+1}\), then it is occasionally considered as a collection \(\{X_{j}\}_{j\in\mathbb{Z}}\) indexed by the elements of \(\mathbb{Z}\) by putting \(X_{j}:=X\) (resp., \(X_{j}:=\emptyset\)) for \(j<0\) (resp., \(j>m+1\)). Also, for each \(j\in\mathbb{Z}\), we shall denote by (11) \[\iota_{\mathfrak{X},j}:X_{j}\setminus X_{j+1}\hookrightarrow X\]
the natural immersion.
2. A stratification \(\mathfrak{X}:=\{X_{j}\}_{j=0}^{m+1}\) is called **smooth** if, for each \(j=0,\cdots,m\), the (nonempty) subscheme \(X_{j}\setminus X_{j+1}\) of \(X\) is smooth over \(k\).
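As a simple illustration (the coordinates \(x,y\) below are introduced only for this example), suppose that \(X=\mathbb{A}^{2}=\operatorname{Spec}(k[x,y])\); then the decreasing sequence of reduced closed subschemes
\[\mathbb{A}^{2}\supsetneq V(xy)\supsetneq\{(0,0)\}\supsetneq\emptyset\]
should give a smooth stratification on \(X\) in the above sense: the successive differences are the complement of the two coordinate axes, the union of the axes with the origin removed, and the origin itself, each of which is smooth over \(k\).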
**Definition 2.2**.: Let \(\mathfrak{X}:=\{X_{j}\}_{j=0}^{m+1}\) and \(\mathfrak{X}^{\prime}:=\{X^{\prime}_{j}\}_{j=0}^{m^{\prime}+1}\) be stratifications on \(X\).
1. We shall say that \(\mathfrak{X}\)**is subordinate to \(\mathfrak{X}^{\prime}\)** if, for each \(j^{\prime}\in\{0,\cdots,m^{\prime}\}\), there exists \(j\in\{0,\cdots,m\}\) such that the immersion \(\iota_{\mathfrak{X}^{\prime},j^{\prime}}:X^{\prime}_{j^{\prime}}\setminus X^{ \prime}_{j^{\prime}+1}\hookrightarrow X\) factors through \(\iota_{\mathfrak{X},j}:X_{j}\setminus X_{j+1}\hookrightarrow X\).
2. We shall say that \(\mathfrak{X}\)**is strictly subordinate to \(\mathfrak{X}^{\prime}\)** if, for each \(j\in\{0,\cdots,m\}\), there exists \(j^{\prime}\in\{0,\cdots,m^{\prime}\}\) satisfying \(X_{j}=X^{\prime}_{j^{\prime}}\). (One may immediately verify that if \(\mathfrak{X}\) is strictly subordinate to \(\mathfrak{X}^{\prime}\), then it is also subordinate to \(\mathfrak{X}^{\prime}\).)
We shall prove the following two basic properties on (smooth) stratifications.
**Proposition 2.3**.: _Let \(\mathfrak{X}:=\{X_{j}\}_{j=0}^{m+1}\) be a stratification on \(X\). Then, there exists a smooth stratification on \(X\) to which \(\mathfrak{X}\) is strictly subordinate. In particular, there always exists a smooth stratification on \(X\)._
Proof.: The latter assertion immediately follows from the former assertion by considering the case where the stratification \(\mathfrak{X}\) is taken as \(X=X_{0}\supsetneq X_{1}=\emptyset\). Hence, it suffices to prove the former assertion.
Since \(k\) is of characteristic zero, each irreducible component of a reduced subscheme of \(X\) has a dense open subscheme that is smooth over \(k\). By applying this fact successively, we see that, for each \(j\in\{0,\cdots,m\}\), there exists a smooth stratification
\[X_{j}\setminus X_{j+1}=Y_{j,0}\supsetneq\cdots\supsetneq Y_{j,M_{j}}=\emptyset \tag{12}\]
(where \(M_{j}\in\mathbb{Z}_{>0}\)) on \(X_{j}\setminus X_{j+1}\,(\neq\emptyset)\). We shall set \(Y_{j,l}^{+}:=Y_{j,l}\cup X_{j+1}\ (l=0,\cdots,M_{j})\). Then, the decreasing sequence
\[X=X_{0}\left(=Y_{0,0}^{+}\right)\supsetneq Y_{0,1}^{+}\supsetneq\cdots \supsetneq Y_{0,M_{0}}^{+}\left(=Y_{1,0}^{+}\right)\supsetneq Y_{1,1}^{+} \supsetneq\cdots\supsetneq Y_{1,M_{1}}^{+}\supsetneq\cdots\supsetneq Y_{m,M_{ m}}^{+}=\emptyset \tag{13}\]
defines a smooth stratification on \(X\). Moreover, since \(X_{j}=Y_{j,0}^{+}\), \(\mathfrak{X}\) is strictly subordinate to this smooth stratification. This completes the proof of Proposition 2.3.
**Proposition 2.4**.: _Let \(\mathfrak{X}^{1},\mathfrak{X}^{2},\cdots,\mathfrak{X}^{n}\) (where \(n\in\mathbb{Z}_{>0}\) and \(\mathfrak{X}^{l}:=\{X^{l}_{j}\}_{j=0}^{m_{l}+1}\)) be stratifications on \(X\). Then, there exists a smooth stratification \(\mathfrak{X}:=\{X_{j}\}_{j=0}^{m+1}\) on \(X\) such that every \(\mathfrak{X}^{l}\) (\(l=1,\cdots,n\)) is subordinate to \(\mathfrak{X}\). Moreover, we can choose \(\mathfrak{X}\) as such that \(X^{1}_{m_{1}}=X_{j}\) for some \(j\in\{1,\cdots,m+1\}\)._
Proof.: First, we shall consider the former assertion. By induction on \(n\), it suffices to consider the case of \(n=2\). We shall set \(l:=\min\{m_{1},m_{2}\}\) and \(Y:=X^{1}_{m_{1}-l}\cap X^{2}_{m_{2}-l}\). Let us consider the following decreasing sequence consisting of reduced subschemes of \(Y\):
\[Y=X^{1}_{m_{1}-l}\cap X^{2}_{m_{2}-l} \supseteq X^{1}_{m_{1}-l}\cap X^{2}_{m_{2}-l+1}\supseteq X^{1}_{m_{1}-l+1}\cap X^{2}_{m_{2}-l+1}\] \[\supseteq X^{1}_{m_{1}-l+1}\cap X^{2}_{m_{2}-l+2}\supseteq X^{1}_{m_{1}-l+2}\cap X^{2}_{m_{2}-l+2}\] \[\supseteq\cdots\] \[\supseteq X^{1}_{m_{1}}\cap X^{2}_{m_{2}+1}\supseteq X^{1}_{m_{1}+1}\cap X^{2}_{m_{2}+1}=\emptyset. \tag{14}\]
By removing the duplicate constituents from this sequence, we obtain a stratification \(\mathfrak{Y}\) on \(Y\). Moreover, if \(m_{1}>m_{2}\) (resp., \(m_{2}>m_{1}\)), then we append this stratification to the sequence \(X^{1}_{0}\supsetneq X^{1}_{1}\supsetneq\cdots\supsetneq X^{1}_{m_{1}-l-1}\) (resp., \(X^{2}_{0}\supsetneq X^{2}_{1}\supsetneq\cdots\supsetneq X^{2}_{m_{2}-l-1}\)), and if \(m_{1}=m_{2}\), we simply take \(\mathfrak{Y}\) itself; we denote the resulting stratification on \(X\) by \(\mathfrak{X}^{\prime}\). According to Proposition 2.3, there exists a smooth stratification \(\mathfrak{X}\) on \(X\) to which \(\mathfrak{X}^{\prime}\) is (strictly) subordinate. By construction, both \(\mathfrak{X}^{1}\) and \(\mathfrak{X}^{2}\) are subordinate to \(\mathfrak{X}\). This completes the proof of the former assertion.
The latter assertion follows from the construction of \(\mathfrak{X}\) discussed above.
### Derived category of holonomic \(\mathcal{D}\)-modules
We now move on to the discussion of \(\mathcal{D}\)-modules. Much of the notation used in the present paper follows [HTT], and we will refer to that reference at many places in this text until the end of § 3. Although the discussions in _loc. cit._ only deal with algebraic varieties over the field of complex numbers, the various results shown there based on a purely algebraic treatment of \(\mathcal{D}\)-modules remain true even when the base field is replaced with an arbitrary algebraically closed field of characteristic \(0\), e.g., the field of algebraic numbers \(\overline{\mathbb{Q}}\).
Let \(R\) be a commutative ring and \(X\) a smooth scheme over \(\operatorname{Spec}(R)\) of constant relative dimension \(d\in\mathbb{Z}_{>0}\). Denote by \(\Omega_{X/R}\) the sheaf of \(1\)-forms on \(X\) over \(R\) and by \(\mathcal{T}_{X/R}\) its dual, i.e., the sheaf of vector fields on \(X\) over \(R\). Also, the canonical bundle \(\omega_{X}\) of \(X/R\) is the line bundle defined as \(\bigwedge^{d}\Omega_{X/R}\).
Recall that, for an \(\mathcal{O}_{X}\)-module \(\mathcal{F}\), an \(R\)**-connection** on \(\mathcal{F}\) is defined to be an \(R\)-linear morphism \(\nabla:\mathcal{F}\to\Omega_{X/R}\otimes\mathcal{F}\) satisfying \(\nabla(av)=da\otimes v+a\cdot\nabla(v)\) for any local sections \(a\in\mathcal{O}_{X}\) and \(v\in\mathcal{F}\). An \(R\)-connection is called **flat** (or, **integrable**) if it has vanishing curvature (cf., e.g., [Kat1, § 1] for the definition of curvature). By a **(rank \(n\)) flat bundle** on \(X/R\), we mean a pair \(\mathscr{F}:=(\mathcal{F},\nabla)\) consisting of a (rank \(n\)) vector bundle \(\mathcal{F}\) on \(X\), i.e., a locally free \(\mathcal{O}_{X}\)-module (of rank \(n\)), and a flat \(R\)-connection \(\nabla\) on \(\mathcal{F}\).
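For a concrete, entirely elementary example (the coordinate \(x\) and the element \(\lambda\) below are introduced only for this illustration), one may take \(X=\operatorname{Spec}(R[x,x^{-1}])\), viewed as a smooth scheme of constant relative dimension \(1\) over \(\operatorname{Spec}(R)\), and fix \(\lambda\in R\). Then the pair consisting of \(\mathcal{F}:=\mathcal{O}_{X}\) and the \(R\)-linear morphism
\[\nabla(f):=df+\lambda f\,\frac{dx}{x}\]
should define a rank one flat bundle on \(X/R\): the Leibniz rule is immediate, and the curvature vanishes automatically here because \(\bigwedge^{2}\Omega_{X/R}=0\) in relative dimension one.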
Let us fix a smooth algebraic variety \(X\) over \(k\). Denote by \(\mathcal{D}_{X}\) the sheaf of differential operators on \(X/k\). Each \(\mathcal{D}_{X}\)-module is, by definition, given as an \(\mathcal{O}_{X}\)-module together with a left \(\mathcal{D}_{X}\)-action extending its \(\mathcal{O}_{X}\)-module structure. The class of flat bundles on \(X/k\) coincides with the class of \(\mathcal{D}_{X}\)-modules whose underlying sheaves are vector bundles (cf. [HTT, Theorem 1.4.10]).
Denote by \(D^{b}_{h}(\mathcal{D}_{X})\) the derived category of bounded chain complexes \(\mathscr{F}^{\bullet}\) of \(\mathcal{D}_{X}\)-modules having holonomic cohomology. For each chain complex of \(\mathcal{D}_{X}\)-modules \(\mathscr{F}^{\bullet}\) and each \(i\in\mathbb{Z}\), the \(i\)-th cohomology sheaf of \(\mathscr{F}^{\bullet}\) will be denoted by \(\mathcal{H}^{i}(\mathscr{F}^{\bullet})\).
Given another smooth algebraic variety \(Y\) over \(k\) and a morphism \(f:X\to Y\) over \(k\), we obtain the inverse image functor
\[Lf^{*}:D^{b}_{h}(\mathcal{D}_{Y})\to D^{b}_{h}(\mathcal{D}_{X}) \tag{15}\]
given by assigning \(\mathscr{F}^{\bullet}\mapsto\mathcal{D}_{X\to Y}\otimes^{L}_{f^{-1}\mathcal{D}_{Y}}f^{-1}\mathscr{F}^{\bullet}\), where \(\mathcal{D}_{X\to Y}\) denotes the \((\mathcal{D}_{X},f^{-1}\mathcal{D}_{Y})\)-bimodule \(\mathcal{O}_{X}\otimes_{f^{-1}\mathcal{O}_{Y}}f^{-1}\mathcal{D}_{Y}\). It induces the shifted inverse image functor
\[f^{\dagger}:=Lf^{*}[\dim X-\dim Y]:D^{b}_{h}(\mathcal{D}_{Y})\to D^{b}_{h}( \mathcal{D}_{X}) \tag{16}\]
(cf. [HTT, Theorem 3.2.3, (ii)]).
**Remark 2.5**.: We here recall two properties on the shifted inverse image functor that will be used in the present paper.
* (a) Let \(f:X\to Y\) be a morphism of smooth algebraic varieties over \(k\). Also, let \(\mathscr{F}:=(\mathcal{F},\nabla)\) be a flat bundle on \(Y\); we regard it as an object of \(D^{b}_{h}(\mathcal{D}_{Y})\) concentrated at
degree \(0\). Since the forgetful functor \(D^{b}_{h}(\mathcal{D}_{(-)})\to D^{b}(\mathcal{O}_{(-)})\) (where \(D^{b}(\mathcal{O}_{(-)})\) denotes the derived category of bounded chain complexes of \(\mathcal{O}_{(-)}\)-modules) is exact, we obtain the following identification of \(\mathcal{O}_{X}\)-modules:
\[\mathcal{H}^{l}(f^{\dagger}\mathscr{F}^{\bullet})=\begin{cases}f^{*}\mathcal{F}\,(=\mathcal{D}_{X\to Y}\otimes_{f^{-1}\mathcal{D}_{Y}}f^{-1}\mathscr{F})&\text{if $l=-\mathrm{dim}X+\mathrm{dim}Y$;}\\ 0&\text{if $l\neq-\mathrm{dim}X+\mathrm{dim}Y$.}\end{cases} \tag{17}\]
2. For each \(\mathscr{F}^{\bullet}\in D^{b}_{h}(\mathcal{D}_{X})\), there exists a smooth stratification \(\mathfrak{X}:=\{X_{j}\}_{j}\) on \(X\) such that the cohomology sheaves \(\mathcal{H}^{l}(\iota^{\dagger}_{\mathfrak{X},j}\mathscr{F}^{\bullet})\) are (possibly zero) flat bundles for all \(l\) and \(j\) (cf. [HTT, Theorem 3.3.1]).
By taking account of Remark 2.5, (b), we make the following definition.
**Definition 2.6**.: Let \(\mathscr{F}^{\bullet}\) be a chain complex in \(D^{b}_{h}(\mathcal{D}_{X})\) and \(\mathfrak{X}:=\{X_{j}\}_{j}\) a smooth stratification on \(X\). We shall say that \(\mathfrak{X}\) is a **stratification for \(\mathscr{F}^{\bullet}\)** if \(\mathcal{H}^{l}(\iota^{\dagger}_{\mathfrak{X},j}\mathscr{F}^{\bullet})\) are flat bundles for all \(l\) and \(j\).
Next, recall from [HTT, Theorem 3.2.3, (i)] that, for each \(f:X\to Y\) as above, we have the direct image functor
\[\int_{f}:D^{b}_{h}(\mathcal{D}_{X})\to D^{b}_{h}(\mathcal{D}_{Y}) \tag{18}\]
given by assigning \(\mathscr{F}^{\bullet}\mapsto\int_{f}\mathscr{F}^{\bullet}:=Rf_{*}(\mathcal{D}_{Y\gets X}\otimes_{\mathcal{D}_{X}}^{L}\mathscr{F}^{\bullet})\), where \(\mathcal{D}_{Y\gets X}\) denotes the \((f^{-1}\mathcal{D}_{Y},\mathcal{D}_{X})\)-bimodule \(\omega_{X}\otimes_{\mathcal{O}_{X}}\mathcal{D}_{X\to Y}\otimes_{f^{-1}\mathcal{O}_{Y}}f^{-1}\omega_{Y}^{\vee}\).
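As a basic example (which also underlies Kashiwara's equivalence invoked in Corollary 3.3 below; the coordinate \(x\) is our own notation), let \(\iota:\operatorname{Spec}(k)\hookrightarrow\mathbb{A}^{1}=\operatorname{Spec}(k[x])\) denote the inclusion of the origin. Then \(\int_{\iota}k\) should be concentrated in degree \(0\) and isomorphic, as a \(\mathcal{D}_{\mathbb{A}^{1}}\)-module, to the "delta module"
\[\mathcal{D}_{\mathbb{A}^{1}}/\mathcal{D}_{\mathbb{A}^{1}}\cdot x\;\cong\;\bigoplus_{i\geq 0}k\cdot\partial_{x}^{i}\,\delta,\qquad x\cdot\delta=0,\]
which is a holonomic \(\mathcal{D}_{\mathbb{A}^{1}}\)-module supported at the origin but is not a flat bundle.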
### Holonomic \(\mathcal{D}\)-modules of type \(\mathcal{A}\)
Denote by \(\mathcal{S}m_{k}\) the category whose objects are smooth algebraic varieties over \(k\) and whose morphisms are smooth \(k\)-morphisms between them. Also, denote by \(\mathcal{FB}_{k}\) the category over \(\mathcal{S}m_{k}\) defined as follows:
* The objects are the pairs \((Y,\mathscr{F})\) consisting of a smooth algebraic variety \(Y\) over \(k\) and a flat bundle \(\mathscr{F}\) on \(Y\);
* The morphisms from \((Y,\mathscr{F})\) to \((Z,\mathscr{E})\) are the pairs \((f,\nu)\) consisting of a smooth morphism \(f:Y\to Z\) over \(k\) and an isomorphism of flat bundles \(\nu:\mathscr{F}\stackrel{{\sim}}{{\to}}f^{*}\mathscr{E}\);
* The projection \(\mathcal{FB}_{k}\to\mathcal{S}m_{k}\) is given by assigning \((Y,\mathscr{F})\mapsto Y\).
It is verified that \(\mathcal{FB}_{k}\) forms a category fibered in groupoids over \(\mathcal{S}m_{k}\) by putting \((Y,f^{*}\mathscr{E})\) (for each morphism \(f:Y\to Z\) and \((Z,\mathscr{E})\in\mathrm{ob}(\mathcal{FB}_{k})\)) as the _pull-back of \((Z,\mathscr{E})\) via \(f\)_.
Now, let us fix a fibered subcategory
\[\mathcal{A} \tag{19}\]
of \(\mathcal{FB}_{k}\) having the following three properties:
* (\(\alpha\)) Let \(Y\) be an arbitrary smooth algebraic variety over \(k\). Then, the category \(\mathcal{A}(Y)\) (i.e., the fiber of the projection \(\mathcal{A}\to\mathcal{S}m_{k}\) over \(Y\)) is closed under taking flat subbundles, flat quotient bundles, and the duals of flat bundles. Also, \(\mathcal{A}(Y)\) is closed under taking extensions and the tensor products of two flat bundles;
* (\(\beta\)) If \(f:Y\to Z\) is a smooth morphism in \(\mathcal{S}m_{k}\) and \(\mathscr{F}\) is an object of \(\mathcal{A}(Y)\) (considered as an object of \(D^{b}_{h}(\mathcal{D}_{Y})\)), then there exists a dense open subscheme \(U\) of \(Z\) such that the cohomology sheaves \(\mathcal{H}^{l}(\int_{f}\mathscr{F})|_{U}\) belong to \(\mathcal{A}(U)\) for all \(l\);
* (\(\gamma\)) If \(Y=\mathrm{Spec}(k)\), then the equality \(\mathcal{A}(Y)=\mathcal{FB}_{k}(Y)\) holds.
**Remark 2.7**.: The property that a flat bundle belongs to \(\mathcal{A}\) naturally extends to the situation where the underlying space is a disjoint union \(Y:=\bigsqcup_{i=1}^{r}Y_{i}\) of smooth algebraic varieties \(Y_{i}\). To be precise, we say, in the subsequent discussion, that a flat bundle \(\mathscr{F}\) on such a scheme \(Y\) belongs to \(\mathcal{A}\) if the restriction \(\mathscr{F}|_{Y_{i}}\) to each component \(Y_{i}\) belongs to \(\mathcal{A}(Y_{i})\).
**Definition 2.8**.:
1. Let \(\mathscr{F}^{\bullet}\) be a chain complex in \(D^{b}_{h}(\mathcal{D}_{X})\). We shall say that \(\mathscr{F}^{\bullet}\) is **of type \(\mathcal{A}\)** if there exists a smooth stratification \(\mathfrak{X}:=\{X_{j}\}_{j}\) for \(\mathscr{F}^{\bullet}\) such that the flat bundle \(\mathcal{H}^{l}(\iota^{\dagger}_{\mathfrak{X},j}\mathscr{F}^{\bullet})\) belongs to \(\mathcal{A}(X_{j}\setminus X_{j+1})\) for every \(l\) and \(j\). In this situation, \(\mathfrak{X}\) is called **an \(\mathcal{A}\)-stratification for \(\mathscr{F}^{\bullet}\)**. (In particular, the notion of an \(\mathcal{FB}_{k}\)-stratification, i.e., the case of \(\mathcal{A}=\mathcal{FB}_{k}\), is nothing but the notion of a stratification in the sense of Definition 2.6.)
2. Let \(\mathscr{F}\) be a holonomic \(\mathcal{D}_{X}\)-module. We shall say that \(\mathscr{F}\) is **of type \(\mathcal{A}\)** if it is of type \(\mathcal{A}\) when considered as an object of \(D^{b}_{h}(\mathcal{D}_{X})\).
**Remark 2.9**.: Let \(Y\) be another smooth algebraic variety over \(k\) and \(f:X\to Y\) a morphism over \(k\). Also, let \(\mathscr{F}:=(\mathcal{F},\nabla)\) be a flat bundle in \(\mathcal{A}(Y)\). According to Remark 2.5, (a), \(f^{\dagger}\mathscr{F}\) is quasi-isomorphic to the inverse image \(f^{*}\mathscr{F}\) shifted by \(-\mathrm{dim}X+\mathrm{dim}Y\). Since the inverse image functor \(f^{*}(-)\) sends \(\mathcal{A}(Y)\) to \(\mathcal{A}(X)\), \(f^{\dagger}\mathscr{F}\) belongs to \(\mathcal{A}(X)\). By applying this fact to the case where \(f=\iota_{\mathfrak{Y},j}\) for a smooth stratification \(\mathfrak{Y}:=\{Y_{j}\}_{j}\) on \(Y\) (hence \(X=Y_{j}\setminus Y_{j+1}\)), we see that _any smooth stratification on \(Y\) is an \(\mathcal{A}\)-stratification for \(\mathscr{F}\)_. Moreover, this implies that _a flat bundle \(\mathscr{F}\) on \(X\) belongs to \(\mathcal{A}(X)\) if and only if \(\mathscr{F}\) is of type \(\mathcal{A}\) in the sense of Definition 2.8_.
**Proposition 2.10**.: _Let \(\mathscr{F}^{\bullet}\) be a chain complex in \(D^{b}_{h}(\mathcal{D}_{X})\) and \(\mathfrak{X}:=\{X_{j}\}_{j=0}^{m+1}\), \(\mathfrak{X}^{\prime}:=\{X^{\prime}_{j^{\prime}}\}_{j^{\prime}=0}^{m^{\prime} +1}\) two smooth stratifications on \(X\) such that \(\mathfrak{X}\) is subordinate to \(\mathfrak{X}^{\prime}\). Suppose that \(\mathscr{F}^{\bullet}\) is of type \(\mathcal{A}\) and that \(\mathfrak{X}\) is an \(\mathcal{A}\)-stratification for \(\mathscr{F}^{\bullet}\). Then, \(\mathfrak{X}^{\prime}\) is an \(\mathcal{A}\)-stratification for \(\mathscr{F}^{\bullet}\)._
Proof.: Let us take an arbitrary \(j^{\prime}\in\{0,\cdots,m^{\prime}\}\). Since \(\mathfrak{X}\) is subordinate to \(\mathfrak{X}^{\prime}\), there exists \(j\in\{0,\cdots,m\}\) such that the immersion \(\iota_{\mathfrak{X}^{\prime},j^{\prime}}:X^{\prime}_{j^{\prime}}\setminus X^{\prime}_{j^{\prime}+1}\hookrightarrow X\) factors through \(\iota_{\mathfrak{X},j}:X_{j}\setminus X_{j+1}\hookrightarrow X\). Denote by
\[\iota:X^{\prime}_{j^{\prime}}\setminus X^{\prime}_{j^{\prime}+1}\hookrightarrow X_{j}\setminus X_{j+1} \tag{20}\]
the resulting immersion (hence \(\iota_{\mathfrak{X},j}\circ\iota=\iota_{\mathfrak{X}^{\prime},j^{\prime}}\)). For each \(l\in\mathbb{Z}\), the \(\mathcal{O}_{X_{j}\setminus X_{j+1}}\)-module \(\mathcal{H}^{l}(\iota^{\dagger}_{\mathfrak{X},j}\mathscr{F}^{\bullet})\) is locally free by assumption, so we have
\[\mathcal{H}^{l}(\iota^{\dagger}_{\mathfrak{X}^{\prime},j^{\prime}}\mathscr{F}^ {\bullet})\left(=\mathcal{H}^{l}((\iota_{\mathfrak{X},j}\circ\iota)^{\dagger} \mathscr{F}^{\bullet})\right)\cong\iota^{\dagger}\mathcal{H}^{l}(\iota^{ \dagger}_{\mathfrak{X},j}\mathscr{F}^{\bullet}). \tag{21}\]
Since \(\mathcal{H}^{l}(\iota^{\dagger}_{\mathfrak{X},j}\mathscr{F}^{\bullet})\) belongs to \(\mathcal{A}(X_{j}\setminus X_{j+1})\) and the inverse image functor \(\iota^{\dagger}\) sends \(\mathcal{A}(X_{j}\setminus X_{j+1})\) to \(\mathcal{A}(X^{\prime}_{j^{\prime}}\setminus X^{\prime}_{j^{\prime}+1})\) (cf. Remark 2.9), we have \(\iota^{\dagger}\mathcal{H}^{l}(\iota^{\dagger}_{\mathfrak{X},j}\mathscr{F}^{\bullet})\in\mathcal{A}(X^{\prime}_{j^{\prime}}\setminus X^{\prime}_{j^{\prime}+1})\). Hence, (21) implies \(\mathcal{H}^{l}(\iota^{\dagger}_{\mathfrak{X}^{\prime},j^{\prime}}\mathscr{F}^{\bullet})\in\mathcal{A}(X^{\prime}_{j^{\prime}}\setminus X^{\prime}_{j^{\prime}+1})\). It follows that \(\mathfrak{X}^{\prime}\) is an \(\mathcal{A}\)-stratification for \(\mathscr{F}^{\bullet}\), and this completes the proof of this proposition.
**Proposition 2.11**.: _Let \(\mathscr{E}^{\bullet}\), \(\mathscr{F}^{\bullet}\), \(\mathscr{G}^{\bullet}\) be chain complexes in \(D^{b}_{h}(\mathcal{D}_{X})\), and suppose that there exists a distinguished triangle_
\[\mathscr{E}^{\bullet}\rightarrow\mathscr{F}^{\bullet}\rightarrow\mathscr{G}^{ \bullet}\xrightarrow{+1} \tag{22}\]
_in \(D^{b}_{h}(\mathcal{D}_{X})\). Also, suppose that two of \(\mathscr{E}^{\bullet}\), \(\mathscr{F}^{\bullet}\), and \(\mathscr{G}^{\bullet}\) are of type \(\mathcal{A}\). Then, the remaining one is of type \(\mathcal{A}\)._
Proof.: We only consider the case where \(\mathscr{E}^{\bullet}\) and \(\mathscr{G}^{\bullet}\) are supposed to be of type \(\mathcal{A}\) because the proofs of the other cases are entirely similar. By Propositions 2.4 and 2.10 applied to \(\mathcal{A}=\mathcal{FB}_{k}\), there exists a smooth stratification \(\mathfrak{X}:=\{X_{j}\}_{j=0}^{m+1}\) for the three chain complexes \(\mathscr{E}^{\bullet}\), \(\mathscr{F}^{\bullet}\), \(\mathscr{G}^{\bullet}\) that is moreover an \(\mathcal{A}\)-stratification for both \(\mathscr{E}^{\bullet}\) and \(\mathscr{G}^{\bullet}\). Let us choose \(j\in\{0,\cdots,m\}\). Then, we obtain from (22) an exact sequence of \(\mathcal{D}_{X_{j}\setminus X_{j+1}}\)-modules
\[\cdots\to\mathcal{H}^{l-1}(\iota^{\dagger}_{\mathfrak{X},j}\mathscr{G}^{ \bullet})\to\mathcal{H}^{l}(\iota^{\dagger}_{\mathfrak{X},j}\mathscr{E}^{ \bullet})\to\mathcal{H}^{l}(\iota^{\dagger}_{\mathfrak{X},j}\mathscr{F}^{ \bullet})\to\mathcal{H}^{l}(\iota^{\dagger}_{\mathfrak{X},j}\mathscr{G}^{ \bullet})\to\mathcal{H}^{l+1}(\iota^{\dagger}_{\mathfrak{X},j}\mathscr{E}^{ \bullet})\to\cdots. \tag{23}\]
By the definition of \(\mathfrak{X}\), every \(\mathcal{D}_{X_{j}\setminus X_{j+1}}\)-module in this sequence forms a flat bundle. Hence, the assertion follows from the assumption that the category \(\mathcal{A}(X_{j}\setminus X_{j+1})\) is closed under taking flat subbundles, flat quotient bundles, and extensions of two flat bundles.
**Corollary 2.12**.: _Let \(\alpha:\mathscr{E}^{\bullet}\to\mathscr{F}^{\bullet}\) be a morphism in \(D^{b}_{h}(\mathcal{D}_{X})\), and suppose that both \(\mathscr{E}^{\bullet}\) and \(\mathscr{F}^{\bullet}\) are of type \(\mathcal{A}\). Then, there exists a chain complex \(\mathscr{G}^{\bullet}\in D^{b}_{h}(\mathcal{D}_{X})\) of type \(\mathcal{A}\) together with a distinguished triangle_
\[\mathscr{E}^{\bullet}\xrightarrow{\alpha}\mathscr{F}^{\bullet}\to\mathscr{G}^{ \bullet}\xrightarrow{+1}. \tag{24}\]
Proof.: The assertion follows from Proposition 2.11 and the fact that \(D^{b}_{h}(\mathcal{D}_{X})\) forms a triangulated category (cf. [HTT, Corollary 3.1.4]).
According to Corollary 2.12 just proved, one can define the full triangulated subcategory
\[D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}} \tag{25}\]
of \(D^{b}_{h}(\mathcal{D}_{X})\) consisting of chain complexes of type \(\mathcal{A}\).
## 3. Grothendieck's six functors for holonomic \(\mathcal{D}\)-modules of type \(\mathcal{A}\)
In this section, we discuss various functors on the derived categories of \(\mathcal{D}\)-modules and examine their stability properties with respect to a fixed fibered subcategory \(\mathcal{A}\) of \(\mathcal{FB}_{k}\). Theorem A can be obtained by combining the results of this section applied to the case where \(\mathcal{A}\) is taken to be one of the categories classifying certain arithmetic flat bundles.
Let \(k\), \(X\), and \(\mathcal{A}\) be as in the previous section.
### Direct image functor
This first subsection deals with the direct image functor. To begin with, let us prove the following proposition concerning closed or open immersions.
**Proposition 3.1**.: _The following assertions hold:_
* _Let_ \(\iota:X\hookrightarrow Y\) _be a closed immersion between smooth algebraic varieties over_ \(k\)_. Then, for each_ \(\mathscr{F}^{\bullet}\in D^{b}_{h}(\mathcal{D}_{X})\)_, we have_ (26) \[\mathscr{F}^{\bullet}\in D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}}\iff\int_{ \iota}\!\mathscr{F}^{\bullet}\in D^{b}_{h}(\mathcal{D}_{Y})_{\mathcal{A}}.\]
* _Let_ \(\eta:X\to Y\) _be an open immersion between smooth algebraic varieties over_ \(k\)_. Then, for each_ \(\mathscr{F}^{\bullet}\in D^{b}_{h}(\mathcal{D}_{X})\)_, we have_ (27) \[\mathscr{F}^{\bullet}\in D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}}\iff\int_{ \eta}\mathscr{F}^{\bullet}\in D^{b}_{h}(\mathcal{D}_{Y})_{\mathcal{A}}.\]
_In particular, the same equivalence assertion holds for every (locally closed) immersion between smooth algebraic varieties \(X\hookrightarrow Y\)._
Proof.: We first consider the implication "\(\Leftarrow\)" in assertion (i). Suppose that \(\int_{\iota}\mathscr{F}^{\bullet}\in D^{b}_{h}(\mathcal{D}_{Y})_{\mathcal{A}}\). This means that there exists a smooth \(\mathcal{A}\)-stratification \(\mathfrak{Y}^{1}\) for \(\int_{\iota}\mathscr{F}^{\bullet}\). On the other hand, let us consider a smooth stratification \(\mathfrak{Y}^{2}:=\{Y_{j}^{2}\}_{j=0}^{2}\) on \(Y\) determined by \(Y_{0}^{2}:=Y\), \(Y_{1}^{2}:=\operatorname{Im}(\iota)\), and \(Y_{2}^{2}:=\emptyset\). By Proposition 2.4, we can find a smooth stratification \(\mathfrak{Y}:=\{Y_{j^{\prime}}\}_{j^{\prime}=0}^{m+1}\) on \(Y\) such that both \(\mathfrak{Y}^{1}\) and \(\mathfrak{Y}^{2}\) are subordinate to \(\mathfrak{Y}\) and \(Y_{j_{0}}=(Y_{1}^{2}=)\operatorname{Im}(\iota)\) for some \(j_{0}\in\{0,\cdots,m\}\). It follows from Proposition 2.10 that \(\mathfrak{Y}\) is an \(\mathcal{A}\)-stratification for \(\int_{\iota}\mathscr{F}^{\bullet}\). By putting \(X_{j}:=\iota^{-1}(Y_{j+j_{0}})\) (\(j=0,\cdots,m-j_{0}+1\)), we obtain a smooth stratification \(\mathfrak{X}:=\{X_{j}\}_{j=0}^{m-j_{0}+1}\) on \(X\). Then, for each \(j=0,\cdots,m-j_{0}\), we have
\[\iota^{\dagger}_{\mathfrak{X},j}\mathscr{F}^{\bullet}\cong\iota^{\dagger}_{ \mathfrak{X},j}\iota^{\dagger}\int_{\iota}\mathscr{F}^{\bullet}\cong(\iota \circ\iota_{\mathfrak{X},j})^{\dagger}\int_{\iota}\mathscr{F}^{\bullet}\cong \iota^{\dagger}_{\mathfrak{Y},j+j_{0}}\int_{\iota}\mathscr{F}^{\bullet} \tag{28}\]
under the identification \(X_{j}\setminus X_{j+1}=Y_{j+j_{0}}\setminus Y_{j+j_{0}+1}\) via \(\iota\). In particular, the cohomology sheaves \(\mathcal{H}^{l}(\iota^{\dagger}_{\mathfrak{X},j}\mathscr{F}^{\bullet})\left( \cong\mathcal{H}^{l}(\iota^{\dagger}_{\mathfrak{Y},j+j_{0}}\int_{\iota} \mathscr{F}^{\bullet})\right)\) belong to \(\mathcal{A}(X_{j}\setminus X_{j+1})\) for all \(l\). This implies \(\mathscr{F}^{\bullet}\in D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}}\), and thus we have proved the implication "\(\Leftarrow\)" in (i).
Next, we shall consider the implication "\(\Leftarrow\)" in assertion (ii). Suppose that \(\int_{\eta}\mathscr{F}^{\bullet}\in D^{b}_{h}(\mathcal{D}_{Y})_{\mathcal{A}}\), i.e., there exists an \(\mathcal{A}\)-stratification \(\mathfrak{Y}:=\{Y_{j^{\prime}}\}_{j^{\prime}=0}^{m^{\prime}+1}\) for \(\int_{\eta}\mathscr{F}^{\bullet}\). The decreasing sequence
\[\eta(X)=Y_{0}\cap\eta(X)\supseteq Y_{1}\cap\eta(X)\supseteq\cdots\supseteq Y_ {m^{\prime}+1}\cap\eta(X)=\emptyset \tag{29}\]
determines a smooth stratification \(\mathfrak{X}:=\{X_{j}\}_{j=0}^{m+1}\) on \(X\) after removing the duplicate constituents and identifying \(X\) with \(\eta(X)\). Let us take an arbitrary \(j\in\{0,\cdots,m\}\). By the definition of \(\mathfrak{X}\), there exists \(j^{\prime}\in\{0,\cdots,m^{\prime}\}\) with \(X_{j}=Y_{j^{\prime}}\cap\eta(X)\) (via the identification \(X=\eta(X)\)). Hence, for each \(l\in\mathbb{Z}\), we have
\[\mathcal{H}^{l}\left(\iota^{\dagger}_{\mathfrak{X},j}\mathscr{F}^ {\bullet}\right) \cong\mathcal{H}^{l}\left(\iota^{\dagger}_{\mathfrak{X},j}\eta^{ \dagger}\int_{\eta}\mathscr{F}^{\bullet}\right)\] \[\cong\mathcal{H}^{l}\left(\left(\eta\circ\iota_{\mathfrak{X},j} \right)^{\dagger}\int_{\eta}\mathscr{F}^{\bullet}\right)\] \[\cong\mathcal{H}^{l}\left(\left(\iota^{\dagger}_{\mathfrak{Y},j^ {\prime}}\int_{\eta}\mathscr{F}^{\bullet}\right)\left|{}_{Y_{j^{\prime}}\cap \eta(X)}\right)\] \[\cong\mathcal{H}^{l}\left(\iota^{\dagger}_{\mathfrak{Y},j^{ \prime}}\int_{\eta}\mathscr{F}^{\bullet}\right)\left|{}_{Y_{j^{\prime}}\cap \eta(X)}. \tag{30}\]
It follows that \(\mathcal{H}^{l}\left(\iota^{\dagger}_{\mathfrak{X},j}\mathscr{F}^{\bullet}\right)\) belongs to \(\mathcal{A}(X_{j}\setminus X_{j+1})\), which implies \(\mathscr{F}^{\bullet}\in D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}}\). Thus, we have finished the proof of the implication "\(\Leftarrow\)" in (ii).
Finally, we shall consider the inverse implication "\(\Rightarrow\)" in assertion (i) (resp., (ii)). Suppose that \(\mathscr{F}^{\bullet}\in D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}}\), i.e., there exists an \(\mathcal{A}\)-stratification \(\mathfrak{X}:=\{X_{j}\}_{j=0}^{m+1}\) for \(\mathscr{F}^{\bullet}\). Let \(\mathfrak{Y}:=\{Y_{j^{\prime}}\}_{j^{\prime}=0}^{m+2}\) be the smooth stratification on \(Y\) determined by the condition that \(Y_{0}:=Y\) and \(Y_{j^{\prime}}:=\iota(X_{j^{\prime}-1})\) if \(j^{\prime}=1,\cdots,m+2\) (resp., \(Y_{m+2}:=\emptyset\) and \(Y_{j^{\prime}}:=\eta(X_{j^{\prime}})\cup(Y\setminus X)\) if \(j^{\prime}=0,\cdots,m+1\)). Since \(Y_{0}\setminus Y_{1}=Y\setminus X\) (resp., \(Y_{m+1}\setminus Y_{m+2}=Y\setminus X\)), we have \(\iota^{\dagger}_{\mathfrak{Y},0}\int_{\iota}\mathscr{F}^{\bullet}=0\) (resp., \(\iota^{\dagger}_{\mathfrak{Y},m+1}\int_{\eta}\mathscr{F}^{\bullet}=0\)) (cf. [HTT, Proposition 1.7.1, (ii)]). Moreover, for each
\(j^{\prime}=1,\cdots,m+1\) (resp., \(j^{\prime}=0,\cdots,m\)) and each \(l\in\mathbb{Z}\), it is immediately verified that \(\mathcal{H}^{l}(\iota^{\dagger}_{\mathfrak{Y},j^{\prime}}\int_{\iota}\mathscr{F}^{\bullet})\) (resp., \(\mathcal{H}^{l}(\iota^{\dagger}_{\mathfrak{Y},j^{\prime}}\int_{\eta}\mathscr{F}^{\bullet})\)) is isomorphic to \(\mathcal{H}^{l}(\iota^{\dagger}_{\mathfrak{X},j^{\prime}-1}\mathscr{F}^{\bullet})\) (resp., \(\mathcal{H}^{l}(\iota^{\dagger}_{\mathfrak{X},j^{\prime}}\mathscr{F}^{\bullet})\)) via the natural identification \(X_{j^{\prime}-1}\setminus X_{j^{\prime}}=Y_{j^{\prime}}\setminus Y_{j^{\prime}+1}\) (resp., \(X_{j^{\prime}}\setminus X_{j^{\prime}+1}=Y_{j^{\prime}}\setminus Y_{j^{\prime}+1}\)). This implies that \(\mathcal{H}^{l}(\iota^{\dagger}_{\mathfrak{Y},j^{\prime}}\int_{\iota}\mathscr{F}^{\bullet})\) (resp., \(\mathcal{H}^{l}(\iota^{\dagger}_{\mathfrak{Y},j^{\prime}}\int_{\eta}\mathscr{F}^{\bullet})\)) belongs to \(\mathcal{A}(Y_{j^{\prime}}\setminus Y_{j^{\prime}+1})\), and hence, we have \(\int_{\iota}\mathscr{F}^{\bullet}\in D^{b}_{h}(\mathcal{D}_{Y})_{\mathcal{A}}\) (resp., \(\int_{\eta}\mathscr{F}^{\bullet}\in D^{b}_{h}(\mathcal{D}_{Y})_{\mathcal{A}}\)). This completes the proof of the implication "\(\Rightarrow\)".
**Corollary 3.2**.: _Let \(Z\) be a closed subscheme of \(X\) and \(\mathscr{F}^{\bullet}\) a chain complex in \(D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}}\). Then, the chain complex \(R\Gamma_{Z}(\mathscr{F}^{\bullet})\) (cf. [HTT, § 1.7]) belongs to \(D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}}\)._
Proof.: Write \(U:=X\setminus Z\) and write \(\eta:U\to X\) for the natural open immersion. Let us consider the distinguished triangle
\[R\Gamma_{Z}(\mathscr{F}^{\bullet})\to\mathscr{F}^{\bullet}\to\int_{\eta}\eta^ {\dagger}\mathscr{F}^{\bullet}\xrightarrow{+1} \tag{31}\]
(cf. [HTT, Proposition 1.7.1, (i)]). By the assumption on \(\mathscr{F}^{\bullet}\) together with Proposition 3.1, (ii), \(\int_{\eta}\eta^{\dagger}\mathscr{F}^{\bullet}\left(=\int_{\eta}(\mathscr{F}^ {\bullet}|_{U})\right)\) belongs to \(D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}}\). Hence, the assertion follows from Proposition 2.11.
**Corollary 3.3**.: _Let \(Z\) be a reduced closed subscheme of \(X\) defined as a disjoint union of finitely many closed points. Also, let \(\mathscr{F}\) be a holonomic \(\mathcal{D}_{X}\)-module supported on \(Z\). Then, \(\mathscr{F}\) is of type \(\mathcal{A}\)._
Proof.: The assertion follows from Proposition 3.1, (i), [HTT, Corollary 1.6.2] (i.e., Kashiwara's equivalence), and the assumption "\(\mathcal{A}(\operatorname{Spec}(k))=\mathcal{FB}_{k}(\operatorname{Spec}(k))\)" on \(\mathcal{A}\) (i.e., the property \((\gamma)\) described in § 2.3).
The following assertion will be used in the proof of Proposition 3.5 described below, and it is itself a special case of that proposition.
**Lemma 3.4**.: _Let \(f:X\to Y\) be a smooth morphism between smooth algebraic varieties over \(k\). Then, for each flat bundle \(\mathscr{F}\) in \(\mathcal{A}(X)\), the direct image \(\int_{f}\mathscr{F}\) belongs to \(D^{b}_{h}(\mathcal{D}_{Y})_{\mathcal{A}}\)._
Proof.: Since \(\mathscr{F}\) belongs to \(\mathcal{A}(X)\) and \(f\) is smooth, it follows from the assumption on \(\mathcal{A}\) (i.e., the property \((\beta)\) described in § 2.3) that there exists a dense open subscheme \(U_{Y}\) of \(Y\) satisfying \(\mathcal{H}^{l}(\int_{f}\mathscr{F})|_{U_{Y}}\in\mathcal{A}(U_{Y})\) for every \(l\). Let \(\eta_{X}\) (resp., \(\eta_{Y}\)) denote the natural open immersion \(f^{-1}(U_{Y})\hookrightarrow X\) (resp., \(U_{Y}\hookrightarrow Y\)). Note that
\[\int_{\eta_{Y}}\eta^{\dagger}_{Y}\int_{f}\mathscr{F}\cong\int_{\eta_{Y}}\int_{f_{Y}}\eta^{\dagger}_{X}\mathscr{F}\cong\int_{f}\int_{\eta_{X}}\eta^{\dagger}_{X}\mathscr{F}, \tag{32}\]
where \(f_{Y}:f^{-1}(U_{Y})\to U_{Y}\) denotes the smooth morphism obtained by restricting \(f\). Since \(\eta^{\dagger}_{Y}\int_{f}\mathscr{F}\left(=\left(\int_{f}\mathscr{F}\right)| _{U_{Y}}\right)\in D^{b}_{h}(\mathcal{D}_{U_{Y}})_{\mathcal{A}}\), it follows from Proposition 3.1, (ii), that the leftmost, hence also the rightmost, of (32) belongs to \(D^{b}_{h}(\mathcal{D}_{Y})_{\mathcal{A}}\). If we set \(Z:=Y\setminus U_{Y}\), which is a reduced subscheme of \(Y\), then the inverse image \(f^{-1}(Z)\) specifies a reduced closed subscheme of \(X\) with \(f^{-1}(Z)=X\setminus f^{-1}(U_{Y})\) and we obtain the distinguished triangle
\[\int_{f}R\Gamma_{f^{-1}(Z)}\mathscr{F}\to\int_{f}\mathscr{F}\to\int_{f}\int_{ \eta_{X}}\eta^{\dagger}_{X}\mathscr{F}\xrightarrow{+1} \tag{33}\]
induced from \(R\Gamma_{f^{-1}(Z)}\mathscr{F}\to\mathscr{F}\to\int_{\eta_{X}}\eta_{X}^{\dagger} \mathscr{F}\xrightarrow{+1}\). Hence, by Proposition 2.11 together with the fact that \(\int_{f}\int_{\eta_{X}}\eta_{X}^{\dagger}\mathscr{F}\in D_{h}^{b}(\mathcal{D}_{ Y})_{\mathcal{A}}\), the problem is reduced to proving that \(\int_{f}R\Gamma_{f^{-1}(Z)}\mathscr{F}\in D_{h}^{b}(\mathcal{D}_{Y})_{ \mathcal{A}}\).
Next, let us take a dense open subscheme \(U_{Z}\) of \(Z\) that is smooth over \(k\). We shall suppose that \(W:=Z\setminus U_{Z}\) is nonempty (hence the inequality \(\mathrm{dim}W<\mathrm{dim}Z\) holds if \(Z\) has positive dimension). Also, write \(\eta_{Z}\) (resp., \(\eta_{W}\)) for the natural immersion \(f^{-1}(U_{Z})\hookrightarrow X\) (resp., the natural open immersion \(X\setminus W\hookrightarrow X\)), which defines a morphism between smooth algebraic varieties. Let \(f_{Z}\) denote the smooth morphism \(f^{-1}(U_{Z})\to U_{Z}\) obtained by restricting \(f\). Since \(\eta_{Z}^{\dagger}\mathscr{F}\left(=\mathscr{F}|_{f^{-1}(U_{Z})}\right)\in \mathcal{A}(f^{-1}(U_{Z}))\), there exists a dense open subscheme of \(U_{Z}\) on which \(\mathcal{H}^{l}(\int_{f_{Z}}\eta_{Z}^{\dagger}\mathscr{F})\) belongs to \(\mathcal{A}\) for every \(l\). Hence, by possibly replacing \(U_{Z}\) with its open subscheme, we may assume that \(\int_{f_{Z}}\eta_{Z}^{\dagger}\mathscr{F}\in D_{h}^{b}(\mathcal{D}_{U_{Z}})_{ \mathcal{A}}\). Let us consider the distinguished triangle
\[R\Gamma_{f^{-1}(W)}\mathscr{F}\to R\Gamma_{f^{-1}(Z)}\mathscr{F}\to\int_{\eta _{Z}}\eta_{Z}^{\dagger}\mathscr{F}\xrightarrow{+1} \tag{34}\]
induced from the fact that \(\int_{\eta_{Z}}\eta_{Z}^{\dagger}\mathscr{F}\cong\int_{\eta_{W}}\eta_{W}^{ \dagger}R\Gamma_{f^{-1}(Z)}\mathscr{F}\) (cf. [HTT, Corollary 1.6.2]). It yields, via \(\int_{f}\), a distinguished triangle
\[\int_{f}R\Gamma_{f^{-1}(W)}\mathscr{F}\to\int_{f}R\Gamma_{f^{-1}(Z)}\mathscr{ F}\to\int_{f}\int_{\eta_{Z}}\eta_{Z}^{\dagger}\mathscr{F}\xrightarrow{+1}. \tag{35}\]
If \(\iota\) denotes the natural immersion \(U_{Z}\hookrightarrow Y\), then we have
\[\int_{f}\int_{\eta_{Z}}\eta_{Z}^{\dagger}\mathscr{F}\cong\int_{\iota}\int_{f_{ Z}}\eta_{Z}^{\dagger}\mathscr{F}. \tag{36}\]
The right-hand side, hence also the left-hand side, of (36) belongs to \(D_{h}^{b}(\mathcal{D}_{Y})_{\mathcal{A}}\) because of Proposition 3.1, (i) and (ii), together with the fact that \(\int_{f_{Z}}\eta_{Z}^{\dagger}\mathscr{F}\in D_{h}^{b}(\mathcal{D}_{U_{Z}})_{ \mathcal{A}}\). Hence, by Proposition 2.11 and (35), the problem is reduced to proving that \(\int_{f}R\Gamma_{f^{-1}(W)}\mathscr{F}\in D_{h}^{b}(\mathcal{D}_{Y})_{ \mathcal{A}}\).
By repeating the argument in the previous paragraph, the problem is eventually reduced to proving that \(\int_{f}R\Gamma_{f^{-1}(W)}\mathscr{F}\in D_{h}^{b}(\mathcal{D}_{Y})_{ \mathcal{A}}\) in the case of \(\mathrm{dim}W=0\). But, this is clear from Corollary 3.3. We have finished the proof of this lemma.
**Proposition 3.5**.: _Let \(f:X\to Y\) be a morphism of smooth algebraic varieties over \(k\). Then, the direct image functor \(\int_{f}:D_{h}^{b}(\mathcal{D}_{X})\to D_{h}^{b}(\mathcal{D}_{Y})\) (cf. (18)) restricts to a functor_
\[\int_{f}:D_{h}^{b}(\mathcal{D}_{X})_{\mathcal{A}}\to D_{h}^{b}(\mathcal{D}_{Y} )_{\mathcal{A}}. \tag{37}\]
Proof.: We prove this assertion by induction on the dimension \(\mathrm{dim}X\). The base step, i.e., the case of \(\mathrm{dim}X=0\), follows from Proposition 3.1, (i). In what follows, we shall consider the induction step.
Let \(\mathscr{F}^{\bullet}\) be a chain complex in \(D_{h}^{b}(\mathcal{D}_{X})_{\mathcal{A}}\). The problem is to show that \(\int_{f}\mathscr{F}^{\bullet}\in D_{h}^{b}(\mathcal{D}_{Y})_{\mathcal{A}}\). By induction on the cohomological length of \(\mathscr{F}^{\bullet}\), we may assume that \(\mathscr{F}^{\bullet}=\mathscr{F}\) for a holonomic \(\mathcal{D}_{X}\)-module \(\mathscr{F}\) of type \(\mathcal{A}\). Since \(k\) has characteristic zero, there exists a dense open subscheme \(U\) of the scheme-theoretic image \(\mathrm{Im}f\) that is smooth over \(k\). Also, there exists a dense open subscheme \(V\) of \(f^{-1}(U)\) such that the morphism \(f_{U}:V\to U\) obtained by restricting \(f\) is smooth. After possibly replacing \(V\) with its open subscheme, we may assume that \(\mathscr{F}|_{V}\) is a flat bundle in \(\mathcal{A}(V)\). Write \(\eta\) (resp., \(\iota\)) for the natural open immersion \(V\hookrightarrow X\)
(resp., the natural immersion \(U\hookrightarrow Y\)). It follows from Proposition 3.1 and Lemma 3.4 that \(\int_{\iota}\int_{f_{U}}\eta^{\dagger}\mathscr{F}\left(=\int_{\iota}\int_{f_{U}} \mathscr{F}|_{V}\right)\in D^{b}_{h}(\mathcal{D}_{Y})_{\mathcal{A}}\). Since \(\int_{f}\int_{\eta}\eta^{\dagger}\mathscr{F}\) is quasi-isomorphic to \(\int_{\iota}\int_{f_{U}}\eta^{\dagger}\mathscr{F}\), it also belongs to \(D^{b}_{h}(\mathcal{D}_{Y})_{\mathcal{A}}\).
If \(V=X\), then we have already finished the proof. Hence, it suffices to consider the case where the reduced closed subscheme \(Z:=X\setminus V\) of \(X\) is nonempty (and \(\mathrm{dim}Z<\mathrm{dim}X\)). By the distinguished triangle
\[\int_{f}R\Gamma_{Z}\mathscr{F}\to\int_{f}\mathscr{F}\to\int_{f}\int_{\eta}\eta ^{\dagger}\mathscr{F}\xrightarrow{+1} \tag{38}\]
induced from \(R\Gamma_{Z}\mathscr{F}\to\mathscr{F}\to\int_{\eta}\eta^{\dagger}\mathscr{F} \xrightarrow{+1}\), the problem is reduced to proving that \(\int_{f}R\Gamma_{Z}\mathscr{F}\in D^{b}_{h}(\mathcal{D}_{Y})_{\mathcal{A}}\).
Now, let us take a dense open subscheme \(U_{Z}\) of \(Z\) that is smooth over \(k\). Write \(W:=Z\setminus U_{Z}\) (considered as a reduced closed subscheme of \(X\)) and write \(\iota_{Z}\) for the natural immersion \(U_{Z}\hookrightarrow X\). After possibly replacing \(U_{Z}\) with its open subscheme, we may assume that the cohomology sheaves \(\mathcal{H}^{l}(\iota^{\dagger}_{Z}\mathscr{F})\) are flat bundles in \(\mathcal{A}(U_{Z})\) for all \(l\). (In fact, let \(\mathfrak{X}:=\{X_{j}\}_{j}\) be an \(\mathcal{A}\)-stratification for \(\mathscr{F}\). There exists \(j\) such that \(X_{j}\setminus X_{j+1}\) contains the generic point of \(Z\), i.e., contains a dense open subscheme \(U_{Z}\) of \(Z\). If \(\overline{\iota}_{Z}\) denotes the natural immersion \(U_{Z}\hookrightarrow X_{j}\setminus X_{j+1}\), then we have
\[\mathcal{H}^{l}(\iota^{\dagger}_{Z}\mathscr{F})\cong\mathcal{H}^{l}( \overline{\iota}^{\dagger}_{Z}\iota^{\dagger}_{\mathfrak{X},j}\mathscr{F}) \cong\overline{\iota}^{\dagger}_{Z}\mathcal{H}^{l}(\iota^{\dagger}_{\mathfrak{ X},j}\mathscr{F})\cong\overline{\iota}^{*}_{Z}\mathcal{H}^{l}(\iota^{ \dagger}_{\mathfrak{X},j}\mathscr{F}), \tag{39}\]
where the second "\(\cong\)" follows from Remark 2.5, (a), together with the fact that the \(\mathcal{H}^{l}(\iota^{\dagger}_{\mathfrak{X},j}\mathscr{F})\)'s are flat bundles. Hence, since \(\mathcal{H}^{l}(\iota^{\dagger}_{\mathfrak{X},j}\mathscr{F})\in\mathcal{A}(X _{j}\setminus X_{j+1})\), we obtain the claim, as desired.) Let us consider the distinguished triangle
\[R\Gamma_{W}\mathscr{F}\to R\Gamma_{Z}\mathscr{F}\to\int_{\iota_{Z}}\iota^{ \dagger}_{Z}\mathscr{F}\xrightarrow{+1} \tag{40}\]
induced from the fact that \(\iota^{\dagger}_{Z}R\Gamma_{Z}\mathscr{F}\cong\iota^{\dagger}_{Z}\mathscr{F}\) (cf. [HTT, Corollary 1.6.2 or Proposition 1.7.1, (iii)]). It yields, via \(\int_{f}\), a distinguished triangle
\[\int_{f}R\Gamma_{W}\mathscr{F}\to\int_{f}R\Gamma_{Z}\mathscr{F}\to\int_{f\circ\iota_{Z}}\iota^{\dagger}_{Z}\mathscr{F}\xrightarrow{+1}. \tag{41}\]
Since \(\iota^{\dagger}_{Z}\mathscr{F}\) belongs to \(D^{b}_{h}(\mathcal{D}_{U_{Z}})_{\mathcal{A}}\), the induction hypothesis implies \(\int_{f\circ\iota_{Z}}\iota^{\dagger}_{Z}\mathscr{F}\in D^{b}_{h}(\mathcal{D}_{Y})_{\mathcal{A}}\). Hence, the problem is reduced to proving that \(\int_{f}R\Gamma_{W}\mathscr{F}\in D^{b}_{h}(\mathcal{D}_{Y})_{\mathcal{A}}\).
By repeating the argument in the previous paragraph, the problem is eventually reduced to proving that \(\int_{f}R\Gamma_{W}\mathscr{F}\in D^{b}_{h}(\mathcal{D}_{Y})_{\mathcal{A}}\) in the case of \(\mathrm{dim}W=0\). But, this is clear from Corollary 3.3. We have finished the proof of this proposition.
### Inverse image functor
Next, we prove that the inverse image functor preserves the subcategory \(D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}}\subseteq D^{b}_{h}(\mathcal{D}_{X})\).
**Proposition 3.6**.: _Let \(f:X\to Y\) be a morphism of smooth algebraic varieties over \(k\). Then, the functor \(f^{\dagger}\) (cf. (16)) restricts to a functor_
\[f^{\dagger}:D^{b}_{h}(\mathcal{D}_{Y})_{\mathcal{A}}\to D^{b}_{h}(\mathcal{D}_ {X})_{\mathcal{A}}. \tag{42}\]
_Also, the same assertion holds for the functor \(Lf^{*}\)._
Proof.: By decomposing \(f\) into the composite of the closed immersion \((\operatorname{id}_{X},f):X\hookrightarrow X\times_{k}Y\) and the projection \(X\times_{k}Y\twoheadrightarrow Y\), we may assume that \(f\) is either a closed immersion or a smooth morphism.
We first consider the case where \(f\) is smooth. Let \(\mathscr{F}^{\bullet}\) be a chain complex in \(D^{b}_{h}(\mathcal{D}_{Y})_{\mathcal{A}}\), and choose an \(\mathcal{A}\)-stratification \(\mathfrak{Y}:=\{Y_{j}\}_{j}\) for \(\mathscr{F}^{\bullet}\). For each \(j\), we denote by \(f_{j}\) the morphism \(f^{-1}(Y_{j}\setminus Y_{j+1})\to Y_{j}\setminus Y_{j+1}\) obtained by restricting \(f\). Then, the collection \(f^{-1}\mathfrak{Y}:=\{f^{-1}(Y_{j})\}_{j}\) forms a smooth stratification on \(X\). For each \(l\in\mathbb{Z}\), the smoothness of \(f_{j}\) implies that
\[\mathcal{H}^{l}(\iota^{\dagger}_{f^{-1}\mathfrak{Y},j}(f^{\dagger}\mathscr{F} ^{\bullet}))\cong\mathcal{H}^{l}(f_{j}^{\dagger}(\iota^{\dagger}_{\mathfrak{ Y},j}\mathscr{F}^{\bullet}))\cong f_{j}^{*}\mathcal{H}^{l}(\iota^{\dagger}_{ \mathfrak{Y},j}\mathscr{F}^{\bullet}). \tag{43}\]
In particular, the cohomology sheaves \(\mathcal{H}^{l}(\iota^{\dagger}_{f^{-1}\mathfrak{Y},j}(f^{\dagger}\mathscr{F} ^{\bullet}))\) belong to \(\mathcal{A}(f^{-1}(Y_{j}))\), so \(f^{\dagger}\mathscr{F}^{\bullet}\) is verified to be an object of \(D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}}\) (and \(f^{-1}\mathfrak{Y}\) forms an \(\mathcal{A}\)-stratification for \(f^{\dagger}\mathscr{F}^{\bullet}\)).
Next, let us consider the case of a closed immersion \(\iota:X\hookrightarrow Y\). We shall write \(\eta:U:=Y\setminus X\hookrightarrow Y\) for the open immersion. Let us consider the distinguished triangle
\[\int_{\iota}\iota^{\dagger}\mathscr{F}^{\bullet}\to\mathscr{F}^{\bullet}\to \int_{\eta}\eta^{\dagger}\mathscr{F}^{\bullet}\stackrel{{+1}}{{\longrightarrow}} \tag{44}\]
(cf. [HTT, Proposition 1.7.1, (i) and (iii)]). Since \(\eta^{\dagger}\mathscr{F}^{\bullet}\in D^{b}_{h}(\mathcal{D}_{Y\setminus X})_ {\mathcal{A}}\) (by the previous argument for a smooth morphism \(f\)), Proposition 3.1, (ii), implies \(\int_{\eta}\eta^{\dagger}\mathscr{F}^{\bullet}\in D^{b}_{h}(\mathcal{D}_{Y})_ {\mathcal{A}}\). Hence, by Proposition 2.11, \(\int_{\iota}\iota^{\dagger}\mathscr{F}^{\bullet}\) is verified to be a chain complex in \(D^{b}_{h}(\mathcal{D}_{Y})_{\mathcal{A}}\). By applying Proposition 3.1, (i), we have \(\iota^{\dagger}\mathscr{F}^{\bullet}\in D^{b}_{h}(\mathcal{D}_{X})_{ \mathcal{A}}\), thus completing the proof of this proposition.
### Tensor product functor
Let \(Y\) be another smooth algebraic variety over \(k\). Let \(\pi_{1}\) (resp., \(\pi_{2}\)) denote the projection from the product of two \(k\)-schemes onto the first (resp., second) factor, e.g., the projection \(X\times_{k}Y\to X\) (resp., \(X\times_{k}Y\to Y\)).
Recall that, for a \(\mathcal{D}_{X}\)-module \(\mathscr{F}\) and a \(\mathcal{D}_{Y}\)-module \(\mathscr{E}\), the _exterior tensor product_ of \(\mathscr{F}\) and \(\mathscr{E}\) is defined as the \(\mathcal{D}_{X\times_{k}Y}\)-module
\[\mathscr{F}\boxtimes\mathscr{E}:=\mathcal{D}_{X\times_{k}Y}\otimes_{\pi_{1}^{- 1}\mathcal{D}_{X}\otimes_{k}\pi_{2}^{-1}\mathcal{D}_{Y}}(\pi_{1}^{-1}\mathscr{ F}\otimes_{k}\pi_{2}^{-1}\mathscr{E}). \tag{45}\]
The assignment \((\mathscr{F},\mathscr{E})\mapsto\mathscr{F}\boxtimes\mathscr{E}\) extends to a functor
\[(-)\boxtimes(-):D^{b}_{h}(\mathcal{D}_{X})\times D^{b}_{h}(\mathcal{D}_{Y}) \to D^{b}_{h}(\mathcal{D}_{X\times_{k}Y}) \tag{46}\]
(cf. [HTT, Proposition 3.2.2]).
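In the special case of flat bundles, the exterior tensor product should admit the following concrete description: if \(\mathscr{F}=(\mathcal{F},\nabla_{\mathcal{F}})\) and \(\mathscr{E}=(\mathcal{E},\nabla_{\mathcal{E}})\) are flat bundles on \(X\) and \(Y\), respectively, then \(\mathscr{F}\boxtimes\mathscr{E}\) is the flat bundle on \(X\times_{k}Y\) whose underlying sheaf is \(\pi_{1}^{*}\mathcal{F}\otimes_{\mathcal{O}_{X\times_{k}Y}}\pi_{2}^{*}\mathcal{E}\), equipped with the product connection \(\nabla_{\mathcal{F}}\otimes\mathrm{id}+\mathrm{id}\otimes\nabla_{\mathcal{E}}\) (interpreted via the canonical decomposition \(\Omega_{X\times_{k}Y/k}\cong\pi_{1}^{*}\Omega_{X/k}\oplus\pi_{2}^{*}\Omega_{Y/k}\)); in particular, the exterior tensor product of two flat bundles is again a flat bundle.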
**Proposition 3.7**.: _The exterior tensor product \(\boxtimes\) restricts to a functor_
\[(-)\boxtimes(-):D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}}\times D^{b}_{h}( \mathcal{D}_{Y})_{\mathcal{A}}\to D^{b}_{h}(\mathcal{D}_{X\times_{k}Y})_{ \mathcal{A}}. \tag{47}\]
Proof.: Let \(\mathscr{F}^{\bullet}\) and \(\mathscr{E}^{\bullet}\) be chain complexes in \(D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}}\) and \(D^{b}_{h}(\mathcal{D}_{Y})_{\mathcal{A}}\), respectively. Choose \(\mathcal{A}\)-stratifications \(\mathfrak{X}:=\{X_{i}\}_{i=0}^{m+1}\), \(\mathfrak{Y}:=\{Y_{j}\}_{j=0}^{n+1}\) for \(\mathscr{F}^{\bullet}\) and \(\mathscr{E}^{\bullet}\), respectively. For each \(j\in\{0,\cdots,m+n+1\}\), we shall set \(Z_{j}\) to be the reduced closed subscheme of \(X\times_{k}Y\) defined as the union \(\bigcup_{l=0}^{j}X_{l}\times_{k}Y_{j-l}\). The subscheme \(Z_{j}\setminus Z_{j+1}\) of \(X\times_{k}Y\) decomposes as
\[Z_{j}\setminus Z_{j+1}=\bigsqcup_{l=0}^{j}(X_{l}\setminus X_{l+1})\times_{k}(Y _{j-l}\setminus Y_{j-l+1}). \tag{48}\]
In particular, it is a disjoint union of smooth subschemes of \(X\times_{k}Y\), and the resulting collection \(\mathfrak{Z}:=\{Z_{j}\}_{j=0}^{m+n+1}\) forms a smooth stratification on \(X\times_{k}Y\).
Now, let us take a connected component \(U\) of \(Z_{j}\setminus Z_{j+1}\), which coincides with \((X_{s}\setminus X_{s+1})\times_{k}(Y_{j-s}\setminus Y_{j-s+1})\) for some \(s\). For each \(a\in\mathbb{Z}\) (resp., \(b\in\mathbb{Z}\)), the local freeness of \(\mathcal{H}^{a}(\iota^{\dagger}_{\mathfrak{X},s}\mathscr{F}^{\bullet})\) (resp., \(\mathcal{H}^{b}(\iota^{\dagger}_{\mathfrak{Y},j-s}\mathscr{E}^{\bullet})\)) implies that \(\mathcal{H}^{a}(\pi^{\dagger}_{1}\iota^{\dagger}_{\mathfrak{X},s}\mathscr{F}^{ \bullet})\) (resp., \(\mathcal{H}^{b}(\pi^{\dagger}_{2}\iota^{\dagger}_{\mathfrak{Y},j-s}\mathscr{E }^{\bullet})\)) is isomorphic to \(\pi^{*}_{1}\mathcal{H}^{a}(\iota^{\dagger}_{\mathfrak{X},s}\mathscr{F}^{ \bullet})\) (resp., \(\pi^{*}_{2}\mathcal{H}^{b}(\iota^{\dagger}_{\mathfrak{Y},j-s}\mathscr{E}^{ \bullet})\)) and hence locally free. Since the forgetful functor \(D^{b}_{h}(\mathcal{D}_{(-)})\to D^{b}(\mathcal{O}_{(-)})\) is compatible with the exterior product functor \(\boxtimes\) (cf. the discussion preceding [HTT, Proposition 1.5.18]), the natural morphism of \(\mathcal{D}_{U}\)-modules
\[\mathcal{H}^{l}(\iota^{\dagger}_{\mathfrak{X},s}\mathscr{F}^{\bullet}\boxtimes \iota^{\dagger}_{\mathfrak{Y},j-s}\mathscr{E}^{\bullet})\overset{\sim}{\to} \bigoplus_{a+b=l}\mathcal{H}^{a}(\pi^{\dagger}_{1}\iota^{\dagger}_{\mathfrak{X },s}\mathscr{F}^{\bullet})\otimes\mathcal{H}^{b}(\pi^{\dagger}_{2}\iota^{ \dagger}_{\mathfrak{Y},j-s}\mathscr{E}^{\bullet}) \tag{49}\]
is an isomorphism because of the Kunneth formula for chain complexes. Thus, we obtain the following sequence of isomorphisms defined for each \(l\):
\[\mathcal{H}^{l}(\iota^{\dagger}_{\mathfrak{Z},j}(\mathscr{F}^{ \bullet}\boxtimes\mathscr{E}^{\bullet}))|_{U} \overset{\sim}{\to} \mathcal{H}^{l}(\iota^{\dagger}_{\mathfrak{Z},j}(\mathscr{F}^{ \bullet}\boxtimes\mathscr{E}^{\bullet})|_{U})\] \[\overset{\sim}{\to} \mathcal{H}^{l}(\iota^{\dagger}_{\mathfrak{X},s}\mathscr{F}^{ \bullet}\boxtimes\iota^{\dagger}_{\mathfrak{Y},j-s}\mathscr{E}^{\bullet})\] \[\overset{\sim}{\to} \bigoplus_{a+b=l}\mathcal{H}^{a}(\pi^{\dagger}_{1}\iota^{ \dagger}_{\mathfrak{X},s}\mathscr{F}^{\bullet})\otimes\mathcal{H}^{b}(\pi^{ \dagger}_{2}\iota^{\dagger}_{\mathfrak{Y},j-s}\mathscr{E}^{\bullet})\] \[\overset{\sim}{\to} \bigoplus_{a+b=l}\pi^{*}_{1}\mathcal{H}^{a}(\iota^{\dagger}_{ \mathfrak{X},s}\mathscr{F}^{\bullet})\otimes\pi^{*}_{2}\mathcal{H}^{b}(\iota^ {\dagger}_{\mathfrak{Y},j-s}\mathscr{E}^{\bullet}), \tag{50}\]
where the second arrow follows from [HTT, Proposition 1.5.18, (i)]. Since both \(\pi^{*}_{1}\mathcal{H}^{a}(\iota^{\dagger}_{\mathfrak{X},s}\mathscr{F}^{ \bullet})\) and \(\pi^{*}_{2}\mathcal{H}^{b}(\iota^{\dagger}_{\mathfrak{Y},j-s}\mathscr{E}^{ \bullet})\) belong to \(\mathcal{A}(U)\), we see that \(\mathcal{H}^{l}(\iota^{\dagger}_{\mathfrak{Z},j}(\mathscr{F}^{\bullet} \boxtimes\mathscr{E}^{\bullet}))|_{U}\in\mathcal{A}(U)\), which implies \(\mathcal{H}^{l}(\iota^{\dagger}_{\mathfrak{Z},j}(\mathscr{F}^{\bullet} \boxtimes\mathscr{E}^{\bullet}))\in\mathcal{A}(Z_{j}\setminus Z_{j+1})\). That is to say, \(\mathscr{F}^{\bullet}\boxtimes\mathscr{E}^{\bullet}\) lies in \(D^{b}_{h}(\mathcal{D}_{X\times_{k}Y})_{\mathcal{A}}\) and \(\mathfrak{Z}\) forms an \(\mathcal{A}\)-stratification for \(\mathscr{F}^{\bullet}\boxtimes\mathscr{E}^{\bullet}\). This completes the proof of this proposition.
Also, the internal tensor product \((\mathscr{F},\mathscr{E})\mapsto\mathscr{F}\otimes_{\mathcal{O}_{X}}\mathscr{E}\) induces the derived functor
\[(-)\otimes^{L}_{\mathcal{O}_{X}}(-):D^{b}_{h}(\mathcal{D}_{X})\times D^{b}_{h} (\mathcal{D}_{X})\to D^{b}_{h}(\mathcal{D}_{X}). \tag{51}\]
**Proposition 3.8**.: _The functor \(\otimes^{L}_{\mathcal{O}_{X}}\) just mentioned restricts to a functor_
\[(-)\otimes^{L}_{\mathcal{O}_{X}}(-):D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}} \times D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}}\to D^{b}_{h}(\mathcal{D}_{X}) _{\mathcal{A}}. \tag{52}\]
Proof.: The assertion follows from Propositions 3.6 and 3.7 because \((-)\otimes^{L}_{\mathcal{O}_{X}}(-)=L\Delta^{*}_{X}((-)\boxtimes(-))\), where \(\Delta_{X}\) denotes the diagonal embedding \(X\hookrightarrow X\times_{k}X\).
### Duality functor
Recall from [HTT, Proposition 3.2.1] the duality functor
\[\mathbb{D}:D^{b}_{h}(\mathcal{D}_{X})\overset{\sim}{\to}D^{b}_{h}(\mathcal{D} _{X})^{\mathrm{op}}, \tag{53}\]
which is an equivalence of categories given by assigning \(\mathscr{F}^{\bullet}\mapsto\mathbb{D}\mathscr{F}^{\bullet}:=R\mathcal{H}om_{ \mathcal{D}_{X}}(\mathscr{F}^{\bullet},\mathcal{D}_{X}\otimes_{\mathcal{O}_{X}} \omega^{\vee}_{X}[\mathrm{dim}X])\). If \(\mathscr{F}^{\bullet}=\mathscr{F}\) for a flat bundle \(\mathscr{F}\) on \(X\), then \(\mathbb{D}\mathscr{F}\) may be identified with the dual of \(\mathscr{F}\) in the usual sense.
**Proposition 3.9**.: _The functor \(\mathbb{D}\) restricts to an equivalence of categories_
\[\mathbb{D}:D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}}\overset{\sim}{\to}D^{b}_{h} (\mathcal{D}_{X})^{\mathrm{op}}_{\mathcal{A}}. \tag{54}\]
Proof.: Let \(\mathscr{F}^{\bullet}\) be a chain complex in \(D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}}\), and choose an \(\mathcal{A}\)-stratification \(\mathfrak{X}:=\{X_{j}\}_{j=0}^{m+1}\) for \(\mathscr{F}^{\bullet}\). By descending induction on \(j\), we shall prove the claim that \(\mathbb{D}R\Gamma_{X_{j}}\mathscr{F}^{\bullet}\) lies in \(D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}}\).
First, let us consider the case of \(j=m\), as the base step. Since \(\mathcal{H}^{-l}(\iota^{\dagger}_{\mathfrak{X},m}\mathscr{F}^{\bullet})\) (for each \(l\)) is a flat bundle in \(\mathcal{A}(X_{m})\), its dual \(\mathbb{D}\mathcal{H}^{-l}(\iota^{\dagger}_{\mathfrak{X},m}\mathscr{F}^{ \bullet})\left(\cong\mathcal{H}^{l}(\mathbb{D}\iota^{\dagger}_{\mathfrak{X},m }\mathscr{F}^{\bullet})\right)\) belongs to \(\mathcal{A}(X_{m})\). This implies \(\mathbb{D}\iota^{\dagger}_{\mathfrak{X},m}\mathscr{F}^{\bullet}\in D^{b}_{h} (\mathcal{D}_{X_{m}})_{\mathcal{A}}\). Hence, by Proposition 3.1, \(\int_{\iota_{\mathfrak{X},m}}\mathbb{D}\iota^{\dagger}_{\mathfrak{X},m} \mathscr{F}^{\bullet}\) lies in \(D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}}\). On the other hand, observe that
\[\int_{\iota_{\mathfrak{X},m}}\mathbb{D}\iota^{\dagger}_{\mathfrak{X},m} \mathscr{F}^{\bullet}\cong\mathbb{D}\int_{\iota_{\mathfrak{X},m}}\iota^{ \dagger}_{\mathfrak{X},m}\mathscr{F}^{\bullet}\cong\mathbb{D}R\Gamma_{X_{m}} \mathscr{F}^{\bullet}, \tag{55}\]
where the first "\(\cong\)" follows from [HTT, Theorem 2.7.2] together with the fact that \(\iota_{\mathfrak{X},m}\) is a closed immersion, and the second "\(\cong\)" follows from the smoothness of \(X_{m}\) together with [HTT, Proposition 1.7.1, (iii)]. It follows that \(\mathbb{D}R\Gamma_{X_{m}}\mathscr{F}^{\bullet}\) belongs to \(D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}}\), which proves the base step.
Next, we shall consider the induction step. To do this, we suppose that we have proved the claim with \(j\) replaced by \(j+1\). Let us consider the distinguished triangle
\[R\Gamma_{X_{j+1}}\mathscr{F}^{\bullet}\to R\Gamma_{X_{j}}\mathscr{F}^{ \bullet}\to\int_{\iota_{\mathfrak{X},j}}\iota^{\dagger}_{\mathfrak{X},j}R \Gamma_{X_{j}}\mathscr{F}^{\bullet}\xrightarrow{+1}. \tag{56}\]
By applying \(\mathbb{D}(-)\) to it, we obtain a distinguished triangle
\[\mathbb{D}\int_{\iota_{\mathfrak{X},j}}\iota^{\dagger}_{\mathfrak{X},j}R \Gamma_{X_{j}}\mathscr{F}^{\bullet}\to\mathbb{D}R\Gamma_{X_{j}}\mathscr{F}^{ \bullet}\to\mathbb{D}R\Gamma_{X_{j+1}}\mathscr{F}^{\bullet}\xrightarrow{+1}. \tag{57}\]
Observe that
\[\mathbb{D}\int_{\iota_{\mathfrak{X},j}}\iota^{\dagger}_{\mathfrak{X},j}R \Gamma_{X_{j}}\mathscr{F}^{\bullet}\cong\mathbb{D}\int_{\iota_{\mathfrak{X},j} }\iota^{\dagger}_{\mathfrak{X},j}\mathscr{F}^{\bullet}\cong\int_{\iota_{ \mathfrak{X},j}}\mathbb{D}\iota^{\dagger}_{\mathfrak{X},j}\mathscr{F}^{\bullet}, \tag{58}\]
where the first "\(\cong\)" follows from [HTT, Proposition 1.7.1, (iii)]. Similarly to the argument in the base step, we see that \(\int_{\iota_{\mathfrak{X},j}}\mathbb{D}\iota^{\dagger}_{\mathfrak{X},j} \mathscr{F}^{\bullet}\) belongs to \(D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}}\), so (58) implies \(\mathbb{D}\int_{\iota_{\mathfrak{X},j}}\iota^{\dagger}_{\mathfrak{X},j}R \Gamma_{X_{j}}\mathscr{F}^{\bullet}\in D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}}\). Hence, by the induction hypothesis together with (57), \(\mathbb{D}R\Gamma_{X_{j}}\mathscr{F}^{\bullet}\) is verified to be a chain complex in \(D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}}\). The induction step, hence also the claim, has been proved.
Consequently, we conclude that \(\mathbb{D}\mathscr{F}^{\bullet}\left(=\mathbb{D}R\Gamma_{X_{0}}\mathscr{F}^{\bullet}\right)\) belongs to \(D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}}\), and this completes the proof of this proposition.
### Proper direct/inverse image functors
Let \(f:X\to Y\) be a morphism of smooth algebraic varieties over \(k\). By using the duality functor \(\mathbb{D}\), we obtain functors
\[\int_{f!}:=\mathbb{D}\circ\int_{f}\circ\mathbb{D}:D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}}\to D^{b}_{h}(\mathcal{D}_{Y})_{\mathcal{A}},\qquad f^{!}:=\mathbb{D}\circ f^{\dagger}\circ\mathbb{D}:D^{b}_{h}(\mathcal{D}_{Y})_{\mathcal{A}}\to D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}}, \tag{59}\]
i.e., the proper direct and inverse image functors, respectively. These functors together with \(\int_{f}\) and \(f^{\dagger}\) satisfy all the usual adjointness properties that one has in the theory of the derived
category of \(\mathcal{D}\)-modules. In particular, for each \(\mathscr{F}^{\bullet}\in D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}}\) and \(\mathscr{E}^{\bullet}\in D^{b}_{h}(\mathcal{D}_{Y})_{\mathcal{A}}\), there exist natural isomorphisms
\[\operatorname{Hom}_{D^{b}_{h}(\mathcal{D}_{Y})_{\mathcal{A}}}\left(\int_{f!}\mathscr{F}^{\bullet},\mathscr{E}^{\bullet}\right)\cong\operatorname{Hom}_{D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}}}\left(\mathscr{F}^{\bullet},f^{\dagger}\mathscr{E}^{\bullet}\right),\qquad\operatorname{Hom}_{D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}}}\left(f^{\star}\mathscr{E}^{\bullet},\mathscr{F}^{\bullet}\right)\cong\operatorname{Hom}_{D^{b}_{h}(\mathcal{D}_{Y})_{\mathcal{A}}}\left(\mathscr{E}^{\bullet},\int_{f}\mathscr{F}^{\bullet}\right), \tag{60}\]
(cf. [HTT, Corollary 3.2.15]).
As summarized below, we have now obtained various functors on the derived categories of \(\mathcal{D}\)-modules of type \(\mathcal{A}\), namely \(\int_{f}\), \(f^{\dagger}\), \(\int_{f!}\), \(f^{\star}\), \(\mathbb{D}\), and \(\otimes^{L}_{\mathcal{O}_{X}}\) (or \(\boxtimes\)), which together form an example of the six-functor formalism of Grothendieck (cf. [Meb]).
**Theorem 3.10**.: _Let \(\mathcal{A}\) be a fibered subcategory of \(\mathcal{FB}_{k}\) satisfying the three conditions (\(\alpha\))-(\(\gamma\)) described at the beginning of SS2.3. Then, the full triangulated subcategory \(D^{b}_{h}(\mathcal{D}_{X})_{\mathcal{A}}\) of \(D^{b}_{h}(\mathcal{D}_{X})\) is stable under the functors \(\int_{f}\), \(f^{\dagger}\), \(\int_{f!}\), \(f^{\star}\) (for each morphism of smooth algebraic varieties \(f:X\to Y\)), \(\otimes^{L}_{\mathcal{O}_{X}}\), and \(\mathbb{D}\)._
### Minimal extensions
In this final subsection of SS 3, our discussion is restricted to the case where \(X\)_is a smooth curve, i.e., a smooth algebraic variety over \(k\) of dimension \(1\)_.
Let \(Y\) be a (locally closed) subvariety of \(X\), and denote by \(\iota\) the natural immersion \(Y\hookrightarrow X\). Also, let \(\mathscr{F}\) be a holonomic \(\mathcal{D}_{Y}\)-module. Since \(\mathcal{H}^{l}\left(\int_{\iota}\mathscr{F}\right)=\mathcal{H}^{l}\left(\int _{\iota!}\mathscr{F}\right)=0\) for any \(l\neq 0\), both \(\int_{\iota}\mathscr{F}\) and \(\int_{\iota!}\mathscr{F}\) can be regarded as \(\mathcal{D}_{X}\)-modules. Recall that there exists a natural morphism \(\int_{\iota!}\mathscr{F}\to\int_{\iota}\mathscr{F}\) (cf. [HTT, Theorem 3.2.16]) and that the _minimal extension_ of \(\mathscr{F}\) is defined as the image
\[\iota_{!*}\mathscr{F}:=\operatorname{Im}\left(\int_{\iota!}\mathscr{F}\to \int_{\iota}\mathscr{F}\right) \tag{61}\]
of this morphism (cf. [HTT, Definition 3.4.1]).
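For orientation, in the simplest case (this example is only an illustration and is not used in the sequel): take \(X=\mathbb{A}^{1}\), \(Y=\mathbb{A}^{1}\setminus\{0\}\), and \(\mathscr{F}=(\mathcal{O}_{Y},d)\) the trivial flat bundle. Then \(\int_{\iota}\mathscr{F}\) is the \(\mathcal{D}_{\mathbb{A}^{1}}\)-module \(k[t,t^{-1}]\), and the image of the natural morphism \(\int_{\iota!}\mathscr{F}\to\int_{\iota}\mathscr{F}\) is the \(\mathcal{D}_{\mathbb{A}^{1}}\)-submodule generated by \(1\), so that

\[\iota_{!*}\mathscr{F}\cong(\mathcal{O}_{\mathbb{A}^{1}},d)\ \left(=k[t]\subseteq k[t,t^{-1}]\right);\]

that is, the minimal extension of the trivial flat bundle is the trivial \(\mathcal{D}_{\mathbb{A}^{1}}\)-module itself.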
**Proposition 3.11**.:
1. _Let_ \(Y\) _and_ \(\iota\) _be as above and_ \(\mathscr{F}\) _an irreducible holonomic_ \(\mathcal{D}_{Y}\)_-module of type_ \(\mathcal{A}\)_. Then, the minimal extension_ \(\iota_{!*}\mathscr{F}\) _(which is irreducible, by_ _[_HTT_, Theorem 3.4.2]__) is of type_ \(\mathcal{A}\)_._
2. _Any irreducible holonomic_ \(\mathcal{D}_{X}\)_-module of type_ \(\mathcal{A}\) _is isomorphic to the minimal extension_ \(\iota_{!*}\mathscr{F}\) _for some pair_ \((\iota,\mathscr{F})\)_, where_ \(\iota\) _denotes the natural immersion_ \(Y\hookrightarrow X\) _determined by a locally closed subscheme_ \(Y\) _of_ \(X\)_, and_ \(\mathscr{F}\) _denotes an irreducible flat bundle on_ \(Y\) _of type_ \(\mathcal{A}\)_._
3. _Let_ \(\mathscr{F}\) _be a holonomic_ \(\mathcal{D}_{X}\)_-module, and consider a finite sequence_ (62) \[\mathscr{F}=\mathscr{F}_{0}\supseteq\mathscr{F}_{1}\supseteq\cdots\supseteq \mathscr{F}_{m}\supseteq\mathscr{F}_{m+1}=0\] _of holonomic_ \(\mathcal{D}_{X}\)_-submodules such that_ \(\mathscr{F}_{j}/\mathscr{F}_{j+1}\) _is irreducible for each_ \(j\)_. (Such a sequence always exists because of_ _[_HTT_, Proposition 3.1.2, (ii)]__.) If_ \(\mathscr{F}\) _is of type_ \(\mathcal{A}\)_, then each_ \(\mathscr{F}_{j}/\mathscr{F}_{j+1}\) _is of type_ \(\mathcal{A}\)_._
Proof.: First, we shall consider assertion (i). Note that \(Y\) is either an open subscheme of \(X\) or a disjoint union of finitely many closed points. The latter case follows immediately from Corollary 3.3. In what follows, we shall suppose that \(Y\) is an open subscheme of \(X\). Denote
by \(\mathscr{E}\) the cokernel of the natural inclusion \(\iota_{!*}\mathscr{F}\hookrightarrow\int_{\iota}\mathscr{F}\). In particular, we obtain a short exact sequence of holonomic \(\mathcal{D}_{X}\)-modules
\[0\longrightarrow\iota_{!*}\mathscr{F}\longrightarrow\int_{\iota}\mathscr{F} \longrightarrow\mathscr{E}\longrightarrow 0. \tag{63}\]
Since \(\iota^{\dagger}\iota_{!*}\mathscr{F}=\iota^{\dagger}\int_{\iota}\mathscr{F}= \mathscr{F}\) and the functor \(\iota^{\dagger}\) is exact, we have \(\iota^{\dagger}\mathscr{E}=0\). It follows that \(\mathscr{E}\) is supported in \(X\setminus Y\) (which is a disjoint union of closed points). According to Corollary 3.3, \(\mathscr{E}\) is of type \(\mathcal{A}\). Hence, the exactness of (63) and Proposition 2.11 together imply that \(\iota_{!*}\mathscr{F}\) is of type \(\mathcal{A}\). This completes the proof of assertion (i).
Next, we shall consider assertion (ii). Let \(\mathscr{G}\) be an irreducible holonomic \(\mathcal{D}_{X}\)-module of type \(\mathcal{A}\). By [HTT, Theorem 3.4.2, (ii)], it is isomorphic to \(\iota_{!*}\mathscr{F}\), where \(\iota\) denotes the natural immersion \(Y\hookrightarrow X\) determined by a locally closed subscheme \(Y\) of \(X\), and \(\mathscr{F}\) denotes an irreducible flat bundle on \(Y\). Moreover, since \(\iota^{\dagger}\mathscr{G}\cong\iota^{\dagger}\iota_{!*}\mathscr{F}\cong \mathscr{F}\), it follows from Proposition 3.6 that \(\mathscr{F}\) is of type \(\mathcal{A}\).
Finally, we shall consider (iii). Suppose that \(\mathscr{F}\) is of type \(\mathcal{A}\). Let us consider the short exact sequence
\[0\longrightarrow\mathscr{F}_{1}\longrightarrow\mathscr{F}\longrightarrow \mathscr{F}/\mathscr{F}_{1}\longrightarrow 0. \tag{64}\]
All the \(\mathcal{D}_{X}\)-modules in this sequence are holonomic, so there exists a dense open subscheme \(U\) of \(X\) such that their restrictions to \(U\) are flat bundles. Denote by \(\eta\) the natural open immersion \(U\hookrightarrow X\) (hence \(\eta^{*}(-)\cong\eta^{\dagger}(-)\) on holonomic \(\mathcal{D}_{X}\)-modules, see [HTT, Theorem 2.7.1, (ii)]). Since \(\mathscr{F}/\mathscr{F}_{1}\) is irreducible, the morphism \(\mathscr{F}/\mathscr{F}_{1}\rightarrow\int_{\eta}\eta^{\dagger}(\mathscr{F}/ \mathscr{F}_{1})\) induced by the adjunction relation \(``\eta^{*}(-)\left(\cong\eta^{\dagger}(-)\right)\dashv\int_{\eta}(-)"\) is verified to be injective. Thus, by putting \(\mathscr{G}\) as the cokernel of this injection, we obtain a short exact sequence
\[0\longrightarrow\mathscr{F}/\mathscr{F}_{1}\longrightarrow\int_{\eta}\eta^ {\dagger}(\mathscr{F}/\mathscr{F}_{1})\longrightarrow\mathscr{G}\longrightarrow 0. \tag{65}\]
Since \(\eta^{\dagger}(\mathscr{F}/\mathscr{F}_{1})=\eta^{\dagger}\left(\int_{\eta}\eta^{\dagger}(\mathscr{F}/\mathscr{F}_{1})\right)\), the sequence (65) implies the equality \(\eta^{\dagger}\mathscr{G}=0\), meaning that \(\mathscr{G}\) is supported in \(X\setminus U\). By Corollary 3.3, \(\mathscr{G}\) turns out to be of type \(\mathcal{A}\). On the other hand, the flat bundle \(\eta^{\dagger}\mathscr{F}\) belongs to \(\mathcal{A}(U)\), so the property \((\alpha)\) on \(\mathcal{A}\) implies that its quotient flat bundle \(\eta^{\dagger}(\mathscr{F}/\mathscr{F}_{1})\) belongs to \(\mathcal{A}(U)\). Hence, the direct image \(\int_{\eta}\eta^{\dagger}(\mathscr{F}/\mathscr{F}_{1})\) is of type \(\mathcal{A}\) (cf. Proposition 3.1, (ii)). By the exactness of (65) and Proposition 2.11, \(\mathscr{F}/\mathscr{F}_{1}\) is verified to be of type \(\mathcal{A}\). Moreover, it follows from Proposition 2.11 again and the exactness of (64) that \(\mathscr{F}_{1}\) is of type \(\mathcal{A}\). By applying successively an argument similar to this, we see that every subquotient \(\mathscr{F}_{j}/\mathscr{F}_{j+1}\) (\(j=0,\cdots,m\)) is of type \(\mathcal{A}\). This completes the proof of assertion (iii).
## 4. Holonomic \(\mathcal{D}\)-modules of arithmetic types
In the rest of the present paper, we focus on certain specific types of "\(\mathcal{A}\)"'s, i.e., the fibered categories of \(G\)-connections, globally nilpotent connections, and almost everywhere nilpotent connections, respectively. We first recall their definitions and then introduce the derived categories of \(\mathcal{D}\)-modules of such types. The goal of this section is to prove Theorem A.
For each commutative ring \(R_{0}\), we shall write \(\mathbb{A}^{1}_{R_{0}}\) for the affine line over \(R_{0}\), i.e., \(\mathbb{A}^{1}_{R_{0}}:=\operatorname{Spec}(R_{0}[t])\).
### Global inverse radius of a flat bundle
Let \(K\) be a number field, and denote by \(\mathcal{O}_{K}\) its ring of integers. Consider a nonempty open subscheme \(\operatorname{Spec}(R)\) of \(\operatorname{Spec}(\mathcal{O}_{K})\) (where \(R\) denotes a Dedekind domain with \(\mathcal{O}_{K}\subseteq R\subseteq K\)). Denote by \(\Sigma_{R}\) the set of closed points of \(\operatorname{Spec}(R)\), i.e., the set of finite places of \(K\) having center on \(R\). For each prime number \(p\) and each \(v\in\Sigma_{R}\) with \(v|p\), we denote by \(|-|_{v}\) the non-archimedean absolute value of \(K\) corresponding to \(v\), normalized as \(|p|_{v}=p^{-[\widehat{K}_{v}:\mathbb{Q}_{p}]/[K:\mathbb{Q}]}\), where \(\widehat{K}_{v}\) denotes the \(v\)-completion of \(K\). Also, denote by \(\widehat{\mathcal{O}}_{v}\) the ring of integers of \(\widehat{K}_{v}\), and by \(k(v)\) the residue field of \(\widehat{K}_{v}\) of characteristic \(p=:p(v)>0\).
Next, let \(X_{K}\) be a smooth algebraic variety over \(K\) and \(f:X_{R}\to\operatorname{Spec}(R)\) a smooth \(R\)-scheme of finite type with geometrically connected non-empty fibers equipped with an isomorphism \(X_{R}\times_{R}K\stackrel{{\sim}}{{\to}}X_{K}\); such an \(R\)-scheme \(X_{R}\) will be called a _model_ of \(X_{K}\) over \(R\). Denote by \(K_{X_{K}}\) the function field of \(X_{K}\) and by \(d\) the relative dimension of \(X_{R}/R\). For each \(v\in\Sigma_{R}\), there exists a unique extension \(|-|_{X_{R},v}\) of \(|-|_{v}\) to a non-archimedean absolute value of \(K_{X_{K}}\) such that the local ring \(\mathcal{O}_{X_{R},\eta_{v}}\) corresponding to the generic point \(\eta_{v}\) of the closed fiber \(X_{k(v)}:=X_{R}\times_{R}k(v)\) coincides with \(\{x\in K_{X_{K}}\,|\,|x|_{X_{R},v}\leq 1\}\). For example, when \(X_{K}=\mathbb{A}^{1}_{K}\), the absolute value on \(K(t)\left(=K_{\mathbb{A}^{1}_{K}}\right)\) associated to both the model \(\mathbb{A}^{1}_{\mathcal{O}_{K}}\) of \(\mathbb{A}^{1}_{K}\) and \(v\in\Sigma_{\mathcal{O}_{K}}\) coincides with the Gauss absolute value, i.e., the absolute value given by
\[\left|\frac{\sum_{i}a_{i}t^{i}}{\sum_{i}b_{i}t^{i}}\right|_{\operatorname{ Gauss},v}:=\frac{\sup_{i}|a_{i}|_{v}}{\sup_{i}|b_{i}|_{v}}. \tag{66}\]
Also, \(|-|_{X_{R},v}\) naturally induces a norm \(|\!|-|\!|_{X_{R},v}\) on \(M_{n\times n}(K_{X_{K}})\) (= the \(K_{X_{K}}\)-vector space of \(n\times n\) matrices with entries in \(K_{X_{K}}\)) given by \(|\!|G|\!|_{X_{R},v}:=\max\left\{|g_{i,j}|_{X_{R},v}\,|\,1\leq i\leq n,1\leq j \leq n\right\}\) for any \(G:=(g_{i,j})_{i,j}\in M_{n\times n}(K_{X_{K}})\).
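To illustrate the normalization (66) (this small numerical example is ours and is meant only as a sanity check): take \(K=\mathbb{Q}\), \(R=\mathbb{Z}\), the model \(\mathbb{A}^{1}_{\mathbb{Z}}\), and the place \(v\) lying over \(p=2\). Then

\[\left|\frac{2t+4}{3t+1}\right|_{\operatorname{Gauss},v}=\frac{\max\{|2|_{2},|4|_{2}\}}{\max\{|3|_{2},|1|_{2}\}}=\frac{2^{-1}}{1}=\frac{1}{2},\]

so this rational function has positive valuation at the generic point of the fiber over \(2\), as expected since its numerator is divisible by \(2\).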
Let \(\mathscr{F}:=(\mathcal{F},\nabla)\) be a rank \(n\) flat bundle defined on a nonempty open subscheme of \(X_{K}\). We shall choose an etale local (relative) coordinate \(\underline{x}:=(x_{1},\cdots,x_{d})\) of \(X_{R}/R\) around the point \(\eta_{v}\) and a basis \(\underline{e}:=(e_{1},\cdots,e_{n})\) of \(\mathcal{F}\) over that point. For any \(\underline{\alpha}:=(\alpha_{1},\cdots,\alpha_{d})\in\mathbb{Z}^{d}_{\geq 0}\), we shall set
\[\nabla_{[\underline{\alpha}]}:=\prod_{i=1}^{d}\nabla\left(\frac{\partial}{ \partial x_{i}}\right)^{\alpha_{i}}. \tag{67}\]
Then, there exists a unique \(n\times n\) matrix
\[G(\nabla)_{[\underline{\alpha}]}\in M_{n\times n}(K_{X_{K}}) \tag{68}\]
with entries in \(K_{X_{K}}\) satisfying \(\nabla_{[\underline{\alpha}]}\underline{e}=\underline{e}G(\nabla)_{[\underline{\alpha}]}\), where \(G(\nabla)_{[(0,\cdots,0)]}:=I_{n}\), i.e., the identity matrix of size \(n\).
Recall that the **radius of convergence of \(\mathscr{F}\) at \(v\in\Sigma_{R}\)** is defined as the value
\[\operatorname{Rad}_{X_{R},v}(\mathscr{F}):=\left(\max\left\{1,\ \limsup_{|\underline{\alpha}|\to\infty}\left\|\frac{1}{\underline{\alpha}!}\cdot G(\nabla)_{[\underline{\alpha}]}\right\|_{X_{R},v}^{\frac{1}{|\underline{\alpha}|}}\right\}\right)^{-1}\in(0,1], \tag{69}\]
where \(\underline{\alpha}!:=\prod_{i=1}^{d}\alpha_{i}!\) and \(\left|\underline{\alpha}\right|:=\sum_{i=1}^{d}\alpha_{i}\). This value depends neither on the choices of \(\underline{x}\) nor \(\underline{e}\) (cf. [DiV, Proposition 2.9] or [ChDw, Proposition 1.3]). Moreover, by using these values for
various elements \(v\in\Sigma_{R}\), one may define the **global inverse radius of \(\mathscr{F}\)** as
\[\rho_{X_{R}}(\mathscr{F}):=\sum_{v\in\Sigma_{R}}\log\frac{1}{\operatorname{Rad} _{X_{R},v}(\mathscr{F})}\in\mathbb{R}_{\geq 0}\sqcup\{\infty\}. \tag{70}\]
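As a quick illustration (our own toy computation, not needed later): consider the rank one flat bundle \(\mathscr{F}=(\mathcal{O},d+\lambda\frac{dt}{t})\) on \(\mathbb{G}_{m,K}\subseteq\mathbb{A}^{1}_{K}\), with the model \(\mathbb{A}^{1}_{R}\) and \(\lambda\in K\). With respect to the basis \(e=1\) and the coordinate \(t\), an easy induction gives \(\nabla(\partial/\partial t)^{s}e=\lambda(\lambda-1)\cdots(\lambda-s+1)\,t^{-s}e\), so that

\[\left\|\frac{1}{s!}\cdot G(\nabla)_{[s]}\right\|_{\mathbb{A}^{1}_{R},v}=\left|\binom{\lambda}{s}\right|_{v},\]

since the Gauss norm of \(t^{-s}\) equals \(1\). In particular, if \(\lambda\in\mathbb{Z}\), then \(\binom{\lambda}{s}\in\mathbb{Z}\) for all \(s\), hence \(\operatorname{Rad}_{\mathbb{A}^{1}_{R},v}(\mathscr{F})=1\) for every \(v\) and \(\rho_{\mathbb{A}^{1}_{R}}(\mathscr{F})=0\), consistently with the fact that \(\mathscr{F}\) is then trivialized by the horizontal section \(t^{-\lambda}\).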
**Remark 4.1**.: We here describe several properties of radii defined above.
1. If \(\mathscr{G}\) is another flat bundle, then the tensor product \(\mathscr{F}\otimes\mathscr{G}\) of flat bundles \(\mathscr{F}\), \(\mathscr{G}\) satisfies (71) \[\operatorname{Rad}_{X_{R},v}(\mathscr{F}\otimes\mathscr{G})\geq\min\left\{ \operatorname{Rad}_{X_{R},v}(\mathscr{F}),\operatorname{Rad}_{X_{R},v}( \mathscr{G})\right\}\] for every \(v\in\Sigma_{R}\) (cf. [Ked, Lemma 6.2.8, (c)]). This implies the inequality (72) \[\rho_{X_{R}}(\mathscr{F}\otimes\mathscr{G})\leq\rho_{X_{R}}(\mathscr{F})+ \rho_{X_{R}}(\mathscr{G}).\]
2. Let \(0\to\mathscr{E}\to\mathscr{F}\to\mathscr{G}\to 0\) be a short exact sequence of flat bundles. Then, it follows from [And, Chap. IV, SS 2.5, Proposition 1] that, for each \(v\in\Sigma_{R}\), the following equality holds: (73) \[\operatorname{Rad}_{X_{R},v}(\mathscr{F})=\min\left\{\operatorname{Rad}_{X_{R},v}(\mathscr{E}),\operatorname{Rad}_{X_{R},v}(\mathscr{G})\right\}.\] Hence, it is verified that (74) \[\max\left\{\rho_{X_{R}}(\mathscr{E}),\rho_{X_{R}}(\mathscr{G})\right\}\leq\rho_{X_{R}}(\mathscr{F})\leq\rho_{X_{R}}(\mathscr{E})+\rho_{X_{R}}(\mathscr{G}).\]
3. Let \(g:Y_{R}\to\operatorname{Spec}(R)\) be another smooth \(R\)-scheme of finite type with geometrically connected non-empty fibers. Also, let \(h_{R}:Y_{R}\to X_{R}\) be a smooth \(R\)-morphism. The pull-back of \(\mathscr{F}\) via the generic fiber \(h\) of \(h_{R}\) specifies a flat bundle \(h^{*}\mathscr{F}\) on an open subscheme of \(Y_{K}:=Y_{R}\times_{R}K\). Then, we can prove the inequality (75) \[\operatorname{Rad}_{Y_{R},v}(h^{*}\mathscr{F})\geq\operatorname{Rad}_{X_{R},v}(\mathscr{F})\] for every \(v\in\Sigma_{R}\). This implies (76) \[\rho_{Y_{R}}(h^{*}\mathscr{F})\leq\rho_{X_{R}}(\mathscr{F}).\]
### Arithmetic properties on flat bundles
**Definition 4.2**.: Let \(\mathscr{F}:=(\mathcal{F},\nabla)\) be a flat bundle on \(X_{K}/K\).
1. We shall say that \(\mathscr{F}\), or \(\nabla\), is **globally convergent** (resp., **of type \(G\)**) if \(\operatorname{Rad}_{X_{R},v}(\mathscr{F})=1\) for all but finitely many elements \(v\in\Sigma_{R}\) (resp., \(\rho_{X_{R}}(\mathscr{F})<\infty\)).
2. We shall say that \(\mathscr{F}\), or \(\nabla\), is **almost everywhere (a.e.) nilpotent** (resp., **globally nilpotent**) if there exists a flat bundle \(\mathscr{F}_{R}\) on a nonempty open subscheme of \(X_{R}\) relative to \(\operatorname{Spec}(R)\) satisfying the two conditions (1) and (2) (resp., (1) and (3)) described below: 1. \(\mathscr{F}_{R}\) is isomorphic to \(\mathscr{F}\) at the generic point of \(X_{K}\); 2. Let \(\Sigma_{\mathscr{F}_{R}}^{\operatorname{nilp}}\) denote the set of prime numbers \(p\) such that the flat bundle \(\mathscr{F}_{R}\otimes_{R}k(v)\) induced by \(\mathscr{F}_{R}\) has nilpotent \(p\)-curvature for any \(v\in\Sigma_{R}\), \(v|p\). Then, \(\Sigma_{\mathscr{F}_{R}}^{\operatorname{nilp}}\) has Dirichlet density one (cf., e.g., [Kat1, SS 5] for the definition of \(p\)-curvature). 3. The flat bundles \(\mathscr{F}_{R}\otimes_{R}k(v)\) have nilpotent \(p\)-curvature for all but finitely many elements \(v\in\Sigma_{R}\).
**Remark 4.3**.: It is immediate that the various definitions described in Definition 4.2 are independent of the choices of the Dedekind domain \(R\) and the model \(X_{R}\) of \(X_{K}\).
Next, we shall fix a smooth algebraic variety \(X\) over \(\overline{\mathbb{Q}}\).
**Definition 4.4**.: Let \(\mathscr{F}:=(\mathcal{F},\nabla)\) be a flat bundle on \(X/\overline{\mathbb{Q}}\). We shall say that \(\mathscr{F}\) is **globally convergent** (resp., **of type \(G\)** ; resp., **a.e. nilpotent**; resp., **globally nilpotent**) if there exists a smooth algebraic variety \(X_{K}\) over a number field \(K\) and a flat bundle \(\mathscr{F}_{K}\) on \(X_{K}/K\) such that \((X_{K},\mathscr{F}_{K})\times_{K}\overline{\mathbb{Q}}\cong(X,\mathscr{F})\) and that \(\mathscr{F}_{K}\) is globally convergent (resp., of type \(G\); resp., a.e. nilpotent; resp., globally nilpotent) in the sense of Definition 4.2. If \(\mathscr{F}\) is of type \(G\), then we sometimes call \(\nabla\) a \(G\)**-connection**.
Regarding the various notions mentioned above, we have the following implications for an arbitrary flat bundle \(\mathscr{F}\) (as mentioned in [AnBa, SS 1.4]):
\[\mathscr{F}\ \text{is globally convergent}\ \Longrightarrow\ \mathscr{F}\ \text{is of type }G\ \text{(resp., globally nilpotent)}\ \Longrightarrow\ \mathscr{F}\ \text{is a.e. nilpotent}. \tag{77}\]
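None of these conditions is automatic. For instance (a standard example, recalled here only for orientation): the rank one flat bundle \((\mathcal{O}_{\mathbb{A}^{1}},d+dt)\), defined over \(K=\mathbb{Q}\) with the model \(\mathbb{A}^{1}_{\mathbb{Z}}\), satisfies none of them. Indeed, for every place \(v\) over a prime \(p\), its reduction has \(p\)-curvature \(\psi_{p}(\partial_{t})=1\neq 0\), so it is not a.e. nilpotent; and since \(\left\|\frac{1}{s!}\cdot G(\nabla)_{[s]}\right\|_{\mathbb{A}^{1}_{\mathbb{Z}},v}=\left|\frac{1}{s!}\right|_{v}\), one finds

\[\operatorname{Rad}_{\mathbb{A}^{1}_{\mathbb{Z}},v}\big((\mathcal{O}_{\mathbb{A}^{1}},d+dt)\big)=p^{-\frac{1}{p-1}},\qquad\text{hence}\qquad\rho_{\mathbb{A}^{1}_{\mathbb{Z}}}\big((\mathcal{O}_{\mathbb{A}^{1}},d+dt)\big)=\sum_{p}\frac{\log p}{p-1}=\infty,\]

so this flat bundle is not of type \(G\) either.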
The following assertion is a direct consequence of the previous study by T. Honda concerning the Grothendieck-Katz \(p\)-curvature for the rank one case.
**Proposition 4.5**.: _Suppose that \(X=X_{R}\times_{R}\overline{\mathbb{Q}}\) for some open subscheme \(X_{R}\) of \(\mathbb{A}^{1}_{R}\) (where \(R\) is as above). Also, let \(\mathscr{F}_{R}\) be a rank one flat bundle on \(X_{R}/R\), and write \(\mathscr{F}:=\mathscr{F}_{R}\times_{R}\overline{\mathbb{Q}}\). Then, the conditions described in (77) are equivalent to each other, and moreover, these are equivalent to each of the following two conditions:_
* _The flat bundles_ \(\mathscr{F}_{R}\otimes_{R}k(v)\) _have vanishing_ \(p(v)\)_-curvature for all but finitely many elements_ \(v\) _of_ \(\Sigma_{R}\)_._
* \(\mathscr{F}\) _becomes generically trivial (i.e., trivial at the generic point) after pulling back via a finite etale covering of_ \(X\)_._
Proof.: Suppose that \(\mathscr{F}\) is a.e. nilpotent. Then, the set \(\Sigma^{\mathrm{nilp}}_{\mathscr{F}_{R}}\) (cf. Definition 4.2, (ii)) has Dirichlet density one. Since \(\mathscr{F}_{R}\) is of rank one, the flat bundle \(\mathscr{F}_{R}\otimes_{R}k(v)\) has vanishing \(p(v)\)-curvature for every \(v\) with \(p(v)\in\Sigma^{\mathrm{nilp}}_{\mathscr{F}_{R}}\). Here, we recall the proof of the Grothendieck-Katz conjecture for rank one flat bundles on \(X\) given by T. Honda (cf. [Hon]), in which he proved that the assertion of this conjecture is equivalent to Chebotarev's density theorem; in particular, by applying that discussion to our situation, we see that the condition \((**)\) is satisfied. The implication \((**)\Rightarrow(*)\) is clear and well-known. Finally, it follows from [DiV, Proposition 3.3] that \((*)\) implies the global convergency of \(\mathscr{F}\). This completes the proof of this proposition.
### Holonomic \(\mathcal{D}\)-modules of arithmetic types
We shall set
\[\mathcal{A}_{\overline{\mathbb{Q}},G}\ \big{(}\mathrm{resp.},\ \mathcal{A}_{\overline{\mathbb{Q}},\mathrm{nilp}};\mathrm{resp.},\ \mathcal{A}_{\overline{\mathbb{Q}},\mathrm{aen}}\big{)} \tag{78}\]
to be the fibered subcategory of \(\mathcal{FB}_{\overline{\mathbb{Q}}}\) classifying flat bundles of type \(G\) (resp., globally nilpotent flat bundles; resp., a.e. nilpotent flat bundles). The following assertion in the case of \(\mathcal{A}_{\overline{\mathbb{Q}},G}\) is a direct consequence of the main result of [AnBa].
**Proposition 4.6**.: _The fibered subcategories \(\mathcal{A}_{\overline{\mathbb{Q}},G}\), \(\mathcal{A}_{\overline{\mathbb{Q}},\mathrm{nilp}}\), and \(\mathcal{A}_{\overline{\mathbb{Q}},\mathrm{aen}}\) satisfy the three properties (\(\alpha\))-(\(\gamma\)) described in SS 2.3._
Proof.: Let \(\mathcal{A}\in\big{\{}\mathcal{A}_{\overline{\mathbb{Q}},G},\mathcal{A}_{ \overline{\mathbb{Q}},\mathrm{nilp}},\mathcal{A}_{\overline{\mathbb{Q}}, \mathrm{aen}}\big{\}}\). As mentioned in [AnBa, SS 1.4], the subcategory \(\mathcal{A}(Y)\) of \(\mathcal{FB}_{\overline{\mathbb{Q}}}(Y)\) determined by each \(Y\in\mathrm{ob}(\mathcal{S}m_{\overline{\mathbb{Q}}})\) is closed under various operations so that \(\mathcal{A}\) satisfies \((\alpha)\). Also, the property \((\gamma)\) can be verified from the definitions involved. Finally, the
property \((\beta)\) follows from [AnBa, Main Theorem] (if \(\mathcal{A}=\mathcal{A}_{\overline{\mathbb{Q}},G}\)) and [Kat1, Theorem 5.10] (if \(\mathcal{A}=\mathcal{A}_{\overline{\mathbb{Q}},\mathrm{nilp}}\) or \(\mathcal{A}_{\overline{\mathbb{Q}},\mathrm{aen}}\)).
By applying Proposition 4.6 and the arguments in the previous section, we obtain the triangulated subcategory
\[D_{h}^{b}(\mathcal{D}_{X})_{G}:=D_{h}^{b}(\mathcal{D}_{X})_{\mathcal{A}_{\overline{\mathbb{Q}},G}}\quad\left(\mathrm{resp.},\,D_{h}^{b}(\mathcal{D}_{X})_{\mathrm{nilp}}:=D_{h}^{b}(\mathcal{D}_{X})_{\mathcal{A}_{\overline{\mathbb{Q}},\mathrm{nilp}}};\mathrm{resp.},\,D_{h}^{b}(\mathcal{D}_{X})_{\mathrm{aen}}:=D_{h}^{b}(\mathcal{D}_{X})_{\mathcal{A}_{\overline{\mathbb{Q}},\mathrm{aen}}}\right) \tag{79}\]
of \(D_{h}^{b}(\mathcal{D}_{X})\). The implications between the conditions displayed in (77) give rise to inclusion relations of categories
\[D_{h}^{b}(\mathcal{D}_{X})_{G}\subseteq D_{h}^{b}(\mathcal{D}_{X})_{\mathrm{ aen}},\quad D_{h}^{b}(\mathcal{D}_{X})_{\mathrm{nilp}}\subseteq D_{h}^{b}( \mathcal{D}_{X})_{\mathrm{aen}}. \tag{80}\]
In particular, by Theorem 3.10 applied to \(\mathcal{A}\in\left\{\mathcal{A}_{\overline{\mathbb{Q}},G},\mathcal{A}_{ \overline{\mathbb{Q}},\mathrm{nilp}},\mathcal{A}_{\overline{\mathbb{Q}}, \mathrm{aen}}\right\}\), we obtain six functors on the respective derived categories. This proves Theorem A.
**Remark 4.7**.: Let \(\Box\in\{G,\mathrm{nilp},\mathrm{aen}\}\), and denote by \(D_{rh}^{b}(\mathcal{D}_{X})\) the full subcategory of \(D_{h}^{b}(\mathcal{D}_{X})\) consisting of chain complexes having _regular_ holonomic cohomology. If \(\mathscr{F}\) is an a.e. nilpotent flat bundle on \(X/\overline{\mathbb{Q}}\), then it follows from a result by Katz (cf. [Kat3, Theorem 8.1] or [DGS, Theorem 6.1 and Remark 6.3]) that it has at most regular singularities at infinity along any smooth curve in \(X\) and has rational exponents. This fact together with (77) implies that, for any chain complex \(\mathscr{F}^{\bullet}\) in \(D_{h}^{b}(\mathcal{D}_{X})_{\Box}\), the cohomology sheaf \(\mathcal{H}^{l}(\mathscr{F}^{\bullet})\) (for each \(l\in\mathbb{Z}\)) defines a regular holonomic \(\mathcal{D}_{X}\)-module. In particular, the inclusion \(D_{h}^{b}(\mathcal{D}_{X})_{\Box}\subseteq D_{h}^{b}(\mathcal{D}_{X})\) factors through \(D_{rh}^{b}(\mathcal{D}_{X})\subseteq D_{h}^{b}(\mathcal{D}_{X})\), i.e., we have
\[D_{h}^{b}(\mathcal{D}_{X})_{\Box}\subseteq D_{rh}^{b}(\mathcal{D}_{X}). \tag{81}\]
### Global inverse radius of holonomic \(\mathcal{D}_{\mathbb{A}^{1}}\)-modules
We have extended the class of \(G\)-connections to chain complexes in \(D_{h}^{b}(\mathcal{D}_{\mathbb{A}^{1}})\). The global inverse radius can be generalized to an invariant \(\rho(\mathscr{F}^{\bullet})\in\mathbb{R}_{\geq 0}\sqcup\{\infty\}\) associated to each \(\mathscr{F}^{\bullet}\in\mathrm{ob}(D_{h}^{b}(\mathcal{D}_{(-)})_{G})\) accordingly. For simplicity, we only consider the case where \(X\) is the affine line \(\mathbb{A}^{1}:=\mathbb{A}^{1}_{\overline{\mathbb{Q}}}\) over \(\overline{\mathbb{Q}}\).
First, let \(\mathscr{F}\) be a holonomic \(\mathcal{D}_{\mathbb{A}^{1}}\)-module that belongs to \(D_{h}^{b}(\mathcal{D}_{\mathbb{A}^{1}})_{G}\) as a complex concentrated at degree \(0\). There exist a number field \(K\) and a dense open subscheme \(U\) of \(\mathbb{A}^{1}_{\mathcal{O}_{K}}\) such that the restriction \(\mathscr{F}|_{U\times_{\mathcal{O}_{K}}\overline{\mathbb{Q}}}\) of \(\mathscr{F}\) to the open subscheme \(U\times_{\mathcal{O}_{K}}\overline{\mathbb{Q}}\,(\subseteq\mathbb{A}^{1})\) can be obtained as the pull-back of a flat bundle \(\mathscr{F}_{K}\) on \((U\times_{\mathcal{O}_{K}}K)/K\). Then, the value \(\rho(\mathscr{F}):=\rho_{\mathbb{A}^{1}_{\mathcal{O}_{K}}}(\mathscr{F}_{K})\) (cf. (70)) can be defined and depends neither on the choices of \(K\), \(U\), nor \(\mathscr{F}_{K}\) (cf. [AnBa, SS 1.3]). More generally, for each \(\mathscr{F}^{\bullet}\in D_{h}^{b}(\mathcal{D}_{\mathbb{A}^{1}})_{G}\), we set
\[\rho(\mathscr{F}^{\bullet}):=\max\left\{\rho(\mathcal{H}^{l}(\mathscr{F}^{ \bullet}))\,|\,l\in\mathbb{Z}\right\}. \tag{82}\]
In this way, we have obtained a well-defined map
\[\rho:\mathrm{ob}(D_{h}^{b}(\mathcal{D}_{\mathbb{A}^{1}})_{G})\to\mathbb{R}_{\geq 0} \tag{83}\]
extending \(\rho_{\mathbb{A}^{1}_{(-)}}\).
We shall generalize the second inequality in (74), as follows.
**Proposition 4.8**.: _Let \(\mathscr{E}^{\bullet}\to\mathscr{F}^{\bullet}\to\mathscr{G}^{\bullet}\xrightarrow{ +1}\) be a distinguished triangle of chain complexes in \(\mathcal{D}_{h}^{b}(\mathcal{D}_{\mathbb{A}^{1}})_{G}\). Then, the following inequality holds:_
\[\left|\rho(\mathscr{E}^{\bullet})-\rho(\mathscr{G}^{\bullet})\right|\leq\rho( \mathscr{F}^{\bullet})\leq\rho(\mathscr{E}^{\bullet})+\rho(\mathscr{G}^{ \bullet}). \tag{84}\]
Proof.: The distinguished triangle \(\mathscr{E}^{\bullet}\to\mathscr{F}^{\bullet}\to\mathscr{G}^{\bullet}\xrightarrow{ +1}\) induces a long exact sequence of \(\mathcal{D}_{\mathbb{A}^{1}}\)-modules of type \(G\):
\[\cdots\xrightarrow{\beta_{l-1}}\mathcal{H}^{l-1}(\mathscr{G}^{\bullet})\xrightarrow{\gamma_{l-1}}\mathcal{H}^{l}(\mathscr{E}^{\bullet})\xrightarrow{\alpha_{l}}\mathcal{H}^{l}(\mathscr{F}^{\bullet})\xrightarrow{\beta_{l}}\mathcal{H}^{l}(\mathscr{G}^{\bullet})\xrightarrow{\gamma_{l}}\mathcal{H}^{l+1}(\mathscr{E}^{\bullet})\xrightarrow{\alpha_{l+1}}\cdots. \tag{85}\]
To prove the second inequality of (84), let us consider the following short exact sequences arising from (85):
\[0\to\operatorname{Coker}(\gamma_{l-1})\left(=\operatorname{Im}(\alpha_{l})\right)\to\mathcal{H}^{l}(\mathscr{F}^{\bullet})\to\operatorname{Im}(\beta_{l})\left(=\operatorname{Coker}(\alpha_{l})\right)\to 0, \tag{86}\]
\[0\to\operatorname{Im}(\gamma_{l-1})\to\mathcal{H}^{l}(\mathscr{E}^{\bullet})\to\operatorname{Coker}(\gamma_{l-1})\to 0, \tag{87}\]
\[0\to\operatorname{Im}(\beta_{l})\to\mathcal{H}^{l}(\mathscr{G}^{\bullet})\to\operatorname{Coker}(\beta_{l})\to 0. \tag{88}\]
By the fact mentioned in Remark 4.1, (ii), these induce inequalities
\[\rho(\mathcal{H}^{l}(\mathscr{F}^{\bullet})) \leq\rho(\operatorname{Coker}(\gamma_{l-1}))+\rho(\operatorname{ Im}(\beta_{l})),\] \[\rho(\operatorname{Coker}(\gamma_{l-1})) \leq\rho(\mathcal{H}^{l}(\mathscr{E}^{\bullet})),\text{and}\] \[\rho(\operatorname{Im}(\beta_{l})) \leq\rho(\mathcal{H}^{l}(\mathscr{G}^{\bullet})), \tag{89}\]
respectively. This implies \(\rho(\mathcal{H}^{l}(\mathscr{F}^{\bullet}))\leq\rho(\mathcal{H}^{l}(\mathscr{ E}^{\bullet}))+\rho(\mathcal{H}^{l}(\mathscr{G}^{\bullet}))\), and hence, we obtain the second inequality of (84), as desired.
Also, entirely similar arguments enable us to obtain inequalities \(\rho(\mathscr{E}^{\bullet})\leq\rho(\mathscr{G}^{\bullet})+\rho(\mathscr{F}^{ \bullet})\) and \(\rho(\mathscr{G}^{\bullet})\leq\rho(\mathscr{F}^{\bullet})+\rho(\mathscr{E}^ {\bullet})\), which together are equivalent to the first inequality of (84). This completes the proof of this assertion.
## 5. Middle convolution on holonomic \(\mathcal{D}\)-modules of arithmetic types
This section discusses the middle convolution functors (with rational parameters) on the derived categories under consideration. As a consequence, we prove Theorem B.
### Middle convolution
Let \(q\) be a \(\overline{\mathbb{Q}}\)-rational point of \(\mathbb{A}^{1}:=\mathbb{A}^{1}_{\overline{\mathbb{Q}}}\left(=\operatorname{ Spec}(\overline{\mathbb{Q}}[t])\right)\); it determines an open immersion \(\eta^{q}:\mathbb{A}^{1}\setminus\{q\}\hookrightarrow\mathbb{A}^{1}\). We will use the same notation "\(q\)" to denote the corresponding element in \(\overline{\mathbb{Q}}\). For each \(\lambda\in\overline{\mathbb{Q}}\), we shall set
\[\mathscr{K}_{q}^{\lambda}:=\eta_{*}^{q}\left(\mathcal{O}_{\mathbb{A}^{1} \setminus\{q\}},d+\lambda\cdot\frac{d(t-q)}{t-q}\right), \tag{90}\]
which is a holonomic \(\mathcal{D}_{\mathbb{A}^{1}}\)-module; it is irreducible when \(\lambda\notin\mathbb{Z}\). Up to isomorphism, \(\mathscr{K}_{q}^{\lambda}\) depends only on the image of \(\lambda\) in \(\overline{\mathbb{Q}}/\mathbb{Z}\).
**Proposition 5.1**.: _Let \(\Box\in\{G,\operatorname{nilp},\operatorname{aen}\}\). Then, the \(\mathcal{D}_{\mathbb{A}^{1}}\)-module \(\mathscr{K}_{q}^{\lambda}\) belongs to \(D_{h}^{b}(\mathcal{D}_{\mathbb{A}^{1}})_{\Box}\) if and only if \(\lambda\in\mathbb{Q}\)._
Proof.: Let \(K\) and \(R\) be as in SS 4.1; we may assume (after possibly replacing these with different ones) that \(\lambda,q\in R\). Write \(\nabla_{R}\) for the \(R\)-connection \(d+\lambda\cdot\frac{d(t-q)}{t-q}\) on \(\mathcal{O}_{X_{R}}\), where \(X_{R}:=\mathbb{A}^{1}_{R}\setminus\overline{\{q\}}\), and write \((X,\nabla):=(X_{R},\nabla_{R})\times_{R}\overline{\mathbb{Q}}\).
Now, let us consider the "if" part of the required equivalence. Suppose that \(\lambda\in\mathbb{Q}\). For a prime number \(p\) and an element \(v\in\Sigma_{R}\) with \(v|p\), one can reduce \(\nabla_{R}\) to obtain a \(k(v)\)-connection \(\nabla_{R,v}\) on \(\mathcal{O}_{X_{R}\otimes_{R}k(v)}\). The reduction of \(\lambda\) modulo \(p\) belongs to \(\mathbb{F}_{p}:=\mathbb{Z}/p\mathbb{Z}\). Hence,
if \(C\) denotes the Cartier operator on \(\Omega_{X_{R}\otimes_{R}k(v)/k(v)}\), then the equality \(C(\lambda\cdot\frac{d(t-q)}{t-q})=\lambda\cdot\frac{d(t-q)}{t-q}\) holds. According to [Kat2, Corollary 7.1.3], the connection \(\nabla_{R,v}\) has vanishing \(p\)-curvature, so Proposition 4.5 implies that \((\mathcal{O}_{X},\nabla)\) is globally convergent. By the implications in (77), this flat bundle is also of type \(G\) (resp., globally nilpotent; resp., a.e. nilpotent). It follows from Proposition 3.1, (i), that \(\mathscr{K}_{q}^{\lambda}\left(\cong\int_{\eta^{q}}(\mathcal{O}_{X},\nabla)\right)\) is contained in \(D_{h}^{b}(\mathcal{D}_{\mathbb{A}^{1}})_{G}\) (resp., \(D_{h}^{b}(\mathcal{D}_{\mathbb{A}^{1}})_{\mathrm{nilp}}\); resp., \(D_{h}^{b}(\mathcal{D}_{\mathbb{A}^{1}})_{\mathrm{aen}}\)). This completes the proof of the "if" part.
Next, we shall prove the inverse direction of the required equivalence. To this end, it suffices to consider the case where "\(\Box=\mathrm{aen}\)" because of the implications in (77). Suppose that \(\mathscr{K}_{q}^{\lambda}\) belongs to \(D_{h}^{b}(\mathcal{D}_{\mathbb{A}^{1}})_{\mathrm{aen}}\). Since \(\eta^{q\dagger}(\mathscr{K}_{q}^{\lambda})\cong(\mathcal{O}_{X},\nabla)\), it follows from Proposition 3.6 that \((\mathcal{O}_{X},\nabla)\) is a.e. nilpotent. Hence, by Proposition 4.5, \((\mathcal{O}_{X},\nabla)\) is verified to be globally convergent. By reversing the steps in the above discussion, we see that the mod \(v\) reduction of \(\lambda\) lies in \(\mathbb{F}_{p(v)}\) for every element \(v\) in \(\Sigma_{R}\). This implies \(\lambda\in\mathbb{Q}\) by Chebotarev's density theorem, thus completing the proof of the "only if" part.
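For the reader's convenience, the vanishing of the \(p\)-curvature used in the "if" part can also be checked by a direct computation; we sketch it here using the standard rank one formula (this verification is ours and is not part of the original argument). For a rank one connection \(\nabla=d+a\,dt\) in characteristic \(p\), the \(p\)-curvature of \(\partial=\partial/\partial t\) is \(\psi_{p}(\partial)=a^{p}+\partial^{p-1}(a)\). With \(a=\lambda/(t-q)\) one obtains

\[\psi_{p}(\partial)=\frac{\lambda^{p}}{(t-q)^{p}}+\partial^{\,p-1}\!\left(\frac{\lambda}{t-q}\right)=\frac{\lambda^{p}-\lambda}{(t-q)^{p}},\]

since \(\partial^{p-1}\big((t-q)^{-1}\big)=(-1)^{p-1}(p-1)!\,(t-q)^{-p}=-(t-q)^{-p}\) in characteristic \(p\) by Wilson's theorem. Hence \(\psi_{p}\) vanishes exactly when the reduction of \(\lambda\) lies in \(\mathbb{F}_{p}\), which recovers the criterion used above.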
Denote by \(\mathbb{P}^{1}\left(\supseteq\mathbb{A}^{1}\right)\) the projective line over \(\overline{\mathbb{Q}}\) and by \(\pi_{1}\) (resp., \(\pi_{2}\)) the first projection \(\mathbb{A}^{1}\times_{\overline{\mathbb{Q}}}\mathbb{A}^{1}\to\mathbb{A}^{1}\) (resp., the second projection \(\mathbb{P}^{1}\times_{\overline{\mathbb{Q}}}\mathbb{A}^{1}\to\mathbb{A}^{1}\)). Also, denote by \(\mu:\mathbb{A}^{1}\times_{\overline{\mathbb{Q}}}\mathbb{A}^{1}\to\mathbb{A}^{1}\) the morphism given by \((x,y)\mapsto y-x\) and by \(\eta_{\mathbb{A}}:\mathbb{A}^{1}\times_{\overline{\mathbb{Q}}}\mathbb{A}^{1} \hookrightarrow\mathbb{P}^{1}\times_{\overline{\mathbb{Q}}}\mathbb{A}^{1}\) the natural open immersion. For a chain complex \(\mathscr{F}^{\bullet}\) in \(D_{h}^{b}(\mathcal{D}_{\mathbb{A}^{1}})\) and \(\lambda\in\overline{\mathbb{Q}}\setminus\mathbb{Z}\), we shall set
\[\mathrm{mc}_{\lambda}(\mathscr{F}^{\bullet}):=\int_{\pi_{2}}\eta_{\mathbb{A}!*}(L\pi_{1}^{*}\mathscr{F}^{\bullet}\otimes_{\mathcal{O}_{\mathbb{A}^{1}\times\mathbb{A}^{1}}}^{L}L\mu^{*}\mathscr{K}_{0}^{\lambda})\in D_{h}^{b}(\mathcal{D}_{\mathbb{A}^{1}}) \tag{91}\]
(cf. [Kat4, SS 2.8] for the corresponding definition in the \(l\)-adic setting). Also, for each \(l\in\mathbb{Z}\), we shall set
\[\mathrm{mc}_{\lambda}^{l}(\mathscr{F}^{\bullet}):=\mathcal{H}^{l}(\mathrm{mc}_ {\lambda}(\mathscr{F}^{\bullet})). \tag{92}\]
We refer to \(\mathrm{mc}_{\lambda}(\mathscr{F}^{\bullet})\) as the **(additive) middle convolution of \(\mathscr{F}\) with the parameter \(\lambda\)**. The resulting assignment \(\mathscr{F}^{\bullet}\mapsto\mathrm{mc}_{\lambda}(\mathscr{F}^{\bullet})\) defines an endofunctor on \(D_{h}^{b}(\mathcal{D}_{\mathbb{A}^{1}})\):
\[\mathrm{mc}_{\lambda}:D_{h}^{b}(\mathcal{D}_{\mathbb{A}^{1}})\to D_{h}^{b}( \mathcal{D}_{\mathbb{A}^{1}}). \tag{93}\]
In the case of flat bundles, the definition of middle convolution just mentioned coincides with the definition given in [Ari2] (cf. [Ari1, Lemma 6.9]); see the former assertion of Theorem 5.4 described later.
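As a guiding example (well known from the theory of rigid local systems, and recalled here only informally, without specifying the parameters): applying \(\operatorname{mc}_{\lambda}\) to a suitable rank one flat bundle with regular singularities on \(\mathbb{P}^{1}\setminus\{0,1,\infty\}\) produces a rank two flat bundle of Gauss hypergeometric type, and iterating such convolutions and rank one twists yields the generalized hypergeometric and Pochhammer systems.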
The following assertion is a direct consequence of results obtained so far (cf. [DeRe, Theorem 1] for the case of globally nilpotent connections).
**Theorem 5.2**.: _Let \(\Box\in\{G,\mathrm{nilp},\mathrm{aen}\}\), and suppose that \(\lambda\) belongs to \(\mathbb{Q}\setminus\mathbb{Z}\). Then, the functor \(\mathrm{mc}_{\lambda}\) restricts to an endofunctor_
\[\mathrm{mc}_{\lambda}:D_{h}^{b}(\mathcal{D}_{\mathbb{A}^{1}})_{\Box}\to D_{h}^{ b}(\mathcal{D}_{\mathbb{A}^{1}})_{\Box} \tag{94}\]
_on \(D_{h}^{b}(\mathcal{D}_{\mathbb{A}^{1}})_{\Box}\)._
Proof.: It follows from Proposition 5.1 that \(\mathscr{K}_{0}^{\lambda}\) is contained in \(D_{h}^{b}(\mathcal{D}_{\mathbb{A}^{1}})_{\Box}\). Hence, by the definition of \(\mathrm{mc}_{\lambda}(-)\) together with Theorem 3.10, Propositions 3.11, (i), and 4.6, we have \(\mathrm{mc}_{\lambda}(\mathscr{F}^{\bullet})\in\mathrm{ob}(D_{h}^{b}(\mathcal{D} _{\mathbb{A}^{1}})_{\Box})\) whenever \(\mathscr{F}^{\bullet}\) lies in \(D_{h}^{b}(\mathcal{D}_{\mathbb{A}^{1}})_{\Box}\).
### Estimate of global inverse radii I
The rest of this section is devoted to estimating the effect of middle convolution on the global inverse radii. We first deal with \(\mathcal{D}_{\mathbb{A}^{1}}\)-modules supported on a single point. For each prime \(p\) and \(\lambda\in\overline{\mathbb{Q}}\setminus\mathbb{Z}\), we shall set
\[\mathrm{ord}_{p}\lambda:=-\sum_{v\in\Sigma_{\mathcal{O}_{K}},v|p}\log_{p}| \lambda|_{v}, \tag{95}\]
where \(K\) denotes a number field with \(\lambda\in K\); this value is immediately verified to be independent of the choice of \(K\) and satisfies \(\mathrm{ord}_{p}p=1\). Moreover, we shall set
\[H(\lambda):=\sum_{\begin{subarray}{c}p:\,\mathrm{prime}\\ \mathrm{s.t.}\,\mathrm{ord}_{p}\lambda<\,0\end{subarray}}\left(\frac{1}{p-1}- \mathrm{ord}_{p}\lambda\right)\cdot\log p\;\left(>0\right). \tag{96}\]
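For a concrete feel for the quantity \(H(\lambda)\), the following short script (ours, purely illustrative) evaluates it for \(\lambda\in\mathbb{Q}\), using that in this case \(\operatorname{ord}_{p}\lambda\) is the usual \(p\)-adic valuation and that only the primes dividing the denominator of \(\lambda\) contribute. For example, \(H(1/2)=\big(\tfrac{1}{2-1}+1\big)\log 2=2\log 2\).

```python
from fractions import Fraction
from math import log

def ord_p(lam: Fraction, p: int) -> int:
    # For a nonzero rational lam, ord_p(lam) is the usual p-adic valuation,
    # which agrees with the definition via places in the case K = Q.
    v, num, den = 0, lam.numerator, lam.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def H(lam: Fraction) -> float:
    # H(lam) = sum over primes p with ord_p(lam) < 0 of (1/(p-1) - ord_p(lam)) * log(p);
    # such primes are exactly the prime divisors of the denominator of lam.
    total, d = 0.0, lam.denominator
    p = 2
    while d > 1:
        if d % p == 0:
            total += (1.0 / (p - 1) - ord_p(lam, p)) * log(p)
            while d % p == 0:
                d //= p
        p += 1 if p == 2 else 2
    return total

print(H(Fraction(1, 2)))   # 2*log(2) ~ 1.3863
print(H(Fraction(-5, 6)))  # 2*log(2) + (3/2)*log(3)
```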
**Proposition 5.3**.: _Denote by \(\mathscr{O}\) the trivial \(\mathcal{D}_{\mathrm{Spec}(\overline{\mathbb{Q}})}\)-module associated to the \(\overline{\mathbb{Q}}\)-vector space \(\overline{\mathbb{Q}}\)._
1. _Let_ \(\lambda\) _be an element of_ \(\overline{\mathbb{Q}}\setminus\mathbb{Z}\) _and_ \(q\) \(a\) \(\overline{\mathbb{Q}}\)_-rational point of_ \(\mathbb{A}^{1}\)_. Then, we have_ (97) \[\mathrm{mc}_{\lambda}(\int_{q}\mathscr{O})\cong\mathscr{K}_{q}^{\lambda}.\]
2. _Let_ \(\lambda\) _be an element of_ \(\mathbb{Q}\setminus\mathbb{Z}\) _and_ \((q_{1},\cdots,q_{n})\) _(where_ \(n\in\mathbb{Z}_{>0}\)_) an_ \(n\)_-tuple of_ \(\overline{\mathbb{Q}}\)_-rational points of_ \(\mathbb{A}^{1}\)_. Then, the following inequality holds:_ (98) \[\rho(\mathrm{mc}_{\lambda}(\bigoplus_{i=1}^{n}\int_{q_{i}}\mathscr{O}))\left( =\rho(\bigoplus_{i=1}^{n}\mathscr{K}_{q_{i}}^{\lambda})\right)\leq H(\lambda).\]
Proof.: First, we shall consider assertion (i). Denote by \(q_{\mathbb{A}}\) the composite closed immersion \(\mathbb{A}^{1}\stackrel{{\sim}}{{\to}}\{q\}\times_{\overline{ \mathbb{Q}}}\mathbb{A}^{1}\hookrightarrow\mathbb{A}^{1}\times_{\overline{ \mathbb{Q}}}\mathbb{A}^{1}\). The composite \(\mu\circ q_{\mathbb{A}}\) coincides with the automorphism of \(\mathbb{A}^{1}\) given by \(x\mapsto x-q\). This implies \(L(\mu\circ q_{\mathbb{A}})^{*}(\mathscr{K}_{0}^{\lambda})\cong(\mu\circ q_{ \mathbb{A}})^{*}(\mathscr{K}_{0}^{\lambda})\cong\mathscr{K}_{q}^{\lambda}\). Hence, we have
\[\int_{\pi_{2}}\eta_{\mathbb{A}!*}(L\pi_{1}^{*}\int_{q}\mathscr{O}\otimes^{L}L\mu^{*}\mathscr{K}_{0}^{\lambda}) \cong\int_{\pi_{2}}\eta_{\mathbb{A}!*}(\left(\int_{q_{\mathbb{A}}}(\mathscr{O}_{\mathbb{A}^{1}},d)\right)\otimes^{L}L\mu^{*}\mathscr{K}_{0}^{\lambda})\] \[\cong\int_{\pi_{2}}\eta_{\mathbb{A}!*}(\int_{q_{\mathbb{A}}}L(\mu\circ q_{\mathbb{A}})^{*}(\mathscr{K}_{0}^{\lambda}))\] \[\cong\int_{\pi_{2}}\eta_{\mathbb{A}!*}(\int_{q_{\mathbb{A}}}\mathscr{K}_{q}^{\lambda})\] \[\cong\mathscr{K}_{q}^{\lambda}, \tag{99}\]
where
* the first "\(\cong\)" follows from the base-change theorem (cf. [HTT, Theorem 1.7.3]),
* the second "\(\cong\)" follows from the projection formula (cf. [HTT, Corollary 1.7.5]), and
* the last "\(\cong\)" follows from some properties of minimal extensions asserted in [HTT, Theorem 3.4.2] together with the simplicity of \(\mathscr{K}_{q}^{\lambda}\).
This completes the proof of assertion (i).
Next, let us consider assertion (ii). Suppose that \(\lambda\in\mathbb{Q}\setminus\mathbb{Z}\). Also, let us take a number field \(K\) with \(q_{1},\cdots,q_{n}\in K\). For each prime \(p\) and each \(v\in\Sigma_{\mathcal{O}_{K}}\) with \(v|p\), we have
\[\limsup_{s\to\infty}\left|\frac{1}{s!}\cdot\prod_{j=0}^{s-1}(\lambda-j)\right|_{v}^{\frac{1}{s}}\leq\begin{cases}1&\text{if }\operatorname{ord}_{p}\lambda\geq 0;\\ p^{\left(\frac{1}{p-1}-\operatorname{ord}_{p}\lambda\right)\cdot\frac{[\widehat{K}_{v}:\mathbb{Q}_{p}]}{[K:\mathbb{Q}]}}&\text{if }\operatorname{ord}_{p}\lambda<0\end{cases} \tag{100}\]
(cf. [Sch, Theorem 47.8]). It follows that
(101) \[\operatorname{Rad}_{\mathbb{A}^{1}_{\mathcal{O}_{K}},v}(\bigoplus_{i=1}^{n}\mathscr{K}^{\lambda}_{q_{i}})^{-1} =\max\left(\{1\}\cup\bigcup_{i=1}^{n}\left\{\limsup_{s\to\infty}\left|\frac{1}{s!}\cdot\frac{1}{(t-q_{i})^{s}}\cdot\prod_{j=0}^{s-1}(\lambda-j)\right|_{\operatorname{Gauss},v}^{\frac{1}{s}}\right\}\right)\] \[=\max\left(\{1\}\cup\bigcup_{i=1}^{n}\left\{\frac{1}{\max\left\{1,|q_{i}|_{v}\right\}}\cdot\limsup_{s\to\infty}\left|\frac{1}{s!}\cdot\prod_{j=0}^{s-1}(\lambda-j)\right|_{v}^{\frac{1}{s}}\right\}\right)\] \[\stackrel{{(100)}}{{\leq}}\begin{cases}1&\text{if }\operatorname{ord}_{p}\lambda\geq 0;\\ p^{\left(\frac{1}{p-1}-\operatorname{ord}_{p}\lambda\right)\cdot\frac{[\widehat{K}_{v}:\mathbb{Q}_{p}]}{[K:\mathbb{Q}]}}&\text{if }\operatorname{ord}_{p}\lambda<0.\end{cases}\]
isomorphic to a direct sum of finitely many \(\mathscr{K}_{q}^{\lambda}\)'s (for some \(q\)'s). Hence, Proposition 5.3, (ii), implies
\[\rho(\operatorname{mc}_{\lambda}(\mathscr{G}))\leq H(\lambda). \tag{104}\]
Also, for each \(x\in U(\overline{\mathbb{Q}})\) and \(l\in\mathbb{Z}\setminus\{0\}\), the fiber of \(\operatorname{mc}_{\lambda}^{l}(\int_{\iota}\mathscr{F})\) over \(x\) satisfies
\[\operatorname{mc}_{\lambda}^{l}(\int_{\iota}\mathscr{F})_{x}\cong H_{\text{dR }}^{l+1}(\mathbb{P}^{1},\eta_{\ast}(\int_{\iota}\mathscr{F}\otimes\mathscr{K}_{ x}^{\lambda}))=H_{\text{dR}}^{l+1}(\mathbb{A}^{1},\int_{\iota}\mathscr{F} \otimes\mathscr{K}_{x}^{\lambda})=0, \tag{105}\]
where
* \(\eta\) denotes the natural open immersion \(\mathbb{A}^{1}\hookrightarrow\mathbb{P}^{1}\) and the first "\(\cong\)" follows essentially from [13, Corollary 2.8.5] and [12] because \(\mathscr{F}\) has at most regular singularities (cf. Remark 4.7),
* the last "\(=\)" for \(l=-1\) follows from the fact that, since \(x\in U(\overline{\mathbb{Q}})\), there is no horizontal section of \(\int_{\iota}\mathscr{F}\otimes\mathscr{K}_{x}^{\lambda}\) (on an open neighborhood of \(x\)), and
* the last "\(=\)" for \(l\neq-1,0\) follows from the affineness of \(\mathbb{A}^{1}\).
Similarly, we have \(\operatorname{mc}_{\lambda}^{l}(\iota_{\ast}\mathscr{F})_{x}=0\) for every \(l\in\mathbb{Z}\setminus\{0\}\). It follows that \(\operatorname{mc}_{\lambda}^{l}(\int_{\iota}\mathscr{F})|_{U}=\operatorname{ mc}_{\lambda}^{l}(\iota_{\ast}\mathscr{F})|_{U}=0\) for every \(l\in\mathbb{Z}\setminus\{0\}\). In particular, we have finished the proof of the former assertion. Moreover, the natural short exact sequence \(0\to\iota_{\ast}\mathscr{F}\to\int_{\iota}\mathscr{F}\to\mathscr{G}\to 0\) induces a short exact sequence of \(\mathcal{D}_{U}\)-modules
\[0\longrightarrow\operatorname{mc}_{\lambda}^{0}(\iota_{\ast}\mathscr{F})|_{U} \longrightarrow\operatorname{mc}_{\lambda}^{0}(\int_{\iota}\mathscr{F})|_{U} \longrightarrow\operatorname{mc}_{\lambda}^{0}(\mathscr{G})|_{U}\longrightarrow 0. \tag{106}\]
By Proposition 4.8 and (104), we obtain the inequalities
\[\left|\rho(\operatorname{mc}_{\lambda}^{0}(\int_{\iota}\mathscr{F}))-\rho( \operatorname{mc}_{\lambda}^{0}(\iota_{\ast}\mathscr{F}))\right|\leq\rho( \operatorname{mc}_{\lambda}^{0}(\mathscr{G}))\leq H(\lambda). \tag{107}\]
This completes the proof of the first inequality in (103).
Next, we shall prove the second inequality in (103). Let us take a number field \(K\) such that there exists an open subscheme \(U_{K}\) of \(\mathbb{A}_{K}^{1}\) and a flat bundle \(\mathscr{F}_{K}\) on \(U_{K}/K\) with \((U_{K},\mathscr{F}_{K})\times_{K}\overline{\mathbb{Q}}=(U,\mathscr{F})\). Denote by \(\kappa:(U\times U)\setminus\Delta_{U}\hookrightarrow\mathbb{P}^{1}\times\mathbb{A}^{1}\) the natural open immersion, where \(\Delta_{U}\) denotes the image of the diagonal embedding \(U\hookrightarrow U\times U\). Also, for each \(i=1,2\), we shall write \(\varpi_{i}\) for the projection \((U\times U)\setminus\Delta_{U}\to U\) to the \(i\)-th factor. The \(\mathcal{D}_{\mathbb{P}^{1}\times\mathbb{A}^{1}}\)-module \(\mathscr{E}:=\eta_{\mathbb{A}!*}(L\pi_{1}^{\ast}\int_{\iota}\mathscr{F}\otimes^{L}L\mu^{\ast}\mathscr{K}_{0}^{\lambda})\) satisfies
\[\mathscr{E}|_{\varpi_{1}^{-1}(U)} \cong\left(\int_{\eta_{\mathbb{A}^{1}}}L\pi_{1}^{\ast}\int_{ \iota}\mathscr{F}\otimes^{L}L\mu^{\ast}\mathscr{K}_{0}^{\lambda}\right)\Big{|} _{\varpi_{1}^{-1}(U)}\] \[\cong\int_{\eta_{\mathbb{A}^{1}}}\left(\int_{\iota_{U}}L\pi_{1,U }^{\ast}\mathscr{F}\otimes^{L}L\mu^{\ast}\mathscr{K}_{0}^{\lambda}\right) \Big{|}_{\varpi_{1}^{-1}(U)}\] \[\cong\left(\int_{\kappa}\left(L\pi_{1,U}^{\ast}\mathscr{F} \otimes^{L}L\mu^{\ast}\mathscr{K}_{0}^{\lambda}\Big{|}_{U\times\mathbb{A}^{1} }\right)\Big{|}_{U\times U\setminus\Delta_{U}}\right)\Big{|}_{\varpi_{1}^{-1}( U)}, \tag{108}\]
where
* \(\iota_{U}\) denotes the open immersion \(U\times\mathbb{A}^{1}\hookrightarrow\mathbb{A}^{1}\times\mathbb{A}^{1}\) and \(\pi_{1,U}\) denotes the projection \(U\times\mathbb{A}^{1}\to U\) onto the first factor,
* the first "\(\cong\)" follows from [13, Corollary 2.8.5, (2)] (together with [12, Theorem 7.1.1]),
* the second "\(\cong\)" follows from [HTT, Theorem 1.7.3], and
* the third "\(\cong\)" follows from [HTT, Corollary 1.7.5].
If we write \(Z:=(\mathbb{P}^{1}\times\mathbb{A}^{1})\setminus\operatorname{Im}(\kappa)\) (equipped with a structure of reduced scheme) and write \(\zeta:Z\hookrightarrow\mathbb{P}^{1}\times\mathbb{A}^{1}\) for the natural closed immersion, then
\[\left(\int_{\zeta}\zeta^{\dagger}\mathscr{E}\right)\Big{|}_{\varpi_{1}^{-1}(U)} \cong\int_{\zeta}\zeta^{\dagger}(\mathscr{E}|_{\varpi_{1}^{-1}(U)})\] \[\stackrel{{(108)}}{{\cong}}\left(\int_{\zeta}\zeta^{\dagger}\left(\int_{\kappa}\left(L\pi_{1,U}^{*}\mathscr{F}\otimes^{L}L\mu^{*}\mathscr{K}_{0}^{\lambda}\Big{|}_{U\times\mathbb{A}^{1}}\right)\Big{|}_{U\times U\setminus\Delta_{U}}\right)\right)\Big{|}_{\varpi_{1}^{-1}(U)}\] \[=0, \tag{109}\]
where the last equality follows from \(\zeta^{\dagger}\int_{\kappa}(-)=0\) asserted in [HTT, Proposition 1.7.1, (ii)]. In particular, the equality \(\left(\int_{\pi_{2}}\int_{\zeta}\zeta^{\dagger}\mathscr{E}\right)\Big{|}_{U}=0\) holds. Since the natural short exact sequence \(0\to\int_{\zeta}\zeta^{\dagger}\mathscr{E}\to\mathscr{E}\to\int_{\kappa}\kappa ^{\dagger}\mathscr{E}\to 0\) induces a distinguished triangle
\[\left(\int_{\pi_{2}}\int_{\zeta}\zeta^{\dagger}\mathscr{E}\right)\Big{|}_{U} \to\left(\int_{\pi_{2}}\mathscr{E}\right)\Big{|}_{U}\to\left(\int_{\pi_{2}} \int_{\kappa}\kappa^{\dagger}\mathscr{E}\right)\Big{|}_{U}\stackrel{{ +1}}{{\longrightarrow}}, \tag{110}\]
\(\left(\int_{\pi_{2}}\mathscr{E}\right)\Big{|}_{U}\) is quasi-isomorphic to \(\left(\int_{\pi_{2}}\int_{\kappa}\kappa^{\dagger}\mathscr{E}\right)\Big{|}_{U}\). On the other hand, we have
\[\left(\int_{\pi_{2}}\int_{\kappa}\kappa^{\dagger}\mathscr{E} \right)\Big{|}_{U} \cong\int_{\varpi_{2}}\left(L\pi_{1}^{*}\int_{\iota}\mathscr{F} \otimes^{L}L\mu^{*}\mathscr{K}_{0}^{\lambda}\right)\Big{|}_{(U\times U) \setminus\Delta_{U}}\] \[\cong\int_{\varpi_{2}}L\varpi_{1}^{*}\mathscr{F}\otimes^{L} \left(L\mu^{*}\mathscr{K}_{0}^{\lambda}|_{(U\times U)\setminus\Delta_{U}} \right). \tag{111}\]
It follows that the following sequence of equalities holds:
\[\rho(\operatorname{mc}_{\lambda}(\int_{\iota}\mathscr{F})) =\rho(\int_{\pi_{2}}\mathscr{E})\] \[=\rho(\left(\int_{\pi_{2}}\mathscr{E}\right)\Big{|}_{U})\] \[=\rho(\int_{\varpi_{2}}L\varpi_{1}^{*}\mathscr{F}\otimes^{L} \left(L\mu^{*}\mathscr{K}_{0}^{\lambda}|_{(U\times U)\setminus\Delta_{U}} \right)). \tag{112}\]
Next, by Remark 4.1, (iii), and Proposition 5.3, (ii), we obtain the inequalities
\[\rho_{\mathbb{A}_{\mathcal{O}_{K}}^{2}}(L\varpi_{1}^{*}\mathscr{F})\leq\rho( \mathscr{F}),\quad\rho_{\mathbb{A}_{\mathcal{O}_{K}}^{2}}(L\mu^{*}\mathscr{K}_ {0}^{\lambda}|_{(U\times U)\setminus\Delta_{U}})\leq\rho(\mathscr{K}_{0}^{ \lambda})\leq H(\lambda), \tag{113}\]
where \(\mathbb{A}_{\mathcal{O}_{K}}^{2}:=\mathbb{A}_{\mathcal{O}_{K}}^{1}\times_{ \mathcal{O}_{K}}\mathbb{A}_{\mathcal{O}_{K}}^{1}\). Hence, Remark 4.1, (i), implies
\[\rho_{\mathbb{A}_{\mathcal{O}_{K}}^{2}}(L\varpi_{1}^{*}\mathscr{F}\otimes^{L} L\mu^{*}\mathscr{K}_{0}^{\lambda}|_{(U\times U)\setminus\Delta_{U}})\leq\rho( \mathscr{F})+H(\lambda). \tag{114}\]
Also, since \(L\varpi_{1}^{*}\mathscr{F}\otimes^{L}L\mu^{*}\mathscr{K}_{0}^{\lambda}|_{(U\times U)\setminus\Delta_{U}}\) is of type \(G\) (cf. Proposition 5.1), it has at most regular singularities and the exponents at any point at infinity on (the smooth proper model of) \(U\) are contained in \(\mathbb{Q}\) (cf. Remark 4.7). It follows that, after possibly replacing \(U\) with its open subscheme, we can apply [AnBa, Theorem 3.1.2] to the case where "\(\mathcal{M}_{K_{v}}\)" is taken to be the
\(v\)-adic completion of the flat bundle \(L\varpi_{1}^{*}\mathscr{F}\otimes^{L}L\mu^{*}\mathscr{K}_{0}^{\lambda}|_{(U\times U)\setminus\Delta_{U}}\left(=\varpi_{1}^{*}\mathscr{F}\otimes\mu^{*}\mathscr{K}_{0}^{\lambda}|_{(U\times U)\setminus\Delta_{U}}\right)\) for every \(v\in\Sigma_{\mathcal{O}_{K}}\); hence, the inequality
\[\rho(\int_{\varpi_{2}}L\varpi_{1}^{*}\mathscr{F}\otimes^{L}L\mu^{*}\mathscr{K}_{0}^{\lambda}|_{(U\times U)\setminus\Delta_{U}})\leq(n^{2}+1)\cdot\rho_{\mathbb{A}_{\mathcal{O}_{K}}^{2}}(L\varpi_{1}^{*}\mathscr{F}\otimes^{L}L\mu^{*}\mathscr{K}_{0}^{\lambda}|_{(U\times U)\setminus\Delta_{U}}) \tag{115}\]
holds. As a consequence, the second inequality in (103) can be verified by combining (112), (114), and (115).
**Remark 5.5**.: If we have specific knowledge of a given chain complex \(\mathscr{F}^{\bullet}\in D^{b}_{h}(\mathcal{D}_{\mathbb{A}^{1}})_{G}\), an upper bound for the global inverse radius of its middle convolution may be obtained explicitly. In fact, by induction on the cohomological length of \(\mathscr{F}^{\bullet}\), the result of Proposition 4.8 enables us to reduce the situation to the case where \(\mathscr{F}^{\bullet}\) is a holonomic \(\mathcal{D}_{\mathbb{A}^{1}}\)-module \(\mathscr{F}\). Moreover, by induction on the length of the composition series of \(\mathscr{F}\), it can be assumed to be irreducible. Since \(\mathscr{F}\) is the minimal extension of a flat bundle on a locally closed smooth subvariety of \(\mathbb{A}^{1}\), \(\mathscr{F}\) is isomorphic to either \(\int_{q}\mathscr{O}\) (for some \(q\in\mathbb{A}^{1}(\overline{\mathbb{Q}})\)) or \(\iota_{!*}\mathscr{G}\) for some open immersion \(\iota:U\hookrightarrow\mathbb{A}^{1}\) and an irreducible flat bundle \(\mathscr{G}\) on \(U\). Thus, by applying Proposition 5.3, (ii), and Theorem 5.4, we can estimate the value \(\rho(\operatorname{mc}_{\lambda}(\mathscr{F}))\), as desired.
## 6. Equivalence among various arithmetic properties on rigid flat bundles
The purpose of this final section is to prove Theorem C, asserting a comparison among the classes of arithmetic flat bundles discussed so far. We do so by restricting flat bundles to rigid ones and applying Katz's middle convolution algorithm in order to reduce the problem to the rank one case. We refer the reader to [10] for a reasonable reference on related topics.
Given a flat bundle \(\mathscr{F}\) on a nonempty open subscheme \(U\) of \(\mathbb{A}^{1}\) (where \(\iota\) denotes the open immersion \(U\hookrightarrow\mathbb{A}^{1}\)), we abuse notation by writing \(\operatorname{mc}^{0}_{\lambda}(\mathscr{F})\) (where \(\lambda\in\mathbb{Q}\setminus\mathbb{Z}\)) for the sheaf \(\operatorname{mc}^{0}_{\lambda}(\int_{\iota}\mathscr{F})|_{U}\); this is none other than the classical definition of the middle convolution of \(\mathscr{F}\). In particular, it follows from a well-known result that \(\operatorname{mc}^{0}_{-\lambda}(\operatorname{mc}^{0}_{\lambda}(\mathscr{F}) )\cong\mathscr{F}\).
### Katz's middle convolution algorithm
Let \(q\) be a closed point of the projective line \(\mathbb{P}^{1}\) over \(\overline{\mathbb{Q}}\). Each flat bundle \(\mathscr{F}\) on a nonempty open subscheme \(U\) of \(\mathbb{P}^{1}\) induces, via restriction, a flat bundle \(\Psi_{q}(\mathscr{F})\) on the punctured formal neighborhood of \(q\). The isomorphism class \([\Psi_{q}(\mathscr{F})]\) of \(\Psi_{q}(\mathscr{F})\) is called the **formal type of \(\mathscr{F}\) at \(q\)**. Then, we define the **formal type** of \(\mathscr{F}\) to be the collection
\[\left\{[\Psi_{q}(\mathscr{F})]\right\}_{q\in\mathbb{P}^{1}}. \tag{116}\]
Since \(\Psi_{q}(\mathscr{F})\) is trivial when \(q\in U\), this collection is essentially determined by the subset \(\left\{[\Psi_{q}(\mathscr{F})]\right\}_{q\in\mathbb{P}^{1}\setminus U}\).
Given a flat bundle \(\mathscr{F}\) as above, we shall say that \(\mathscr{F}\) is **rigid** (cf. [10, Definition 2.2]) if \(\mathscr{F}\) is determined by its formal type up to isomorphism, meaning that any flat bundle \(\mathscr{F}^{\prime}\) on \(U\) with \(\Psi_{q}(\mathscr{F})\cong\Psi_{q}(\mathscr{F}^{\prime})\) for every \(q\in\mathbb{P}^{1}\) is isomorphic to \(\mathscr{F}\).
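Two classical examples may help fix ideas (they are mentioned here only as an illustration): every rank one flat bundle on a nonempty open subscheme of \(\mathbb{P}^{1}\) is rigid, since two rank one flat bundles with the same formal types differ by a rank one flat bundle extending to all of \(\mathbb{P}^{1}\) without singularities, which is necessarily trivial; and the rank two flat bundle attached to the Gauss hypergeometric equation on \(\mathbb{P}^{1}\setminus\{0,1,\infty\}\) is the prototypical irreducible rigid example of higher rank.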
We shall prove the following assertion based on the discussion in [10, SS 4].
**Theorem 6.1**.: _Let \(\mathscr{F}\) be an a.e. nilpotent flat bundle on a nonempty open subscheme \(U\subseteq\mathbb{A}^{1}\). Suppose that \(\mathscr{F}\) is irreducible and rigid, and that \(\operatorname{rk}(\mathscr{F})>1\). Then, there exists a pair_
\[(\lambda,\mathscr{M}) \tag{117}\]
_consisting of \(\lambda\in\mathbb{Q}\setminus\mathbb{Z}\) and a globally convergent flat bundle \(\mathscr{M}\) of rank one on \(U\) such that the flat bundle \(\operatorname{mc}^{0}_{\lambda}(\mathscr{M}\otimes\mathscr{F})\) has rank smaller than \(\mathscr{F}\)._
Proof.: Let us take an arbitrary closed point \(q\in\mathbb{P}^{1}\setminus U\). Denote by \(\mathscr{V}_{q}\) the irreducible component \(\mathscr{V}^{\prime}\) of the formal type \(\Psi_{q}(\mathscr{F})\) of \(\mathscr{F}\) at \(q\) that minimizes the value
\[\frac{\delta(\mathcal{H}om(\mathscr{V}^{\prime},\Psi_{q}(\mathscr{F})))}{ \operatorname{rk}(\mathscr{V}^{\prime})}, \tag{118}\]
where \(\delta(-)\) denotes the quantity defined in [Ari2, SS 3.1]. Since \(\Psi_{q}(\mathscr{F})\) is a.e. nilpotent, it has at most regular singularities (cf. Remark 4.7). It follows that \(\operatorname{rk}(\mathscr{V}_{q})=1\) and \(\mathscr{V}_{q}\) has regular singularities (cf. [Ari2, Example 4.1]) and that the residue \(\operatorname{res}(\mathscr{V}_{q})\) at \(q\) lies in \(\mathbb{Q}\).
Here, we shall suppose that the rational number \(\lambda:=\sum_{q\in\mathbb{P}^{1}\setminus U}\operatorname{res}(\mathscr{V}_{q})\) belongs to \(\mathbb{Z}\). Then, one can find a rank one flat bundle \(\mathscr{N}\) on \(U\) with \(\Psi_{q}(\mathscr{N})\cong\mathscr{V}_{q}\) for every \(q\in\mathbb{P}^{1}\setminus U\). By the Euler-Poincare formula, either \(\operatorname{Hom}(\mathscr{F},\mathscr{N})\) or \(\operatorname{Hom}(\mathscr{N},\mathscr{F})\) is nonzero (cf. [Ari2, Proposition 4.5]). Since \(\mathscr{F}\) is irreducible, this implies \(\mathscr{F}\cong\mathscr{N}\). In particular, \(\mathscr{F}\) has rank one, and this contradicts the assumptions. It follows that \(\lambda\in\mathbb{Q}\setminus\mathbb{Z}\).
Next, note that there exists a rank one flat bundle \(\mathscr{M}\) on \(U\) satisfying
\[\Psi_{q}(\mathscr{M}^{\vee})\cong\begin{cases}\mathscr{V}_{q}&\text{if }q\in \mathbb{A}^{1}\setminus U,\\ \mathscr{V}_{\infty}\otimes\hat{\mathscr{M}}_{\infty}^{-\lambda}&\text{if }q= \infty,\end{cases} \tag{119}\]
where \(\hat{\mathscr{M}}_{\infty}^{-\lambda}\) denotes the unique (up to isomorphism) regular singular flat bundle of rank one on the punctured formal neighborhood of \(\infty\in\mathbb{P}^{1}\) with \(\operatorname{res}(\hat{\mathscr{M}}_{\infty}^{-\lambda})=-\lambda\). Since \(\operatorname{res}(\mathscr{V}_{q})\) lies in \(\mathbb{Q}\) for each \(q\), the flat bundle \(\mathscr{M}^{\vee}\) (hence also \(\mathscr{M}\)) is globally convergent (cf. Proposition 5.1). Moreover, it follows from [Ari2, Propositions 3.6 and 4.6] that
\[\operatorname{rk}(\operatorname{mc}_{\lambda}(\mathcal{H}om(\mathscr{M}^{ \vee},\mathscr{F})))\left(=\operatorname{rk}(\operatorname{mc}_{\lambda}( \mathscr{M}\otimes\mathscr{F}))\right)<\operatorname{rk}(\mathscr{F}). \tag{120}\]
This completes the proof of the theorem.
### Equivalence for rigid Fuchsian systems
We shall conclude this paper by proving the following theorem.
**Theorem 6.2** (cf. Theorem C).: _Let \(U\) be a nonempty open subscheme of \(\mathbb{P}^{1}\), and \(\nabla\) a rigid connection on a vector bundle over \(U\). Then, the following three conditions are equivalent to each other:_
1. \(\nabla\) _is a_ \(G\)_-connection;_
2. \(\nabla\) _is a.e. nilpotent;_
3. \(\nabla\) _is globally nilpotent._
Proof.: Since we already know the implications (a) \(\Rightarrow\) (b) and (c) \(\Rightarrow\) (b) (cf. (77)), it suffices to prove their inverse directions. Let \(\mathscr{F}\) be an a.e. nilpotent rigid flat bundle on \(U\). We may assume, without loss of generality, that \(\mathscr{F}\) is irreducible; this is because the properties under consideration are all closed under taking flat subbundles, flat quotient bundles, and extensions of two flat bundles. Also, after possibly shrinking \(U\), we suppose that \(U\subseteq\mathbb{A}^{1}\).
First, the case where \(\mathscr{F}\) has rank one follows from Proposition 4.5. Next, let us consider the case of \(\operatorname{rk}(\mathscr{F})>1\). By Theorem 6.1, there exists a sequence of flat bundles
\[\mathscr{F}=\mathscr{F}_{0}\mapsto\mathscr{F}_{1}\mapsto\cdots\mapsto\mathscr{F} _{n} \tag{121}\]
(\(n\geq 1\)) on \(U\) such that \(\mathscr{F}_{m+1}\) (for each \(m=0,\cdots,n-1\)) is obtained from \(\mathscr{F}_{m}\) by a middle convolution with a parameter in \(\mathbb{Q}\setminus\mathbb{Z}\), possibly after tensoring with a globally convergent flat bundle of rank one as in Theorem 6.1 (i.e., \(\mathscr{F}_{m+1}:=\operatorname{mc}_{\lambda_{m}}^{0}(\mathscr{M}_{m}\otimes\mathscr{F}_{m})\) for some \(\lambda_{m}\in\mathbb{Q}\setminus\mathbb{Z}\) and some globally convergent rank one flat bundle \(\mathscr{M}_{m}\) on \(U\)), and that \(\mathscr{F}_{n}\) has rank one. Since the operations of taking the middle convolution (cf. Theorem 5.2) and of tensoring with a globally convergent flat bundle of rank one both preserve the property of being a.e. nilpotent, \(\mathscr{F}_{n}\) turns out to be a.e. nilpotent. By the assertion for the rank one case (considered above), we see that \(\mathscr{F}_{n}\) is both globally nilpotent and of type \(G\). According to Theorem 5.2, the inverse operation of each step in (121) (which is again described as a middle convolution, possibly followed by a twist, because \(\operatorname{mc}_{-\lambda}^{0}(\operatorname{mc}_{\lambda}^{0}(-))\cong(-)\)) preserves the property of being globally nilpotent (resp., of type \(G\)). Thus, \(\mathscr{F}\) is verified to be globally nilpotent (resp., of type \(G\)). This completes the proof of this theorem.
### Acknowledgements
The author would like to thank algebraic varieties over \(\overline{\mathbb{Q}}\) for their heartfelt encouragement and constructive comments about the globally inverse radius of a connection. This work was partially supported by Grant-in-Aid for Scientific Research (KAKENHI No. 21K13770).
|
2309.04923 | On improvements of the Hardy, Copson and Rellich inequalities | Using a method of factorization and by introducing a generalized discrete
Dirichlet's Laplacian matrix $(-\Delta_{\Lambda})$, we establish an extended
improved discrete Hardy's inequality and Rellich inequality in one dimension.
We prove that the discrete Copson inequality (E.T. Copson, \emph{Notes on a
series of positive terms}, J. London Math. Soc., 2 (1927), 9-12.) in
one-dimension admits an improvement. We also prove that the improved Copson's
weights are optimal (in fact \emph{critical}). It is shown that improvement of
the Knopp inequalities (Knopp in J. London Math. Soc. 3(1928), 205-211 and
5(1930), 13-21) lies on improvement of the Rellich inequalities. Further, an
improvement of the generalized Hardy's inequality (Hardy in Messenger of Math.
54(1925), 150-156) in a special case is obtained. | Bikram Das, Atanu Manna | 2023-09-10T03:02:56Z | http://arxiv.org/abs/2309.04923v3 | # On improvements of the Hardy, Copson and Rellich inequalities
###### Abstract.
Using a method of factorization and by introducing a generalized discrete Dirichlet Laplacian matrix \((-\Delta_{\Lambda})\), we establish an extended improved discrete Hardy's inequality and Rellich inequality in one dimension. We prove that the discrete Copson inequality (E.T. Copson, _Notes on a series of positive terms_, J. London Math. Soc., 2 (1927), 9-12.) in one dimension admits an improvement. It is shown that improvement of the Knopp inequalities (Knopp in J. London Math. Soc. 3(1928), 205-211 and 5(1930), 13-21) lies on improvement of the Rellich inequalities. Further, an improvement of the generalized Hardy's inequality (Hardy in Messenger of Math. 54(1925), 150-156) in a special case is obtained.
Key words and phrases: Discrete Hardy's inequality; Improvement; Copson's inequality; Rellich inequality; Knopp inequality.
Let \(\{a_{n}\}\) be a sequence of complex numbers and \(\{q_{n}\}\) a sequence of real numbers such that \(q_{n}>0\), and denote \(A_{n}=q_{1}a_{1}+q_{2}a_{2}+\ldots+q_{n}a_{n}\) and \(Q_{n}=q_{1}+q_{2}+\ldots+q_{n}\) for \(n\in\mathbb{N}\). If \(\{\sqrt{q_{n}}a_{n}\}\in\ell_{2}\) then
\[\sum_{n=1}^{\infty}q_{n}Q_{n}^{-2}|A_{n}|^{2}<4\sum_{n=1}^{\infty}q_{n}|a_{n}|^ {2}, \tag{1.3}\]
unless all \(a_{n}\) are null. Also, the constant '4' is sharp.
E. T. Copson [2] further introduced and studied an extended version of inequality (1.3) as below. Let \(1<c\leq 2\). Then
\[\sum_{n=1}^{\infty}q_{n}Q_{n}^{-c}|A_{n}|^{2}\leq\Big{(}\frac{2}{c-1}\Big{)}^{ 2}\sum_{n=1}^{\infty}q_{n}Q_{n}^{2-c}|a_{n}|^{2}, \tag{1.4}\]
where the associated constant is best possible, and equality holds only when all \(a_{n}\) are \(0\).
It is observed that the constant '\(\frac{1}{4}\)' in (1.2) is best possible but the whole weight '\(\frac{1}{4n^{2}}\)' is not, and this surprising discovery is due to Keller, Pinchover and Pogorzelski [16] (see also [17]), who proved the following inequality
\[\sum_{n=1}^{\infty}|A_{n}-A_{n-1}|^{2}\geq\sum_{n=1}^{\infty}w_{n}|A_{n}|^{2} >\frac{1}{4}\sum_{n=1}^{\infty}\frac{|A_{n}|^{2}}{n^{2}}, \tag{1.5}\]
where the weight sequence \(w_{n}\), for which the inequality (1.2) is improved, is defined as follows:
\[w_{n}=\frac{(-\Delta)n^{\frac{1}{2}}}{n^{\frac{1}{2}}}=2-\sqrt{1-\frac{1}{n}}- \sqrt{1+\frac{1}{n}}>\frac{1}{4n^{2}},\,n\in\mathbb{N},\]
and the discrete Dirichlet Laplacian operator \((-\Delta)\) acting on the sequence \(\{A_{n}\}\) was introduced (see [4]) as below:
\[((-\Delta)A)_{n}=\left\{\begin{array}{ll}2A_{0}-A_{1}&\mbox{if $n=0$},\\ 2A_{n}-A_{n-1}-A_{n+1}&\mbox{if $n\in\mathbb{N}$}.\end{array}\right.\]
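As a quick numerical illustration (not part of the original argument), the following short Python sketch evaluates the improved weight \(w_{n}\) above and compares it with the classical weight \(\frac{1}{4n^{2}}\) for the first few indices.

```python
def hardy_weight(n):
    # w_n = ((-Delta) n^{1/2})_n / n^{1/2}, cf. the display above
    return 2.0 - (1.0 - 1.0/n)**0.5 - (1.0 + 1.0/n)**0.5

for n in range(1, 11):
    lhs, rhs = hardy_weight(n), 1.0/(4.0*n**2)
    print(f"n={n:2d}  w_n={lhs:.8f}  1/(4 n^2)={rhs:.8f}  improved: {lhs > rhs}")
```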
An elementary proof of the inequality (1.5) was given by Krejcirik and Stampach [19], and a different proof of (1.5) was presented by Huang in [14]. By using a sequence \(\mu=\{\mu_{n}\}\) of strictly positive real numbers such that \(\mu_{0}=0\), an extension of the inequality (1.5) was established by Krejcirik, Laptev and Stampach (see Theorem 10, [20]) as provided below
\[\sum_{n=1}^{\infty}|A_{n}-A_{n-1}|^{2}\geq\sum_{n=1}^{\infty}w_{n}(\mu)|A_{n}| ^{2}, \tag{1.6}\]
where \(w_{n}(\mu)\) is given by
\[w_{n}(\mu)=\frac{((-\Delta)\mu)_{n}}{\mu_{n}}=2-\frac{\mu_{n-1}}{\mu_{n}}- \frac{\mu_{n+1}}{\mu_{n}},\,n\in\mathbb{N}.\]
The authors in [20] also obtained a criterion for optimality of \(w_{n}(\mu)\), together with some further results about it. If one chooses \(\alpha=2-c\) and \(q_{n}=1\) for all \(n\in\mathbb{N}\) in (1.4), then one obtains a power-type Hardy inequality
\[\sum_{n=1}^{\infty}|A_{n}-A_{n-1}|^{2}n^{\alpha}\geq\frac{(\alpha-1)^{2}}{4} \sum_{n=1}^{\infty}\frac{|A_{n}|^{2}}{n^{2}}n^{\alpha}. \tag{1.7}\]
In 2022, Gupta [8] studied the improvement of (1.7) and proved that when \(\alpha\in\{0\}\cup[1/3,1)\), the following inequality holds:
\[\sum_{n=1}^{\infty}|A_{n}-A_{n-1}|^{2}n^{\alpha}\geq\sum_{n=1}^{\infty}w_{n}( \alpha,\beta)|A_{n}|^{2}>\frac{(\alpha-1)^{2}}{4}\sum_{n=1}^{\infty}\frac{|A_{ n}|^{2}}{n^{2}}n^{\alpha}, \tag{1.8}\]
where \(\alpha,\beta\in\mathbb{R}\), \(w_{1}(\alpha,\beta):=1+2^{\alpha}-2^{\alpha+\beta}\), and for \(n\geq 2\)
\[w_{n}(\alpha,\beta)=n^{\alpha}\Big{[}1+\Big{(}1+\frac{1}{n}\Big{)}^{\alpha}- \Big{(}1-\frac{1}{n}\Big{)}^{\beta}-\Big{(}1+\frac{1}{n}\Big{)}^{\alpha+\beta }\Big{]},\ \beta=\frac{1-\alpha}{2}.\]
In view of all these recent developments, the authors in [3] have obtained a generalized improved discrete Hardy's inequality in the following form
\[\sum_{n=1}^{\infty}\frac{|A_{n}-A_{n-1}|^{2}}{\lambda_{n}}\geq\sum_{n=1}^{ \infty}w_{n}(\lambda,\mu)|A_{n}|^{2}, \tag{1.9}\]
where \(\lambda=\{\lambda_{n}\}\) is a real sequence such that \(\lambda_{n}>0\), \(n\in\mathbb{N}\). The criterion for optimality of the weight sequence \(w_{n}(\lambda,\mu)\) is also discussed in [3], where \(w_{n}(\lambda,\mu)\) is defined as below
\[w_{n}(\lambda,\mu)=\frac{1}{\lambda_{n}}+\frac{1}{\lambda_{n+1}}-\frac{\mu_{n -1}}{\lambda_{n}\mu_{n}}-\frac{\mu_{n+1}}{\lambda_{n+1}\mu_{n}}.\]
Now if we choose \(q_{n}=n\) for \(n\in\mathbb{N}\) in (1.3), then we get a particular type of generalized Hardy inequality with sharp constant
\[\sum_{n=1}^{\infty}nQ_{n}^{-2}|A_{n}|^{2}<4\sum_{n=1}^{\infty}n|a_{n}|^{2}, \tag{1.10}\]
where \(Q_{n}=\frac{n(n+1)}{2}\). To the best of our knowledge, no point-wise improvement of (1.10) is available in the literature. Therefore, we first start with the following investigation.
_Q(a) Does the generalized Hardy's inequality (1.10) admit an improvement?_
The authors in [3] also studied an improvement of the Copson inequality (1.4) in the case \(q_{n}=n\) and \(\alpha=2-c\), and obtained an improved Copson inequality when \(c=\frac{3}{2}\). However, if we choose \(q_{n}=n^{2}\) and \(c=3/2\) in (1.4), then nothing is known about an improvement of the corresponding Copson inequality stated as below
\[\sum_{n=1}^{\infty}\frac{1}{16}\frac{n^{2}}{\widetilde{S_{n}}^{\frac{3}{2}}}| A_{n}|^{2}\leq\sum_{n=1}^{\infty}\sqrt{\widetilde{S}_{n}}\frac{|A_{n}-A_{n-1}|^{2 }}{n^{2}}, \tag{1.11}\]
where \(\widetilde{S}_{n}=\frac{n(n+1)(2n+1)}{6}\), and \(A_{n}=a_{1}+4a_{2}+\ldots+n^{2}a_{n}\), \(A_{0}=0\). Then it is natural to ask the following question:
_Q(b) Does there exist a weight sequence for which the Copson inequality (1.11) can be improved?_
On the other hand, if we replace \((-\Delta)\) by the bi-Laplacian operator \((-\Delta)^{2}\), then one gets the discrete Rellich inequality [4] in one dimension as below:
\[\sum_{n=2}^{\infty}((-\Delta)^{2}A)_{n}\bar{A}_{n}=\sum_{n=1}^{\infty}|((- \Delta)A)_{n}|^{2}\geq\frac{9}{16}\sum_{n=2}^{\infty}\frac{|A_{n}|^{2}}{n^{4}}, \tag{1.12}\]
where \(A_{0}=0\), \(A_{1}=0\), and the operator \((-\Delta)^{2}\) acting on the sequence \(\{A_{n}\}\) is defined as
\[((-\Delta)^{2}A)_{n}=\left\{\begin{array}{ll}5A_{0}-4A_{1}+A_{2}&\mbox{ if }n=0,\\ -4A_{0}+6A_{1}-4A_{2}+A_{3}&\mbox{ if }n=1,\\ 6A_{n}-4A_{n-1}-4A_{n+1}+A_{n-2}+A_{n+2}&\mbox{ if }n\geq 2.\end{array}\right.\]
The discrete Rellich inequality (1.12) with the sharp constant \(\frac{9}{16}\) (as compared to the continuous analogue (1.13)), together with its improvement, was studied for the first time by Gerhat, Krejcirik, and Stampach in [4]. We note that Gupta [10] (see also [9] for higher dimensions) obtained a similar discrete Rellich inequality (1.12), but with the weaker constant \(\frac{8}{16}\) instead of \(\frac{9}{16}\). The continuous analogue of (1.12) appeared in [23], and reads as follows
\[\int_{0}^{\infty}|f^{\prime\prime}(x)|^{2}dx\geq\frac{9}{16}\int_{0}^{\infty} \frac{|f(x)|^{2}}{x^{4}}dx, \tag{1.13}\]
where \(f\in L^{2}(0,\infty)\) is such that \(f(0)=f^{\prime}(0)=0\), and the associated constant \(\frac{9}{16}\) is best possible. Similar to the case of the point-wise improvement of (1.1), Gerhat, Krejcirik and Stampach [4] showed that there exists a weight sequence \(\rho_{n}^{(2)}\) for which the whole weight \(\frac{9}{16n^{4}}\) in (1.12) can be improved. In fact, the authors in [4] proved that
\[\sum_{n=1}^{\infty}|((-\Delta)A)_{n}|^{2}\geq\sum_{n=2}^{\infty}\rho_{n}^{(2) }\frac{|A_{n}|^{2}}{n^{4}}>\frac{9}{16}\sum_{n=2}^{\infty}\frac{|A_{n}|^{2}}{ n^{4}}, \tag{1.14}\]
where improved Rellich weights \(\rho_{n}^{(2)}\) for \(n\geq 2\) is defined as follows:
\[\rho_{n}^{(2)}=\frac{(-\Delta)^{2}n^{\frac{3}{2}}}{n^{\frac{3}{2}}}=6-4\Big{(} 1+\frac{1}{n}\Big{)}^{3/2}-4\Big{(}1-\frac{1}{n}\Big{)}^{3/2}+\Big{(}1+\frac{ 2}{n}\Big{)}^{3/2}+\Big{(}1-\frac{2}{n}\Big{)}^{3/2}>\frac{9}{16n^{4}}.\]
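For the reader's convenience, the following short Python sketch (purely illustrative) evaluates \(\rho_{n}^{(2)}\) from the display above and compares it with the classical Rellich weight \(\frac{9}{16n^{4}}\) for small \(n\).

```python
def rellich_weight(n):
    # rho_n^{(2)} from the display above, for n >= 2
    return (6.0 - 4.0*(1.0 + 1.0/n)**1.5 - 4.0*(1.0 - 1.0/n)**1.5
            + (1.0 + 2.0/n)**1.5 + (1.0 - 2.0/n)**1.5)

for n in range(2, 12):
    lhs, rhs = rellich_weight(n), 9.0/(16.0*n**4)
    print(f"n={n:2d}  rho_n={lhs:.10f}  9/(16 n^4)={rhs:.10f}  improved: {lhs > rhs}")
```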
It is shown that the Rellich weight \(\rho_{n}^{(2)}\) is not critical but is non-attainable and optimal near infinity. In a more recent preprint, the authors in [5] investigated the criticality and subcriticality of the positive powers of the discrete Laplacian \((-\Delta)\) on the half-line. The Rellich inequality is studied in various domains such as in graph setting (see [15], [18]), on integers and lattices (see [9], [10]). To establish the inequality (1.14), the authors considered a method of factorization from [6] (see also [7]) and a remainder approach (see [19]) of proving the inequality (1.5). At the end, a conjecture was given for the higher order Rellich inequalities (see Section 4, [4]). Indeed, it is conjectured that
\[\sum_{n=1}^{\infty}|((-\Delta)^{\alpha}A)_{n}|^{2}=\sum_{n=\alpha}^{\infty}((- \Delta)^{\alpha}A)_{n}\bar{A}_{n}\geq\sum_{n=\alpha}^{\infty}\rho_{n}^{( \alpha)}|A_{n}|^{2}>\sum_{n=1}^{\infty}\frac{((2\alpha)!)^{2}}{16^{\alpha}( \alpha!)^{2}}\frac{1}{n^{2\alpha}}|A_{n}|^{2}, \tag{1.15}\]
where \(A_{n}=0\) for \(0\leq n\leq\alpha-1\) and higher order Rellich weight is given as below:
\[\rho_{n}^{(\alpha)}=\frac{((-\Delta)^{\alpha}\mu^{(\alpha)})_{n}}{\mu_{n}^{( \alpha)}},\,\mu_{n}^{(\alpha)}=n^{\alpha-\frac{1}{2}}.\]
With the above discussion in mind, one may be interested to know the answer to the following query:
_Q(c) Is it possible to extend the Rellich inequality (1.14) to an arbitrary sequence \(\{\lambda_{n}\}\) (say)?_
While surveying the literature on Hardy-type inequalities, we found the Knopp inequality [21] (see also [1]), which states that for \(\alpha\geq 1\) the following sharp inequality
\[\sum_{n=1}^{\infty}\Big{(}\frac{1}{{n-1+\alpha\choose n-1}}\sum_{k=1}^{n}{n- k+\alpha-1\choose n-k}|a_{k}|\Big{)}^{2}<\Big{(}\frac{\Gamma(\alpha+1)\Gamma( \frac{1}{2})}{\Gamma(\alpha+\frac{1}{2})}\Big{)}^{2}\sum_{n=1}^{\infty}|a_{n} |^{2}, \tag{1.16}\]
holds unless all \(a_{n}\) are null. Note that in the case \(\alpha=1\), the Knopp inequality (1.16) is nothing but the classical discrete Hardy's inequality (1.1). We now have another question, which will also be investigated in the sequel.
_Q(d) Can the Knopp inequality (1.16) be improved?_
Therefore, the main aim of this paper is to answer all the queries (Q(a) to Q(d)) raised above. With this aim, we first define an elongated discrete Dirichlet Laplacian \((-\Delta_{\Lambda})\) acting on a sequence \(\{A_{n}\}\), and establish the corresponding improved discrete Hardy's inequality by using a factorization technique as considered in [4] (see also [6], [7]). As an application of this result, we give an answer to Question (a). In response to Question (b), we prove that there exists a weight sequence for which an improvement of the Copson inequality (1.11) is possible. Later, we define the square of the Dirichlet Laplacian, through which a generalized improved Rellich inequality is established, and thereby answer Question (c). Finally, we give an affirmative answer to Question (d), that is, we prove that the Knopp inequality (1.16) admits an improvement. In particular, it is proved that the improvement of the Knopp inequality rests on the improvement of the Rellich inequalities.
The paper is arranged as follows. In Section 2, we establish an extended improved discrete Hardy's inequality in one dimension, and improve a generalized Hardy inequality in a particular case. Section 3 deals with the improvement of the Copson inequality. In Section 4, we establish an extended improved discrete Rellich inequality in one dimension for an arbitrary sequence \(\{\lambda_{n}\}\) by using a method of factorization. The validity of the remainder term is also established by proving the existence of such \(\{\lambda_{n}\}\). Finally, Section 5 demonstrates that the Knopp inequality can be improved via the Rellich inequality.
## 2. Further extension of improved Hardy's inequality
Before proving the main results of this section, we first define an elongated version of the Dirichlet Laplacian \((-\Delta)\). For this, suppose that \(\{\lambda_{n}\}\) is a strictly positive sequence of real numbers, and denote \(\Lambda_{n}=\lambda_{1}+\ldots+\lambda_{n}\), \(n\in\mathbb{N}\). Suppose further that \(1<c\leq 2\) and \(A=\{A_{n}\}\in C_{c}(\mathbb{N}_{0})\). The discrete Dirichlet Laplacian operator \((-\Delta_{\Lambda})\) acting on \(A\) is defined as below:
\[((-\Delta_{\Lambda})A)_{n}=\left\{\begin{array}{ll}(\frac{\Lambda_{0}^{2-c}} {\lambda_{0}}+\frac{\Lambda_{1}^{2-c}}{\lambda_{1}})A_{0}-\frac{\Lambda_{1}^{2 -c}}{\lambda_{1}}A_{1}&\mbox{if $n=0$,}\\ (\frac{\Lambda_{n+1}^{2-c}}{\lambda_{n+1}}+\frac{\Lambda_{n}^{2-c}}{\lambda_{n }})A_{n}-\frac{\Lambda_{n}^{2-c}}{\lambda_{n}}A_{n-1}-\frac{\Lambda_{n+1}^{2- c}}{\lambda_{n+1}}A_{n+1}&\mbox{if $n\in\mathbb{N}$,}\end{array}\right.\]
where we assume that \(\Lambda_{0}=\lambda_{0}\) is a strictly positive real number.
It is easy to observe that the corresponding infinite matrix representation of \((-\Delta_{\Lambda})\) acting on \(A=\{A_{n}\}\in C_{c}(\mathbb{N})\) has the following form:
\[(-\Delta_{\Lambda})=\begin{pmatrix}\left(\frac{\Lambda_{1}^{2-c}}{\lambda_{1}} +\frac{\Lambda_{2}^{2-c}}{\lambda_{2}}\right)&-\frac{\Lambda_{2}^{2-c}}{ \lambda_{2}}&0&0&\cdots\\ -\frac{\Lambda_{2}^{2-c}}{\lambda_{2}}&\left(\frac{\Lambda_{2}^{2-c}}{\lambda _{2}}+\frac{\Lambda_{3}^{2-c}}{\lambda_{3}}\right)&-\frac{\Lambda_{3}^{2-c}}{ \lambda_{3}}&0&\cdots\\ 0&-\frac{\Lambda_{3}^{2-c}}{\lambda_{3}}&\left(\frac{\Lambda_{3}^{2-c}}{ \lambda_{3}}+\frac{\Lambda_{4}^{2-c}}{\lambda_{4}}\right)&-\frac{\Lambda_{4}^ {2-c}}{\lambda_{4}}&\cdots\\ \vdots&\vdots&\vdots&\ddots\end{pmatrix}\]
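The following Python sketch builds a finite truncation of this tridiagonal matrix for a given \(\lambda=\{\lambda_{n}\}\) and \(1<c\leq 2\); it is a minimal illustration of the definition above, assuming \(A_{0}=0\) so that the matrix acts on \((A_{1},\ldots,A_{N})\). For \(\lambda_{n}\equiv 1\) and \(c=2\) it reduces to the classical matrix of \((-\Delta)\).

```python
import numpy as np

def dirichlet_laplacian_Lambda(lam, c):
    """Finite N x N truncation of (-Delta_Lambda) acting on (A_1, ..., A_N), assuming A_0 = 0.
    Here lam = (lambda_1, ..., lambda_{N+1}) and 1 < c <= 2."""
    lam = np.asarray(lam, dtype=float)
    N = len(lam) - 1
    Lam = np.cumsum(lam)                  # Lambda_n = lambda_1 + ... + lambda_n
    r = Lam**(2.0 - c) / lam              # r_n = Lambda_n^{2-c} / lambda_n
    M = np.zeros((N, N))
    for i in range(N):                    # row i corresponds to n = i + 1
        M[i, i] = r[i] + r[i + 1]
        if i + 1 < N:
            M[i, i + 1] = M[i + 1, i] = -r[i + 1]
    return M

# With lambda_n = 1 and c = 2 this reduces to the classical tridiagonal matrix of (-Delta).
print(dirichlet_laplacian_Lambda([1, 1, 1, 1, 1], c=2.0))
```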
In the following, we establish an extended improved discrete Hardy's inequality for a large class of sequences by using a factorization technique as considered in [4]. In what follows, we denote \(E=\mathrm{diag}\{\eta_{1},\eta_{2},\ldots\}\). Then we have the following result.
**Theorem 2.1**.: _Let \(A=\{A_{n}\}\) be any sequence of complex numbers such that \(A_{n}\in C_{c}(\mathbb{N})\) with \(A_{0}=0\) and \(\mu=\{\mu_{n}\}\) be any strictly positive sequence of real numbers. Then we have the following identity:_
\[A(-\Delta_{\Lambda})\bar{A}^{T}=AE\bar{A}^{T}+A(\overline{R}_{\Lambda}^{(1)}{} ^{T}R_{\Lambda}^{(1)})\bar{A}^{T}, \tag{2.1}\]
_where the sequence \(\eta=\{\eta_{n}\}\) is defined as below:_
\[\eta_{n}=\frac{((-\Delta_{\Lambda})\mu)_{n}}{\mu_{n}}=\frac{\Lambda_{n}^{2-c} }{\lambda_{n}}+\frac{\Lambda_{n+1}^{2-c}}{\lambda_{n+1}}-\frac{\mu_{n-1} \Lambda_{n}^{2-c}}{\lambda_{n}\mu_{n}}-\frac{\mu_{n+1}\Lambda_{n+1}^{2-c}}{ \lambda_{n+1}\mu_{n}}\]
_and \(R_{\Lambda}^{(1)}\) has the following matrix representation:_
\[R_{\Lambda}^{(1)}=\begin{pmatrix}\sqrt{\frac{\mu_{2}\Lambda_{2}^{2-c}}{\mu_{ 1}\lambda_{2}}}&-\sqrt{\frac{\mu_{1}\Lambda_{2}^{2-c}}{\mu_{2}\lambda_{2}}}&0& 0&0&\cdots\\ 0&\sqrt{\frac{\mu_{3}\Lambda_{3}^{2-c}}{\mu_{2}\lambda_{3}}}&-\sqrt{\frac{\mu _{2}\Lambda_{3}^{2-c}}{\mu_{3}\lambda_{3}}}&0&0&\cdots\\ 0&0&\sqrt{\frac{\mu_{4}\Lambda_{4}^{2-c}}{\mu_{3}\lambda_{4}}}&-\sqrt{\frac{ \mu_{3}\Lambda_{4}^{2-c}}{\mu_{4}\lambda_{4}}}&0&\cdots\\ \vdots&\vdots&\vdots&\vdots&\ddots\end{pmatrix}.\]
Proof.: It is observed that the operator \((-\Delta_{\Lambda})-E\) has a tri-diagonal matrix representation. The method of factorization enables us to write \((-\Delta_{\Lambda})-E\) as a decomposition \(\overline{R}_{\Lambda}^{(1)}{}^{T}R_{\Lambda}^{(1)}\) (see [4], [6] and [7]). To establish the identity (2.1), it only remains to determine the exact expression for \(R_{\Lambda}^{(1)}\), since \((-\Delta_{\Lambda})\) and \(E\) are known to us. We claim that the remainder term \(R_{\Lambda}^{(1)}\) acting on the sequence \(\{A_{n}\}\) is of the following form:
\[(R_{\Lambda}^{(1)}A)_{n}=\Big{(}\sqrt{\frac{\mu_{n+1}}{\mu_{n}\lambda_{n+1}}}A _{n}-\sqrt{\frac{\mu_{n}}{\mu_{n+1}\lambda_{n+1}}}A_{n+1}\Big{)}\sqrt{\Lambda _{n+1}^{2-c}}.\]
We suppose that the identity (2.1), that is
\[\sum_{n=1}^{\infty}\Big{|}\frac{A_{n-1}-A_{n}}{\sqrt{\lambda_{n}}}\Big{|}^{2} \Lambda_{n}^{2-c}=\sum_{n=1}^{\infty}\eta_{n}|A_{n}|^{2}+\sum_{n=1}^{\infty}|(R _{\Lambda}^{(1)}A)_{n}|^{2} \tag{2.2}\]
holds, where we assume that
\[(R_{\Lambda}^{(1)}A)_{n}=\Big{(}\frac{\alpha_{n}A_{n}}{\sqrt{\lambda_{n+1}}}- \frac{A_{n+1}}{\alpha_{n}\sqrt{\lambda_{n+1}}}\Big{)}\sqrt{\Lambda_{n+1}^{2-c }}. \tag{2.3}\]
Plugging the assumed value of \((R_{\Lambda}^{(1)}A)_{n}\) from (2.3) into (2.2), simplifying, and comparing the coefficients, we get a recurrence relation as below:
\[\alpha_{1}^{2}=\frac{\mu_{2}}{\mu_{1}},\]
and
\[\frac{\mu_{n-1}\Lambda_{n}^{2-c}}{\mu_{n}\lambda_{n}}+\frac{\mu_{n+1}\Lambda_ {n+1}^{2-c}}{\mu_{n}\lambda_{n+1}}=\frac{\alpha_{n}^{2}\Lambda_{n+1}^{2-c}}{ \lambda_{n+1}}+\frac{\Lambda_{n}^{2-c}}{\alpha_{n-1}^{2}\lambda_{n}}\ \forall\ n\geq 2\]
Using the value of \(\alpha_{1}^{2}\), we get from above the value \(\alpha_{2}^{2}\), and so on. In fact, we have
\[\alpha_{n}=\sqrt{\frac{\mu_{n+1}}{\mu_{n}}}\ \forall n\in\mathbb{N}.\]
Inserting the value of \(\alpha_{n}\) in (2.3), we get the exact expression for \(R_{\Lambda}^{(1)}\), and this proves our claim.
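The identity (2.2) can also be checked numerically. The sketch below (illustrative only) evaluates both sides for randomly chosen \(\lambda\), \(\mu\) and a finitely supported sequence \(A\), using the convention \(\mu_{0}=0\) adopted for (1.6).

```python
import numpy as np

rng = np.random.default_rng(0)
K, c = 8, 1.7                                     # support size and exponent (arbitrary choices)
lam = rng.uniform(0.5, 2.0, K + 2)                # lambda_1, ..., lambda_{K+2} > 0
mu = np.concatenate(([0.0], rng.uniform(0.5, 2.0, K + 2)))    # mu_0 = 0, mu_1, mu_2, ... > 0
A = np.concatenate(([0.0], rng.normal(size=K), [0.0, 0.0]))   # A_0 = 0 and A_n = 0 for n > K
Lam = np.cumsum(lam)

Lm = lambda n: Lam[n - 1]                         # Lambda_n
lm = lambda n: lam[n - 1]                         # lambda_n

lhs = sum(abs(A[n - 1] - A[n])**2 / lm(n) * Lm(n)**(2 - c) for n in range(1, K + 2))
eta = [Lm(n)**(2 - c)/lm(n) + Lm(n + 1)**(2 - c)/lm(n + 1)
       - mu[n - 1]*Lm(n)**(2 - c)/(lm(n)*mu[n])
       - mu[n + 1]*Lm(n + 1)**(2 - c)/(lm(n + 1)*mu[n]) for n in range(1, K + 1)]
R2 = [Lm(n + 1)**(2 - c)/lm(n + 1)
      * (np.sqrt(mu[n + 1]/mu[n])*A[n] - np.sqrt(mu[n]/mu[n + 1])*A[n + 1])**2
      for n in range(1, K + 1)]
rhs = sum(e*A[n]**2 for n, e in enumerate(eta, start=1)) + sum(R2)
print(lhs, rhs, np.isclose(lhs, rhs))
```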
An important application of Theorem 2.1 is the following improvement of the generalized discrete Hardy's inequality (1.3) in a particular case. In fact, when one chooses \(q_{n}=n\) in (1.3) then we get
\[\sum_{n=1}^{\infty}\frac{|A_{n}|^{2}}{n(n+1)^{2}}\leq\sum_{n=1}^{\infty}n|a_{n }|^{2}. \tag{2.4}\]
Surprisingly, we observed that the inequality (2.4) admits an improvement. Indeed, we have the following result.
Corollary 2.1.: _Suppose that \(A\in C_{c}(\mathbb{N}_{0})\) with \(A_{0}=0\). Then the following inequality holds:_
\[\sum_{n=1}^{\infty}\frac{|A_{n}|^{2}}{n(n+1)^{2}}<\sum_{n=1}^{\infty}\eta_{n} |A_{n}|^{2}\leq\sum_{n=1}^{\infty}n|a_{n}|^{2}, \tag{2.5}\]
_where \(\eta_{n}=\frac{1}{n^{2}(n+1)}>\frac{1}{n(n+1)^{2}}\), \(n\in\mathbb{N}\)._
Proof.: Let us put \(c=2\) in (2.2), we get
\[\sum_{n=1}^{\infty}\frac{\big{|}A_{n-1}-A_{n}\big{|}^{2}}{\lambda_{n}}\geq\sum _{n=1}^{\infty}\eta_{n}\big{|}A_{n}\big{|}^{2}, \tag{2.6}\]
where \(A_{n}=\lambda_{1}a_{1}+\ldots+\lambda_{n}a_{n}\). Now, choosing \(\lambda_{n}=n\) in (2.6), and \(\mu_{n}=n\), \(c=2\) in \(\eta_{n}\) of Theorem 2.1, one gets
\[\sum_{n=1}^{\infty}n\big{|}a_{n}\big{|}^{2}\geq\sum_{n=1}^{\infty}\eta_{n}|A_{n }|^{2}>\sum_{n=1}^{\infty}\frac{|A_{n}|^{2}}{n(n+1)^{2}}. \tag{2.7}\]
Hence the proof.
We have another result (stated without proof) from Theorem 2.1 in the case \(c=2\). Indeed, when one chooses \(c=2\), the operator \((-\Delta_{\Lambda})\) reduces to the operator \((-\Delta_{\lambda})\), and we replace \(R_{\Lambda}^{(1)}\) by \(R_{\lambda}^{(1)}\). The following corollary demonstrates this result (see Theorem 2.1, [3]).
Corollary 2.2.: _Let \(A\in C_{c}(\mathbb{N}_{0})\) be such that \(A_{0}=0\). Suppose further that \(\mu=\{\mu_{n}\}\) is any strictly positive sequence of real numbers and \(S=\text{diag}\{\sigma_{1},\sigma_{2},\ldots\}\). Then we have_
\[A(-\Delta_{\lambda})\bar{A}^{T}=AS\bar{A}^{T}+A(\overline{R}_{\lambda}^{(1)}{} ^{T}R_{\lambda}^{(1)})\bar{A}^{T},\]
_which means the following inequality holds:_
\[\sum_{n=1}^{\infty}\sigma_{n}|A_{n}|^{2}\leq\sum_{n=1}^{\infty}\frac{|A_{n}-A_ {n-1}|^{2}}{\lambda_{n}}, \tag{2.8}\]
_where_
\[\sigma_{n}=\frac{((-\Delta_{\lambda})\mu)_{n}}{\mu_{n}}=\frac{1}{\lambda_{n}} +\frac{1}{\lambda_{n+1}}-\frac{\mu_{n-1}}{\lambda_{n}\mu_{n}}-\frac{\mu_{n+1} }{\lambda_{n+1}\mu_{n}}.\]
## 3. Improvement of Copson inequality
In this section, we prove that the Copson inequality (1.4) admits an improvement in the special case of \(c=\frac{3}{2}\). Indeed, we establish an improved Copson inequality (1.11). The statement of our result is given as follows.
**Theorem 3.1**.: _Let \(\{A_{n}\}\) be a sequence of complex numbers such that \(A_{0}=0\). Then we have_
\[\sum_{n=1}^{\infty}\frac{1}{16}\frac{n^{2}}{\widetilde{S}_{n}^{\frac{3}{2}}}| A_{n}|^{2}<\sum_{n=1}^{\infty}\widetilde{V}_{n}|A_{n}|^{2}\leq\sum_{n=1}^{ \infty}\sqrt{\widetilde{S}_{n}}\frac{|A_{n}-A_{n-1}|^{2}}{n^{2}}, \tag{3.1}\]
_where the sequence \(\widetilde{V}_{n}\) is defined as below:_
\[\widetilde{V}_{n}=\frac{((-\Delta_{\lambda})n^{3/4})}{n^{3/4}}=\frac{\sqrt{ \widetilde{S}_{n}}}{n^{2}}+\frac{\sqrt{\widetilde{S}_{n+1}}}{(n+1)^{2}}-\frac{ \sqrt{\widetilde{S}_{n}}}{n^{2}}\Big{(}1-\frac{1}{n}\Big{)}^{\frac{3}{4}}- \frac{\sqrt{\widetilde{S}_{n+1}}}{(n+1)^{2}}\Big{(}1+\frac{1}{n}\Big{)}^{ \frac{3}{4}},\]
_with \(\lambda=\{\lambda_{n}\}\), \(\lambda_{n}=\frac{n^{2}}{\sqrt{\widetilde{S}_{n}}}\), and \(\widetilde{S}_{n}=\frac{n(n+1)(2n+1)}{6}\)._
To prove this theorem, we require a sequence of lemmas as provided below. Before stating these lemmas, we denote the following:
\[F(n) =n^{\frac{1}{2}}(2n+3)^{2}\big{(}n^{\frac{3}{4}}-(n-1)^{\frac{3}{4} }\big{)}+(2n+1)^{3/2}\{(n+2)(2n+3)\}^{\frac{1}{2}}\big{(}n^{\frac{3}{4}}-(n+1)^ {\frac{3}{4}}\big{)},\] \[G(n) =\Big{\{}n^{\frac{1}{2}}(2n+3)^{2}\Big{(}\frac{3}{4n^{\frac{1}{4} }}+\frac{3}{32n^{\frac{5}{4}}}\Big{)}+(2n+1)^{3/2}\{(n+2)(2n+3)\}^{\frac{1}{2}} \Big{(}\frac{3}{32n^{\frac{5}{4}}}-\frac{3}{4n^{\frac{1}{4}}}\Big{)}\Big{\}}.\]
Lemma 3.1.: _For any \(n\in\mathbb{N}\), we have \(G(n)>\frac{36}{16}n^{\frac{5}{4}}\)._
Proof.: A direct computation of the following difference gives
\[G(n)-\frac{36}{16}n^{\frac{5}{4}}=\frac{3}{32n^{5/4}}H(n),\]
where
\[H(n) =\sqrt{n}(32n^{3}+76n^{2}+84n+9)-(2n+1)(8n-1)\{(n+2)(2n+1)(2n+3) \}^{\frac{1}{2}}\] \[=\sqrt{n}\Big{\{}(32n^{3}+76n^{2}+84n+9)-\frac{(2n+1)(8n-1)}{n}\{ n(n+2)(2n+1)(2n+3)\}^{\frac{1}{2}}\Big{\}}\] \[>\sqrt{n}\Big{\{}(32n^{3}+76n^{2}+84n+9)-\frac{(4n+2)(8n-1)}{n}(n +1)^{2}\Big{\}}\ (\text{by A.M.-G.M. inequality})\] \[=\frac{1}{\sqrt{n}}(30n^{2}+n+2)>0.\]
This proves the Lemma.
Lemma 3.2.: _Let \(n\in\mathbb{N}\). Then we have the following inequality:_
\[F(n)>G(n).\]
Proof.: We have
\[F(n)=n^{3/4}\Big{\{}n^{\frac{1}{2}}(2n+3)^{2}\big{(}1-(1-\frac{1}{n})^{\frac{3 }{4}}\big{)}+(2n+1)^{3/2}\sqrt{(n+2)(2n+3)}\big{(}1-(1+\frac{1}{n})^{\frac{3}{ 4}}\big{)}\Big{\}}\]
Expanding \(F(n)\) as an infinite series, we have
\[F(n) =\sum_{k=1}^{\infty}(-1)^{k+1}\binom{\frac{3}{4}}{k}\frac{n^{ \frac{3}{4}}}{n^{k}}\Big{[}n^{\frac{1}{2}}(2n+3)^{2}+(-1)^{k}(2n+1)^{3/2}\sqrt {(n+2)(2n+3)}\Big{]}\] \[=n^{\frac{3}{4}}\Big{[}n^{\frac{1}{2}}(2n+3)^{2}-(2n+1)^{3/2} \sqrt{(n+2)(2n+3)}\Big{]}\sum_{k\in\text{odd}}\binom{\frac{3}{4}}{k}\frac{1}{ n^{k}}\] \[\qquad\ \ \ +n^{\frac{3}{4}}\Big{[}n^{\frac{1}{2}}(2n+3)^{2}+(2n+1)^{3/2} \sqrt{(n+2)(2n+3)}\Big{]}\sum_{k\in\text{even}}-\binom{\frac{3}{4}}{k}\frac{1 }{n^{k}}.\]
Note that \(\binom{\frac{3}{4}}{k}<0\) for all even \(k\in\mathbb{N}\), and \(\binom{\frac{3}{4}}{k}>0\) for all odd \(k\in\mathbb{N}\). Also, since \(n(2n+3)^{3}>(n+2)(2n+1)^{3}\) holds for any \(n\in\mathbb{N}\), one gets
\[n^{\frac{1}{2}}(2n+3)^{2}>(2n+1)^{3/2}\sqrt{(n+2)(2n+3)},\]
which says that all terms of the R.H.S. of \(F(n)\) are positive. Hence from above, we get
\[F(n) >\sum_{k=1}^{2}(-1)^{k+1}\binom{\frac{3}{4}}{k}\frac{n^{\frac{3}{4} }}{n^{k}}\Big{[}n^{\frac{1}{2}}(2n+3)^{2}+(-1)^{k}(2n+1)^{3/2}\sqrt{(n+2)(2n+3) }\Big{]}\] \[=\Big{\{}n^{\frac{1}{2}}(2n+3)^{2}\Big{(}\frac{3}{4n^{\frac{1}{4} }}+\frac{3}{32n^{\frac{5}{4}}}\Big{)}+(2n+1)^{3/2}\sqrt{(n+2)(2n+3)}\Big{(} \frac{3}{32n^{\frac{5}{4}}}-\frac{3}{4n^{\frac{1}{4}}}\Big{)}\Big{\}}\] \[=G(n).\]
This proves the Lemma.
Lemma 3.3.: _Suppose that \(n\in\mathbb{N}\). Then we have_
\[\widetilde{V}_{n}>\frac{1}{16}\frac{n^{2}}{\tilde{S}_{n}^{\frac{3}{2}}}.\]
Proof.: Let us choose \(f(n)=\widetilde{V}_{n}-\frac{1}{16}\frac{n^{2}}{\tilde{S}_{n}^{\frac{3}{2}}}\). It is now enough to prove that \(f(n)>0\). Observe that \(f(n)\) can be written in the following simplified form.
\[f(n) =\frac{\{n(n+1)(2n+1)\}^{\frac{1}{2}}}{\sqrt{6}n^{2}}+\frac{\{(n+ 1)(n+2)(2n+3)\}^{\frac{1}{2}}}{\sqrt{6}(n+1)^{2}}-\frac{\{n(n+1)(2n+1)\}^{ \frac{1}{2}}}{\sqrt{6}n^{2}}\Big{(}1-\frac{1}{n}\Big{)}^{\frac{3}{4}}\] \[\quad-\frac{\{(n+1)(n+2)(2n+3)\}^{\frac{1}{2}}}{\sqrt{6}(n+1)^{2} }\Big{(}1+\frac{1}{n}\Big{)}^{\frac{3}{4}}-\frac{1}{16}\frac{6\sqrt{6}n^{2}}{ \Big{(}n(n+1)(2n+1)\Big{)}^{\frac{3}{2}}}\] \[=\frac{n^{2}(n+1)^{2}R(n)}{16\sqrt{6}n^{\frac{11}{4}}(n+1)^{2} \big{(}n(n+1)(2n+1)\big{)}^{\frac{3}{2}}},\]
where \(R(n)\) is given as follows:
\[R(n) =16(2n^{2}+3n+1)^{2}\big{(}n^{\frac{3}{4}}-(n-1)^{\frac{3}{4}} \big{)}+16n^{\frac{9}{4}}(2n+1)(4n^{3}+16n^{2}+19n+6)^{\frac{1}{2}}\] \[\quad\quad-16(n+1)^{\frac{1}{4}}(2n^{2}+n)^{\frac{3}{2}}(2n^{3}+9 n^{2}+13n+6)^{\frac{1}{2}}-36n^{\frac{11}{4}}\] \[=16(2n^{2}+3n+1)^{2}\big{(}n^{\frac{3}{4}}-(n-1)^{\frac{3}{4}} \big{)}+16n^{\frac{3}{2}}(2n+1)^{\frac{3}{2}}\sqrt{(n+2)(2n+3)}\big{(}n^{\frac {3}{4}}-(n+1)^{\frac{3}{4}}\big{)}\] \[\quad\quad-36n^{\frac{11}{4}}\] \[>16(2n^{2}+3n)^{2}\big{(}n^{\frac{3}{4}}-(n-1)^{\frac{3}{4}} \big{)}+16n^{\frac{3}{2}}(2n+1)^{\frac{3}{2}}\sqrt{(n+2)(2n+3)}\big{(}n^{ \frac{3}{4}}-(n+1)^{\frac{3}{4}}\big{)}\] \[\quad\quad-36n^{\frac{11}{4}}\] \[=16n^{\frac{3}{2}}\Big{(}F(n)-\frac{36}{16}n^{\frac{5}{4}}\Big{)}\] \[>16n^{\frac{3}{2}}\Big{(}G(n)-\frac{36}{16}n^{\frac{5}{4}}\Big{)} \text{ (by Lemma \ref{lem:2})}\] \[>0\text{ (by Lemma \ref{lem:3.1}).}\]
Hence \(f(n)>0\), and this establishes the desired result.
Proof of Theorem 3.1.: The proof of this result is an immediate consequence of our result stated in Corollary 2.2. In fact, when we choose \(\lambda_{n}=\frac{n^{2}}{\sqrt{\tilde{S}_{n}}}\) and \(\mu_{n}=n^{\frac{3}{4}}\), then the corresponding weight sequence \(\sigma_{n}\) becomes \(\widetilde{V}_{n}\) as given above, and hence the R.H.S. of inequality (3.1) is proved. To establish the L.H.S. of inequality (3.1), it is sufficient to prove that for all \(n\in\mathbb{N}\)
\[\widetilde{V}_{n}>\frac{1}{16}\frac{n^{2}}{\tilde{S}_{n}^{\frac{3}{2}}}\text { holds},\]
which immediately follows from Lemma 3.3. This completes the proof.
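A simple numerical check of the strict improvement \(\widetilde{V}_{n}>\frac{1}{16}\frac{n^{2}}{\widetilde{S}_{n}^{3/2}}\) established in Lemma 3.3 can be carried out as in the following illustrative Python sketch.

```python
def S(n):
    # S_n tilde = n(n+1)(2n+1)/6
    return n*(n + 1)*(2*n + 1)/6.0

def V(n):
    # improved Copson weight V_n tilde from Theorem 3.1
    return (S(n)**0.5/n**2 + S(n + 1)**0.5/(n + 1)**2
            - S(n)**0.5/n**2*(1.0 - 1.0/n)**0.75
            - S(n + 1)**0.5/(n + 1)**2*(1.0 + 1.0/n)**0.75)

for n in range(1, 11):
    lhs, rhs = V(n), n**2/(16.0*S(n)**1.5)
    print(f"n={n:2d}  V_n={lhs:.8f}  n^2/(16 S_n^(3/2))={rhs:.8f}  improved: {lhs > rhs}")
```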
To avoid complications and lengthy computations, we will choose \(c=2\) in the rest of the work. We note that the operator \((-\Delta_{\Lambda})\) then becomes \((-\Delta_{\lambda})\), and we consider the square of the Dirichlet Laplacian operator \((-\Delta_{\lambda})^{2}\) to obtain extended improved discrete Rellich inequalities in one dimension.
## 4. Extension of improved Rellich inequality
We have already mentioned in Section 1 that, similar to the case of the discrete Hardy inequality, the discrete Rellich inequality (1.12) can be improved pointwise (see [4] for details). In this section, we define a generalized bi-Laplacian operator \((-\Delta_{\lambda})^{2}\) acting on \(A\in C_{c}(\mathbb{N}_{0})\) such that \(A_{0}=A_{1}=0\), and obtain an extended improved discrete Rellich inequality in one dimension.
The operator \((-\Delta_{\lambda})^{2}\) acting on the sequence \(A\) is defined as below:
\[((-\Delta_{\lambda})^{2}A)_{n}\] \[=\left\{\begin{array}{ll}\Big{(}\Big{(}\frac{1}{\lambda_{0}}+ \frac{1}{\lambda_{1}})^{2}+\frac{1}{\lambda_{1}^{2}}\Big{)}A_{0}-\Big{(}\frac{ 2}{\lambda_{1}^{2}}+\frac{1}{\lambda_{0}\lambda_{1}}+\frac{1}{\lambda_{1} \lambda_{2}}\Big{)}A_{1}+\frac{1}{\lambda_{1}\lambda_{2}}A_{2}&\text{if }n=0,\\ -\Big{(}\frac{2}{\lambda_{1}^{2}}+\frac{1}{\lambda_{0}\lambda_{1}}+\frac{1}{ \lambda_{1}\lambda_{2}}\Big{)}A_{0}+\Big{(}(\frac{1}{\lambda_{1}}+\frac{1}{ \lambda_{2}})^{2}+\frac{1}{\lambda_{1}^{2}}+\frac{1}{\lambda_{2}^{2}}\Big{)}A _{1}&\\ -\Big{(}\frac{2}{\lambda_{2}^{2}}+\frac{1}{\lambda_{1}\lambda_{2}}+\frac{1}{ \lambda_{2}\lambda_{3}}\Big{)}A_{2}+\frac{1}{\lambda_{2}\lambda_{3}}A_{3}& \text{if }n=1,\\ \Big{(}(\frac{1}{\lambda_{n}}+\frac{1}{\lambda_{n+1}})^{2}+\frac{1}{\lambda_{n }^{2}}+\frac{1}{\lambda_{n+1}^{2}}\Big{)}A_{n}-\Big{(}\frac{2}{\lambda_{n}^{2 }}+\frac{1}{\lambda_{n}\lambda_{n-1}}+\frac{1}{\lambda_{n}\lambda_{n+1}}\Big{)} A_{n-1}&\\ -\Big{(}\frac{2}{\lambda_{n+1}^{2}}+\frac{1}{\lambda_{n}\lambda_{n+1}}+\frac{1 }{\lambda_{n+1}\lambda_{n+2}}\Big{)}A_{n+1}+\frac{1}{\lambda_{n}\lambda_{n-1} }A_{n-2}+\frac{1}{\lambda_{n+1}\lambda_{n+2}}A_{n+2}&\text{if }n\geq 2.\end{array}\right.\]
Using the bi-Laplacian operator \((-\Delta_{\lambda})^{2}\) and a sequence \(\mu=\{\mu_{n}\}\) of strictly positive of real numbers, we define an extended Rellich weight sequence \(\sigma_{n}^{(2)}\) for \(n\geq 2\) as below:
\[\sigma_{n}^{(2)}=\frac{((-\Delta_{\lambda})^{2}\mu)_{n}}{\mu_{n}}.\]
By choosing \(\delta_{n}=\frac{1}{\lambda_{n}}\), and replacing \((-\Delta_{\lambda})^{2}\) by \((-\Delta_{\delta})^{2}\) the Rellich weight has reduced to the following form:
\[\sigma_{n}^{(2)} =\frac{((-\Delta_{\delta})^{2}\mu)_{n}}{\mu_{n}}\] \[=\Big{(}(\delta_{n}+\delta_{n+1})^{2}+\delta_{n}^{2}+\delta_{n+1} ^{2}\Big{)}-\frac{\mu_{n-1}}{\mu_{n}}\Big{(}2\delta_{n}^{2}+\delta_{n}\delta_{ n-1}+\delta_{n}\delta_{n+1}\Big{)}\] \[-\frac{\mu_{n+1}}{\mu_{n}}\Big{(}2\delta_{n+1}^{2}+\delta_{n} \delta_{n+1}+\delta_{n+1}\delta_{n+2}\Big{)}+\frac{\mu_{n-2}}{\mu_{n}}\delta_ {n}\delta_{n-1}+\frac{\mu_{n+2}}{\mu_{n}}\delta_{n+1}\delta_{n+2}.\]
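The reduced form of \(\sigma_{n}^{(2)}\) is straightforward to evaluate numerically; the sketch below (illustrative, with 1-based index callables for \(\delta\) and \(\mu\)) computes it and, for \(\delta_{n}=1\) and \(\mu_{n}=n^{3/2}\), recovers the weight \(\rho_{n}^{(2)}\) of (1.14).

```python
def sigma2(delta, mu, n):
    """sigma_n^{(2)} for n >= 2, following the reduced form above;
    delta and mu are 1-based callables, with delta_k = 1/lambda_k."""
    d, m = delta, mu
    return ((d(n) + d(n + 1))**2 + d(n)**2 + d(n + 1)**2
            - m(n - 1)/m(n)*(2*d(n)**2 + d(n)*d(n - 1) + d(n)*d(n + 1))
            - m(n + 1)/m(n)*(2*d(n + 1)**2 + d(n)*d(n + 1) + d(n + 1)*d(n + 2))
            + m(n - 2)/m(n)*d(n)*d(n - 1)
            + m(n + 2)/m(n)*d(n + 1)*d(n + 2))

# For delta_n = 1 (i.e. lambda_n = 1) and mu_n = n^{3/2}, this recovers rho_n^{(2)} of (1.14).
delta = lambda k: 1.0
mu = lambda k: float(k)**1.5
for n in range(2, 8):
    print(n, sigma2(delta, mu, n), 9.0/(16.0*n**4))
```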
We are now ready to establish the extended version of the discrete Rellich inequality in one dimension. Denote \(\Sigma=\mathrm{diag}\{\sigma_{1}^{(2)},\sigma_{2}^{(2)},\ldots\}\). Then we have the following theorem, the statement of which is as follows:
**Theorem 4.1**.: _Suppose that \(A\in C_{c}(\mathbb{N}_{0})\) such that \(A_{0}=A_{1}=0\). Then there exists a remainder matrix \(R_{\delta}^{(2)}\) for which the following identity holds:_
\[A(-\Delta_{\delta})^{2}\bar{A}^{T}=A\Sigma\bar{A}^{T}+A(\overline{R}_{\delta}^ {(2)}{}^{T}R_{\delta}^{(2)})\bar{A}^{T}, \tag{4.1}\]
_which means the following inequality is true:_
\[\sum_{n=2}^{\infty}((-\Delta_{\delta})^{2}A)_{n}\bar{A}_{n}=\sum_{n=1}^{\infty }|(-\Delta_{\delta}A)_{n}|^{2}\geq\sum_{n=1}^{\infty}\sigma_{n}^{(2)}|A_{n}|^{2} \tag{4.2}\]
Proof.: To prove inequality (4.2), we assume that the above identity (4.1) holds true for unknown \(R_{\delta}^{(2)}\):
\[\sum_{n=1}^{\infty}|(-\Delta_{\delta}A)_{n}|^{2}=\sum_{n=1}^{\infty}\sigma_{n} ^{(2)}|A_{n}|^{2}+\sum_{n=1}^{\infty}|(R_{\delta}^{(2)}A)_{n}|^{2}, \tag{4.3}\]
where it is supposed that \(R_{\delta}^{(2)}\) acting on the sequence \(\{A_{n}\}\) has the following form:
\[(R_{\delta}^{(2)}A)_{n}=\gamma_{n}\sqrt{\delta_{n+1}\delta_{n+2}}A_{n}-\beta_ {n}\sqrt{\delta_{n+1}}A_{n+1}+\frac{\sqrt{\delta_{n+1}\delta_{n+2}}}{\gamma_{ n}}A_{n+2},\ n\in\mathbb{N}, \tag{4.4}\]
where \(\beta_{n},\gamma_{n}\in\mathbb{R}\) and \(\gamma_{n}\neq 0\) for \(n\in\mathbb{N}\). Our aim is to determine the coefficients \(\beta_{n},\gamma_{n}\) such that the identity (4.3) is satisfied. To reach this goal, we compute each term of the identity (4.3). Throughout our investigation in this section, it is assumed that \(A_{0}=A_{1}=0\).
Let us start with the evaluation of the following infinite sum.
\[\sum_{n=1}^{\infty}|((-\Delta_{\delta})A)_{n}|^{2}\] \[=\sum_{n=1}^{\infty}|(\delta_{n}+\delta_{n+1})A_{n}-A_{n-1}\delta_{ n}-A_{n+1}\delta_{n+1}|^{2}\] \[=\sum_{n=2}^{\infty}\Big{[}(\delta_{n}+\delta_{n+1})^{2}+\delta_{ n}^{2}+\delta_{n+1}^{2}\Big{]}|A_{n}|^{2}-2\mathbb{R}\sum_{n=2}^{\infty}\Big{(} \delta_{n+1}^{2}+\delta_{n+1}\delta_{n+2}\Big{)}\bar{A}_{n+1}A_{n}\] \[-2\mathbb{R}\sum_{n=2}^{\infty}\Big{(}\delta_{n}\delta_{n+1}+ \delta_{n+1}^{2}\Big{)}\bar{A}_{n+1}A_{n}+2\mathbb{R}\sum_{n=2}^{\infty}\delta _{n+1}\delta_{n+2}A_{n}\bar{A}_{n+2}.\]
Now the following computation gives
\[\sum_{n=1}^{\infty}|(R_{\delta}^{(2)}A)_{n}|^{2}\] \[=\sum_{n=1}^{\infty}|\gamma_{n}\sqrt{\delta_{n+1}\delta_{n+2}}A_{ n}-\beta_{n}\sqrt{\delta_{n+1}}A_{n+1}+\frac{\sqrt{\delta_{n+1}\delta_{n+2}}}{ \gamma_{n}}A_{n+2}|^{2}\] \[=\sum_{n=1}^{\infty}\Big{[}\gamma_{n}^{2}\delta_{n+1}\delta_{n+2 }|A_{n}|^{2}+\beta_{n}^{2}\delta_{n+1}|A_{n+1}|^{2}+\frac{\delta_{n+1}\delta_{ n+2}}{\gamma_{n}^{2}}|A_{n+2}|^{2}-2\mathbb{R}\Big{(}\gamma_{n}\beta_{n}\sqrt{ \delta_{n+2}}\delta_{n+1}A_{n}\bar{A}_{n+1}\Big{)}\] \[-2\mathbb{R}\Big{(}\frac{\beta_{n}}{\gamma_{n}}\delta_{n+1}\sqrt {\delta_{n+2}}A_{n+1}\bar{A}_{n+2}\Big{)}+2\mathbb{R}\Big{(}\delta_{n+1} \delta_{n+2}A_{n}\bar{A}_{n+2}\Big{)}\Big{]}\] \[=\Big{(}\gamma_{2}^{2}\delta_{4}\delta_{3}+\beta_{1}^{2}\delta_{ 2}\Big{)}|A_{2}|^{2}+\sum_{n=3}^{\infty}\Big{[}\delta_{n+1}\delta_{n+2}\gamma_ {n}^{2}+\beta_{n-1}^{2}\delta_{n}+\frac{1}{\gamma_{n-2}^{2}}\delta_{n-1} \delta_{n}\Big{]}|A_{n}|^{2}\] \[-2\mathbb{R}\sum_{n=2}^{\infty}\Big{(}\gamma_{n}\beta_{n}\delta_ {n+1}\sqrt{\delta_{n+2}}+\frac{\beta_{n-1}}{\gamma_{n-1}}\delta_{n}\sqrt{ \delta_{n+1}}\Big{)}A_{n}\bar{A}_{n+1}+2\mathbb{R}\sum_{n=2}^{\infty}A_{n}\bar {A}_{n+2}\delta_{n+1}\delta_{n+2}.\]
Plugging these values into (4.3) and comparing the coefficients on both sides, we get a set of equations as prescribed below:
\[(\delta_{2}+\delta_{3})^{2}+\delta_{2}^{2}+\delta_{3}^{2}=\sigma _{2}^{(2)}+\gamma_{2}^{2}\delta_{3}\delta_{4}+\beta_{1}^{2}\delta_{2}\] \[(\delta_{n+1}+\delta_{n})^{2}+\delta_{n}^{2}+\delta_{n+1}^{2}= \sigma_{n}^{(2)}+\gamma_{n}^{2}\delta_{n+1}\delta_{n+2}+\beta_{n-1}^{2}\delta_ {n}+\frac{1}{\gamma_{n-2}^{2}}\delta_{n-1}\delta_{n}\ \forall n\geq 3 \tag{4.5}\] \[2\delta_{n+1}^{2}+\delta_{n}\delta_{n+1}+\delta_{n+1}\delta_{n+2 }=\gamma_{n}\beta_{n}\delta_{n+1}\sqrt{\delta_{n+2}}+\frac{\beta_{n-1}}{ \gamma_{n-1}}\delta_{n}\sqrt{\delta_{n+1}}\ \forall n\geq 2.\]
As we have seen in the case of the discrete Hardy inequality (2.3), the remainder term \(R_{\lambda}^{(1)}\) vanishes on \(\{\mu_{n}\}\), that is, \((R_{\lambda}^{(1)}\mu)_{n}=0\). Therefore, in the case of the discrete Rellich inequality we additionally assume that the remainder term \(R_{\delta}^{(2)}\) also vanishes when acting on \(\{\mu_{n}\}\), that is, \((R_{\delta}^{(2)}\mu)_{n}=0\), which means that
\[\gamma_{n}\mu_{n}\sqrt{\delta_{n+1}\delta_{n+2}}-\beta_{n}\mu_{n+1}\sqrt{\delta_{n +1}}+\frac{\mu_{n+2}}{\gamma_{n}}\sqrt{\delta_{n+1}\delta_{n+2}}=0,\ n\in \mathbb{N}. \tag{4.6}\]
Computing \(\beta_{n}\) from (4.6) and using it in (4.5), we get a recurrence relation of \(\gamma_{n}^{2}\) as given below
\[\gamma_{n}^{2} =\frac{1}{\delta_{n+1}\delta_{n+2}}\frac{\mu_{n+1}}{\mu_{n}} \Big{[}2\delta_{n+1}^{2}+\delta_{n}\delta_{n+1}+\delta_{n+1}\delta_{n+2}\] \[-\frac{\mu_{n+2}}{\mu_{n+1}}\delta_{n+1}\delta_{n+2}-\frac{\mu_{ n-1}}{\mu_{n}}\delta_{n}\delta_{n+1}-\frac{\mu_{n+1}}{\mu_{n}\gamma_{n-1}^{2}} \delta_{n}\delta_{n+1}\Big{]},\ n\geq 2. \tag{4.7}\]
We now choose an initial assumption for determining \(\gamma_{n}^{2}\), \(n\geq 2\) as
\[\gamma_{1}^{2}=\frac{\mu_{2}}{\mu_{1}}\Big{[}2\frac{\delta_{2}}{\delta_{3}}+ \frac{\delta_{1}}{\delta_{3}}+1\Big{]}-\frac{\mu_{3}}{\mu_{1}} \tag{4.8}\]
Using the value of \(\gamma_{1}^{2}\) and the recurrence relation (4.7), one obtains the values of \(\gamma_{n}^{2}\) for \(n\geq 2\). Once we have the positive values of \(\{\gamma_{n}^{2}\}\), we also get the values of \(\{\beta_{n}\}\) from equation (4.6). Hence we obtain a valid expression for the remainder term \((R_{\delta}^{(2)}A)_{n}\) from (4.4).
It is clear that an exact expression for \(\{\gamma_{n}^{2}\}\) is very difficult to obtain, but its existence follows from the preceding investigation. Our next result shows that such \(\{\gamma_{n}^{2}\}\) indeed exist for suitable \(\delta_{n}\) and \(\mu_{n}\). It is pertinent to mention here that Gerhat et al. [4] also established this kind of existence for \(\{\gamma_{n}^{2}\}\), but their result is a particular case of our result for \(\delta_{n}=1\) (or \(\lambda_{n}=1\)), \(n\in\mathbb{N}\). Here we prove that \(\{\gamma_{n}^{2}\}\) exists for a special choice of \(\delta_{n}\) and lies within the same bounds as provided in [4]. Our result reads as follows.
Lemma 4.1.: _Let \(n\in\mathbb{N}\), \(p_{n}=\frac{\mu_{n+1}}{\mu_{n}}\), \(\mu_{n}=n^{3/2}\) and \(\{\gamma_{n}^{2}\}\), \(\{\gamma_{1}^{2}\}\) are given in equations (4.7) and (4.8), respectively with \(\delta_{n}=\frac{n+2}{n+1}\). Then_
\[p_{n}p_{n+1}<\gamma_{n}^{2}<p_{n}p_{n+1}p_{n+2}.\]
To establish this lemma, we need some results, which are given in the form of several Lemmas. Before proceeding further, we denote the following:
\[T(n)=\Big{[}1+\frac{2(n+3)^{2}}{(n+2)(n+4)}+\frac{(n+3)(n+2)}{(n+4)(n+1)}- \frac{2(n+3)(n+2)}{(n+4)(n+1)}(\frac{n-1}{n})^{\frac{3}{2}}-2(\frac{n+2}{n+1} )^{\frac{3}{2}}\Big{]}. \tag{4.9}\]
Lemma 4.2.: _Suppose that \(n\geq 2\), \(n\in\mathbb{N}\). Then \(T(n)>0\), where \(T(n)\) is given in the equation (4.9)._
Proof.: A direct simplification of \(T(n)\) gives the following:
\[T(n)= \frac{1}{(n+4)(n+2)(n^{2}+n)^{\frac{3}{2}}}\Big{[}\sqrt{n^{2}+n} \Big{(}4n^{4}+28n^{3}+60n^{2}+38n\Big{)}\] \[-2\sqrt{n^{2}-1}\Big{(}n^{4}+6n^{3}+9n^{2}-4n-12\Big{)}-2n\sqrt{n ^{2}+2n}\Big{(}n^{3}+8n^{2}+20n+16\Big{)}\Big{]}.\]
Applying the A.M.-G.M.-H.M. inequality, one gets
\[T(n) >\frac{1}{(n+4)(n+2)(2n+1)(n^{2}+n)^{3/2}}\Big{[}2n(n+1)\Big{(}4n^{4 }+28n^{3}+60n^{2}+38n\Big{)}\] \[\quad-2n(2n+1)\Big{(}n^{4}+6n^{3}+9n^{2}-4n-12\Big{)}-2n\sqrt{(n^{ 2}+2n)}(2n+1)\Big{(}n^{3}+8n^{2}+20n+16\Big{)}\Big{]}\] \[= \frac{2n}{(n+4)(n+2)(2n+1)(n^{2}+n)^{3/2}}\Big{[}\Big{(}2n^{5}+19n ^{4}+64n^{3}+97n^{2}+66n+12\Big{)}\] \[\quad-\sqrt{(n^{2}+2n)}\Big{(}2n^{4}+17n^{3}+48n^{2}+52n+16\Big{)} \Big{]}\] \[= \Big{[}\frac{2n\Big{(}18n^{7}+230n^{6}+1164n^{5}+3001n^{4}+4196n ^{3}+3100n^{2}+1072n+144\Big{)}}{(n+4)(n+2)(2n+1)(n^{2}+n)^{3/2}}\Big{]}D_{n} >0,\]
where
\[D_{n}=\frac{1}{\Big{(}2n^{5}+19n^{4}+64n^{3}+97n^{2}+66n+12\Big{)}+\sqrt{(n^{ 2}+2n)}\Big{(}2n^{4}+17n^{3}+48n^{2}+52n+16\Big{)}}.\]
Hence \(T(n)>0\), \(\forall n\geq 2\).
**Lemma 4.3**.: _Let \(n\geq 2\) be a natural number, and let \(U(n)\) be as defined in the proof of Lemma 4.1 below. Then \(U(n)<p_{n+1}p_{n+2}\)._
Proof.: We denote by \(S(n)\) the following difference
\[S(n) =U(n)-p_{n+1}p_{n+2}\] \[=\Big{[}2\frac{(n+3)^{2}}{(n+2)(n+4)}+\frac{(n+3)(n+2)}{(n+4)(n+1 )}+1-\frac{(n+3)(n+2)}{(n+4)(n+1)}(\frac{n-1}{n})^{\frac{3}{2}}(\frac{n+1}{n+2 })^{\frac{3}{2}}\] \[\quad-\frac{(n+3)(n+2)}{(n+4)(n+1)}(\frac{n-1}{n})^{\frac{3}{2}} -(\frac{n+2}{n+1})^{\frac{3}{2}}-(\frac{n+3}{n+1})^{\frac{3}{2}}\Big{]}\]
Simplifying \(S(n)\) as a fraction, we obtain a numerator \(f(n)\) (say) as below:
\[f(n) =\Big{[}2(n+3)^{2}n(n+1)\sqrt{n(n+1)(n+2)}+n(n+3)(n+2)^{2}\sqrt{ n(n+1)(n+2)}\] \[\quad+n(n+1)(n+2)(n+4)\sqrt{n(n+1)(n+2)}-n(n+2)^{3}(n+4)\sqrt{n}\] \[\quad-(n+3)(n+2)(n^{2}-1)(n+1)\sqrt{n-1}-n(n+2)(n+3)(n+4)\sqrt{n(n +2)(n+3)}\] \[\quad-(n+3)(n+2)^{2}(n-1)\sqrt{(n+1)(n+2)(n-1)}\Big{]},\]
which further becomes
\[f(n) =\sqrt{n(n+1)(n+2)}(4n^{4}+28n^{3}+60n^{2}+38n)\] \[\quad-\sqrt{n(n+2)(n+3)}(n^{4}+9n^{3}+26n^{2}+24n)\] \[\quad-\sqrt{(n-1)(n+1)(n+2)}(n^{4}+6n^{3}+9n^{2}-4n-12)\] \[\quad-\sqrt{n}(n^{5}+10n^{4}+36n^{3}+56n^{2}+32n)-\sqrt{n-1}(n^{5} +6n^{4}+10n^{3}-11n-6).\]
We need to show that \(f(n)<0\), which gives \(S(n)<0\), and hence we get the desired inequality. Now by applying A.M.-G.M.-H.M. inequality \(\forall n\geq 2\), we have
\[(i) \sqrt{(n+1)(n+2)}<\frac{2n+3}{2}\] \[(ii) \frac{2(n+2)(n+3)}{2n+5}<\sqrt{(n+2)(n+3)}\] \[(iii) \frac{2(n+1)(n+2)}{2n+3}<\sqrt{(n+1)(n+2)}.\]
Using these inequalities, we get
\[(2n+3)(2n+5)f(n)<(2n+3)\sqrt{n}\Big{(}4n^{6}+35n^{5}+98n^{4}+58n^ {3}-142n^{2}-163n\Big{)}\] \[\quad\quad-(2n+5)\sqrt{n-1}\Big{(}4n^{6}+33n^{5}+96n^{4}+100n^{3} -34n^{2}-133n-66\Big{)}.\]
Since the sum of the two terms of R.H.S. of the above inequality is positive, so multiplying the same in both sides of above, we get
\[(2n+3)(2n+5)\Big{[}(2n+3)\sqrt{n}\Big{(}4n^{6}+35n^{5}+98n^{4}+58 n^{3}-142n^{2}-163n\Big{)}\] \[\quad+(2n+5)\sqrt{n-1}\Big{(}4n^{6}+33n^{5}+96n^{4}+100n^{3}-34n^ {2}-133n-66\Big{)}\Big{]}f(n)\] \[<\Big{(}-192n^{13}-3252n^{12}-22956n^{11}-84779n^{10}-159528n^{9} -73848n^{8}+267088n^{7}\] \[\quad+428550n^{6}-75848n^{5}-621720n^{4}-273984n^{3}+396949n^{2}+ 417120n+108900\Big{)}\] \[<0.\]
This shows that \(f(n)<0\). Hence \(S(n)<0\), which finishes the proof of the lemma.
Proof of Lemma 4.1.:
It is observed that when \(\mu_{n}=n^{\frac{3}{2}}\) then from equation (4.7), we have
\[\gamma_{n}^{2} =(\frac{n+1}{n})^{\frac{3}{2}}\Big{[}1+2\frac{(n+3)^{2}}{(n+2)(n+ 4)}+\frac{(n+3)(n+2)}{(n+4)(n+1)}-\frac{(n+3)(n+2)}{(n+4)(n+1)}(\frac{n-1}{n} )^{\frac{3}{2}}\] \[-(\frac{n+2}{n+1})^{\frac{3}{2}}-\frac{(n+2)(n+3)}{(n+4)(n+1) \gamma_{n-1}^{2}}(\frac{n+1}{n})^{\frac{3}{2}}\Big{]}.\]
Also \(\gamma_{1}^{2}=2\sqrt{2}\big{[}2\frac{16}{15}+\frac{12}{10}+1\Big{]}-3\sqrt{3 }\approx 7.06036>5.19616=p_{1}p_{2}\). Now suppose that \(\gamma_{n-1}^{2}>p_{n-1}p_{n}\) holds true for \(n\geq 2\). Then by induction from the above expression for \(\gamma_{n}^{2}\), we get
\[\gamma_{n}^{2} >(\frac{n+1}{n})^{\frac{3}{2}}\Big{[}1+2\frac{(n+3)^{2}}{(n+2)(n +4)}+\frac{(n+3)(n+2)}{(n+4)(n+1)}-2\frac{(n+3)(n+2)}{(n+4)(n+1)}(\frac{n-1}{n })^{\frac{3}{2}}-(\frac{n+2}{n+1})^{\frac{3}{2}}\Big{]}\] \[=(\frac{n+1}{n})^{\frac{3}{2}}T(n),\]
where \(T(n)\) is given in (4.9). It is now sufficient to show that \(T(n)>0\) for all \(n\geq 2\). From Lemma 4.2, we see that \(T(n)>0\) for all \(n\geq 2\). This proves that \(\gamma_{n}^{2}>p_{n}p_{n+1}\)\(\forall n\in\mathbb{N}\).
To establish the reverse inequality, we first note that \(\gamma_{1}^{2}\approx 7.06036<8=p_{1}p_{2}p_{3}\). Let us suppose that \(\gamma_{n-1}^{2}<p_{n-1}p_{n}p_{n+1}\) holds \(\forall n\geq 2\), \(n\in\mathbb{N}\). Then by induction, we get
\[\gamma_{n}^{2} <(\frac{n+1}{n})^{\frac{3}{2}}\Big{[}1+2\frac{(n+3)^{2}}{(n+2)(n +4)}+\frac{(n+3)(n+2)}{(n+4)(n+1)}-\frac{(n+3)(n+2)}{(n+4)(n+1)}(\frac{n-1}{n} )^{\frac{3}{2}}(\frac{n+1}{n+2})^{\frac{3}{2}}\] \[-\frac{(n+3)(n+2)}{(n+4)(n+1)}(\frac{n-1}{n})^{\frac{3}{2}}-( \frac{n+2}{n+1})^{\frac{3}{2}}\Big{]}\] \[=p_{n}U(n),\]
where
\[U(n) =\Big{[}1+2\frac{(n+3)^{2}}{(n+2)(n+4)}+\frac{(n+3)(n+2)}{(n+4)(n +1)}-\frac{(n+3)(n+2)}{(n+4)(n+1)}(\frac{n-1}{n})^{\frac{3}{2}}(\frac{n+1}{n+2 })^{\frac{3}{2}}\] \[-\frac{(n+3)(n+2)}{(n+4)(n+1)}(\frac{n-1}{n})^{\frac{3}{2}}-( \frac{n+2}{n+1})^{\frac{3}{2}}\Big{]}.\]
It is now enough to prove that \(U(n)<p_{n+1}p_{n+2}\) for \(n\geq 2\), which follows from Lemma 4.3. Therefore \(\gamma_{n}^{2}<p_{n}p_{n+1}p_{n+2}\), \(\forall n\in\mathbb{N}\). This completes the proof of the lemma.
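The recursion (4.7) with the initial value (4.8), and the bounds of Lemma 4.1, can also be checked numerically for the choice \(\delta_{n}=\frac{n+2}{n+1}\), \(\mu_{n}=n^{3/2}\), as in the following illustrative Python sketch.

```python
# Check of the recursion (4.7) with initial value (4.8) and of the bounds of Lemma 4.1,
# for delta_n = (n+2)/(n+1) and mu_n = n^{3/2}; purely illustrative.
delta = lambda n: (n + 2.0) / (n + 1.0)
mu = lambda n: float(n)**1.5
p = lambda n: mu(n + 1) / mu(n)

gamma2 = mu(2)/mu(1)*(2*delta(2)/delta(3) + delta(1)/delta(3) + 1.0) - mu(3)/mu(1)   # gamma_1^2
print(f"n= 1  gamma^2={gamma2:.5f}  bounds=({p(1)*p(2):.5f}, {p(1)*p(2)*p(3):.5f})")
for n in range(2, 12):
    # on the right-hand side below, gamma2 still holds gamma_{n-1}^2
    gamma2 = (mu(n + 1)/mu(n)/(delta(n + 1)*delta(n + 2))
              * (2*delta(n + 1)**2 + delta(n)*delta(n + 1) + delta(n + 1)*delta(n + 2)
                 - mu(n + 2)/mu(n + 1)*delta(n + 1)*delta(n + 2)
                 - mu(n - 1)/mu(n)*delta(n)*delta(n + 1)
                 - mu(n + 1)/(mu(n)*gamma2)*delta(n)*delta(n + 1)))
    lo, hi = p(n)*p(n + 1), p(n)*p(n + 1)*p(n + 2)
    print(f"n={n:2d}  gamma^2={gamma2:.5f}  bounds=({lo:.5f}, {hi:.5f})  ok: {lo < gamma2 < hi}")
```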
Remark 4.4.: There exist infinitely many sequences for which \(\gamma_{n}^{2}\) exists and lies within the same bounds, that is, \(p_{n}p_{n+1}<\gamma_{n}^{2}<p_{n}p_{n+1}p_{n+2}\) for all \(n\in\mathbb{N}\).
## 5. Improvement of Knopp inequality via Rellich inequality
In this section, our goal is to establish an improvement of the Knopp inequality (1.16).
### Knopp inequality of order \(\alpha=2\)
Let us substitute \(\alpha=2\) in (1.16), then the Knopp inequality of order \(2\) reads as below
\[\sum_{n=1}^{\infty}\Big{(}\frac{1}{n(n+1)}\sum_{k=1}^{n}(n-k+1)|a_{k}|\Big{)}^ {2}<\frac{16}{9}\sum_{n=1}^{\infty}|a_{n}|^{2}. \tag{5.1}\]
Now we define a backward difference operator \(\nabla\) acting on the sequence \(\{A_{n}\}\) as \((\nabla A)_{n}=A_{n}-A_{n-1}\), so that \((\nabla^{2}A)_{n}=-2A_{n-1}+A_{n-2}+A_{n}\) with \(A_{0}=0\) and all terms with negative subscripts equal to zero. With this notation, and by choosing \(A_{n}=\sum_{k=1}^{n}(n-k+1)|a_{k}|\), inequality (5.1) takes the following form:
\[\sum_{n=1}^{\infty}\frac{|A_{n}|^{2}}{n^{2}(n+1)^{2}}<\frac{16}{9}\sum_{n=1}^ {\infty}|(\nabla^{2}A)_{n}|^{2}. \tag{5.2}\]
Then we have the following result.
Lemma 5.1.: _The improvement of the Rellich inequality implies the improvement of the Knopp inequality of order \(2\)._
Proof.: Suppose that the improvement of the Rellich inequality holds, i.e., inequality (1.14) is satisfied as below:
\[\sum_{n=1}^{\infty}|((-\Delta)A)_{n}|^{2}\geq\sum_{n=2}^{\infty}\rho_{n}^{(2)} \frac{|A_{n}|^{2}}{n^{4}}>\frac{9}{16}\sum_{n=2}^{\infty}\frac{|A_{n}|^{2}}{n^{ 4}},\]
where \(A_{0}=A_{1}=0\), and \(\rho_{n}^{(2)}\) is the improved Rellich weight. On the other hand, with this assumption we have observed that
\[\sum_{n=1}^{\infty}|(\nabla^{2}A)_{n}|^{2}=\sum_{n=1}^{\infty}|2A_{n-1}-A_{n-2 }-A_{n}|^{2}=\sum_{n=1}^{\infty}|((-\Delta)A)_{n}|^{2}.\]
Hence from inequality (5.2), one gets
\[\sum_{n=1}^{\infty}|(\nabla^{2}A)_{n}|^{2}=\sum_{n=1}^{\infty}|((-\Delta)A)_{ n}|^{2}\geq\sum_{n=2}^{\infty}\rho_{n}^{(2)}\frac{|A_{n}|^{2}}{n^{4}}>\frac{9}{1 6}\sum_{n=2}^{\infty}\frac{|A_{n}|^{2}}{n^{4}}\geq\frac{9}{16}\sum_{n=1}^{ \infty}\frac{|A_{n}|^{2}}{n^{2}(n+1)^{2}}.\]
This proves the lemma.
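The index-shift identity \(\sum_{n}|(\nabla^{2}A)_{n}|^{2}=\sum_{n}|((-\Delta)A)_{n}|^{2}\) used in the proof (under the assumption \(A_{0}=A_{1}=0\) stated above) can be verified numerically as in the following illustrative sketch.

```python
import numpy as np

# With A_0 = A_1 = 0 and all negative indices zero, the two quadratic sums coincide
# for a finitely supported sequence A; purely illustrative.
rng = np.random.default_rng(1)
K = 10
A = np.zeros(K + 4)                  # indices 0, 1, ..., K+3; A_0 = A_1 = 0, A_n = 0 for n > K
A[2:K + 1] = rng.normal(size=K - 1)

lap2 = sum((A[n] - 2*A[n - 1] + (A[n - 2] if n >= 2 else 0.0))**2 for n in range(1, K + 3))
lap = sum((2*A[n] - A[n - 1] - A[n + 1])**2 for n in range(1, K + 3))
print(lap2, lap, np.isclose(lap2, lap))
```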
### Knopp inequality of order \(\alpha\geq 1\)
Similar to the previous discussion, we choose
\[A_{n}=\sum_{k=1}^{n}\binom{n-k+\alpha-1}{n-k}|a_{k}|,\]
and define backward difference operator of order \(\alpha\) as
\[(\nabla^{\alpha}A)_{n}=\sum_{k=1}^{n}(-1)^{k+1}\binom{\alpha}{k-1}A_{n-k+1}.\]
We then have the higher order Knopp inequality (1.16) stated as below
\[\sum_{n=1}^{\infty}\frac{|A_{n}|^{2}}{\left\{\binom{n-1+\alpha}{n-1}\right\} ^{2}}<\Big{\{}\frac{\Gamma(\alpha+1)\Gamma(\frac{1}{2})}{\Gamma(\alpha+\frac{ 1}{2})}\Big{\}}^{2}\sum_{n=1}^{\infty}|\nabla^{\alpha}A_{n}|^{2}. \tag{5.3}\]
Then we have the following result.
Lemma 5.2.: _The improvement of the higher order Knopp inequalities rests on the improvement of the higher order Rellich inequalities._
Proof.: Suppose that the inequality (1.15) holds true with the assumption \(A_{n}=0\) for all \(n=0,1,\ldots,\alpha-1\), and for all negative subscripts \(n\). Then, after a careful investigation, it is observed that for even \(\alpha\geq 2\)
\[\sum_{n=\alpha}^{\infty}|\nabla^{\alpha}A_{n}|^{2}=\sum_{n=\frac{\alpha}{2}} ^{\infty}|((-\Delta)^{\frac{\alpha}{2}}A)_{n}|^{2} \tag{5.4}\]
and for odd \(\alpha\geq 2\), we have
\[\sum_{n=\alpha}^{\infty}|\nabla^{\alpha}A_{n}|^{2}=\sum_{n=\frac{\alpha+1}{2} }^{\infty}|\nabla((-\Delta)^{\frac{\alpha-1}{2}}A)_{n}|^{2}. \tag{5.5}\]
On the other hand, a straightforward calculation suggests that for any \(\alpha\in\mathbb{N}\), the following identity holds.
\[\sum_{n=\alpha}^{\infty}|\nabla^{\alpha}A_{n}|^{2}=\sum_{n=\alpha}^{\infty}((- \Delta)^{\alpha}A)_{n}\bar{A}_{n}. \tag{5.6}\]
Therefore, we conclude that
\[\sum_{n=\alpha}^{\infty}|\nabla^{\alpha}A_{n}|^{2} =\sum_{n=\alpha}^{\infty}((-\Delta)^{\alpha}A)_{n}\bar{A}_{n}\] \[\geq\sum_{n=\alpha}^{\infty}\rho_{n}^{(\alpha)}|A_{n}|^{2}>\sum_{ n=\alpha}^{\infty}\frac{((2\alpha)!)^{2}}{16^{\alpha}(\alpha!)^{2}}\frac{1}{n^{2 \alpha}}|A_{n}|^{2}\geq\Big{\{}\frac{\Gamma(\alpha+\frac{1}{2})}{\Gamma( \alpha+1)\Gamma(\frac{1}{2})}\Big{\}}^{2}\sum_{n=\alpha}^{\infty}\frac{|A_{n} |^{2}}{\Big{\{}\binom{n-1+\alpha}{n-1}\Big{\}}^{2}}.\]
Hence the lemma.
|
2309.11490 | Flow Annealed Kalman Inversion for Gradient-Free Inference in Bayesian
Inverse Problems | For many scientific inverse problems we are required to evaluate an expensive
forward model. Moreover, the model is often given in such a form that it is
unrealistic to access its gradients. In such a scenario, standard Markov Chain
Monte Carlo algorithms quickly become impractical, requiring a large number of
serial model evaluations to converge on the target distribution. In this paper
we introduce Flow Annealed Kalman Inversion (FAKI). This is a generalization of
Ensemble Kalman Inversion (EKI), where we embed the Kalman filter updates in a
temperature annealing scheme, and use normalizing flows (NF) to map the
intermediate measures corresponding to each temperature level to the standard
Gaussian. In doing so, we relax the Gaussian ansatz for the intermediate
measures used in standard EKI, allowing us to achieve higher fidelity
approximations to non-Gaussian targets. We demonstrate the performance of FAKI
on two numerical benchmarks, showing dramatic improvements over standard EKI in
terms of accuracy whilst accelerating its already rapid convergence properties
(typically in $\mathcal{O}(10)$ steps). | Richard D. P. Grumitt, Minas Karamanis, Uroš Seljak | 2023-09-20T17:39:14Z | http://arxiv.org/abs/2309.11490v1 | # Flow Annealed Kalman Inversion for Gradient-Free Inference in Bayesian Inverse Problems
###### Abstract
For many scientific inverse problems we are required to evaluate an expensive forward model. Moreover, the model is often given in such a form that it is unrealistic to access its gradients. In such a scenario, standard Markov Chain Monte Carlo algorithms quickly become impractical, requiring a large number of serial model evaluations to converge on the target distribution. In this paper we introduce Flow Annealed Kalman Inversion (FAKI). This is a generalization of Ensemble Kalman Inversion (EKI), where we embed the Kalman filter updates in a temperature annealing scheme, and use normalizing flows (NF) to map the intermediate measures corresponding to each temperature level to the standard Gaussian. In doing so, we relax the Gaussian ansatz for the intermediate measures used in standard EKI, allowing us to achieve higher fidelity approximations to non-Gaussian targets. We demonstrate the performance of FAKI on two numerical benchmarks, showing dramatic improvements over standard EKI in terms of accuracy whilst accelerating its already rapid convergence properties (typically in \(\mathcal{O}(10)\) steps).
## 1 Introduction
Many scientific inference tasks are concerned with inverse problems of the form
\[y=\mathcal{G}(x)+\eta, \tag{1}\]
where \(y\in\mathbb{R}^{d_{y}}\) are the data, \(x\in\mathbb{R}^{d}\) are the model parameters, \(\mathcal{G}\) is the forward map, and \(\eta\) is the observation noise. Throughout this work we will assume that we do not have access to gradients of \(\mathcal{G}\) with respect to the parameters, and that \(\eta\sim\mathcal{N}(0,\Gamma)\) where \(\Gamma\) is a fixed noise covariance. The assumption of additive Gaussian noise is the standard setting for Ensemble Kalman Inversion (EKI) [1, 2, 3, 4, 5, 6, 7, 8], and whilst we are restricted to problems with Gaussian likelihoods, this covers a large family of scientific inverse problems. The goal of the Bayesian inverse problem is then to recover the posterior distribution over the model parameters given our observations, \(p(x|y)\).
Typical gradient-free inference methods often involve some variant on Markov Chain Monte Carlo (MCMC) algorithms e.g., random walk Metropolis [9, 10, 11], or Sequential Monte Carlo (SMC) [12]. However, these methods typically require \(\gtrsim 10^{3}\) serial model evaluations to achieve convergence, making them intractable for problems with expensive forward models. EKI by contrast utilizes embarrassingly parallel model evaluations to update parameter estimates, typically converging to an approximate solution in \(\mathcal{O}(10)\) iterations [2, 6, 7, 8].
EKI leverages ideas originally developed in the context of Ensemble Kalman Filtering (EKF) for data assimilation [13]. Since its development, EKI has seen applications across a range of disciplines, including studies of fluid flow [14], climate models [15] and machine learning tasks [16]. EKI can be understood in the context of annealing, where we seek to move from the prior to the posterior through a sequence of intermediate measures. In standard EKI, this involves constructing a sequence of
Gaussian approximations to the intermediate measures. In the regime where we have a Gaussian prior \(\pi_{0}(x)=\mathcal{N}(m_{0},C_{0})\) and a linear forward model \(\mathcal{G}(x)=Gx\), the particle distribution obtained via EKI converges to the true posterior in the limit where the ensemble size \(J\to\infty\). However, outside this linear, Gaussian regime EKI is an uncontrolled approximation to the posterior that is constructed on the basis of matching first and second moments of the target distribution. Nonetheless, EKI has been shown to perform well on problems with nonlinear forward models and slightly non-Gaussian targets [1, 2, 6].
In this work we propose the application of normalizing flows (NF) [17, 18, 19, 20] to relax the Gaussian ansatz made by standard EKI for the intermediate measures. Instead of assuming a Gaussian particle distribution at each iteration, the NF is used to fit for the empirical particle distribution and map to a Gaussian latent space, where the EKI updates are performed. In doing so, we are better able to capture non-Gaussian target geometries. The structure of this paper is as follows: in Section 2 we describe the Flow Annealed Kalman Inversion (FAKI) algorithm, in Section 3 we demonstrate the performance of the method on two Bayesian inference tasks with non-Gaussian target geometries and we summarize our work in Section 4.
## 2 Methods
### Regularized Ensemble Kalman Inversion
A number of versions of EKI have been proposed in the literature. Of interest here is the regularized, perturbed observation form of EKI [6]. Starting with an ensemble of particles drawn from the prior, \(\{x_{0}^{j}\}_{j=1}^{J}\), the particles are updated at each iteration according to
\[x_{n+1}^{j}=x_{n}^{j}+C_{n}^{x\mathcal{G}}(C_{n}^{\mathcal{G}\mathcal{G}}+ \alpha_{n}\Gamma)^{-1}(y-\mathcal{G}(x_{n}^{j})+\sqrt{\alpha_{n}}\xi_{n}^{j}). \tag{2}\]
The empirical covariances \(C_{n}^{x\mathcal{G}}\) and \(C_{n}^{\mathcal{G}\mathcal{G}}\) are given by
\[C_{n}^{x\mathcal{G}} =\frac{1}{J-1}\sum_{j=1}^{J}(x_{n}^{j}-\langle x_{n}\rangle)\otimes(\mathcal{G}(x_{n}^{j})-\langle\mathcal{G}_{n}\rangle), \tag{3}\] \[C_{n}^{\mathcal{G}\mathcal{G}} =\frac{1}{J-1}\sum_{j=1}^{J}(\mathcal{G}(x_{n}^{j})-\langle\mathcal{G}_{n}\rangle)\otimes(\mathcal{G}(x_{n}^{j})-\langle\mathcal{G}_{n}\rangle). \tag{4}\]
At each iteration we perturb the forward model evaluations with the Gaussian observation noise \(\xi_{n}^{j}\sim\mathcal{N}(0,\Gamma)\). The parameter \(\alpha_{n}\) is a Tikhonov regularisation parameter, which can be viewed as an inverse step size in the Bayesian annealing context. In particular, given a set of annealing parameters \(\beta_{0}\equiv 0<\beta_{1}<\ldots<\beta_{N}<\beta_{N+1}\equiv 1\), we have the corresponding set of target distributions
\[\pi_{n}(x)\propto\pi_{0}(x)\exp\left(-\frac{\beta_{n}}{2}\left\|\Gamma^{-1/2}(y-\mathcal{G}(x))\right\|^{2}\right), \tag{5}\]
with
\[\alpha_{n}=\beta_{n+1}-\beta_{n}. \tag{6}\]
EKI proceeds by constructing a sequence of ensemble approximations to Gaussian distributions that approximate the intermediate targets.
The choice of the regularization parameter \(\alpha_{n}\) controls the transition from the prior to the posterior. Previous proposals for an adaptive choice have taken inspiration from SMC by using a threshold on the effective sample size (ESS) of the particles at each temperature level [21, 22]. In this work we adopt the same approach, calculating pseudo-importance weights at each temperature given by
\[w_{n}^{j}=\exp\left(-\frac{1}{2}(\beta_{n+1}-\beta_{n})\left\|\Gamma^{-1/2}(y-\mathcal{G}(x_{n}^{j}))\right\|^{2}\right). \tag{7}\]
The next temperature level can then be selected by solving
\[\left(\sum_{j=1}^{J}w_{n}^{j}(\beta_{n+1})^{2}\right)^{-1}\left(\sum_{j=1}^{J }w_{n}^{j}(\beta_{n+1})\right)^{2}=\tau J, \tag{8}\]
using the bisection method, where \(0<\tau<1\) is the target fractional ESS threshold. Throughout our work we set \(\tau=0.5\). Full pseudocode for EKI is given in Algorithm 1.
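As an illustration of Eqs. (2)-(4) and of the adaptive selection of \(\beta_{n+1}\) via Eq. (8), the following NumPy sketch implements one regularized EKI update together with the ESS bisection; it is a minimal sketch of the scheme described here, not the authors' reference implementation, and the function names are our own.

```python
import numpy as np

def next_beta(weights_fn, beta_n, target_ess, tol=1e-8):
    """Bisection for beta_{n+1} in (beta_n, 1] solving the fractional ESS condition of Eq. (8).
    `weights_fn(beta)` returns the pseudo-importance weights of Eq. (7) for a trial beta."""
    def frac_ess(beta):
        w = weights_fn(beta)
        return w.sum()**2 / ((w**2).sum() * len(w))
    if frac_ess(1.0) >= target_ess:          # we can jump straight to the posterior
        return 1.0
    lo, hi = beta_n, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if frac_ess(mid) >= target_ess else (lo, mid)
    return 0.5 * (lo + hi)

def eki_step(x, G, y, Gamma, alpha, rng):
    """One regularized EKI update, Eqs. (2)-(4); x has shape (J, d), G(x) has shape (J, d_y)."""
    J = x.shape[0]
    Gx = G(x)
    dx, dG = x - x.mean(axis=0), Gx - Gx.mean(axis=0)
    Cxg = dx.T @ dG / (J - 1)                         # C_n^{xG}
    Cgg = dG.T @ dG / (J - 1)                         # C_n^{GG}
    xi = rng.multivariate_normal(np.zeros(len(Gamma)), Gamma, size=J)   # perturbed observations
    K = Cxg @ np.linalg.inv(Cgg + alpha * Gamma)
    return x + (y - Gx + np.sqrt(alpha) * xi) @ K.T
```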
### Normalizing Flows
As discussed above, standard EKI proceeds by constructing a sequence of ensemble approximations to Gaussian distributions. The procedure works well in the situation where the target and all the intermediate measures are close to Gaussian. However, when any of these measures are far from Gaussian, EKI can dramatically fail to capture the final target geometry.
To address this shortcoming we propose the use of NFs to approximate each intermediate target, instead of using the Gaussian ansatz of standard EKI. NFs are powerful generative models that can be used for flexible density estimation and sampling [17, 18, 19, 20]. An NF model maps from the original space \(x\in\mathbb{R}^{d}\) to a latent space \(z\in\mathbb{R}^{d}\), through a sequence of invertible transformations \(f=f_{1}\circ f_{2}\circ\ldots\circ f_{L}\), such that we have a bijective mapping \(z=f(x)\). The mapping is such that the latent variables are mapped to some simple base distribution, typically chosen to be the standard Normal distribution, giving \(z\sim p_{z}(z)=\mathcal{N}(0,I)\).
The NF density can be evaluated through the change of variables formula,
\[q(x)=p_{z}(f(x))\left|\det Df(x)\right|=p_{z}(f(x))\prod_{l=1}^{L}\left|\det Df _{l}(x)\right|, \tag{9}\]
where \(Df(x)=\partial f(x)/\partial x\) denotes the Jacobian of \(f\). Efficient evaluation of this density requires the Jacobian of the transformation to be easy to evaluate, and efficient sampling requires the inverse of the mapping \(f\) to be easy to calculate. In this work we use Masked Autoregressive Flows (MAF) [18], which have previously been found to perform well in the context of preconditioned MCMC sampling within SMC without the need for expensive hyper-parameter searches during sampling [23].
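As a toy illustration of Equation (9), the sketch below evaluates the density of a flow consisting of a single elementwise affine layer. A MAF, as used in this work, composes more expressive autoregressive layers, but the bookkeeping of base density plus log-determinant terms is the same; all numerical values here are arbitrary.

```python
import numpy as np
from scipy.stats import norm

# Toy flow: z = f(x) = (x - b) / a elementwise, with a > 0 (a single affine "layer").
a = np.array([2.0, 0.5])
b = np.array([1.0, -3.0])

def f(x):            # map original (data) space -> latent space
    return (x - b) / a

def f_inv(z):        # map latent space -> original space
    return a * z + b

def log_q(x):
    """log q(x) = log N(f(x); 0, I) + log |det Df(x)|, as in Eq. (9)."""
    z = f(x)
    log_base = norm.logpdf(z).sum(axis=-1)          # standard normal base density
    log_det_jac = -np.log(a).sum()                  # Jacobian of f is diag(1/a)
    return log_base + log_det_jac

x = np.array([[0.3, -2.0], [1.5, 0.1]])
print(log_q(x))      # flow log-density evaluated at two points
```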
### Flow Annealed Kalman Inversion
Given particles distributed as \(\pi_{n}(x)\), the subsequent target can be written as
\[\pi_{n+1}(x)\propto\pi_{n}(x)\exp\left(-\frac{1}{2\alpha_{n}}\left\|\Gamma^{ -1/2}(y-\mathcal{G}(x))\right\|^{2}\right). \tag{10}\]
We may therefore view \(\pi_{n}(x)\), i.e., the posterior at the temperature level \(\beta_{n}\), as an effective prior for \(\pi_{n+1}(x)\), with a data likelihood annealed by \(\alpha_{n}^{-1}\). By fitting an NF to the particles \(\{x_{n}^{j}\}_{j=1}^{J}\), we obtain an approximate map from the intermediate target \(\pi_{n}(x)\) to \(\mathcal{N}(z|0,I)\). The latent space target is then given by the change of variables formula as
\[\pi_{n+1}(z)=\pi_{n+1}(x=f_{n}^{-1}(z))\left|\det Df_{n}^{-1}(z)\right|. \tag{11}\]
By controlling the choice of \(\alpha_{n}\), we control the distance between the Gaussianized effective prior and this latent space target density. For FAKI, we therefore perform the EKI updates in the NF latent space at each temperature level, allowing us to relax the Gaussian ansatz of standard EKI by constructing an approximate map from each \(\pi_{n}(x)\) to a Gaussian latent space. It is worth noting that, whilst this method relaxes the Gaussianity assumptions of standard EKI, it does not address the linearity assumptions used in deriving EKI.
The FAKI update for the latent space particle locations is given by
\[z_{n+1}^{j}=z_{n}^{j}+\mathcal{C}_{n}^{z\mathcal{G}}(\mathcal{C}_{n}^{\mathcal{G}\mathcal{G}}+\alpha_{n}\Gamma)^{-1}(y-\mathcal{G}(f_{n}^{-1}(z_{n}^{j}))+\sqrt{\alpha_{n}}\xi_{n}^{j}), \tag{12}\]
where the latent space empirical covariances are given by
\[\mathcal{C}_{n}^{z\mathcal{G}} =\frac{1}{J-1}\sum_{j=1}^{J}(z_{n}^{j}-\langle z_{n}\rangle)\otimes(\mathcal{G}(f_{n}^{-1}(z_{n}^{j}))-\langle\mathcal{G}_{n}\rangle), \tag{13}\] \[\mathcal{C}_{n}^{\mathcal{G}\mathcal{G}} =\frac{1}{J-1}\sum_{j=1}^{J}(\mathcal{G}(f_{n}^{-1}(z_{n}^{j}))-\langle\mathcal{G}_{n}\rangle)\otimes(\mathcal{G}(f_{n}^{-1}(z_{n}^{j}))-\langle\mathcal{G}_{n}\rangle). \tag{14}\]
Full pseudocode for FAKI is given in Algorithm 2.
```
1:Input:\(J\) prior samples \(\{x_{0}^{j}\sim\pi_{0}(x)\}_{j=1}^{J}\), data \(y\), observation error covariance \(\Gamma\) and fractional ESS target threshold \(\tau\)
2:Initialize inverse temperature \(\beta_{0}=0\), iteration counter \(n=0\)
3:while\(\beta<1\)do
4: Evaluate \(\mathcal{G}_{j}=\mathcal{G}(x_{n}^{j})\), \(j\in\{1,\ldots,J\}\)
5: Solve for \(\beta_{n+1}\) using the bisection method with Equation 8
6:\(\alpha_{n}\leftarrow\beta_{n+1}-\beta_{n}\)
7: Fit NF map \(f_{n}\) to current samples \(\{x_{n}^{j}\}_{j=1}^{J}\)
8: Map particles to latent space \(z_{n}^{j}=f_{n}(x_{n}^{j})\), \(j\in\{1,\ldots,J\}\)
9: Update particles using Equation 12 to obtain \(\{z_{n+1}^{j}\}_{j=1}^{J}\)
10: Map back to the data space \(x_{n+1}^{j}=f_{n}^{-1}(z_{n+1}^{j})\), \(j\in\{1,\ldots,J\}\)
11:\(n\gets n+1\)
12:endwhile
13:Output: Converged particle ensemble \(\{x_{N}^{j}\}_{j=1}^{J}\)
```
**Algorithm 2** Flow Annealed Kalman Inversion
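To complement the pseudocode, the sketch below spells out a single temperature level of FAKI in NumPy. For self-containedness the NF fit of step 7 is replaced by an affine whitening map; in FAKI proper this would be a MAF trained on the current particles, so the sketch should be read as illustrating the mechanics of Algorithm 2 rather than reproducing the implementation used in the experiments.

```python
import numpy as np

def fit_affine_flow(X):
    """Stand-in for step 7 of Algorithm 2: an affine whitening map x -> L^{-1}(x - m).

    A MAF fitted to the particles would be used here in FAKI proper; the affine
    map keeps this sketch self-contained and runnable.
    """
    m = X.mean(axis=0)
    L = np.linalg.cholesky(np.cov(X.T) + 1e-8 * np.eye(X.shape[1]))

    def to_latent(x):
        return np.linalg.solve(L, (x - m).T).T

    def to_data(z):
        return z @ L.T + m

    return to_latent, to_data

def faki_step(X, y, forward, Gamma, alpha, rng):
    """One FAKI temperature level: latent-space EKI update (Eqs. 12-14)."""
    to_latent, to_data = fit_affine_flow(X)
    Z = to_latent(X)                                      # step 8
    G = np.array([forward(x) for x in to_data(Z)])        # evaluate G(f_n^{-1}(z_j))
    J = Z.shape[0]
    dz, dg = Z - Z.mean(axis=0), G - G.mean(axis=0)
    C_zg = dz.T @ dg / (J - 1)                            # Eq. (13)
    C_gg = dg.T @ dg / (J - 1)                            # Eq. (14)
    xi = rng.multivariate_normal(np.zeros(len(y)), Gamma, size=J)
    residual = y - G + np.sqrt(alpha) * xi
    Z_new = Z + residual @ np.linalg.inv(C_gg + alpha * Gamma) @ C_zg.T   # Eq. (12)
    return to_data(Z_new)                                 # step 10
```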
## 3 Results
In this section we demonstrate the performance of FAKI compared to standard EKI on two numerical benchmarks, a two dimensional Rosenbrock distribution and a stochastic Lorenz system [24, 25]. Both models display significant non-Gaussianity at some point during the transition from prior to posterior, severely frustrating the performance of EKI. This manifests in both reduced fidelity of the final ensemble approximations to the posterior, and in a larger number of iterations being required for convergence following the ESS-based annealing scheme described in Section 2.1.
In Table 1 we provide statistics summarizing the performance of EKI and FAKI on our numerical benchmarks. We measure the quality of the posterior approximations by computing the 1-Wasserstein distance, \(W_{1}\) [26, 27], between the samples obtained through FAKI and EKI and reference posterior samples obtained via long runs of Hamiltonian Monte Carlo (HMC) [28, 29]. These reference samples are thinned to be approximately independent when computing the 1-Wasserstein distances1. The 1-Wasserstein distance may be interpreted as the cost involved in rearranging one probability measure to look like another, with lower values indicating the two probability measures are closer to one another. In addition to this assessment of the approximation quality, we report the number of iterations, \(N_{\text{iter}}\), required by FAKI and EKI for convergence. For both quantities we report the median and median absolute deviation (MAD), estimated over 10 independent runs using different random seeds.
Footnote 1: We use the Python Wasserstein library: [https://github.com/pkomiske/Wasserstein/](https://github.com/pkomiske/Wasserstein/).
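For reference, an equivalent way to compute the empirical 1-Wasserstein distance between two sample sets is via a generic optimal-transport solver; the sketch below uses the POT (Python Optimal Transport) package rather than the library cited in the footnote, purely as an illustrative alternative.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def w1_distance(samples_a, samples_b):
    """Empirical 1-Wasserstein distance between sample sets of shape (n, d) and (m, d)."""
    M = ot.dist(samples_a, samples_b, metric='euclidean')  # pairwise Euclidean costs
    a = ot.unif(samples_a.shape[0])                        # uniform weights on samples_a
    b = ot.unif(samples_b.shape[0])                        # uniform weights on samples_b
    return ot.emd2(a, b, M)                                # exact optimal transport cost
```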
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Model & Algorithm & Median\([N_{\text{iter}}]\) & MAD\([N_{\text{iter}}]\) & Median\([W_{1}]\) & MAD\([W_{1}]\) \\
\hline
Rosenbrock & EKI & 100 & 7.0 & 0.72 & 0.05 \\
Rosenbrock & FAKI & 34.0 & 7.0 & 0.43 & 0.14 \\
Lorenz & EKI & 10.0 & 0.0 & 69.8 & 1.08 \\
Lorenz & FAKI & 8.0 & 0.0 & 5.65 & 0.86 \\
\hline
\end{tabular}
\end{table}
Table 1: Median and MAD values for \(N_{\text{iter}}\) and 1-Wasserstein distances for each model and algorithm combination, calculated over 10 independent runs using different random seeds. For both the numerical benchmarks we see that FAKI results in a reduced number of iterations for convergence, and a lower value of the 1-Wasserstein distance between the converged samples and the ground truth.
### \(d=2\) Rosenbrock
In our first numerical experiment we consider the two dimensional Rosenbrock distribution. This toy model allows us to clearly see the impact of non-Gaussianity on the performance of EKI, and how FAKI is able to alleviate these issues. For the Rosenbrock model we assume a Gaussian prior over the parameters \(x\in\mathbb{R}^{2}\),
\[x\sim\mathcal{N}(0,10^{2}I). \tag{15}\]
The data, \(y\in\mathbb{R}^{2}\) are distributed according to the likelihood,
\[y\sim\mathcal{N}(\mathcal{G}(x)=(x_{1}-x_{0}^{2},x_{0})^{\intercal},\Gamma= \mathrm{diag}(0.01^{2},1^{2})). \tag{16}\]
To generate simulated data we evaluate \(y=\mathcal{G}((1,1)^{\intercal})+\eta\), where \(\eta\sim\mathcal{N}(0,\mathrm{diag}(0.01^{2},1^{2}))\). The large difference in noise scales results in a highly non-Gaussian posterior geometry that poses a significant challenge for EKI. For each run of EKI and FAKI we use 100 particles.
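The full setup fits in a few lines; the sketch below writes out the forward model, prior draws, and simulated data for this example, with variable names and the random seed chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x):
    """Forward model G(x) = (x1 - x0^2, x0) for the d = 2 Rosenbrock example."""
    return np.array([x[1] - x[0] ** 2, x[0]])

Gamma = np.diag([0.01 ** 2, 1.0 ** 2])            # observation noise covariance
x_true = np.array([1.0, 1.0])
y = forward(x_true) + rng.multivariate_normal(np.zeros(2), Gamma)  # simulated data

J = 100                                           # ensemble size used in this example
X0 = rng.normal(0.0, 10.0, size=(J, 2))           # draws from the N(0, 10^2 I) prior
```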
In Figure 1 we show pair-plots comparing the final particle distributions obtained with EKI and FAKI against samples obtained through a long run of HMC. The NF mapping means that the ensemble approximation obtained by FAKI is able to capture the highly nonlinear target geometry. In comparison, EKI struggles to fill the tails of the Rosenbrock target. Moreover, whilst FAKI converges within \(\sim 34\) iterations, EKI required a median number of \(\sim 100\) iterations to converge using the ESS-based annealing scheme.
### Stochastic Lorenz System
The Lorenz equations are a set of coupled differential equations used as a simple model of atmospheric convection. Notably, for certain parameter values the Lorenz equations are known to exhibit chaotic behaviour [24]. In this work we follow [25] and consider the stochastic Lorenz system,
\[\mathrm{d}X_{t} =10(Y_{t}-X_{t})\mathrm{d}t+\mathrm{d}W_{t}^{x}, \tag{17}\] \[\mathrm{d}Y_{t} =X_{t}(28-Z_{t})\mathrm{d}t-Y_{t}\mathrm{d}t+\mathrm{d}W_{t}^{y},\] (18) \[\mathrm{d}Z_{t} =X_{t}Y_{t}\mathrm{d}t-\frac{8}{3}Z_{t}\mathrm{d}t+\mathrm{d}W_{t }^{z}, \tag{19}\]
where \(W_{t}^{x}\), \(W_{t}^{y}\) and \(W_{t}^{z}\) are Gaussian white noise processes with standard deviation \(\sigma_{0}=0.1\). To generate simulated data we integrated these equations using an Euler-Maruyama scheme with \(\mathrm{d}t=0.02\) for 30 steps, with initial conditions \(X_{0},Y_{0},Z_{0}\sim\mathcal{N}(0,1^{2})\). The observations are then taken to be the \(X_{t}\) values over these 30 time steps, with Gaussian observational noise \(\eta_{t}\sim\mathcal{N}(0,\sigma^{2}=1^{2})\).
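The following sketch reproduces the data-generating process just described, integrating the stochastic Lorenz system with an Euler-Maruyama scheme; the random seed and variable names are illustrative, and the noise increments are scaled so that each step has variance \(\sigma_{0}^{2}\mathrm{d}t\), consistent with the priors given below.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n_steps, sigma0, sigma_obs = 0.02, 30, 0.1, 1.0

def drift(state):
    """Deterministic part of Eqs. (17)-(19)."""
    x, y, z = state
    return np.array([10.0 * (y - x), x * (28.0 - z) - y, x * y - 8.0 / 3.0 * z])

state = rng.normal(0.0, 1.0, size=3)              # initial conditions X_0, Y_0, Z_0 ~ N(0, 1)
trajectory = []
for _ in range(n_steps):
    noise = rng.normal(0.0, sigma0 * np.sqrt(dt), size=3)   # Brownian increments
    state = state + drift(state) * dt + noise               # Euler-Maruyama step
    trajectory.append(state)
trajectory = np.array(trajectory)                 # shape (30, 3)

obs = trajectory[:, 0] + rng.normal(0.0, sigma_obs, size=n_steps)  # noisy X_t observations
```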
Figure 1: Pair-plots for the Rosenbrock target. Panel (a): Pair-plot comparison of samples from EKI and a long HMC run. Panel (b): Pair-plot comparison of samples from FAKI and a long HMC run. Samples from FAKI are able to correctly capture the highly nonlinear target geometry. Standard EKI struggles to fill the tails of the target, and requires \(\sim 100\) iterations to converge, compared to \(\sim 34\) iterations for FAKI.
The goal of our inference here is to recover the initial conditions, the trajectories \((X_{t},Y_{t},Z_{t})\) and the innovation noise scale \(\sigma_{0}\), giving a parameter space of \(d=94\) dimensions. We assign priors over these parameters as,
\[\log\sigma_{0} \sim\mathcal{N}(-1,1^{2}), \tag{20}\] \[X_{0},Y_{0},Z_{0} \sim\mathcal{N}(0,1^{2}),\] (21) \[X_{t} \sim\mathcal{N}(X_{t-1}+f_{X}(X_{t-1},Y_{t-1},Z_{t-1},t-1)\mathrm{ d}t,\sigma_{0}^{2}\mathrm{d}t),t\in\{1,\ldots,30\},\] (22) \[Y_{t} \sim\mathcal{N}(Y_{t-1}+f_{Y}(X_{t-1},Y_{t-1},Z_{t-1},t-1)\mathrm{ d}t,\sigma_{0}^{2}\mathrm{d}t),t\in\{1,\ldots,30\},\] (23) \[Z_{t} \sim\mathcal{N}(Z_{t-1}+f_{Z}(X_{t-1},Y_{t-1},Z_{t-1},t-1)\mathrm{ d}t,\sigma_{0}^{2}\mathrm{d}t),t\in\{1,\ldots,30\}, \tag{24}\]
where \(f_{X}\), \(f_{Y}\) and \(f_{Z}\) are the transition functions corresponding to Equations 17-19 respectively. The Gaussian likelihood has the form
\[\hat{X}_{t}\sim\mathcal{N}(X_{t},\sigma^{2}),t\in\{1,\ldots,30\}, \tag{25}\]
where \(\hat{X}_{t}\) are the observations of the \(X_{t}\) trajectory. The chaotic dynamics of the Lorenz system results in a highly non-Gaussian prior distribution, with the inversion having to proceed through a sequence
of highly non-Gaussian intermediate measures towards the posterior. This severely frustrates the performance of EKI, with the Gaussian ansatz failing to describe the geometry of the intermediate measures. For each run of EKI and FAKI we use 940 particles.

Figure 2: Comparison of first and second moment estimates along each dimension for the stochastic Lorenz system. Panel (a): Comparison between the mean estimates from EKI and a long HMC run. Panel (b): Comparison between the mean estimates from FAKI and a long HMC run. Panel (c): Comparison between the standard deviation estimates from EKI and a long HMC run. Panel (d): Comparison between the standard deviation estimates from FAKI and a long HMC run. Blue bars indicate the moment estimates obtained via HMC along each dimension, with the adjacent orange bars showing the estimates obtained through EKI/FAKI. EKI is unable to obtain accurate mean estimates for much of the \(Z_{t}\) trajectory, whilst FAKI is able to obtain accurate mean estimates for each dimension. FAKI outperforms EKI in its estimates of the marginal standard deviations, with EKI drastically overestimating the standard deviations along many dimensions.
In Figure 2 we show the ensemble estimates for the mean and standard deviation along each dimension obtained by EKI and FAKI, compared to reference estimates obtained through long runs of HMC. FAKI is able to obtain accurate mean estimates along each dimension, whereas EKI is unable to obtain the correct means for much of the \(Z_{t}\) trajectory. EKI severely overestimates the marginal standard deviations along many dimensions. This situation is alleviated by the NF mappings learned by FAKI. The greater fidelity of the FAKI posterior approximations are reflected in the median estimates for the 1-Wasserstein distances, with a value of 5.65 for FAKI and 69.8 for EKI.
## 4 Conclusions
In this work we have introduced Flow Annealed Kalman Inversion (FAKI), a gradient-free inference algorithm for Bayesian inverse problems with expensive forward models. This is a generalization of Ensemble Kalman Inversion (EKI), where we utilize Normalizing Flows (NF) to replace the Gaussian ansatz made in EKI. Instead of constructing a sequence of ensemble approximations to Gaussian measures that approximate a sequence of intermediate measures, as we move from the prior to the posterior, we learn an NF mapping at each iteration to a Gaussian latent space. Provided the transition between temperature levels is controlled, we can perform Kalman inversion updates in the NF latent space. In the NF latent space, the Gaussianity assumptions of EKI are more closely satisfied, resulting in a more stable inversion at each temperature level.
We demonstrate the performance of FAKI on two numerical benchmarks, a \(d=2\) Rosenbrock distribution and a \(d=94\) stochastic Lorenz system. Both examples exhibit significant non-Gaussianity in the transition from prior to posterior that frustrate standard EKI. In the presence of strong non-Gaussianity, we find FAKI produces higher fidelity posterior approximations compared to EKI, as measured by the 1-Wasserstein distance between FAKI/EKI samples and reference HMC samples. In addition to the improved fidelity of the posterior approximations, we find FAKI tends to reduce the number of iterations required for convergence.
Whilst the application of NFs is able to relax the Gaussian ansatz of EKI, it does not address the linearity assumptions used in deriving EKI. As such, FAKI is still not exact for general forward models. In future work, it will be interesting to explore methods to address this, for example the combination of FAKI with unbiased MCMC or importance sampling methods. It would also be interesting to consider generalizations of FAKI that are able to accommodate non-Gaussian likelihoods and/or parameter-dependent noise covariances. The use of NFs means that we typically require ensemble sizes \(J\gtrsim 10d\) to learn accurate NF maps with the MAF architecture employed in this work. It would be useful to explore alternative NF architectures and regularization schemes that are able to learn accurate NF maps with smaller ensemble sizes, in order to enable FAKI to scale to higher dimensions. In this work, we have found that the MAF architecture is able to capture a wide range of target geometries without the need for expensive NF hyper-parameter searches. However, it may be possible to exploit NF architectures with inductive biases that are particularly suited to common target geometries e.g., the nonlinear correlations that often appear in hierarchical models.
## Acknowledgments
This research was funded by NSFC (grant No. 12250410240) and the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research under Contract No. DE-AC02-05CH11231 at Lawrence Berkeley National Laboratory to enable research for Data-intensive Machine Learning and Analysis. RDPG was supported by a Tsinghua Shui Mu Fellowship.
The authors thank Qijia Jiang and David Nabergoj for helpful discussions.
|
2307.16653 | Using Proxy Pattern-Mixture Models to Explain Bias in Estimates of
COVID-19 Vaccine Uptake from Two Large Surveys | Recently, attention was drawn to the failure of two very large internet-based
probability surveys to correctly estimate COVID-19 vaccine uptake in the United
States in early 2021. Both the Delphi-Facebook CTIS and Census Household Pulse
Survey (HPS) overestimated uptake substantially, by 17 and 14 percentage points
in May 2021, respectively. These surveys had large numbers of respondents but
very low response rates (<10%), thus, non-ignorable nonresponse could have had
substantial impact. Specifically, it is plausible that "anti-vaccine"
individuals were less likely to participate given the topic (impact of the
pandemic on daily life). In this paper we use proxy pattern-mixture models
(PPMMs) to estimate the proportion of adults (18+) who received at least one
dose of a COVID-19 vaccine, using data from the CTIS and HPS, under a
non-ignorable nonresponse assumption. Data from the American Community Survey
provide the necessary population data for the PPMMs. We compare these estimates
to the true benchmark uptake numbers and show that the PPMM could have detected
the direction of the bias and provide meaningful bias bounds. We also use the
PPMM to estimate vaccine hesitancy, a measure for which we do not have a
benchmark truth, and compare to the direct survey estimates. | Rebecca R Andridge | 2023-07-31T13:33:05Z | http://arxiv.org/abs/2307.16653v1 | # Using Proxy Pattern-Mixture Models to Explain Bias in Estimates of COVID-19 Vaccine Uptake
###### Abstract
Recently, attention was drawn to the failure of two very large internet-based probability surveys to correctly estimate COVID-19 vaccine uptake in the United States in early 2021. Both the Delphi-Facebook CTIS and Census Household Pulse Survey (HPS) overestimated uptake substantially, by 17 and 14 percentage points in May 2021, respectively. These surveys had large numbers of respondents but very low response rates (\(<\)10%), thus, non-ignorable nonresponse could have had substantial impact. Specifically, it is plausible that "anti-vaccine" individuals were less likely to participate given the topic (impact of the pandemic on daily life). In this paper we use _proxy pattern-mixture models (PPMMs)_ to estimate the proportion of adults (18+) who received at least one dose of a COVID-19 vaccine, using data from the CTIS and HPS, under a non-ignorable nonresponse assumption. Data from the American Community Survey provide the necessary population data for the PPMMs. We compare these estimates to the true benchmark uptake numbers and show that the PPMM could have detected the direction of the bias and provide meaningful bias bounds. We also use the PPMM to estimate vaccine hesitancy, a measure for which we do not have a benchmark truth, and compare to the direct survey estimates.
**Keywords:** nonresponse bias, survey data
**Running Head:** _Using PPMMs to Estimate COVID-19 Vaccine Uptake_
## 1 Introduction
In the absence of nonresponse, carefully designed probability samples provide a principled way of producing unbiased estimates of population quantities such as proportions and means.
Random selection of individuals into a sample, where every population unit has a known, non-zero probability of selection, ensures that the sample represents the population in expectation. Federal statistical agencies in the United States and abroad rely on such surveys to produce official estimates of population-level characteristics that play an important role in policy-making and business strategies (Hastak et al., 2001). These government-sponsored surveys are generally large and expensive, requiring years of development (e.g., field-testing) as well as careful post-survey analysis before official statistics are released.
The COVID-19 pandemic posed a unique challenge in that it created a sudden, unanticipated need for data to describe both the incidence of disease and how the pandemic was impacting daily life. In this paper we analyze two large surveys that were implemented quickly in response to the pandemic: the U.S. Census Bureau's Household Pulse Survey (HPS) (Fields et al., 2020) and the Delphi-Facebook COVID-19 Trends and Impact Survey (CTIS) (Salomon et al., 2021). The HPS was a government-sponsored survey, whereas the CTIS was a collaboration between academia and a private company. Both surveys were large probability samples that repeatedly collected information on a range of pandemic-related topics; we focus on the estimation of vaccine uptake in early 2021 when vaccines first became available in the U.S. The average sample size (number of respondents) was approximately 75,000 per wave for the HPS and approximately 250,000 per week for the CTIS.
Despite their large sizes, both the Census HPS and Delphi-Facebook CTIS produced substantially biased estimates of vaccine uptake in the U.S. in early 2021 (Nguyen et al., 2021; Bradley et al., 2021). As shown in Figure 1, the weighted estimates from these surveys consistently overestimated vaccine uptake (the percentage of U.S. adults reporting receiving at least one dose of a COVID-19 vaccine) as compared to benchmark data retrospectively available from the U.S. Centers for Disease Control and Prevention (CDC) (U.S. Centers for Disease Control and Prevention 2023). Bradley et al. (2021) decomposed the error in the survey estimates of vaccine update for both surveys using the framework of Meng (2018), emphasizing the danger of very large samples leading to very precise (negligible confidence
interval length) but severely biased results.
Importantly, while these two surveys resulted in large samples, they had very small response rates. In the period from January through May 2021, unweighted response rates for the HPS were in the range of 6.6-7.8% (U.S. Census Bureau 2023). Response rates are not available for the CTIS, but daily cooperation rates1 were approximately 0.5-1.5% (CTIS 2022a). With such small response rates, the protection against bias afforded by probability sampling is erased, and these surveys in many ways resemble nonprobability samples (e.g., convenience samples). A detailed analysis of nonresponse for the HPS (U.S. Census Bureau 2021) showed that response rates differed across demographic domains (e.g., age, race, ethnicity). Post-survey weighting adjustments were used for both surveys to attempt to correct for differential nonresponse, but were limited to a small set of demographic characteristics. Given that these weighting adjustments failed to produce unbiased estimates, and with such small response rates, we hypothesized that a _non-ignorable_ nonresponse mechanism might
have been responsible at least in part for the biased estimates.

Figure 1: Survey weighted estimates of COVID-19 vaccine uptake for adults in the U.S. in 2021 compared to CDC benchmark data (grey line), plotted by the end date of each survey wave. Intervals are 95% CIs; for Delphi-Facebook CTIS the CIs are too small to be visible.
In the context of measuring vaccine uptake, if an individual's propensity to respond to either the HPS or CTIS is at least in part a function of their vaccine status, this constitutes a _non-ignorable_ nonresponse mechanism. Specifically, it is plausible that people who were "anti-vaccine" (and thus were unvaccinated) were less likely to complete these surveys on the impact of the COVID-19 pandemic on daily life. One could also hypothesize that individuals who were anti-vaccine might also be suspicious of the government and thus less likely to respond to the HPS, which was an official government-sponsored survey.
In order to assess whether this type of non-ignorable nonresponse may have been occurring, we use previously developed _proxy pattern-mixture models (PPMMs)_(Andridge and Little 2011, 2020), which allow for estimation under a non-ignorable nonresponse assumption, to estimate vaccine uptake using data from both surveys. In Section 2 we describe the HPS and CTIS in more detail. In Section 3 we briefly review the PPMM, and present results from applying it to estimate vaccine uptake in Section 4. In Section 5 we use the PPMM to estimate vaccine hesitancy, a measure for which we do not have a benchmark truth. We conclude in Section 6 with discussion of how the PPMM could have been used prospectively as part of a nonresponse bias assessment and describe factors that would facilitate such analyses in the future.
## 2 Details on the COVID-19 Vaccine Surveys
### Census Household Pulse Survey
The Census Household Pulse Survey (HPS) was an experimental data product of the U.S. Census Bureau that was developed in the early phase of the COVID-19 pandemic in conjunction with ([https://www.census.gov/data/experimental-data-products/household-pulse-survey.html](https://www.census.gov/data/experimental-data-products/household-pulse-survey.html)). The first phase of this survey launched on April 23, 2020 with the goal of quickly and efficiently collecting data about how the pandemic was affecting the lives of individuals
residing in the United States, and was still ongoing as of March 2023. Survey questions asked about experiences that may be affected by the pandemic, with a focus on employment status, food security, housing security, physical and mental health, and educational disruption (Fields et al., 2020). Starting in January 2021, when COVID-19 vaccines became available, questions were added about vaccination status and intention. Table 1 lists the questions used to estimate vaccine uptake and vaccine hesitancy.
Given the goal of quick survey deployment and results dissemination as well as the context (during the pandemic), all data collection was via web. The HPS consisted of repeated, stratified, cross-sectional random samples with a target population of all adults (18+) residing in housing units in the U.S. (excluding Puerto Rico). As with many demographic surveys conducted by federal statistical agencies, the HPS sampled households from the Census Bureau's Master Address File (MAF). However, due to the online-only survey design, only addresses on the MAF that had a linked cell phone number and/or email address (from the Census Bureau Contact Frame) were eligible for sampling. Approximately 80% of housing units on the MAF had a cell phone and/or email address ([https://www.census.gov/programs-surveys/household-pulse-survey/technical-documentation.html](https://www.census.gov/programs-surveys/household-pulse-survey/technical-documentation.html)). Initially, samples were drawn weekly from the MAF, with a shift to bi-weekly samples in August 2020. The sample was stratified by geographic area (50 states, Washington D.C., top 15 Metropolitan Statistical Areas). Sampled individuals were contacted by text and/or email with a request to complete the survey.
We analyzed iterations of the HPS conducted from January 6, 2021 through May 10, 2021. During this time period, approximately 1,000,000 housing units were sampled in each data collection period with 68,000-80,000 respondents per wave.
Several post-survey adjustments were made to the HPS base weights to produce the final analytic weights, including adjustments for nonresponse, undercoverage, and a conversion from household-level to person-level weights (Fields et al., 2020). As a last step, an iterative raking procedure was used to ensure that weighted totals match the U.S. adult population with respect to specified demographic characteristics. Specifically, weights were raked to two sets of population totals from the 2019 American Community Survey: educational attainment by age and sex2 within state, and race/ethnicity by age and sex within state.
Footnote 2: Surveys conducted by the U.S. federal government historically have collected sex as a binary variable and without nuance, i.e., conflating it with gender. We acknowledge this limitation.
### Delphi-Facebook COVID-19 Trends and Impact Survey
The Delphi-Facebook COVID-19 Trends and Impact Survey (CTIS) ([https://delphi.cmu.edu/covid19/ctis/](https://delphi.cmu.edu/covid19/ctis/)) was developed in the early phase of the COVID-19 pandemic as a collaboration between Meta (Facebook's parent company) and the University of Maryland and Carnegie Mellon University (Barkay et al., 2020). The survey launched on April 6, 2020 and ended on June 25, 2022. The stated main goal of the surveys was to collect real-time
indicators of symptom severity, both individual and household-level (Kreuter et al., 2020). Starting in January 2021, questions about vaccination status and intention were added, with the exact wording as shown in Table 1.

\begin{table}
\begin{tabular}{l l}
\hline \hline
\multicolumn{2}{l}{**Census Household Pulse Survey**} \\
\hline
Uptake & Question: “Have you received a COVID-19 vaccine?” \\
\cline{2-2}
 & Response Options: “Yes”, “No” \\
\hline
Intention & Question: “Once a vaccine to prevent COVID-19 is available to you, would you...” [only asked if did not respond “Yes” to uptake question] \\
\cline{2-2}
 & Response Options: “Definitely get a vaccine”, “Probably get a vaccine”, “Be unsure about getting a vaccine”*, “Probably NOT get a vaccine”, “Definitely NOT get a vaccine” \\
\hline
\multicolumn{2}{l}{**Delphi-Facebook CTIS**} \\
\hline
Uptake & Question: “Have you had a COVID-19 vaccination?” \\
\cline{2-2}
 & Response Options: “Yes”, “No”, “I don’t know” \\
\hline
Intention & Question: “If a vaccine to prevent COVID-19 were offered to you today, would you choose to get vaccinated?” [only asked if did not respond “Yes” to uptake question] \\
\cline{2-2}
 & Response Options: “Yes, definitely”, “Yes, probably”, “No, probably not”, “No, definitely not” \\
\hline
\multicolumn{2}{l}{*option added mid-April 2020} \\
\end{tabular}
\end{table}
Table 1: Survey questions about vaccine uptake and intention in the Census Household Pulse Survey and Delphi-Facebook CTIS, January 2021 - May 2021
The CTIS consisted of large, stratified, cross-sectional random samples with a target population of all adults (18+). The survey was implemented in over 200 countries; we only use data from the U.S. in our analyses. The sampling frame was all Facebook users (18+) who had been active on Facebook in the previous month. Samples were drawn daily, and the survey invitation was shown at the top of the Facebook feed for selected individuals (Salomon et al., 2021). In the U.S. the sample was stratified by state.
We pooled the daily CTIS samples into weeks and analyzed the weeks ending in January 16, 2021 through May 8, 2021. During this time period, an average of approximately 290,000 respondents provided at least partial responses to the survey each week.
Multiple post-survey adjustments were made to the CTIS base weights to account for nonresponse and non-coverage (due to the fact that not all of the target population are Facebook users) (CTIS 2022b). First, inverse propensity score weighting was used to adjust for nonresponse within the sampling frame (the Facebook user base) using age and gender as predictors of response status. Then post-stratification was used to ensure weighted totals match the target population with respect to age by sex within state using the Current Population Survey 2018 March Supplement for population totals (Barkay et al., 2020).
## 3 Methodology: The Proxy Pattern-Mixture Model
The proxy pattern-mixture model (PPMM) was originally proposed by Andridge and Little (2011) as a tool for assessing the potential impact of non-ignorable nonresponse on estimating means of continuous variables, primarily in the context of item nonresponse. It was subsequently extended to estimating proportions by Andridge and Little (2020). The PPMM has also been used as the basis for indices that quantify the potential for non-ignorable selection
bias for means (Little et al., 2020), proportions (Andridge et al., 2019), and regression coefficients (West et al., 2021) estimated from nonprobability samples. Our goal is to estimate a proportion - the proportion of U.S. adults who have had at least one dose of a COVID-19 vaccine - thus we use the binary PPM of Andridge and Little (2020) in our analyses. We briefly describe their methodology here in the context of estimating vaccine uptake and refer readers to Andridge and Little (2020) for additional details.
Let \(Y_{i}\) be the binary indicator of whether individual \(i\) in the population of U.S. adults (18+) has received at least one dose of a COVID-19 vaccine. A single iteration of either the HPS or CTIS collects \(Y_{i}\) from a subset of the population, and let \(S_{i}\) be the sample inclusion indicator that takes the value \(S_{i}=1\) if the individual is sampled and responds (provides a value of \(Y_{i}\)) and 0 otherwise. Since only a small fraction of sampled individuals responded to the survey, the \(S_{i}\) we observe is a combination of the design-based sample inclusion probability (which we know) and an unknown response propensity (which we do not know). Thus the probability density of \(S_{i}\) is unknown without additional assumptions. In the PPMM analysis we will make assumptions about the distribution of \(S_{i}\) through a principled sensitivity analysis. In what follows, we refer to the units with \(S=1\) as the "responding sample," and note that the units with \(S=0\) include both individuals who were sampled but did not respond and individuals who were not sampled.
Crucial to the implementation of the PPMM, we must also observe covariate information at the individual level for the responding individuals and in aggregate for the rest of the population. Let \(Z_{i}=(Z_{i1},Z_{i2},\ldots Z_{ip})\) be a set of \(p\) covariates collected on the survey, which for our purposes will be limited to information we can also obtain in aggregate for the U.S. population, i.e., demographic data. In the PPMM approach, this covariate data for respondents is reduced to a single _proxy_ variable \(X\) by regressing \(Y\) on \(Z\) using a probit regression model and taking \(X\) to be the estimated linear predictor from this regression. Importantly, individual-level values of \(X_{i}\) are available for all responding individuals (\(S_{i}=1\)), as their \(Z\) values can be plugged into the estimated probit regression equation. We do not
observe \(X_{i}\) for nonresponding individuals, but if we have the mean and variance of \(Z\) for this part of the population from an external source then we can estimate the mean and variance of \(X\) for the nonresponding portion of the population. Despite the large sample sizes of the HPS and CTIS surveys, the samples are considerably smaller than the size of the full population, i.e., sampling fractions are small. Therefore, estimates of the mean and variance of \(Z\) for the entire population of U.S. adults are effectively the same as estimates for the part of this population that did not respond to a single wave or week of the HPS or CTIS surveys.
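To make the proxy construction concrete, here is a minimal sketch using statsmodels: the probit model is fit on respondent data, the respondent proxy values are the estimated linear predictors, and the nonrespondent proxy mean is induced by plugging the external (e.g., ACS) covariate means into the fitted equation. The variable names and the choice of statsmodels are illustrative assumptions, not a description of the code used for the analyses in this paper.

```python
import numpy as np
import statsmodels.api as sm

def build_proxy(y, Z, pop_z_mean):
    """Fit the probit of Y on Z for respondents and return proxy values.

    y: (n,) binary outcome for respondents; Z: (n, p) respondent covariates;
    pop_z_mean: (p,) population (e.g., ACS) means of the same covariates.
    """
    design = sm.add_constant(Z)                    # intercept + demographic covariates
    fit = sm.Probit(y, design).fit(disp=False)     # probit regression of Y on Z
    x_resp = design @ fit.params                   # proxy X_i = estimated linear predictor
    x_nonresp_mean = np.concatenate(([1.0], pop_z_mean)) @ fit.params
    # The nonrespondent variance of X follows analogously from the population
    # covariance matrix of Z, e.g. fit.params[1:] @ pop_z_cov @ fit.params[1:].
    return x_resp, x_nonresp_mean
```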
The basic idea of the PPMM is that we can measure the degree of bias present for the respondent sample mean of the proxy \(X\) by comparing it to the population-level mean of \(X\) (based on the aggregate information for \(Z\)). If \(X\) is correlated with \(Y\), then this provides some information about the potential bias in the respondent sample mean of \(Y\). If \(X\) and \(Y\) are highly correlated, then a small bias in \(X\) suggests (but does not guarantee) a small bias in \(Y\). If, however, \(X\) and \(Y\) are weakly correlated (which would occur if the covariates \(Z\) that create \(X\) are not very predictive of \(Y\)) then we simply do not have much evidence for or against bias in the respondent sample mean of \(Y\). Fortunately, many studies have shown that demographics available in aggregate at the national level such as age, sex, race/ethnicity, and education are moderately associated with COVID-19 vaccine acceptance (e.g., Reiter et al.2020; Haile et al.2022).
The PPMM does not directly model \(Y\) and \(X\), but instead introduces a normally distributed latent variable, \(U\), such that \(Y=1\) when \(U>0\), and models the joint distribution of \(U\) and \(X\). Specifically, Andridge and Little (2020) use a bivariate normal pattern-mixture model for the joint distribution of \(U\) and \(X\) given \(S\), in which the mean and variance parameters are distinct for \(S=1\) and \(S=0\). Parameters of this joint distribution are fully identified for the responding sample, with the exception of the mean and variance of the latent \(U\) which cannot be separately identified; as in Andridge and Little (2020) we fix the variance of \(U\) at one. For the nonresponding portion of the population (\(S=0\)) we can
identify the mean and variance of \(X\), but not the parameters describing the distribution of \(U\) or the correlation between \(X\) and \(U\).
The unidentified parameters of the PPMM can be identified by making an assumption about the distribution of \(S\) and with the introduction of a sensitivity parameter, \(\phi\). Andridge and Little (2020) show that the PPMM is just identified if we assume that the probability an individual is sampled and responds is an unspecified function of a known linear combination of \(X\) and \(U\), plus potentially other observed covariates \(V\) that are independent of \(U\) (and \(Y\)) and \(X\):
\[\Pr(S=1|U,X,V)=g\left((1-\phi)X^{*}+\phi U,V\right) \tag{1}\]
Here \(X^{*}\) is the proxy, \(X\), rescaled to have the same variance as \(U\) for \(S=1\), and \(\phi\in[0,1]\) is the sensitivity parameter. For a specified value of \(\phi\), the parameters of the PPMM are just identified, and thus the overall mean of \(Y\) can be estimated as a weighted (by the responding fraction) average of estimates of \(E[Y|S=1]=E[U>0|S=1]\) and \(E[Y|S=0]=E[U>0|S=0]\). Though there is no information in the data with which to estimate \(\phi\), certain values of \(\phi\) correspond to specific types of response mechanisms, thus enabling a reasonable, bounded sensitivity analysis. Specifically, \(\phi=0\) corresponds to a missing at random assumption (Rubin 1987), where the probability of response is only a function of \(X\) and \(V\), which are observed - this is an ignorable response mechanism. If \(\phi>0\), then response depends at least in part on \(U\), and therefore on \(Y\) - a non-ignorable response mechanism.
Andridge and Little (2020) provide an explicit formula for the overall mean of \(Y\) under the PPMM as a function of the parameters of the underlying normally-distributed latent \(U\) for respondents (\(\mu_{u}^{(1)}\)) and nonrespondents (\(\mu_{u}^{(0)},\sigma_{uu}^{(0)}\)) and the fraction of the population that responded (\(\pi\)),
\[\mu_{y}=\pi\Phi\left(\mu_{u}^{(1)}\right)+(1-\pi)\Phi\left(\mu_{u}^{(0)}\Big{/} \sqrt{\sigma_{uu}^{(0)}}\right), \tag{2}\]
where \(\Phi(z)\) denotes the CDF of the standard normal distribution evaluated at \(z\). With the
identifying restriction in (1), the mean and variance of \(U\) for nonrespondents are given by
\[\mu_{u}^{(0)} =\mu_{u}^{(1)}+\left(\frac{\phi+(1-\phi)\rho_{ux}^{(1)}}{\phi\rho_{ ux}^{(1)}+(1-\phi)}\right)\left(\frac{\mu_{x}^{(0)}-\mu_{x}^{(1)}}{\sqrt{ \sigma_{xx}^{(1)}}}\right) \tag{3}\] \[\sigma_{uu}^{(0)} =1+\left(\frac{\phi+(1-\phi)\rho_{ux}^{(1)}}{\phi\rho_{ux}^{(1)} +(1-\phi)}\right)^{2}\left(\frac{\sigma_{xx}^{(0)}-\sigma_{xx}^{(1)}}{\sigma_{ xx}^{(1)}}\right). \tag{4}\]
Here \(\mu_{x}^{(j)}\) and \(\sigma_{xx}^{(j)}\) are the mean and variance of the proxy \(X\) for \(S=j,j=\{0,1\}\) and \(\rho_{ux}^{(1)}\) is the correlation between \(U\) and \(X\) in the respondent sample.
Insight into how the PPMM works can be seen by closer inspection of Equations (2)-(4). In (3), the mean of latent \(U\) for the nonresponding portion of the population (\(\mu_{u}^{(0)}\)) is the respondent mean (\(\mu_{u}^{(1)}\)), shifted by a factor that depends on the sensitivity parameter \(\phi\), the strength of the proxy as captured by the correlation between \(X\) and \(U\) in the respondent sample (\(\rho_{ux}^{(1)}\)), and how different the proxy mean is for respondents (\(\mu_{x}^{(1)}\)) and nonrespondents (\(\mu_{x}^{(0)}\)). Larger differences in proxy means between respondents and nonrespondents will lead to larger shifts of the mean of \(U\). The amount of shift is also governed by \(\phi\), and at the two extremes of \(\phi=0\) and \(\phi=1\) the first term in the parentheses in (3) is \(\rho_{ux}^{(1)}\) and \(1/\rho_{ux}^{(1)}\), respectively. Thus, the larger the correlation \(\rho_{ux}^{(1)}\), the smaller the range of the shift as \(\phi\) goes from 0 to 1. If the proxy is weak, however, this term will produce a wide range for \(\mu_{u}^{(0)}\) as \(\phi\) is varied. A similar shifting occurs for the variance of \(U\) for nonrespondents as seen in (4).
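As a concrete illustration of Equations (2)-(4), the sketch below evaluates the plug-in estimate of the overall mean of \(Y\) over a grid of the sensitivity parameter \(\phi\). It uses plug-in parameter values directly rather than the fully Bayesian estimation described next, and all numerical inputs are purely illustrative.

```python
import numpy as np
from scipy.stats import norm

def ppmm_mean(phi, pi, mu_u1, rho1, mu_x1, var_x1, mu_x0, var_x0):
    """Plug-in overall mean of Y under the PPMM for a given sensitivity parameter phi."""
    g = (phi + (1 - phi) * rho1) / (phi * rho1 + (1 - phi))        # factor in Eqs. (3)-(4)
    mu_u0 = mu_u1 + g * (mu_x0 - mu_x1) / np.sqrt(var_x1)          # Eq. (3)
    var_u0 = 1.0 + g ** 2 * (var_x0 - var_x1) / var_x1             # Eq. (4)
    return pi * norm.cdf(mu_u1) + (1 - pi) * norm.cdf(mu_u0 / np.sqrt(var_u0))  # Eq. (2)

# Sensitivity analysis over phi in [0, 1] with illustrative parameter values:
for phi in np.linspace(0.0, 1.0, 5):
    print(phi, ppmm_mean(phi, pi=0.001, mu_u1=0.4, rho1=0.5,
                         mu_x1=0.5, var_x1=1.0, mu_x0=0.2, var_x0=1.1))
```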
For model estimation we use the Bayesian approach described by Andridge and Little (2020), which puts non-informative priors on all identified parameters in the PPMM to obtain draws of the overall mean of \(Y\) via a Gibbs sampler. Since the data contain no information to inform \(\phi\), we use a Uniform(0,1) prior, which generates a 95% credible interval for the mean of \(Y\) that effectively averages over all possible values of \(\phi\). The posterior median serves as an estimate of the mean of \(Y\) for \(\phi=0.5\), which was recommended by Little et al. (2020) as a "point index" if a single point estimate is desired under a non-ignorable response mechanism.
## 4 Applying the PPMM to Estimate Vaccine Uptake
As described in Section 3, application of the PPMM requires aggregate information for covariates \(Z\) that are also available in the HPS and CTIS survey data. We used the American Community Survey (ACS) 2019 data obtained via IPUMS USA (Ruggles et al., 2023) for population-level data on the following covariates available in both the HPS and CTIS: age, gender, education, race, and ethnicity. The categories for all of these covariates differed slightly between HPS and CTIS, so separate estimates of the population mean and variance were made using the ACS that matched each survey; see Supplemental Table S1 for the coding of variables across data sources. We note that income was also available in both the HPS and the ACS, but as is typical for this variable it had relatively high rates of missingness in the survey data with approximately 25% of respondents not providing their income, and thus we elected not to use this to create the proxy.
Our responding sample for each survey was taken to be the set of records that had information on vaccination status (\(Y\)) and complete covariate data (\(Z\)), as the PPMM requires complete data for the respondent sample. We followed the procedures used by the respective surveys when producing their vaccination estimates in terms of how missing data in \(Y\) was handled. For the HPS, an individual with a missing \(Y\) value was assumed to be a "no, not vaccinated" and was included in the sample, whereas for the CTIS an individual with missing \(Y\) was dropped from the sample (\(\approx\)6-7%). For covariate data, the publicly available HPS data had our \(Z\) variables already singly imputed (since they were part of the Census' weighting adjustments) and thus there were no records with missing \(Z\) values. In contrast, the CTIS suffered from missing data for the demographic variables that came at the very end of the survey, with approximately 15% additional records being dropped. Due to the very large size of the CTIS surveys, analysis sample sizes were still very large, ranging from 167,000 to 290,000 across weeks. We note that the survey weights provided with each survey are not used for the PPMM analyses, and instead the responding sample is treated effectively as a non-probability sample.
As previously noted, sampling fractions for both the HPS and CTIS were small and thus we used the mean and variance of \(Z\) from the ACS for the nonrespondent portion of the population, though technically these values are for the full population. Additionally, we treat the means from the ACS as though they were "known" despite them being estimates themselves; future work is needed to incorporate uncertainty about the \(Z\) at the population level into PPMM estimation.
As a benchmark truth for the proportion of the population that had received at least one dose of a COVID-19 vaccine we used the vaccination uptake statistics available from the CDC as used by Bradley et al. (2021) and available via their GitHub repository ([https://github.com/vcbradley/ddc-vaccine-US](https://github.com/vcbradley/ddc-vaccine-US)). As noted in Bradley et al. (2021), this benchmark data itself is potentially subject to error, though retroactive corrections are included in these counts.
Figure 2 shows the estimated proxy strength, i.e., the estimated correlation between \(U\) and \(X\) for respondents in both the HPS and CTIS during January through May of 2021.
Figure 2: Posterior medians for the biserial correlation (\(\rho^{(1)}\)) between COVID-19 vaccination uptake (binary \(Y\)) and proxy \(X\) for the selected sample under the proxy pattern-mixture model. Bounds shown are 95% credible intervals (too small to see for Delphi-Facebook CTIS).
In the earlier waves, when vaccines were first available only to limited groups (e.g., older adults), the model that builds the proxy is relatively weak (around \(\hat{\rho}_{ux}^{(1)}=0.25\)). As vaccines became more widely available, the proxy strength increases, to a high of slightly larger than \(\hat{\rho}_{ux}^{(1)}=0.5\), with a small decrease in April and May.
Figure 3 shows the estimates of vaccine uptake under the PPMM with a Uniform(0,1) prior on the sensitivity parameter \(\phi\) for both surveys, compared to the CDC benchmark and the direct survey (weighted) estimates. Several patterns are evident in the results. First, the upper endpoint of the credible intervals corresponding to \(\phi=0\) is nearly identical to the weighted estimates for HPS, which is expected since the covariates \(Z\) that created the proxy are the same as those used in the weighting adjustments. For the CTIS, the interval endpoint is slightly lower than the direct estimates, as a result of our PPMM using _more_ information than the survey weights which only used age and gender, since education and race/ethnicity (used in the PPMM) were predictive of vaccine uptake.
Second, the PPMM credible intervals cover the benchmark truth for both surveys in all
Figure 3: Estimates of vaccine uptake using the proxy pattern-mixture model (PPMM) with a Uniform(0,1) prior on the sensitivity parameter \(\phi\), for both the Census HPS and the Delphi-Facebook CTIS. Shown are the posterior medians with 95% credible intervals. The grey line is the benchmark CDC data (the “truth”).
waves/weeks, while the direct survey estimates only cover the truth twice (the first two waves of the HPS). Importantly, the PPMM correctly detects the _direction_ of bias for both surveys in all waves/weeks, i.e., the PPMM indicates that the direct estimates were overestimating the true proportion of adults who had at least one vaccine dose. For the CTIS, the posterior median proportion (corresponding to \(\phi=0.5\)) is remarkably close to the truth across all waves; for the HPS this "point index" value is too low (i.e., overcorrects the bias) in the earlier waves when the HPS direct estimates are not as biased.
Finally, the PPMM credible intervals are much wider than the confidence intervals for the survey estimates despite the very large sample sizes. This is a desirable property, since one of the problems highlighted by Bradley et al. (2021) is the "big data paradox" of Meng (2018, p.702): "The bigger the data, the surer we fool ourselves." The relatively larger intervals of the PPMM reflect the strength - or weakness - of the proxy model. Since the covariate data \(Z\) are only moderately associated with \(Y\), our confidence in how much non-ignorable nonresponse bias might be present is only moderate, corresponding to larger credible intervals.
## 5 Applying the PPMM to Estimate Vaccine Hesitancy
We also used the PPMM, with the same set of covariates \(Z\) and same external population source, to estimate the proportion of U.S. adults who were vaccine hesitant for both the HPS and CTIS data. Individuals who reported that they would "probably not" or "definitely not" choose to be vaccinated or were "unsure" (HPS only) were coded as being vaccine hesitant (see Table 1 for exact question wording and response options). Individuals who either had received a vaccine dose or who "definitely" or "probably" would do so were coded as not being vaccine hesitant.
Proxy strength for the models for vaccine hesitancy was relatively stable both across time and between surveys. The posterior median for \(\rho^{(0)}\) for the HPS ranged from 0.392 to 0.415
across waves. For the CTIS, \(\rho^{(0)}\) was largest at the earliest time point (0.391) and slightly declined across the time, with the smallest posterior median at the last time point (0.332). As such, the proxy for vaccine hesitancy was generally weaker than the proxy for vaccine uptake. The full set of estimates are available in Supplemental Figure S1.
Results of applying the PPMM are shown in Figure 4. As one might hypothesize, given that vaccine uptake was overestimated by these surveys, the PPMM suggests that vaccine hesitancy is _underestimated_ by a relatively stable amount across time. Using the posterior median as a point estimate under a non-ignorable response mechanism, the results suggest that vaccine hesitancy is being underestimated by around 9 percentage points on average for the HPS and by around 7 percentage points on average for the CTIS. As expected due to the relatively weak proxy, the credible intervals are large, averaging approximately 40 percentage points wide for HPS and 30 percentage points wide for CTIS. Nonetheless, this provides some evidence that the survey estimates may be too optimistic when it comes to estimating vaccine hesitancy if nonresponse is non-ignorable.
Figure 4: Estimates of vaccine hesitancy using the proxy pattern-mixture model with a Uniform(0,1) prior on the sensitivity parameter \(\phi\), for both the Census HPS and the Delphi-Facebook CTIS. Shown are the posterior medians with 95% credible intervals.
## 6 Discussion
In this analysis of two large surveys that substantially overestimated vaccine uptake in the U.S. in early 2021, the PPMM correctly detected the direction of bias for all survey waves. This suggests that non-ignorable nonresponse is a plausible explanation for the bias - individuals who were not vaccinated were less likely to respond to these surveys. In addition to correctly detecting the direction of bias, median posterior estimates from the PPMM, corresponding to \(\phi=0.5\) (previously suggested as a way to obtain a single estimate under the PPMM) were remarkably accurate. For the Delphi-Facebook CTIS, PPMM estimates with \(\phi=0.5\) were close to the retrospectively available benchmark truth in all survey waves. For the Census HPS, estimates for \(\phi=0.5\) were very close to the truth in the last two waves, when the true bias was the largest.
The success of the PPMM in the vaccine uptake context is in part due to the fact that the factors available at the population level, i.e., demographics, were moderately predictive of the outcomes of interest. If other outcomes on the same surveys are not as strongly associated with demographic characteristics then the proxies will be weaker. Having a weak proxy means that credible intervals from the PPMM will be relatively wide, and the analysis will be less informative. Nonetheless, the present analysis highlights the fact that demographic data alone can in fact provide enough information for a meaningful sensitivity analysis and provide reasonable bounds on the potential bias.
Importantly, the data necessary for a sensitivity analysis based on the PPMM are data that would be readily available in most scenarios. The only additional data needed beyond the survey microdata itself (from respondents) are population-level means and variances for the variables that create the proxy. In most cases these would be available while the survey data is first being analyzed. In fact, in many cases these population margins will be the same as what would be used for post-survey weighting adjustments.
Another reason for the success of the PPMM in our context is that the target population is a relatively stable and clearly defined population for which summary statistics are readily
available. This may not always be the case. For example, when applying the PPMM to pre-election polling we found very strong proxies (\(\rho^{(0)}\geq 0.9\)) (West and Andridge 2023). However, the challenge there was in defining the population of interest. A pre-election poll attempts to make inference to a dynamic population of "likely voters." Finding aggregate data for such a population is a major challenge, unlike the relatively simple task of finding demographic summaries for all adults in the U.S. in the vaccine uptake application.
Overall, this _retrospective_ analysis provides evidence that the PPMM could be used as a method for _prospective_ assessment of the potential for non-ignorable nonresponse bias. In most cases, a benchmark truth will not be available, but this application suggests that the PPMM can in fact capture the truth in a "real data" setting. Our analysis also provides support for Little et al.'s recommendation of \(\phi=0.5\) as a reasonable point estimate, a "moderately non-ignorable" mechanism that falls halfway between the ignorable (\(\phi=0\)) and most extremely non-ignorable (\(\phi=1\)) sensitivity bounds.
## Data Availability
Census HPS microdata are publicly available for download from [https://www.census.gov/data/experimental-data-products/household-pulse-survey.html](https://www.census.gov/data/experimental-data-products/household-pulse-survey.html). Delphi-Facebook CTIS individual-level microdata are available to eligible academic and nonprofit researchers with fully executed data use agreements, see [https://dataforgood.facebook.com/dfg/docs/covid-19-trends-and-impact-survey-request-for-data-access](https://dataforgood.facebook.com/dfg/docs/covid-19-trends-and-impact-survey-request-for-data-access). The HPS data used in this paper, along with code to replicate the analyses, are available at [https://github.com/randridge/PPMA](https://github.com/randridge/PPMA), along with code only for the Delphi-Facebook analyses. |
2306.01006 | Scaling Evidence-based Instructional Design Expertise through Large
Language Models | This paper presents a comprehensive exploration of leveraging Large Language
Models (LLMs), specifically GPT-4, in the field of instructional design. With a
focus on scaling evidence-based instructional design expertise, our research
aims to bridge the gap between theoretical educational studies and practical
implementation. We discuss the benefits and limitations of AI-driven content
generation, emphasizing the necessity of human oversight in ensuring the
quality of educational materials. This work is elucidated through two detailed
case studies where we applied GPT-4 in creating complex higher-order
assessments and active learning components for different courses. From our
experiences, we provide best practices for effectively using LLMs in
instructional design tasks, such as utilizing templates, fine-tuning, handling
unexpected output, implementing LLM chains, citing references, evaluating
output, creating rubrics, grading, and generating distractors. We also share
our vision of a future recommendation system, where a customized GPT-4 extracts
instructional design principles from educational studies and creates
personalized, evidence-supported strategies for users' unique educational
contexts. Our research contributes to understanding and optimally harnessing
the potential of AI-driven language models in enhancing educational outcomes. | Gautam Yadav | 2023-05-31T17:54:07Z | http://arxiv.org/abs/2306.01006v2 | # Scaling Evidence-based Instructional Design Expertise through Large Language Models
###### Abstract
This paper presents a comprehensive exploration of leveraging Large Language Models (LLMs), specifically GPT-4, in the field of instructional design. With a focus on scaling evidence-based instructional design expertise, our research aims to bridge the gap between theoretical educational studies and practical implementation. We discuss the benefits and limitations of AI-driven content generation, emphasizing the necessity of human oversight in ensuring the quality of educational materials. This work is elucidated through two detailed case studies where we applied GPT-4 in creating complex higher-order assessments and active learning components for different courses. From our experiences, we provide best practices for effectively using LLMs in instructional design tasks, such as utilizing templates, fine-tuning, handling unexpected output, implementing LLM chains, citing references, evaluating output, creating rubrics, grading, and generating distractors. We also share our vision of a future recommendation system, where a customized GPT-4 extracts instructional design principles from educational studies and creates personalized, evidence-supported strategies for users' unique educational contexts. Our research contributes to understanding and optimally harnessing the potential of AI-driven language models in enhancing educational outcomes.
Large Language Models, Instructional Design, GPT-4, Evidence-Based Education, Personalized Learning
## 1 Introduction
The incorporation of large language models, such as GPT-4 [1], in learning engineering offers a range of benefits, including the generation of personalized content, augmentation of existing learning materials, and support in evaluation processes. Despite its potential, GPT-4's reliability can be inconsistent, particularly in complex subject areas, leading to potential inaccuracies and biases. To ensure high-quality learning experiences, a balanced approach combining AI-generated content with human oversight is essential.
Our primary aim is to bridge the gap between empirical educational research and its practical implementation, focusing on utilizing Large Language Models (LLMs) to streamline evidence-based instructional design. This goal is underscored by two comprehensive case studies that illustrate the potential of our approach.
In addition to presenting these in-depth examinations, we also explore future trajectories and limitations inherent in this area of research. By drawing these outlines, we aspire to foster a
deeper understanding of the judicious application of AI-driven language models such as GPT-4 in education. This understanding, in turn, can empower educators to optimize the use of these potent tools in their instructional endeavors.
## 2 Prior Work
The integration of AI-driven language models, like GPT-4, in education, presents numerous advantages, such as the capacity to produce tailored content, enhance existing learning materials, and offer support in evaluation processes [2]. However, while GPT-4 can generate content that appears confident and precise, its reliability may be inconsistent, particularly in complicated subject areas. This can potentially result in incorrect or substandard content [3]. Additionally, biases in AI, originating from the training data and human decision-making, could influence the generated content, resulting in inaccuracies [4]. Expert knowledge in specific fields is crucial in validating and maintaining the quality of AI-generated educational content. Thus, although GPT-4 provides several benefits, a balanced approach that combines AI-generated content with human supervision remains vital to ensure high-quality learning experiences.
Previous research involving large language models has explored their application in educational settings, such as the use of models like GPT-4 for generating questions or providing hints/explanations to students [5, 6, 7]. However, the current literature, to the best of our knowledge, does not extend beyond the creation of single-step open-ended or selected-response questions. Various research studies highlight the effectiveness of active learning strategies such as Predict-Explain-Observe-Explain (PEOE) [8], faded worked examples [9], and self-explanation [10], given certain boundary conditions. Despite their proven efficiency, these assessments are not universally employed due to the time and expertise required to construct them and the challenge of making evidence-based decisions on the optimal strategy to use.
In our work, we explored the application of GPT-4, an AI-powered language model, in the development and assessment of educational content. This exploration has revealed valuable insights into the potential benefits and challenges associated with using GPT-4 to automate the creation of higher-order assessments. By sharing our findings, we aim to provide a well-rounded perspective on our suggestions for optimally harnessing AI technology in educational settings.
## 3 Case Studies
This section details two case studies drawn from my experience as a Learning Engineer at Carnegie Mellon University, where I focused on enhancing courses by incorporating active learning components for students.
### Case Study 1: E-learning Design & Principles
The first case involved a course called E-learning Design & Principles. The instructor's objective was to address the 30 instructional principles outlined in [11], basing their approach on the following learning objective:
Learning Objective: Deliver nuanced and evidence-based guidance regarding the effectiveness of selected instructional principles, considering boundary conditions in a given context.
The selected assessment strategy was a scenario-based predict-observe-explain (POE) method for each instructional principle. However, creating a single case study, specifically for the Worked Example principle (refer Appendix), demanded several days and multiple iterations. Once an example was finalized, we employed one-shot prompting to scale it for other instructional principles. Here is a sample prompt for the spatial contiguity principle:
1. What are the boundary conditions of using spatial contiguity principle? cite references for these boundary conditions where authors have done a study to reach this conclusion based on data and evidence
2. Create assessments in form of predict-explain-observe-explain scenarios for spatial contiguity principle out of [feed previous prompt output here as references] I want to generate assessments in the form of predict-explain-observe-explain scenarios for explaining boundary conditions of when spatial contiguity principle is applicable based on EVIDENCE IN RESEARCH. For every multiple-choice question and short answer, we want to generate feedback. Can you start by giving a detailed study description followed by PEO exercises for each of the references generated above and summary in the end of how these features interact with each other to make decision? Let me write you an example of PEO scenarios for boundary conditions using cited research studies in Worked Example principle then you try writing it for spatial contiguity principle: [feed Worked Example principle case study here as an example from as described in Appendix]
We finalized these prompts after many iterations; for example, one iteration involved emphasizing evidence from research so that the model constructed scenarios from the studies in the cited papers rather than generating hypothetical scenarios. As shown in Figure 1, the outputs for each principle still required iterative cycles with the subject matter expert (instructor). According to our estimate of what would have been required without the automation, the time for subsequent principles was reduced by more than half. This significant reduction is based on our usual turn-around cycle, where we manually find relevant references and create an initial draft.
### Case Study 2: Learning Analytics and Educational Data Science
The second case study pertains to a new course titled "Learning Analytics and Educational Data Science," slated for Fall 2023. There were no pre-existing online components, and the instructor wanted to develop programming 'learn-by-doing' assignments using Jupyter Notebook.
Learning Objective: Implement a predictive model using Python
Our chosen assessment strategy was the use of faded worked examples with feedback. We attempted to leverage data visualization problems that had been developed with GPT-4 for another CMU course using a combination of worked examples and problem-solving practice activities with feedback. However, the worked examples were inappropriate in this context, as students could simply copy-paste solutions, so we focused only on crafting problem-solving activities.
Spatial Contiguity

Kalyuga, S., Chandler, P., & Sweller, J. (1999). Managing split-attention and redundancy in multimedia instruction. Applied Cognitive Psychology, 13(4), 351-371.

In this study, participants were asked to learn about electrical circuits from a diagram that had either integrated or separated text labels. The participants were divided into groups based on their prior knowledge, with some having low prior knowledge (Group A) and others having high prior knowledge (Group B).

Predict - MCQ Prompt: Which group do you believe will benefit more from the integrated text labels in the diagram?

Choices: A. Low Prior Knowledge Group; B. High Prior Knowledge Group; C. Both Groups Equally

Feedback: Answer the question below for feedback.

Explain - MCQ Prompt: Why do you think the group you chose in the previous question will benefit more from the integrated text labels in the diagram?

Choices:

A. The integrated text reduces the cognitive load for learners with low prior knowledge by minimizing the split-attention effect.

B. The integrated text supports the high prior knowledge learners by providing them with a more organized representation of the information.

C. The integrated text supports both groups by reducing the need for learners to search for corresponding information between separate sources.

Feedback:

A. Correct. Integrated text reduces the cognitive load for learners with low prior knowledge by minimizing the split-attention effect, leading to better learning outcomes.

B. Incorrect. High prior knowledge learners can already manage the cognitive load effectively; the integrated text is more beneficial for low prior knowledge learners.

C. Incorrect. Although integrated text can help both groups, it is more beneficial for learners with low prior knowledge, who experience a higher cognitive load.

Observe: The study found that participants with low prior knowledge benefited more from integrated text labels in the diagram, as the labels reduced the cognitive load by minimizing the split-attention effect.

Explain - Short Answer Prompt: Explain the study's outcomes. Why do you think the integrated text labels were more effective for low prior knowledge learners?

Feedback: Integrated text labels were more effective for low prior knowledge learners because they reduced the cognitive load by minimizing the split-attention effect. Learners with low prior knowledge have limited working memory capacity, and having to search for corresponding information between separate sources (text and diagram) can overload their working memory, resulting in poorer learning outcomes. Integrated text labels help manage this cognitive load more effectively.
We iteratively crafted a series of prompts, designed to yield the most effective output through trial and error:
1. Can you give me 2 examples of hands-on exercises that cover the following learning objective "Implement a predictive model using Python" in a module called Classifiers.
2. Can you provide a worked example in Python, including the corresponding code for the following hands-on exercise: [feed one example from the previous prompt output here]
3. (3-5 prompts): [Debug any errors that appear when trying to run the code provided in the Prompt 2 output in Google Colab. The average of 3-5 prompts is based on the development of three exercises using different datasets for the learning objective above.]
4. For each step in the Jupyter Notebook, I want to create practice activities like these: [examples provided in the appendix].
Figure 1: Example of Predict-Explain-Observe-Explain activity created by GPT-4, focusing on the Spatial Contiguity Principle for one cited reference. A subject matter expert has added annotations to evaluate the quality of the content. The figure highlights both the strengths and limitations of using GPT-4 generated content in this context.
Convert the following code into the above format, where [code for this step] is the code students need to enter, with test cases to verify whether students entered it correctly.
* (Used only if a test case inadvertently revealed the answer in a previous step) Can you use more complex combinations to check for correct usage of the above step without giving away the answer if students actually read the test cases?
We first attempted to use the same one-shot prompting strategy as in the previous case study but found that few-shot prompting yielded better results. Interestingly, GPT-4 did not carry the Altair library over into the output Python code shown in Figure 2, even though all of the examples in the Appendix used it. We only edited a few instructions where steps such as reading the datasets or training classifiers were not suitable for this format and were already covered by the students' prior knowledge (a minimal sketch of one such activity cell is shown below).
Figure 2: A segment of a Jupyter Notebook showcasing a sequence of practice activities that were designed with the aid of GPT-4. The objective of these activities is to help students learn how to identify at-risk students using predictive models in Python.
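For illustration, the following is a minimal sketch (not the course's actual notebook) of one practice-activity cell in the style described above: students re-enter the code for a single step, and assert-based test cases verify the step without revealing the answer to later steps. The dataset, column names, and thresholds are invented for this example.

```python
# A minimal sketch of a faded-worked-example practice cell: students fill in
# one step, and the test cases check it without giving away the next step.
# The toy dataset and column names are illustrative only.
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy stand-in for the course dataset of student activity features.
df = pd.DataFrame({
    "logins_per_week": [1, 5, 2, 7, 0, 4, 6, 3, 2, 5],
    "avg_quiz_score":  [55, 80, 60, 90, 40, 75, 85, 65, 50, 78],
    "at_risk":         [1, 0, 1, 0, 1, 0, 0, 1, 1, 0],
})
features, target = ["logins_per_week", "avg_quiz_score"], "at_risk"

# --- STUDENT CODE: split the data into 80% train / 20% test ---
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df[target], test_size=0.2, random_state=42
)
# --- END STUDENT CODE ---

# Test cases: verify this step without revealing later steps.
assert len(X_train) + len(X_test) == len(df), "Use every row exactly once."
assert len(X_test) == 2, "Hold out 20% of the 10 rows for testing."
assert list(X_train.columns) == features, "Keep only the feature columns."
print("Step check passed - continue to training the classifier.")
```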
## 4 Prompt Engineering - Best Practices for Instructional Design
Drawing from our experiences with GPT-4 in educational content creation, we have garnered invaluable insights into the potential advantages and obstacles of integrating AI in education. The lessons we've learned and their implications can guide educators and instructional designers to successfully implement AI-driven language models such as GPT-4, maximizing benefits while mitigating potential challenges. Here are our proposed best practices when utilizing large language models:
### Utilizing Templates for Instructional Design Tasks
Prior research [12] indicates that as LLMs become more powerful, employing several examples (few-shot prompting) might not be as effective as zero examples (zero-shot prompting). Our case studies demonstrated the usefulness of examples in some prompts and zero-shot prompting in others. For complex tasks, like assessments involving specific instructional design principles, using well-defined examples in the form of templates can improve the quality of the generated output. Templates help enhance the structure and consistency of the materials generated by AI-driven language models, promoting a more streamlined content creation process.
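As an illustration of template use, the sketch below assembles a one-shot prompt from a fixed template plus a finalized example, in the spirit of the Case Study 1 prompts. The template wording is paraphrased, and the API call assumes the openai Python package (v1-style client), which may differ from the setup actually used.

```python
# A sketch of template-driven one-shot prompting in the spirit of Case Study 1.
# The template text is paraphrased from the prompts above; adapt the client
# call to your own environment.
from openai import OpenAI

TEMPLATE = """I want to generate assessments in the form of
predict-explain-observe-explain (PEOE) scenarios explaining the boundary
conditions of the {principle} principle, based on EVIDENCE IN RESEARCH.
For every multiple-choice question and short answer, generate feedback.

Here is an example of PEOE scenarios for the {example_principle} principle:
{example_peoe}

Now write PEOE scenarios for the {principle} principle using these references:
{references}
"""

def build_prompt(principle, references, example_principle, example_peoe):
    # A fixed template keeps the structure consistent across all 30 principles.
    return TEMPLATE.format(principle=principle, references=references,
                           example_principle=example_principle,
                           example_peoe=example_peoe)

prompt = build_prompt("spatial contiguity",
                      "<output of the boundary-conditions prompt>",
                      "worked example",
                      "<finalized Worked Example PEOE case study>")

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
reply = client.chat.completions.create(
    model="gpt-4", messages=[{"role": "user", "content": prompt}])
print(reply.choices[0].message.content)
```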
### Fine-Tuning for Novel Instructional Problems
When creating new problems similar to the input's problem structure, a lower temperature value in the prompt can maintain a focus on the same knowledge components. Conversely, for diversity and problems in various contexts, such as a story or equation variable problems, a higher temperature can facilitate the broader transfer [13].
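A minimal sketch of this trade-off is shown below: the same request issued at a low and at a high temperature (openai v1-style client assumed; the prompt and values are illustrative rather than taken from the case studies).

```python
# A sketch of temperature tuning: low temperature keeps generated problems
# close to the input's knowledge components; a higher one encourages transfer
# to new contexts, as discussed above. Prompt and values are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
prompt = "Write a practice problem isomorphic to: solve 3x + 5 = 20 for x."

def generate(temperature):
    reply = client.chat.completions.create(
        model="gpt-4",
        temperature=temperature,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

print(generate(0.2))   # near-copy of the source problem structure
print(generate(1.0))   # more story/context variation, broader transfer
```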
### Handling Unexpected Output
At times, LLMs may not behave as expected, even if all examples follow the same pattern. To counter this, consider using a lower temperature when the output should stay close to the input, or define explicit rules in the prompt. Defining the rules that are needed involves iterating on your prompt, identifying the patterns the model struggles with, and explicitly stating these as rules in your prompt.
### Implementing LLM Chains for Multi-step Instructional Tasks
Although the surface quality of AI-generated educational content may be tempting, it's often best to break the task into the smallest subtasks initially. Once you have established the output quality, you can consider combining them, provided the interaction between them doesn't compromise the output. This approach, known as LLM chaining, helps improve output quality by avoiding the pitfalls of asking LLMs to handle multiple or nested tasks in one prompt [14].
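The sketch below illustrates a two-step chain in the spirit of the Case Study 1 prompts: the first call gathers boundary conditions and references, and its output is fed into a second call that drafts the PEOE assessment, rather than asking for both in one nested prompt (openai v1-style client assumed; wording paraphrased).

```python
# A sketch of a two-step LLM chain: each call handles one small subtask, and
# the first call's output is passed into the second.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt):
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

# Step 1: gather evidence only.
references = ask(
    "What are the boundary conditions of the spatial contiguity principle? "
    "Cite studies that reached these conclusions based on data and evidence."
)

# Step 2: draft the assessment, constrained to the evidence from step 1.
assessment = ask(
    "Using only these references:\n" + references + "\n"
    "Write predict-explain-observe-explain scenarios, with feedback for every "
    "multiple-choice question and short answer."
)
print(assessment)
```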
### Citing References in AI-Generated Instructional Materials
Including credible sources or citations in AI-generated content is not only beneficial for enhancing its accuracy and credibility but also simplifies the process of verifying expert knowledge.
write learning objectives for a course on educational game design
1. Develop an understanding of the principles of educational game design.
2. Identify the key elements of effective game-based learning.
3. Analyze existing educational games and evaluate their effectiveness in meeting learning objectives.
4. Design and develop original educational games that align with specific learning objectives.
5. Apply game design concepts and principles to non-digital learning experiences.
6. Evaluate the effectiveness of educational games in meeting learning objectives and providing engaging learning experiences.
7. Collaborative with peers to develop, playfust, and improve educational games.
During the first case study, the inclusion of verifiable references served as a tool for cross-checking the validity of both AI-generated content and Subject Matter Expert (SME) assertions. For instance, an expert initially identified an output from ChatGPT as incorrect, but upon cross-verification, the content was found to be accurate. This highlights that SMEs, while experienced, may not possess comprehensive knowledge in all aspects. Therefore, not only does referencing guard against the occasional fabrication of references by Language Models like GPT-4, it also provides a mechanism to identify and correct potential misconceptions held by SMEs. This approach ensures the creation of robust and reliable AI-generated instructional materials.
### Evaluating Output with AI Critique and SMEs
AI critique can augment content quality as shown in Figure 3. However, content should be verified with Subject Matter Experts (SMEs) before incorporating it. Despite LLM's ability to critique and improve its own output, users should critically evaluate generated content.
### Creating Rubrics and Grading
AI can expedite rubric creation and grading, but educator involvement is crucial for reliability and fairness. Combined with a human educator, GPT-4 can enhance grading consistency and remove bias.
Figure 3: ChatGPT tasked to generate learning objectives for an educational game design, followed by critiquing one of them. The critique provided by ChatGPT gives meaningful recommendations for improvement, thereby demonstrating the model’s self-awareness and its capacity to evaluate its own output.
## 5 Future Work
In our future work, we aim to develop a sophisticated recommendation system, essentially "closing the loop" in our educational technology solution.
At the heart of our proposed system is a customized version of GPT-4, which we use to extract crucial information from empirical educational studies such as each paper's instructional design principles and identify the conditions under which they thrive. This extraction process enables us to encode and store data such as the educational domain, cognitive load, and learners' prior knowledge into a dedicated database, primed for future retrieval and application.
Harnessing the capabilities of GPT-4, we then create multiple assessment applications for each instructional design principle archived in our database. We adopt a few-shot prompting approach to devise these examples, aiming to guide users in effectively applying these principles across a broad range of educational contexts.
Our recommendation system is designed with the user's specific needs in mind. Users can input their unique instructional design requirements, including target learners, learning objectives, subject area, and other pertinent conditions. Our GPT-4 based system uses this information to generate evidence-supported instructional design strategies, tailored to the user's specific context. Each recommended strategy is paired with example applications and supported by original references from the studies they were drawn from, enabling users to further verify and delve into the source material.
The overarching goal of this endeavor is to democratize instructional design expertise, making it widely accessible to instructional designers and teachers alike. By doing so, we aim to streamline the design process, enhance educational outcomes, and ultimately drive forward the future of educational technology.
## Acknowledgments
I extend my sincere gratitude to Prof. Ken Koedinger and John Stamper, whose subject matter expert guidance was indispensable to the success of this research. Their profound wisdom and unwavering support enriched this work immeasurably. I also want to acknowledge students from Human Learning and How to Optimize It for their contribution to crafting the Model Human Learner document which inspired this idea.
|
2307.00031 | Nonlinear Topological Mechanics in Elliptically Geared Isostatic
Metamaterials | Despite the extensive studies of topological systems, the experimental
characterizations of strongly nonlinear topological phases have been lagging.
To address this shortcoming, we design and build elliptically geared isostatic
metamaterials. Their nonlinear topological transitions can be realized by
collective soliton motions, which stem from the transition of nonlinear Berry
phase. Endowed by the intrinsic nonlinear topological mechanics, surface polar
elasticity and dislocation-bound zero modes can be created or annihilated as
the topological polarization reverses orientation. Our approach integrates
topological physics with strongly nonlinear mechanics and promises multi-phase
structures at the micro and macro scales. | Fangyuan Ma, Zheng Tang, Xiaotian Shi, Ying Wu, Jinkyu Yang, Di Zhou, Yugui Yao, Feng Li | 2023-06-30T02:45:03Z | http://arxiv.org/abs/2307.00031v2 | # Nonlinear Topological Mechanics in Elliptically Geared Isostatic Metamaterials
###### Abstract
Despite the extensive studies of topological systems, the experimental characterizations of strongly nonlinear topological phases have been lagging. To address this shortcoming, we design and build elliptically geared isostatic metamaterials. Their nonlinear topological transitions can be realized by collective soliton motions, which stem from the transition of nonlinear Berry phase. Endowed by the intrinsic nonlinear topological mechanics, surface polar elasticity and dislocation-bound zero modes can be created or annihilated as the topological polarization reverses orientation. Our approach integrates topological physics with strongly nonlinear mechanics and promises multi-phase structures at the micro and macro scales.
_Introduction_--Since the discovery of topological insulators [1; 2; 3], an increasing interest in topological states has been addressed by the condensed matter community. Topological band theory has proliferated across classical structures, such as topological photonics [4; 5; 6; 7; 8; 9; 10], electrical circuits [11; 12; 13; 14], acoustics [15; 16; 17; 18], plasmonics [19; 20] and mechanics [21; 22; 23; 24; 25; 26; 27; 28; 29], which exhibit unconventional boundary responses. Most works are hitherto limited to linear regime, whereas the study of nonlinear topological systems remains sporadic.
To date, nonlinear topological metamaterials have been investigated in Kerr-nonlinear photonics [4; 5; 6; 8; 9], weakly nonlinear electrical [30] and mechanical systems [31; 32; 33]. Despite the efficacy of the superficial Kerr-nonlinear topological invariant inherited from linear theories, it remains unclear whether these weakly nonlinear excitations remain topological for larger amplitudes [8; 9; 32; 33; 34; 35; 36]. The symmetry-violating nonlinearities may deteriorate the topological robustness by causing mode instabilities [37; 38] and frequencies mixing with bulk bands [8; 9; 39]. On the other hand, nonlinear excitations also provide unique features absent in linear systems, such as non-reciprocal phase transition [31], moving domain walls [36; 40], and transporting mechanical states [32; 34]. However, the rigorous experimental demonstrations of strongly nonlinear topological phases and properties, are yet elusive [41].
Herein, we invoke and experimentally demonstrate strongly nonlinear topological transitions in a mechanical prototype by assembling elliptic gears on beams. Recent advances in mechanical metamaterials [42; 43] have showcased functionalities of _circular_ gears, including shape morphing and topological mechanics in the linear elastic regime. In contrast, our work exploits the geometric nonlinearity of _elliptic_ gears to uncover unprecedented physical phenomena when topology encounters built-in nonlinearity. In the elliptically geared one-dimensional (1D) system, the topological index [41] called quantized nonlinear Berry phase guarantees the nonlinear topological modes, and the geared lattice exhibits soliton-induced topological transition.
We use 1D geared metabeams to construct highly adjustable topological metamaterials in 2D. These metamaterials can undergo phase transitions in their topologically polarized mechanical properties, which are enabled by the interplay between the nonlinear geometric transition in the metabeams, and the soft shearing of the whole structure, called the Guest mode [44]. While Guest modes are always nonlinear, their physical consequences are remarkably distinct from the nonlinear geometry of metabeams, which endows the topological phase transition of the metabeam mechanics, and are described by the nonlinear topological index. The term "highly flexible topological metamaterials in 2D" specifically refers to the intrinsic nonlinear topological mechanics embedded in every metabeam, whereas Guest modes simply link them to the global topological polarization of the whole structure. The geometric interplay between Guest modes and nonlinear topological transition in metabeams, allows for a complete switch in position between surface softness and rigidity as the topological polarization reverses its direction, which is a significant departure from previous research that only observed partial exchanges of topological mode localization [42]. Within our metamaterial, topological floppy modes or states of self-stress can be positioned near dislocations, and the bounded softness or rigidity can be annihilated or created as the topological polarization reverses direction. The reversal of the polarization is induced by the Guest mode, which integrates nonlinear topological mechanical transitions in every metabeam.
_Strongly nonlinear topological floppy modes in 1D geared chains_--Our prototype consists of a chain of elliptic gears coupled to their nearest neighbors, which are 3D-printed using photosensitive resin (Fig.1(a)). Every gear can rotate freely about pivots on their right-sided focal points, and the nearest-neighbor gears keep engaged during rotations. The rotation angles of the gears are
re-written in an alternative way \(\theta_{n}\rightarrow(-1)^{n}\theta_{n}\) for the remainder of this letter. The elastic energy of the chain is expressed as \(V=\sum_{n}k\ell_{n}^{2}/2\), where \(k\) is the elastic constant and \(\ell_{n}\) is the sliding distance between adjacent gears, which reflects the teeth deformation. The sliding distance is given by \(\ell_{n}=s_{e}(\theta_{n})-s_{-e}(\theta_{n+1})\), where \(s_{e}(\theta)\) is the arc length of the contacting point traveling along the ellipse with eccentricity \(e\) and rotation angle \(\theta\) (see Supplementary Information [45]). We define the degree of nonlinearity as \(|s_{e}(\theta)/\theta a(1-e)-1|\), where \(a\) denotes the major semi-axis of the ellipse. Circular gears with \(e=0\) or small rotation angles with \(\theta\approx 0\) lead to purely linear arc length in \(\theta\), resulting in a vanishing degree of nonlinearity that demonstrates the linear elastic regime [42]. However, for gears with large eccentricity, like \(e=0.4\) and rotation angles close to \(\theta\approx\pi\), as shown in Fig.1(c), the degree of nonlinearity reaches \(0.6\), which manifests _strong nonlinearity_ in the gear mechanics.
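For illustration, the following numerical sketch evaluates the arc length and the degree of nonlinearity defined above. It assumes that \(s_{e}(\theta)\) is the ellipse arc length swept by the contact point as the gear rotates by \(\theta\) about its pivot focus, starting from the vertex nearest the pivot (an assumption consistent with the quoted \(e=0\) and small-\(\theta\) limits); it reproduces a degree of nonlinearity close to 0.6 at \(e=0.4\), \(\theta=\pi\).

```python
# A numerical sketch of the quantities defined above, assuming s_e(theta) is
# the ellipse arc length swept by the contact point as the gear rotates by
# theta about its pivot focus, starting from the vertex nearest the pivot.
# Units: a = 1.
import numpy as np
from scipy.integrate import quad

a, e = 1.0, 0.4

def r(phi):
    # Focal (polar) equation of the ellipse; r(0) = a(1 - e).
    return a * (1.0 - e**2) / (1.0 + e * np.cos(phi))

def dr(phi):
    return a * (1.0 - e**2) * e * np.sin(phi) / (1.0 + e * np.cos(phi))**2

def arc_length(theta):
    # s_e(theta) = integral from 0 to theta of sqrt(r^2 + (dr/dphi)^2) dphi.
    value, _ = quad(lambda p: np.hypot(r(p), dr(p)), 0.0, theta)
    return value

def degree_of_nonlinearity(theta):
    return abs(arc_length(theta) / (theta * a * (1.0 - e)) - 1.0)

for theta in (0.1, np.pi / 2, np.pi):
    print(f"theta = {theta:5.3f}  degree of nonlinearity = "
          f"{degree_of_nonlinearity(theta):.2f}")
# The value at theta = pi is close to 0.6, matching the text for e = 0.4.
```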
Gear rotations typically induce both elastic deformation and rotational kinetic energy. However, floppy modes refer to zero-frequency angular displacements that do not deform elastic bonds or gear teeth. Therefore, all \(\ell_{n}\) must be zero, resulting in zero potential energy. Finally, since floppy modes occur very slowly, their zero-frequency static nature means that the contribution of kinetic energy is also negligible.
The topological nature [42; 22] of the floppy modes is understood through the chain mechanics under periodic boundary condition (PBC). In the linear elastic regime, the mechanical properties are described by a gapped two-band model [3; 22]. As the amplitudes increase, the frequencies of plane-wave nonlinear traveling modes deviate from Bloch waves, leading to a decrease in the nonlinear bandgap. At the critical amplitude \(A_{c}=\pi\), a floppy mode penetrates into the chain, as shown in Fig.1(e), which has a zero frequency indicating the closure of the nonlinear bandgap [41]. Thus, we define the amplitude-controlled topological invariant, namely quantized nonlinear Berry phase [41], as
\[\gamma(A)=\pi[1+\mathrm{sgn}(e)\,\mathrm{sgn}(A_{c}-A)]/2, \tag{1}\]
where \(A\) is the amplitude of plane-wave nonlinear traveling waves. This index distinguishes between different topological phases below and above \(A_{c}=\pi\), reflecting topologically distinct nonlinear responses on the open boundaries of the lattice.
In the geared metamaterial shown by Figs.1(c-e), open boundary conditions (OBCs) are used, severing the connection on the periodic boundary to create an excess degree of freedom, which is manifested as the floppy mode. In Fig.1(c), the left boundary exhibits a nonlinear topological floppy mode when \(\theta_{1}<A_{c}\), reflecting a topologically-polarized metabeam with \(\gamma(A<A_{c})=\pi\). This topological polarization can be reversed by activating a soliton that propagates through the chain (Fig.1(d)). When all gear rotations approach \(A_{c}=\pi\) in Fig.1(e), \(\gamma(A=A_{c})\) becomes ill-defined and induces topological phase transition. The floppy mode with gear rotations beyond \(\pi\) is localized at the right boundary. Floppy modes account for the local stiffness of lattice boundaries, as evidenced by the highly asymmetric rigidity in Fig.1(b).
The topological transition amplitude can be customized using other gear shapes, such as the triangle-trefoil geared chain [45], whose transition amplitude \(A_{c}=\pi/3\) stems from its \(C_{3}\)-rotational symmetry. The idea of metabeams combines topological mechanics and strongly nonlinear transitions into a single nonlinear topological index, which goes beyond linear and weakly nonlinear topological mechanics [42; 33; 22; 34]. Additionally, the compact and highly tunable designs allow for the assembly of higher-dimensional mechanical metamaterials and the customization of their rich topological mechanical phases.
_Geared topological metamaterials in 2D--_We utilize the 1D prototype of elliptically geared metabeams to construct 2D highly flexible metamaterials in a generalized honeycomb lattice. In Fig.2(a), three types of metabeams with the initial orientations \(\theta_{1}=7\pi/6\), \(\theta_{2}=-\pi/6\), and \(\theta_{3}=\pi/2\), are prepared by assembling \(N_{1}=4\), \(N_{2}=4\), and \(N_{3}=6\) elliptic gears on top of them, where the gear eccentricities are \(e_{1}=e_{2}=0.4\), and \(e_{3}=0\) (circular gears), respectively. The transmission rates, denoting
Figure 1: Elliptically geared chain and its mechanical properties. (a) The gear parameters: thickness \(5\) mm, major axis \(2a=30\) mm, minor axis \(2b=27.5\) mm, eccentricity \(e=0.4\), number of teeth \(21\), and number of gears \(N=12\). (b) Torque measurements of the gears at floppy and rigid boundaries. (c) The floppy mode localized at the left boundary. (d) A soliton that penetrates through the chain. (e) The end state of soliton propagation is that all gears rotate uniformly by \(\pi\). This indicates a shift in the localization edge of the nonlinear topological floppy mode, where the left boundary becomes rigid and the right boundary becomes soft.
the rotational speed ratio of gears at the last site to the first, are \(\lambda_{1}=1/\lambda_{2}=12.7\) and \(\lambda_{3}=1\) for the initial configurations of the metabeams. The primitive vectors \(\mathbf{a}_{1}\), \(\mathbf{a}_{2}\) and reciprocal vectors \(\mathbf{b}_{1}\), \(\mathbf{b}_{2}\) satisfy \(\mathbf{a}_{i}\cdot\mathbf{b}_{j}=2\pi\delta_{ij}\). The above parameters are chosen for experimental convenience (see [45] for general choices).
The unit cell in Fig.2(b) comprises three metabeams joined at a vertical hinge that penetrates through three co-axial gears, preventing relative displacements and rotations (see Fig.S10 for experimental manufacturing details [45]). Each site features two translational and one rotational degrees of freedom (\(N_{\rm DOF}=3\)), and every metabeam offers one longitudinal constraint and one transverse constraint provided by the sliding distance (\(N_{\rm con}=2\)). The coordination number of the geared honeycomb lattice is \(z=2N_{\rm DOF}/N_{\rm con}=3\), which results in a mechanical frame with balanced degrees of freedom and constraints, ensuring that it stays at the isostatic point.
The mechanical properties are described by the compatibility matrix \(\mathbf{C}\), which maps the displacements and rotations of the gears at each site, to the elongations and sliding distances of the beams. In the analysis of floppy modes, both kinetic and potential energy are negligible. Thus, floppy modes constitute the null space of \(\mathbf{C}\), indicating that all beams and gear teeth remain undeformed. Self-stress states, which are characterized by the null space of \(\mathbf{C}^{\top}\) (\(\top\) denotes matrix transpose), describe non-zero tensions that allow for vanishing net forces and torques on each gear. Spatially repetitive frames have the property that for every wavevector \(\mathbf{k}\) in Brillouin zone, the mechanical properties are governed by the Fourier-transformed compatibility matrix \(\mathbf{C}(\mathbf{k})\). The topological mechanical phase is described by the winding numbers
\[\mathcal{N}_{i}=-\frac{1}{2\pi}\oint_{C_{i}}d\mathbf{k}\cdot\nabla_{\mathbf{k}}\,{\rm Im }\,\ln\det\mathbf{C}(\mathbf{k}),\quad i=1,2, \tag{2}\]
where \(C_{i}=\mathbf{k}\rightarrow\mathbf{k}+\mathbf{b}_{i}\) is a closed-loop trajectory in reciprocal space. The winding numbers in the generalized honeycomb lattice remain invariant under arbitrary choices of \(C_{i}\) due to the fully-gapped phonon spectra, except for the \(\mathbf{k}=0\) point [22]. These well-defined invariants constitute the vector \(\mathbf{R}_{\rm T}=\sum_{i=1,2}\mathcal{N}_{i}\mathbf{a}_{i}\), called the topological polarization, which characterizes the topological phases of the mechanical metamaterial.
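As an illustration of how Eq. (2) can be evaluated numerically, the sketch below accumulates the phase of \(\det\mathbf{C}(\mathbf{k})\) along a closed loop. The scalar factor \(c(k)=t_{1}+t_{2}e^{ik}\) used here is a one-dimensional stand-in, not the actual honeycomb compatibility matrix; its winding is known to be nonzero exactly when \(|t_{2}|>|t_{1}|\).

```python
# A minimal numerical sketch of Eq. (2): accumulate the phase of det C(k) along
# a closed loop and divide by 2*pi. The scalar factor c(k) = t1 + t2*exp(i k)
# is a one-dimensional stand-in (not the actual honeycomb C(k)); its winding is
# nonzero exactly when |t2| > |t1|, which the routine should reproduce.
import numpy as np

def winding_number(det_c_on_loop):
    """Winding of det C(k) sampled on a closed loop (last sample = first)."""
    phase = np.unwrap(np.angle(det_c_on_loop))
    return -(phase[-1] - phase[0]) / (2.0 * np.pi)   # sign convention of Eq. (2)

ks = np.linspace(0.0, 2.0 * np.pi, 2001)
for t1, t2 in [(1.0, 0.4), (0.4, 1.0)]:
    det_c = t1 + t2 * np.exp(1j * ks)
    print(f"t1 = {t1}, t2 = {t2}:  N = {winding_number(det_c):+.2f}")
# Expected: N close to 0 for |t1| > |t2| and N = -1 for |t2| > |t1|
# (the overall sign follows the convention chosen in Eq. (2)).
```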
Isostatic lattices can host nonlinear and uniform soft strains of the whole structure, known as Guest modes, that reversibly evolve the geometry and change the topological polarization without causing any elastic energy. To mark the rotation angle of the Guest mode, we use the bond orientation of the first-type metabeams, denoted as \(\theta_{1}\). However, the gears also play a role by inducing a degree of nonlinearity within the metabeams, which can be quantified by the transmission rate \(\lambda_{1}\)[45]. The topological polarization changes correspondingly, and follows the trajectory shown in the multi-phase diagram of Fig.2(c). In [45], we also plot the topological phase diagram in terms of the Guest mode angle and degree of nonlinearity. Figs.2(d-g) depict four lattice configurations that evolve continuously from Fig.2(a) via the Guest mode, with corresponding gear orientations and topological polarizations displayed. We note that the elasticity analysis in the 2D metamaterial is based on a compatibility formalism that assumes small displacements from a reference configuration determined by the degree of the nonlinear Guest mode. The band structure of the compatibility formalism is based on linear elastic theory.
Bulk-boundary correspondence states that topological polarization reveals the localization of floppy modes on open boundaries. The number density of floppy modes per supercell is \(\nu=\frac{1}{2\pi}\mathbf{G}\cdot(\mathbf{R}_{\rm T}+\mathbf{R}_{\rm L})\), where \(\mathbf{G}\) de
Figure 2: Topological phases of the 2D geared metamaterial. (a), (b) Top and side views of the unit cell. (c) The multi-phase diagram of the lattice topology, under the parameters constraints of \(\theta_{2}=\pi-\theta_{1}\) and \(\lambda_{2}=\lambda_{1}^{-1}\) in (a). Three identified phases are characterized by \(\mathbf{R}_{\rm T}\) and represented by different colored regions. White dots correspond to lattice configurations shown in panels (d-g). (d-g), Geometric configurations of the 2D lattices, where the topological polarizations are \(\mathbf{R}_{\rm T}=\mathbf{a}_{1}-\mathbf{a}_{2}\) in (d), ill-defined \(\mathbf{R}_{\rm T}\) in (e), \(\mathbf{R}_{\rm T}=-\mathbf{a}_{2}\) in (f), and \(\mathbf{R}_{\rm T}=-\mathbf{a}_{1}-\mathbf{a}_{2}\) in (g), respectively.
notes the reciprocal lattice vector whose normal points outwards the open surface. As \(\mathbf{R}_{\rm T}\) is gauge-dependent upon the unit cell choice, we invoke \(\mathbf{R}_{\rm L}\), namely the local polarization, that cancels the gauge dependence of \(\mathbf{R}_{\rm T}\). Thus, we plot the number densities of topological floppy modes on the left and right open boundaries using colors of the metabeams, which are \((\nu_{\rm l},\nu_{\rm r})=(0,2),(1,1),(2,0)\) in Figs.2(d,f,g), respectively. The standard (linear) calculation of the topological polarization of the honeycomb lattice shows that the floppy modes reside only on one edge in Fig.2(d) and are completely transferred to the other edge in Fig.2(g) when the Guest mode is activated.
The asymmetric distribution of floppy modes governs the contrasting local stiffness on opposing boundaries. Surfaces clear of floppy modes, i.e., \(\nu=0\), are as rigid as the inner bulk of the lattice, whereas boundaries that host topological floppy modes with \(\nu\neq 0\) exhibit softness. Fig.2(d) (Fig.2(g)) shows a much softer (harder) right boundary, while Fig.2(f) reflects comparable stiffness on both boundaries. Figs.2(d) and (g) manifest two honeycomb lattices with identical bond orientations but opposite gear orientations (enlarged figures in [45]).
When metabeams are joined together to create a hexagonal lattice, the Guest mode associated with geometric distortions of the lattice is coupled to the solitons of the individual beams in such a way that a continuous activation of the Guest mode restores the lattice to its initial configuration while reversing the polarizations of all of the beams. As a result, the global shearing Guest mode and the soliton in every metabeam together ensure \(180^{\circ}\) gear rotations, which leads to two distinct metabeam configurations in one lattice configuration, and the complete reversal of stiffness contrast on opposing boundaries. This property is in stark contrast to previous works [46; 47; 33; 42], where topological transitions are induced by Guest modes alone, leading to partial migration of floppy modes.
To measure boundary stiffness, we construct the structure in Figs.3(a-c) with fixed boundaries perpendicular to \(\mathbf{b}_{2}\) and open boundaries perpendicular to \(\mathbf{b}_{1}\), whose manual procedure is elaborated in [45]. When \(\mathbf{R}_{\rm T}\) points to the lower right, the right boundary hosts two floppy modes per supercell (\(\nu_{\rm r}=2\)) while the left edge shows no floppy mode (\(\nu_{\rm l}=0\)), in agreement with Fig.2(d). Fig.3(a) shows the associated metabeam configuration, the direction of the loading force, and the large deformations in gray shadow at the right boundary. In contrast, deformations on the left side are small under the same loading strength. Fig.3(d) shows 8 cycles of force-displacement measurements for the structure in Fig.3(a), where the deformations on the right boundary (blue curves) are much larger than that of the left edge (red curves). Using the Guest mode, which combines gears twist and bond orientation changes, we manipulate lattice topological phase transitions that stem from the nonlinear topological transition of every metabeam. In Fig.3(b), both boundaries host floppy modes, corresponding to (\(\nu_{\rm l}=1\), \(\nu_{\rm r}=1\)) in Fig.2(f), and exhibit comparable stiffness as measured in Fig.3(e). The Guest mode continues to transform the lattice geometry and \(\mathbf{R}_{\rm T}\), evolving floppy modes to the final state (\(\nu_{\rm l}=2\), \(\nu_{\rm r}=0\)) of Figs.3(c,f), where the stiffness ratio between the left and right surfaces is reversed compared to Figs.3(a,d). Hysteresis in the displacement (Figs.3(b,d,f)) arises from gear clearance, while hysteresis in the measured force curve occurs due to friction.
Topologically protected mechanical zero modes can be localized around lattice dislocations. This effect stems from the interplay between two Berry phases: the dipole moment \(\mathbf{d}\) that is perpendicular to the Burgers vector of the dislocation, and the topological polarization \(\mathbf{R}_{\rm T}\) of the lattice. Fig.4 experimentally constructs a topological dislocation, around which the lattice geometry is locally modified. The number of localized mechanical zero modes around this dislocation is computed via \(\nu_{\rm dislocation}=\mathbf{R}_{\rm T}\cdot\mathbf{d}/V_{\rm cell}\), where \(\nu_{\rm dislocation}>0\) (\(\nu_{\rm dislocation}<0\)) reveals the number of mechanical floppy modes (states of self-stress), and \(V_{\rm cell}\) denotes the area of the unit cell. In Fig.4, the dislocation-constrained floppy modes and states of self-stress can exchange po
Figure 3: The force-displacement measurements of the geared metamaterial in three different topological phases. (a-c) Nonlinear gear geometry and Guest mode together induce changes in \(\mathbf{R}_{\rm T}\), with zooms depicting the geometry of the unit cell. Load arrows (blue and red) indicate push and pull tests. (d-f) Force-displacement curves with positive direction taken as \(+\mathbf{a}_{1}\) (right direction) and \(-\mathbf{a}_{1}\) (left direction). Colored lines show average values and shaded areas indicate standard deviation across eight measurements.
sitions as the polarization \(\mathbf{R}_{\mathrm{T}}\) is thoroughly reversed by the topological transition in the nonlinear mechanics of the metabeams.
Using other gear shapes leads to starkly different mechanics. In the triangle-trefoil-geared honeycomb metamaterial [45], the boundary stiffness is completely reversed when all gears only rotate by \(\pi/3\) on the metabeams they mount on. The Guest-mode-induced lattice geometry cannot reach the auxetic state, which is in stark contrast to Fig.2(f) from the elliptically-geared lattice. Therefore, gear shapes may control topological mechanical properties and manipulate other functionalities, such as negative Poisson ratio [48].
_Conclusions_--We show strongly nonlinear topological mechanics in elliptically geared metamaterials, which are ensured by quantized nonlinear Berry phases. The interplay between soliton-induced nonlinear topological mechanics and the global shearing Guest mode allows for the complete reversal of polar elasticity without disentangling the lattice. Our prototype opens up avenues for gear designs [49, 50, 51, 52] with unconventional functionalities.
_Acknowledgment_--This work is supported by the National Natural Science Foundation of China (Grant Nos. 12102039, 12272040).
|
2307.16647 | Formation, stability, and highly nonlinear optical response of excitons
to intense light fields interacting with two-dimensional materials | Excitons play a key role in the linear optical response of 2D materials.
However, their significance in the highly nonlinear optical response to intense
mid-infrared light has often been overlooked. Using hBN as a prototypical
example, we theoretically demonstrate that excitons play a major role in this
process. Specifically, we illustrate their formation and stability in intense
low-frequency fields, where field strengths surpass the Coulomb field binding
the electron-hole pair in the exciton. Additionally, we establish a parallelism
between these results and the already-known physics of Rydberg states using an
atomic model. Finally, we propose an experimental setup to test the effect of
excitons in the nonlinear optical response | Eduardo B. Molinero, Bruno Amorim, Mikhail Malakhov, Giovanni Cistaro, Álvaro Jiménez-Galán, Misha Ivanov, Antonio Picón, Pablo San-José, Rui E. F. Silva | 2023-07-31T13:27:07Z | http://arxiv.org/abs/2307.16647v1 | Formation, stability, and highly nonlinear optical response of excitons to intense light fields interacting with two-dimensional materials
###### Abstract
Excitons play a key role in the linear optical response of 2D materials. However, their significance in the highly nonlinear optical response to intense mid-infrared light has often been overlooked. Using hBN as a prototypical example, we theoretically demonstrate that excitons play a major role in this process. Specifically, we illustrate their formation and stability in intense low-frequency fields, where field strengths surpass the Coulomb field binding the electron-hole pair in the exciton. Additionally, we establish a parallelism between these results and the already-known physics of Rydberg states using an atomic model. Finally, we propose an experimental setup to test the effect of excitons in the nonlinear optical response
## I Introduction
Emergence is a fundamental concept in condensed matter physics [1]. Pertinent examples include superconductivity [2; 3], magnetism [4], and topological phases [5; 6]. Another example is the exciton [7]: the quasiparticle created by the attraction between an electron excited to the conduction band and the hole left in the valence band. In 2D materials, excitons have a particularly significant binding energy \(E_{\mathrm{bind}}\sim 1\) eV and play a dominant role in the linear optical response [8]. What should one expect for the highly nonlinear optical response, when 2D solids interact with intense low-frequency laser fields?
This question is particularly pertinent for high harmonic generation (HHG) in solids, which has emerged as an important direction in ultrafast condensed matter physics [9; 10; 11]. Will excitons provide a strong contribution to the intense-field driven nonlinear response, or will they simply not survive the strong laser field, which typically exceeds the Coulomb field that binds the electron and the hole together? Answering this question is important both fundamentally and for applications. At the fundamental level, HHG offers a unique window into the electronic structure and dynamics in trivial, topological, and strongly correlated solids far from equilibrium [12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30]. Interpreting these dynamics without understanding the role of excitons is hardly adequate. For applications, if excitons can generate bright high harmonics, their role would be important when using high harmonics as bright solid-state sources of ultrashort VUV-XUV radiation [31; 32].
While the entry of intense light fields into condensed matter physics is relatively recent [12], interaction with such fields in general and HHG in particular have been extensively studied in atoms [33]. Rydberg states, the atomic analogues of excitons [34], were found to play a surprisingly important role in strong-field ionization from the ground state (the atomic analogue of electron injection into the conduction band.) Prominent examples are the so-called frustrated tunnelling [35; 36; 37; 38] and the Freeman resonances in multi-photon ionization [39; 38]. The remarkable stability of Rydberg states against intense laser fields, predicted in [40; 41; 42], was confirmed in [43; 44], dramatically demonstrated in [37], and even led to lasing during laser filamentation in dense nitrogen gas [45]. Multiphoton Rydberg excitations have been found to contribute to harmonic emission during the laser pulse [46; 47; 48] and free induction decay after its end [49; 50]. In contrast, apart from the pioneering works [51; 52; 18; 53], the role of excitons in the strong field regime has been generally ignored, with in-depth analysis of their dynamics and the physics of their creation and destruction in strong laser fields lacking until now.
In this work we aim to fill this gap. We show that high harmonic emission can reveal the formation of excitons by both significantly increasing both the overall harmonic yield, by about an order of magnitude, and the emission intensity at energies near excitonic excitations, by an even stronger two orders of magnitude. We also show that shifting the exciton binding energy by using a substrate provides a tell-tale sign of their contributions. Time-resolved analysis of the emission shows the formation dynamics and the remarkable stability of excitons against strong light fields. In spite of the emergent, interaction-induced nature of excitons, we consistently find strong similarity in their strong field dynamics
with that of single-particle Rydberg states. This connection highlights how a non-trivial emergent quasiparticle, such as an exciton in a sea of interacting electrons, can behave much like a single-particle excitation in an atomic gas, even when driven by an intense field.
## II Results
The nonlinear optical response of excitons in 2D materials can be simulated using real-time equation-of-motion techniques. Here we perform simulations for hexagonal boron nitride (hBN); see the Methods section. This specific material was chosen for two reasons: it is a prototypical 2D semiconductor [54], and it is known to host excitonic states with large binding energies [55; 56]. Furthermore, one of its key features is the possibility of engineering the effect of interactions by changing the substrate. Introducing a substrate with a higher dielectric constant effectively screens the electronic interactions, thereby reducing the energy required to bind the electron-hole pair [57; 58], i.e., shifting the first excitonic state closer to the conduction band. This behaviour can be appreciated by looking at the absorption spectrum shown in Fig.1b (for further information see Methods).
To probe this high-harmonic response, we consider a laser pulse in the mid-infrared regime (\(3\,\mu\)m) with an intensity of \(1.16\,\) TW/cm\({}^{2}\), a total duration of \(20\) optical cycles, and a \(\cos^{2}\) envelope. The field is oriented along the \(\Gamma-K\) direction. We note, however, that the effects described here are robust against variations of these parameters.
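For reference, a minimal sketch of such a driving field is given below, using the standard conversion \(E_{0}=\sqrt{2I/(\epsilon_{0}c)}\) for the peak field; the functional form of the envelope and all numerical constants are assumptions for illustration and need not match the authors' implementation.

```python
# A sketch of the driving field under assumed (standard) conventions: the peak
# field follows from E0 = sqrt(2 I / (epsilon_0 c)), and a cos^2 envelope spans
# the full 20-cycle pulse. Constants are in SI units.
import numpy as np

c, eps0 = 2.998e8, 8.854e-12                  # speed of light, vacuum permittivity
wavelength = 3.0e-6                           # 3 um
intensity = 1.16e12 * 1.0e4                   # 1.16 TW/cm^2 in W/m^2
n_cycles = 20

omega = 2.0 * np.pi * c / wavelength          # angular frequency (rad/s)
duration = n_cycles * 2.0 * np.pi / omega     # total pulse length (~200 fs)
E0 = np.sqrt(2.0 * intensity / (eps0 * c))    # peak field (~3e9 V/m)

t = np.linspace(0.0, duration, 20001)
envelope = np.cos(np.pi * (t - duration / 2.0) / duration) ** 2
field = E0 * envelope * np.cos(omega * (t - duration / 2.0))   # along Gamma-K

print(f"photon energy = {6.582e-16 * omega:.3f} eV, peak field = {E0:.2e} V/m")
```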
As a first step, we have performed two calculations, one where excitons are present and another one where excitonic effects are neglected. In Figs. 2(a-b), we show the comparison of the high-harmonic spectrum between the system with and without excitons. In both cases, they display the general trend of HHG in solids, i.e., the amplitude of low-order harmonics decays as the order is increased, until the energy of the harmonics equals the band gap of the material, roughly at the tenth harmonic. At this point, a plateau of high-harmonics emerges, but once they reach their cutoff condition (around the 30th harmonic), an exponential decay of the harmonics ensues [9; 59]. Although the qualitative behavior is similar, the most outstanding difference between the interacting and non-interacting scenarios is the presence of an enhancement in the intensity between the fifth and the eleventh harmonic (gray areas in Figs. 2). This enhancement coincides precisely with the energy of the exciton in hBN. A less pronounced but also visible difference is the enhancement of harmonics in the plateau region.
As mentioned above, excitons are bound states with energies inside the gap of the single-particle spectrum, situated between the valence and conduction bands. Thus, the application of a strong laser field to the system results in an increased probability for electrons to transition from the valence to the conduction bands, owing to the availability of additional channels provided by the excitonic states. In the transition process, excitons thus play the role of stepping stones across the gap. Therefore, having more channels available for excitation out of the valence band will increase the excited electronic population. This explains the enhancement of the plateau harmonics when excitons are included. Moreover, the amplification is primarily concentrated in the fifth/seventh harmonic, which corresponds to an energy of 2.1/2.9 eV. This energy region is roughly equivalent to the difference \(\Delta_{\mathrm{hBN}}-E_{\mathrm{bind}}\) between the hBN bandgap \(\Delta_{\mathrm{hBN}}=4.52\) eV and the binding energy of the exciton \(E_{\mathrm{bind}}\approx 2.0\) eV [55; 60; 61]. In other words, the HHG enhancement produced by the exciton states occurs around the energy required for a valence-band electron to transition into the first exciton state, which reflects how excitons open a new transition pathway for HHG in the interacting case.
Although comparing interacting and non-interacting systems may seem relevant on its own, it is not possible to switch interactions on and off in experiments. However, electronic interactions can be screened by placing a dielectric material on the system, as mentioned before. By employing a sufficiently strong dielectric, it becomes possible to emulate a system with negligible interactions. In Fig. 2(b) we show the high-harmonic spectrum for free-standing hBN versus hBN encapsulated in silica, which acts as the strong dielectric. Notice how the same physics takes place: an enhancement in the intensity is observed when the interactions are stronger. The reason is the same; the excitonic states act as extra channels for the tunneling. However, the degree of enhancement is less pronounced than in the comparison with/without interactions. This is attributed to the incomplete screening of interactions by the silica encapsulation, resulting in the persistence of certain excitonic states (see Fig. 1b). The tunability of excitons in two-dimensional materials, facilitated by the substrate, offers an experimental platform for investigating the effects of such quasiparticles on the high-harmonic spectrum.
We have seen so far that excitons have a strong influence on the highly nonlinear optical response. However, as in atoms, should we expect that excitons are formed and survive after the end of the pulse with such strong electric
Figure 1: (a) Crystalline structure of hBN alongside of a depiction of an exciton. (b) Absorption spectra of hBN in terms of different substrates.
fields? To determine whether the excitons indeed survive such a strong laser pulse, we have computed the Gabor transform of the current, including times after the pulse has ended. The first row (Figs. 3a-b) shows the Gabor profile, where one can see a sizable enhancement in the emission below the band gap (black line) due to the presence of the extra channels. An intriguing new feature observed in the Gabor profile is the appearance of a more complicated interference pattern when excitons are present in the system. This phenomenon arises from the influence of bound states on the semiclassical trajectories. Moreover, clear signatures of exciton survival after the pulse can be observed. To verify this, we have performed a Gabor transform with a reduced window width, thereby increasing the resolution in the frequency domain, see Figs. 3c-d. In these figures, it is evident that, in the presence of interactions, there is free induction decay precisely at the binding energy of the exciton as the field ramps down, confirming the survival of the exciton (red line). Remarkably, strong pulses not only create excitons but also _stabilize_ them, just as happens with Rydberg states in atoms [37]. Additionally, following the analogy with atomic systems, this observation implies that we cannot fully ionize a two-dimensional system, since the electrons remain bound, forming excitons. Hence, we show evidence of an analog of frustrated tunneling ionization [38] in solid-state systems.
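For illustration, a minimal sketch of such a Gabor analysis is given below, applied to a synthetic current rather than the simulated hBN one: a Gaussian window of width \(\sigma\) slides over \(j(t)\) and each windowed segment is Fourier transformed, using the two window widths quoted in Fig. 3.

```python
# A minimal sketch (synthetic signal, not the simulated hBN current) of the
# Gabor transform used for Fig. 3: a Gaussian window of width sigma slides over
# the current j(t) and each windowed segment is Fourier transformed.
import numpy as np

def gabor_transform(t, signal, sigma, frequencies):
    """Return |G(w, t0)| on the grid of frequencies and window centers t0 = t."""
    dt = t[1] - t[0]
    out = np.empty((len(frequencies), len(t)))
    for i, w in enumerate(frequencies):
        demodulated = signal * np.exp(-1j * w * t)
        for j, t0 in enumerate(t):
            window = np.exp(-((t - t0) ** 2) / (2.0 * sigma**2))
            out[i, j] = np.abs(np.sum(demodulated * window) * dt)
    return out

# Synthetic current: harmonic emission at 5*w_L while the pulse is on, plus a
# weak line below the gap that keeps ringing afterwards, mimicking the free
# induction decay discussed above.
w_L = 1.0
t = np.linspace(0.0, 40.0 * np.pi, 800)
pulse_on = t < 20.0 * np.pi
current = pulse_on * np.sin(5.0 * w_L * t) + 0.2 * np.sin(0.8 * w_L * t)

freqs = np.linspace(0.2, 8.0, 80) * w_L
narrow_in_time = gabor_transform(t, current, sigma=1.0 / (3.0 * w_L), frequencies=freqs)
better_in_freq = gabor_transform(t, current, sigma=2.0 / w_L, frequencies=freqs)
print(narrow_in_time.shape, better_in_freq.shape)
```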
The presence of bound states between a fixed ground state and a continuum of quasi-free states (the conduction band) bears some resemblance to the energy spectrum of an atom. Indeed, within a first approximation, excitons are solutions of the Wannier equation [62], which is nothing more than a Schrödinger equation for a centrosymmetric potential. This observation suggests the possibility that the HHG spectrum of the interacting semiconductor could be approximately described using a simple, non-interacting atomic model, where excitons are replaced by excited states of the atom. In the following we confirm that this is indeed the case. However, we want to address the opposite question: can the _whole_ system be qualitatively described using an atomic model?
To answer this question, we have developed a one-dimensional atomic model (see Methods for more details) that aims to capture the physics of hBN excitons. The
Figure 3: Gabor profile of the harmonic signal for various cases. The first column corresponds to the non-interacting case while the second one correspond to the interacting case. Upper row corresponds to a gaussian window of width \(\sigma=(3\omega_{L})^{-1}\) to have proper resolution in time while the lower row corresponds to a window of smaller width, \(\sigma=(\omega_{L}/2)^{-1}\), to have better resolution in the frequency domain. The two dashed horizontal lines corresponds to the energies associated with \(\Delta_{\text{hBN}}\)(black) and \(E_{\text{bind}}\) (red) while the orange line depicts the electric field.
Figure 2: (a) High harmonic generation spectrum computed for a monolayer of hBN with (blue colour) and without (red colour) electronic interactions. (b) HHG spectrum computed for a monolayer of free-standing hBN (blue colour) and encapsulated in SiO\({}_{2}\) (red colour). The spectrum is obtained for a laser pulse in the \(\Gamma-K\) direction along its parallel direction.
crucial idea is the use of a softcore potential,
\[V_{\alpha,\beta}(x)\ =\ \frac{\alpha}{\sqrt{x^{2}+\beta^{2}}}, \tag{1}\]
which allows us to model the excitation spectrum of hBN. We tweak the potential parameters, \(\alpha\) and \(\beta\), so that the energy difference between the ground and the first excited state matches the crucial energy scale \(\Delta_{\rm hBN}-E_{\rm bind}\). More specifically, we fix the ground state energy to \(E_{0}=-\Delta_{\rm hBN}\) and the first excited state to \(E_{1}=-E_{\rm bind}\) (see the diagram in Fig. 4a). The laser parameters are the same as for the hBN case. However, as reaching the tunneling regime in atoms tends to require higher field intensities [33], we increase the intensity up to \(4.5\) TW/cm\({}^{2}\); this particular value was chosen so that the harmonic cutoff is the same in both systems. Fig. 4b shows the HHG spectrum of the atomic model for various \(E_{1}\) energies. The spectrum displays the typical characteristics of an atomic spectrum [33]: the low-order, perturbative harmonics, followed by the plateau harmonics caused by the recombination processes. It is worth noting that when the energy of the first excited state \(E_{1}\) is raised, the appearance of the plateau shifts to higher harmonic frequencies. Such a shift can be understood in the same way as for the hBN case: the closer the first excited state is to the ground state, the more likely it is for the electron to transition into the continuum, thus facilitating the onset of the plateau in the harmonic spectrum. This is also the kind of enhancement produced by Rydberg states found in atomic gases [47].
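As an illustration of how the two target levels can be imposed, the sketch below (with our own choice of finite-difference grid; \(\alpha\) must be negative for the well to be attractive) diagonalizes the field-free Hamiltonian and returns the two lowest eigenvalues, which can then be matched to \(-\Delta_{\rm hBN}\) and \(-E_{\rm bind}\) by scanning \(\alpha\) and \(\beta\):

```python
import numpy as np

def lowest_levels(alpha, beta, box=200.0, dx=0.25):
    """Two lowest eigenvalues of H = -1/2 d^2/dx^2 + alpha/sqrt(x^2 + beta^2) (atomic units)."""
    x = np.arange(-box / 2, box / 2, dx)
    V = alpha / np.sqrt(x ** 2 + beta ** 2)              # softcore potential of Eq. (1)
    diag = 1.0 / dx ** 2 + V                              # kinetic + potential on the diagonal
    off = -0.5 / dx ** 2 * np.ones(len(x) - 1)            # second-order finite differences
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    E = np.linalg.eigvalsh(H)
    return E[0], E[1]

# example: scan (alpha, beta) until E0 ~ -Delta_hBN and E1 ~ -E_bind
E0, E1 = lowest_levels(alpha=-1.0, beta=1.0)
```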
To better understand the similarities between the two systems, we conducted a scan encompassing different excitonic energies. Although the exciton binding energy \(E_{\rm bind}\) is in principle a fixed physical quantity (at least if we neglect screening effects from the electrostatic environment), we can adjust its value from \(-2.0\) to \(0.1\) to clarify its effect on the HHG spectrum. This is done by changing the amplitude of the Rytova-Keldysh potential [61]. For each binding energy we then compute the corresponding parameters \(\alpha,\beta\) of \(V_{\alpha,\beta}(x)\). In Fig. 5a we plot the result, comparing the HHG spectrum of the 2D system and the atomic one in terms of the first exciton binding energies. Both systems display a qualitatively similar HHG spectrum; the enhancement is located precisely at the specific harmonic that corresponds to \(\Delta_{\rm hBN}-E_{\rm bind}\), see the dashed line. The similarity between Figs. 5(a,b) is striking, given that these correspond to two very different physical problems: one describes the nonlinear electron dynamics of a two-dimensional material with electron-electron interaction, while the other corresponds to a one-dimensional non-interacting atom. The common denominator between the two systems is, as mentioned before, the existence of a fixed ground state separated from a continuum of states, with excitable states between those two (see Fig. 5b), even if their nature (two-particle vs single-particle) is completely different. There are other obvious differences, such as the existence of dispersive bands above the gap in the 2D crystal, which can even be topological. However, the qualitative aspects of electron dynamics are insensitive to those differences. Fundamentally, the key quantity that controls the rate of transition [33; 9], and hence the emission intensity, is the energy of the lowest excitation \(\Delta_{\rm hBN}-E_{\rm bind}=\left|E_{0}\right|-\left|E_{1}\right|.\) In this context, therefore, an interacting two-dimensional crystal can be understood qualitatively with a non-interacting atomic gas model, where excitons play the same role as Rydberg states in enhancing the HHG response [48; 47].
The strong field approximation (both in atoms and in solid-state systems) only considers the existence of a ground state coupled to a continuum of states, without including any excited states [63]. Indeed, going beyond the strong field approximation leads to an enhancement in the high-harmonic emission in atoms [64; 65; 66]. These previous results bolster our interpretation of excitons as analogs of Rydberg states in solids. Moreover, it is known that these Rydberg states can act as a bottleneck to ionization; the electrons get trapped into long orbits (Rydberg states) by the laser field, frustrating a complete ionization [36; 37] of the system. Analogously, the same phenomenon could take place in crystals, namely that the strong-field excitation of electrons to the conduction bands could be blocked due to the population of excitonic states.
## IV Conclusion
In this work, the effect of many-body interactions on the high-harmonic spectra of two-dimensional materials has been studied. The results show a significant increase in the emission spectra when accounting for these interactions, which is associated with the presence of excitonic states within the system. Specifically, the enhancement is located at the energy difference between the valence band and the first excitonic state; the excitons act as extra channels for the tunneling processes from the valence to the conduction band. Furthermore, we proposed an experimental setup to effectively test the effect of excitons. Additionally, we have shown that the exciton does survive such strong pulses, in complete analogy with the physics observed in atoms due to the Rydberg states. Finally, we developed a one-dimensional atomic model to gain physical insight. Using
Figure 4: (a) Schematic diagram of \(V_{\alpha,\beta}(x)\) [WIP]. (b) High-harmonic spectrum of the atomic model. Dashed lines denote where the difference \(\left|E_{0}\right|-\left|E_{1}\right|\) lies for each \(E_{1}\).
this model, we showed that the same qualitative physics appears in both systems: the presence of excited states between a fixed ground state and a continuum leads to an enhancement in the high-harmonic spectra. Our work has shown how interactions affect high-harmonic generation in two-dimensional materials and the role of emergent quasi-particles. These results allow us to reinterpret excitons as a many-body version of Rydberg states in the strong-field regime, opening the window to the use of techniques from atomic strong-field physics for imaging and control of these states in the context of condensed-matter targets.
## Methods
_hBN simulations_ - The microscopic response of the system to the laser field was obtained by numerically solving the Semiconductor Bloch Equations (SBEs) in the Wannier gauge [61, 67]. These equations, in atomic units, read
\[\mathrm{i}\partial_{t}\rho_{nm}(\mathbf{k},t)=[H^{(0)}(\mathbf{k})+\Sigma(\mathbf{k},t),\rho(\mathbf{k},t)]_{nm}+|e|\mathbf{E}(t)\cdot[\mathbf{A}(\mathbf{k}),\rho(\mathbf{k},t)]_{nm}+\mathrm{i}|e|\mathbf{E}(t)\cdot\nabla_{\mathbf{k}}\rho_{nm}(\mathbf{k},t) \tag{2}\]
where \(H^{(0)}_{nm}(\mathbf{k})\) are the non-interacting terms of the Hamiltonian, \(\Sigma_{nm}(\mathbf{k},t)\) accounts for the electronic Coulomb interactions, \(\mathbf{A}_{nm}(\mathbf{k})\) are the multiband Berry connection terms and \(n,\ m\) refer to the band indices. The electronic interactions are incorporated in the dynamics at the Fock level [60, 61]. More formally, this means that the self-energy is calculated using
\[\Sigma_{nm}(\mathbf{k},t)=-\sum_{\mathbf{k}^{\prime}}V_{nm}(\mathbf{k}-\mathbf{k}^{\prime}) \left(\rho_{nm}(\mathbf{k}^{\prime},t)-\rho^{0}_{nm}\right).\]
where the initial state, \(\rho^{0}_{nm}=\rho_{nm}(\mathbf{k},0)\), is completely filled for all the states below the Fermi energy. The subtraction \(\rho_{nm}(\mathbf{k}^{\prime},t)-\rho^{0}_{nm}\) is done to ensure that we do not take into account interactions already present in the equilibrium state.
The potential, \(V_{nm}(\mathbf{k}-\mathbf{k}^{\prime})\), reads
\[V_{nm}(\mathbf{k}-\mathbf{k}^{\prime})=\sum_{\mathbf{G}}e^{i(\mathbf{k}-\mathbf{k}^{\prime}+\mathbf{G })\cdot(\mathbf{\tau}_{n}-\mathbf{\tau}_{m})}V(\mathbf{k}-\mathbf{k}^{\prime}+\mathbf{G}),\]
where \(\mathbf{\tau}_{n}\) are the centers of the Wannier orbitals and \(\mathbf{G}\) are the vectors of the reciprocal lattice. The sum over \(\mathbf{G}\) is done to ensure the periodicity of the system. Here, \(V(\mathbf{q})\) is the Fourier transform of the Rytova-Keldysh potential, which is known to accurately capture screening and dielectric effects in two-dimensional materials [68, 69].
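In practice, the self-energy above reduces, at every time step, to a weighted sum of the density-matrix deviation over the k-grid. A minimal (unoptimized) sketch, assuming the potential has been precomputed on the grid, reads:

```python
import numpy as np

def fock_self_energy(rho, rho0, V):
    """Sigma_nm(k) = - sum_{k'} V_nm(k - k') * (rho_nm(k') - rho0_nm(k')).

    rho, rho0 : complex arrays of shape (Nk, nb, nb), the density matrix on the k-grid
    V         : array of shape (Nk, Nk, nb, nb), with V[k, kp, n, m] = V_nm(k_k - k_kp)
    """
    return -np.einsum('kpnm,pnm->knm', V, rho - rho0)
```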
For the monolayer hBN, we used the tight-binding model in which only the \(p_{z}\) orbitals are considered [61, 67, 70]. The hopping parameter, \(t_{0}\), was set to \(-2.8\) eV and the on-site energy for the two atomic species was set to \(\varepsilon_{B/N}=\pm 2.26\) eV. The density matrix was constructed in a \(300\times 300\) Monkhorst-Pack grid and it was time-propagated using a fourth-order Runge-Kutta with a timestep of \(dt=0.1\) atomic units (au). Convergence was ensured for all the numerical parameters.
_Atomic model_ - The atomic model is based on the solution of the time-dependent Schrodinger equation (TDSE) for a one-dimensional atom in the presence of a strong laser field. In atomic units, the TDSE reads
\[i\partial_{t}\Psi(x,t)=\left(T_{\mathrm{kin}}+V_{\alpha,\beta}(x)+E(t)\cdot x \right)\Psi(x,t), \tag{5}\]
where \(T_{\mathrm{kin}}\) is the electronic kinetic energy, \(V_{\alpha,\beta}(x)\) is the softcore potential (Eq. 1) and \(E(t)\) is the electric field. The TDSE was solved numerically using a fourth-order Runge-Kutta method with a timestep of \(dt=0.1\) au. The initial state, \(\Psi(x,0)\), was selected as the ground state of the time-independent Hamiltonian \(H=T_{\mathrm{kin}}+V_{\alpha,\beta}\). The calculations were performed in a box of length \(L=1000\) au with a grid spacing of \(dx=0.25\) au. We checked that convergence was achieved for all numerical parameters.
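A minimal, self-contained sketch of this propagation scheme is given below; the grid size, the pulse parameters and the periodic treatment of the box edges are illustrative simplifications of the actual calculation described above.

```python
import numpy as np

# grid and softcore potential (alpha < 0 gives an attractive well), atomic units
box, dx, dt = 400.0, 0.25, 0.1
x = np.arange(-box / 2, box / 2, dx)
alpha, beta = -1.0, 1.0
V = alpha / np.sqrt(x ** 2 + beta ** 2)

def H_psi(psi, t, E_field):
    # -1/2 d^2/dx^2 via finite differences (np.roll imposes periodic edges) + potential + dipole term
    lap = (np.roll(psi, -1) - 2.0 * psi + np.roll(psi, 1)) / dx ** 2
    return -0.5 * lap + (V + E_field(t) * x) * psi

def rk4_step(psi, t, E_field):
    k1 = -1j * H_psi(psi, t, E_field)
    k2 = -1j * H_psi(psi + 0.5 * dt * k1, t + 0.5 * dt, E_field)
    k3 = -1j * H_psi(psi + 0.5 * dt * k2, t + 0.5 * dt, E_field)
    k4 = -1j * H_psi(psi + dt * k3, t + dt, E_field)
    return psi + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# ground state of the field-free Hamiltonian as the initial condition
H0 = np.diag(1.0 / dx ** 2 + V) \
     + np.diag(-0.5 / dx ** 2 * np.ones(len(x) - 1), 1) \
     + np.diag(-0.5 / dx ** 2 * np.ones(len(x) - 1), -1)
_, vecs = np.linalg.eigh(H0)
psi = vecs[:, 0].astype(complex) / np.sqrt(dx)

# short illustrative sin^2 pulse; the recorded dipole can then be Fourier transformed into a spectrum
E0_amp, omega_L, T_pulse = 0.005, 0.057, 2000.0
E_field = lambda t: E0_amp * np.sin(np.pi * t / T_pulse) ** 2 * np.cos(omega_L * t) if t < T_pulse else 0.0
dipole = []
for n in range(int(T_pulse / dt)):
    psi = rk4_step(psi, n * dt, E_field)
    dipole.append(dx * np.sum(np.abs(psi) ** 2 * x))
```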
## Data availability
The data that support the plots within this paper and other findings of this study are available from the corresponding authors upon reasonable request.
Figure 5: (a) High-harmonic spectrum comparison between the atomic system (left) and the hBN case (right). The dashed black lines denote where \(\Delta_{\mathrm{hBN}}-E_{\mathrm{bind}}\) lies. (b) Schematic diagram to clarify the equivalence between the two systems [WIP].
## Author contributions
E. B. M., B. A., M.I., P. S. J. and R. E. F. S. developed the idea. E. B. M. performed the numerical calculations. M. M., G. C. and A. P. developed the numerical code used for the SBEs. E. B. M. developed the numerical code for the atomic TDSE. All authors contributed to analysis of the results. E. B. M., M. I., P. S. J. and R. E. F. S. wrote the main part of the manuscript that was discussed by all authors.
## Competing interests
The authors declare no competing interests.
## Acknowledgments
E. B. M. and R. E. F. S. acknowledge support from the fellowship LCF/BQ/PR21/11840008 from "La Caixa" Foundation (ID 100010434). This research was supported by Grant PID2021-122769NB-I00 funded by MCIN/AEI/10.13039/501100011033.
|
2306.17670 | Learning Delays in Spiking Neural Networks using Dilated Convolutions
with Learnable Spacings | Spiking Neural Networks (SNNs) are a promising research direction for
building power-efficient information processing systems, especially for
temporal tasks such as speech recognition. In SNNs, delays refer to the time
needed for one spike to travel from one neuron to another. These delays matter
because they influence the spike arrival times, and it is well-known that
spiking neurons respond more strongly to coincident input spikes. More
formally, it has been shown theoretically that plastic delays greatly increase
the expressivity in SNNs. Yet, efficient algorithms to learn these delays have
been lacking. Here, we propose a new discrete-time algorithm that addresses
this issue in deep feedforward SNNs using backpropagation, in an offline
manner. To simulate delays between consecutive layers, we use 1D convolutions
across time. The kernels contain only a few non-zero weights - one per synapse
- whose positions correspond to the delays. These positions are learned
together with the weights using the recently proposed Dilated Convolution with
Learnable Spacings (DCLS). We evaluated our method on three datasets: the
Spiking Heidelberg Dataset (SHD), the Spiking Speech Commands (SSC) and its
non-spiking version Google Speech Commands v0.02 (GSC) benchmarks, which
require detecting temporal patterns. We used feedforward SNNs with two or three
hidden fully connected layers, and vanilla leaky integrate-and-fire neurons. We
showed that fixed random delays help and that learning them helps even more.
Furthermore, our method outperformed the state-of-the-art in the three datasets
without using recurrent connections and with substantially fewer parameters.
Our work demonstrates the potential of delay learning in developing accurate
and precise models for temporal data processing. Our code is based on PyTorch /
SpikingJelly and available at: https://github.com/Thvnvtos/SNN-delays | Ilyass Hammouamri, Ismail Khalfaoui-Hassani, Timothée Masquelier | 2023-06-30T14:01:53Z | http://arxiv.org/abs/2306.17670v3 | # Learning Delays in Spiking Neural Networks using Dilated Convolutions with Learnable Spacings
###### Abstract
Spiking Neural Networks (SNNs) are a promising research direction for building power-efficient information processing systems, especially for temporal tasks such as speech recognition. In SNNs, delays refer to the time needed for one spike to travel from one neuron to another. These delays matter because they influence the spike arrival times, and it is well-known that spiking neurons respond more strongly to coincident input spikes. More formally, it has been shown theoretically that plastic delays greatly increase the expressivity in SNNs. Yet, efficient algorithms to learn these delays have been lacking. Here, we propose a new discrete-time algorithm that addresses this issue in deep feedforward SNNs using backpropagation, in an offline manner. To simulate delays between consecutive layers, we use 1D convolutions across time. The kernels contain only a few non-zero weights - one per synapse - whose positions correspond to the delays. These positions are learned together with the weights using the recently proposed Dilated Convolution with Learnable Spacings (DCLS). We evaluated our method on three datasets: the Spiking Heidelberg Dataset (SHD), the Spiking Speech Commands (SSC) and its non-spiking version Google Speech Commands v0.02 (GSC) benchmarks, which require detecting temporal patterns. We used feedforward SNNs with two or three hidden fully connected layers, and vanilla leaky integrate-and-fire neurons. We showed that fixed random delays help and that learning them helps even more. Furthermore, our method outperformed the state-of-the-art in the three datasets without using recurrent connections and with substantially fewer parameters. Our work demonstrates the potential of delay learning in developing accurate and precise models for temporal data processing. Our code is based on PyTorch / SpikingJelly and available at: [https://github.com/Thvnvtos/SNN-delays](https://github.com/Thvnvtos/SNN-delays)
## 1 Introduction
Spiking neurons are coincidence detectors [27; 33]: they respond more when receiving synchronous, rather than asynchronous, spikes. Importantly, it is the spike arrival times that should coincide, not the spike emitting times - these times are different because propagation is usually not instantaneous. There is a delay between spike emission and reception, called the connection delay, which can vary across connections. Thanks to these heterogeneous delays, neurons can detect complex spatiotemporal spike patterns, not just synchrony patterns [23] (see Figure 1).
In the brain, the delay of a connection corresponds to the sum of the axonal, synaptic, and dendritic delays. It can reach several tens of milliseconds, but it can also be much shorter (1 ms or less) [23]. For example, the axonal delay can be reduced with myelination, which is an adaptive process that is required to learn some tasks (see [4] for a review). In other words, learning in the brain cannot be reduced to synaptic plasticity. Delay learning is also important.
Theoretical work has led to the same conclusion: Maass and Schmitt demonstrated, using simple spiking neuron models, that an SNN with k adjustable delays can compute a much richer class of functions than a threshold circuit with k adjustable weights [29].
Finally, on most neuromorphic chips, synapses have a programmable delay. This is the case for Intel Loihi [8], IBM TrueNorth [1], SpiNNaker [13] and SENeCA [45].
All these points have motivated us and others (see related works in the next section) to propose delay learning rules. Here we show for the first time that delays can be learned together with the weights, using backpropagation, in arbitrarily deep SNNs. The trick is to simulate delays using temporal convolutions and to learn them using the recently proposed Dilated Convolution with Learnable Spacings [24; 25]. In practice, the method is fully integrated with PyTorch and leverages its automatic-differentiation engine.
## 2 Related Work
### Deep Learning for Spiking Neural Networks
Recent advances in SNN training methods like the surrogate gradient method [30; 35] and the ANN2SNN conversion methods [5; 9; 19] have made it possible to train increasingly deep spiking neural networks. The surrogate gradient method defines a continuous relaxation of the non-smooth spiking nonlinearity: it replaces the gradient of the Heaviside function used in the spike generating process by a smooth surrogate gradient that is suitable for optimization. On the other hand, the ANN2SNN methods convert conventional artificial neural networks (ANNs) into SNNs by copying the weights from ANNs while trying to minimize the conversion error.
Other works have explored improving the spiking neurons using inspiration from biological mechanisms or techniques used in ANNs. The Parametric Leaky Integrate-and-Fire (PLIF) [12] incorporates learnable membrane time constants that can be trained jointly with synaptic weights. [18] proposes a method to dynamically adapt firing thresholds in order to improve continual learning in SNNs. Spike-Element-Wise ResNet [11] addresses the problem of vanishing/exploding gradients in the plain Spiking ResNet caused by sigmoid-like surrogate functions, and successfully trained the first deep SNN with more than 150 layers. Spikformer [48] adapts the softmax-based self-attention mechanism of Transformers [40] to a spike-based formulation.
These efforts have resulted in closing the gap between the performance of ANNs and SNNs on many widely used benchmarks.
Figure 1: Coincidence detection: we consider two neurons \(N1\) and \(N2\) with the same positive synaptic weight values. \(N2\) has a delayed synaptic connection denoted \(d_{21}\) of \(8\)ms, thus both spikes from spike train \(S1\) and \(S2\) will reach \(N2\) quasi-simultaneously. As a result, the membrane potential of \(N2\) will reach the threshold \(\vartheta\) and \(N2\) will emit a spike. On the other hand, \(N1\) will not react to these same input spike trains.
### Delays in SNNs
Few previous works considered learning delays in SNNs. [41] proposed a similar method to ours in which they convolve spike trains with an exponential kernel so that the gradient of the loss with respect to the delay can be calculated. However, their method is used only for a shallow SNN with no hidden layers.
Other methods like [16; 17; 47; 39] also proposed learning rules developed specifically for shallow SNNs with only one layer. [21] proposed to learn temporal delays with Spike Timing Dependent Plasticity (STDP) in weightless SNNs. [20] proposed a method for delay-weight supervised learning in optical spiking neural networks. [2] proposed a method for deep feedforward SNNs that uses a set of multiple fixed delayed synaptic connections for the same two neurons before pruning them depending on the magnitude of the learned weights.
To the best of our knowledge, [38; 37] are the only ones to learn delays and weights jointly in a deep SNN. They proposed an adaptive maximum delay value that depends on the distribution of delays in each network layer. However, they use a finite-difference approximation to numerically estimate the gradients of the spikes with respect to the delays, and we think that those gradients are not suitable, as we achieve similar performance in our experiments with fixed random delays.
We propose a control test that was not considered by the previous works and that we deem necessary: the SNN with delay learning should outperform an equivalent SNN with fixed random and uniformly distributed delays, especially with sparse connectivity.
## 3 Methods
### Spiking Neuron Model
The spiking neuron, which is the fundamental building block of SNNs, can be simulated using various models. In this work, we use the Leaky Integrate-and-Fire model [14], which is the most widely used for its simplicity and efficiency. The membrane potential \(u_{i}^{(l)}\) of the \(i\)-th neuron in layer \(l\) follows the differential equation:
\[\tau\frac{du_{i}^{(l)}}{dt}=-(u_{i}^{(l)}(t)-u_{reset})+RI_{i}^{(l)}(t) \tag{1}\]
where \(\tau\) is the membrane time constant, \(u_{reset}\) the potential at rest, \(R\) the input resistance and \(I_{i}^{(l)}(t)\) the input current of the neuron at time \(t\). In addition to the sub-threshold dynamics, a neuron emits a unitary spike \(S_{i}^{(l)}\) when its membrane potential exceeds the threshold \(\vartheta\), after which it is instantaneously reset to \(u_{reset}\). Finally, the input current \(I_{i}^{(l)}(t)\) is stateless and represented as the sum of afferent weights \(W_{ij}^{(l)}\) multiplied by spikes \(S_{j}^{(l-1)}(t)\):
\[I_{i}^{(l)}(t)=\sum_{j}W_{ij}^{(l)}S_{j}^{(l-1)}(t) \tag{2}\]
We formulate the above equations in discrete time using Euler's method approximation, and using \(u_{reset}=0\) and \(R=\tau\).
\[u_{i}^{(l)}[t] =(1-\frac{1}{\tau})u_{i}^{(l)}[t-1]+I_{i}^{(l)}[t] \tag{3}\] \[I_{i}^{(l)}[t] =\sum_{j}W_{ij}^{(l)}S_{j}^{(l-1)}[t]\] (4) \[S_{i}^{(l)}[t] =\Theta(u_{i}^{l}[t]-\vartheta) \tag{5}\]
We use the surrogate gradient method [30] and define \(\Theta^{\prime}(x)\triangleq\sigma^{\prime}(x)\) during the backward step, where \(\sigma(x)\) is the surrogate arctangent function [12].
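As an illustration, the discrete-time dynamics of Eqs. (3)-(5) with a Heaviside forward pass and an arctangent-shaped surrogate gradient can be sketched in PyTorch as follows; the surrogate width and the hard-reset convention are common choices and may differ from the exact ones used in our implementation.

```python
import math
import torch

class SpikeATan(torch.autograd.Function):
    """Heaviside step in the forward pass, arctan-shaped surrogate gradient in the backward pass."""
    alpha = 2.0  # surrogate width (illustrative choice)

    @staticmethod
    def forward(ctx, v_minus_thr):
        ctx.save_for_backward(v_minus_thr)
        return (v_minus_thr >= 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        a = SpikeATan.alpha
        return grad_out * a / (2.0 * (1.0 + (math.pi / 2.0 * a * v) ** 2))

def lif_forward(I, tau=15.0, thr=1.0):
    """I: (T, batch, n) input currents; returns the spike trains of the same shape (Eqs. 3-5)."""
    u = torch.zeros_like(I[0])
    spikes = []
    for t in range(I.shape[0]):
        u = (1.0 - 1.0 / tau) * u + I[t]      # leaky integration, Eq. (3)
        s = SpikeATan.apply(u - thr)          # spike generation, Eq. (5)
        u = u * (1.0 - s)                     # hard reset to u_reset = 0
        spikes.append(s)
    return torch.stack(spikes)
```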
### Synaptic Delays as a Temporal Convolution
A feed-forward SNN model with delays is parameterized with \(W=(w_{ij}^{(l)})\in\mathbb{R}\) and \(D=(d_{ij}^{(l)})\in\mathbb{R}^{+}\), where the input of neuron \(i\) at layer \(l\) is
\[I_{i}^{(l)}[t]=\sum_{j}w_{ij}^{(l)}S_{j}^{(l-1)}[t-d_{ij}^{(l)}] \tag{6}\]
We model a synaptic connection from neuron \(j\) in layer \(l-1\) to neuron \(i\) in layer \(l\) which have a synaptic weight \(w_{ij}^{(l)}\) and delay \(d_{ij}^{(l)}\) as a one dimensional temporal convolution (see Figure 2) with kernel \(k_{ij}^{(l)}\) as follows:
\(\forall n\in\llbracket 0,...\ T_{d}-1\rrbracket\):
\[k_{ij}^{(l)}[n]=\begin{cases}w_{ij}^{(l)}&\text{if }n=T_{d}-d_{ij}^{(l)}-1\\ 0&\text{otherwise}\end{cases} \tag{7}\]
where \(T_{d}\) is the kernel size or maximum delay + 1. Thus we redefine the input \(I_{i}^{(l)}\) in Equation 6 as a sum of convolutions:
\[I_{i}^{(l)}=\sum_{j}k_{ij}^{(l)}*S_{j}^{(l-1)} \tag{8}\]
We used a zero left-padding with size \(T_{d}-1\) on the input spike trains \(S\) so that \(I[0]\) does correspond to \(t=0\).
Moreover, a zero right-padding could also be used, but it is optional; it could increase the expressivity of the learned delays with the drawback of increasing the processing time as the number of time-steps after the convolution will increase.
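As a concrete (non-learnable) illustration of Eqs. (6)-(8), a fixed integer delay d with weight w amounts to a one-hot temporal kernel applied through a left-padded 1D convolution; the helper name below is ours.

```python
import torch
import torch.nn.functional as F

def delayed_weighted_input(S, w, d, T_d):
    """S: (batch, T) presynaptic spike train; returns w * S[t - d] for t = 0, ..., T-1.

    The kernel has a single non-zero entry w at position T_d - d - 1 (Eq. 7), and the
    input is zero left-padded by T_d - 1 so that the output keeps the original length T.
    """
    kernel = torch.zeros(1, 1, T_d)
    kernel[0, 0, T_d - d - 1] = w
    S_pad = F.pad(S.unsqueeze(1), (T_d - 1, 0))    # (batch, 1, T + T_d - 1)
    return F.conv1d(S_pad, kernel).squeeze(1)      # (batch, T)

# example: a spike train [1, 0, 0, 1, 0] delayed by d = 2 becomes 0.5 * [0, 0, 1, 0, 0]
out = delayed_weighted_input(torch.tensor([[1., 0., 0., 1., 0.]]), w=0.5, d=2, T_d=4)
```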
To learn the kernel elements positions (i.e., delays), we use the 1D version of DCLS [24] with a Gaussian kernel [25] centered at \(T_{d}-d_{ij}^{(l)}-1\), where \(d_{ij}^{(l)}\in\llbracket 0,\ T_{d}-1\rrbracket\), and of standard deviation \(\sigma_{ij}^{(l)}\in\mathbb{R}^{*}\), thus we have:
\(\forall n\in\llbracket 0,...\ T_{d}-1\rrbracket\):
\[k_{ij}^{(l)}[n]=\frac{w_{ij}^{(l)}}{c}\ \text{exp}\left(-\frac{1}{2}\left( \frac{n-T_{d}+d_{ij}^{(l)}+1}{\sigma_{ij}^{(l)}}\right)^{2}\right) \tag{9}\]
Figure 2: Example of one neuron with 2 afferent synaptic connections, convolving \(K1\) and \(K2\) with the zero left-padded \(S_{1}\) and \(S_{2}\) is equivalent to following Equation 6
With
\[c=\epsilon+\sum_{n=0}^{T_{d}-1}\text{exp}\left(-\frac{1}{2}\left(\frac{n-T_{d}+d_{ ij}^{(l)}+1}{\sigma_{ij}^{(l)}}\right)^{2}\right) \tag{10}\]
a normalization term and \(\epsilon=1e-7\) to avoid division by zero, assuming that the tensors are in float32 precision. During training, \(d_{ij}^{(l)}\) are clamped after every batch to ensure their value stays in \([0,...\,T_{d}-1]\).
The learnable parameters of the 1D DCLS layer with Gaussian interpolation are the weights \(w_{ij}\), the corresponding delays \(d_{ij}\), and the standard deviations \(\sigma_{ij}\). However, in our case, \(\sigma_{ij}\) are not learned, and all kernels in our model share the same decreasing standard deviation, which will be denoted as \(\sigma\). Throughout training, we exponentially decrease \(\sigma\) as our end goal is to have a sparse kernel where only the delay position is non-zero and corresponds to the weight.
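A minimal sketch of the interpolated kernel of Eqs. (9)-(10), differentiable with respect to the delay (which is what allows the gradients \(\frac{\partial L}{\partial d_{ij}^{(l)}}\) to be computed), is:

```python
import torch

def gaussian_delay_kernel(w, d, sigma, T_d, eps=1e-7):
    """Gaussian kernel of length T_d centered at position T_d - d - 1, with total mass w.

    w, d and sigma may be learnable tensors; the output is differentiable in all of them.
    """
    n = torch.arange(T_d, dtype=torch.float32)
    g = torch.exp(-0.5 * ((n - T_d + d + 1) / sigma) ** 2)
    return w * g / (eps + g.sum())
```

At inference time this smooth kernel is replaced by the discrete one-hot kernel of Equation 7, with the delay rounded to the nearest integer.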
The Gaussian kernel transforms the discrete positions of the delays into a smoother kernel (see Figure 3), which enables the calculation of the gradients \(\frac{\partial L}{\partial d_{ij}^{(l)}}\).
By adjusting the parameter \(\sigma\), we can regulate the temporal scale of the dependencies. A small value for \(\sigma\) enables the capturing of variations that occur within a brief time frame. In contrast, a larger value of \(\sigma\) facilitates the detection of temporal dependencies that extend over longer durations. Thus \(\sigma\) tuning is crucial to the trade-off between short-term precision and long-term dependencies.
We start with a high \(\sigma\) value and exponentially reduce it throughout the training process, after each epoch, until it reaches its minimum value of 0.5 (Fig. 4). This approach facilitates the learning of distant long-term dependencies at the initial time. Subsequently, when \(\sigma\) has a smaller value, it enables refining both weights and delays with more precision, making the Gaussian kernel more similar to the discrete kernel that is used at inference time. As we will see later in our ablation study (Section 4.3), this approach outperforms a constant \(\sigma\).
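For instance, a per-epoch decay factor bringing \(\sigma\) from an initial value of \(T_{d}/2\) (as in Fig. 4a) down to its minimum of 0.5 by the last epoch can be computed as below; the exact schedule used in the released code may differ.

```python
def sigma_schedule(epoch, T_d, n_epochs, sigma_min=0.5):
    """Exponential decay of sigma from T_d / 2 at epoch 0 to sigma_min at the last epoch."""
    sigma_0 = T_d / 2.0
    decay = (sigma_min / sigma_0) ** (1.0 / max(n_epochs - 1, 1))
    return max(sigma_min, sigma_0 * decay ** epoch)
```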
Indeed, the Gaussian kernel is only used to train the model; when evaluating on the validation or test set, it is converted to a discrete kernel as described in Equation 7 by rounding the delays. This makes it possible to implement sparse kernels for inference, which are very useful for neuromorphic
Figure 3: Gaussian convolution kernels for \(N\) synaptic connections. The Gaussians are centered on the delay positions, and the area under their curves corresponds to the synaptic weights \(w_{i}\). On the right, we see the delayed spike trains after being convolved with the kernels. (the \(-1\) was omitted for figure clarity).
hardware, for example, as they correspond to only one synapse between pairs of neurons, with the corresponding weight and delay.
## 4 Experiments
### Experimental Setup
We chose to evaluate our method on the SHD and SSC/GSC datasets [6], as they require leveraging temporal patterns of spike times to achieve a good classification accuracy, unlike most computer vision spiking benchmarks. Both spiking datasets are constructed using artificial cochlear models to convert audio speech data to spikes; the original audio datasets are the Heidelberg Dataset (HD) and the Google Speech Commands v0.02 Dataset (SC) [42] for SHD and SSC, respectively.
The SHD dataset consists of 10k recordings of 20 different classes that consist of spoken digits ranging from zero to nine in both English and German languages. SSC and GSC are much larger datasets that consist of 100k different recordings. The task we consider on SSC and GSC is the top one classification on all 35 different classes (similar to [6; 3]) which is more challenging than the original key-word spotting task on 12 classes, proposed in [42].
For the two spiking datasets, we used spatio-temporal bins to reduce the input dimensions. Input neurons were reduced from 700 to 140 by binning every 5 neurons; as for the temporal dimension, we used a discrete time-step \(\Delta t=10\) ms and a zero right-padding to make sure all recordings in a batch have the same time duration. As for the non-spiking GSC, we used the Mel Spectrogram representation of the waveforms with 140 frequency bins and approximately 100 timesteps to remain consistent with the input sizes used in SSC.
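A sketch of this preprocessing for one SHD recording, given as spike times (in seconds) and input-unit indices, is shown below; the maximum duration is a placeholder, since in practice recordings are zero right-padded to the longest one in the batch.

```python
import numpy as np

def bin_shd_sample(spike_times_s, unit_ids, n_in=700, space_bin=5, dt_ms=10.0, t_max_ms=1000.0):
    """Return a (T, n_in // space_bin) array of spike counts with 10 ms time bins and 5-unit spatial bins."""
    T = int(np.ceil(t_max_ms / dt_ms))
    x = np.zeros((T, n_in // space_bin))
    t_idx = np.minimum((np.asarray(spike_times_s) * 1000.0 / dt_ms).astype(int), T - 1)
    n_idx = np.minimum(np.asarray(unit_ids).astype(int) // space_bin, n_in // space_bin - 1)
    np.add.at(x, (t_idx, n_idx), 1.0)
    return x
```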
Figure 4: This figure illustrates the evolution of the same delay kernels for an example of eight synaptic connections of one neuron throughout the training process. The x-axis corresponds to time, each kernel is of size \(T_{d}=25\), and the y-axis is the synapse id. (a) corresponds to the initial phase where the standard deviation of the Gaussian \(\sigma\) is large (\(\frac{T_{d}}{2}\)), allowing long temporal dependencies to be taken into consideration. (b) corresponds to the intermediate phase, (c) is taken from the final phase where \(\sigma\) is at its minimum value (0.5) and weight tuning is more emphasized. Finally, (d) represents the kernel after converting to the discrete form with rounded positions.
We used a very simple architecture: a feedforward SNN with two or three hidden fully connected layers. Each feedforward layer is implemented using a DCLS module where each synaptic connection is modeled as a 1D temporal convolution with one Gaussian kernel element (as described in Section 3.2), followed by batch normalization, a LIF module (as described in Section 3.1) and dropout. Table 1 lists the values of some hyperparameters used for the three datasets (for more details, refer to the code repository).
The readout layer consists of \(n_{\text{classes}}\) LIF neurons with infinite threshold (where \(n_{\text{classes}}\) is 20 or 35 for SHD and SSC/GSC respectively). Similar to [3], the output \(\text{out}_{i}[t]\) for every neuron \(i\) at time \(t\) is
\[\text{out}_{i}[t]=\text{softmax}(u_{i}^{(r)}[t])=\frac{e^{u_{i}^{(r)}[t]}}{ \sum_{j=1}^{n_{\text{classes}}}e^{u_{j}^{(r)}[t]}} \tag{11}\]
where \(u_{i}^{(r)}[t]\) is the membrane potential of neuron \(i\) in the readout layer \(r\) at time \(t\).
And the final output of the model after \(T\) time-steps is defined as
\[\hat{y_{i}}=\sum_{t=1}^{T}\text{out}_{i}[t] \tag{12}\]
We denote the batch size by \(N\) and the ground truth by \(y\). We calculate the cross-entropy loss for one batch as
\[\mathcal{L}=\frac{1}{N}\sum_{n=1}^{N}-\log(\text{softmax}(\hat{y}_{y_{n}}[n])) \tag{13}\]
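Given the readout membrane potentials, Eqs. (11)-(13) translate directly into the following sketch (note that `F.cross_entropy` applies the outer softmax of Eq. 13 internally):

```python
import torch
import torch.nn.functional as F

def readout_loss(u_r, targets):
    """u_r: (T, batch, n_classes) readout membrane potentials; targets: (batch,) class labels."""
    out = torch.softmax(u_r, dim=-1)        # Eq. (11): softmax at every time step
    y_hat = out.sum(dim=0)                  # Eq. (12): sum over the T time steps
    return F.cross_entropy(y_hat, targets)  # Eq. (13): cross-entropy on the summed outputs
```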
The Adam optimizer [26] is used for all models and groups of parameters with base learning rates \(lr_{w}=0.001\) for synaptic weights and \(lr_{d}=0.1\) for delays. We used a one-cycle learning rate scheduler [36] for the weights and cosine annealing [28] without restarts for the delays learning rates. Our work is implemented1 using the PyTorch-based SpikingJelly[10] framework.
Footnote 1: Our code is available at: [https://github.com/Thvnvtos/SNN-delays](https://github.com/Thvnvtos/SNN-delays)
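One simple way to set this up in PyTorch is with two optimizers, one per parameter group, each driven by its own scheduler (the weight scheduler stepped every batch, the delay scheduler every epoch); the parameter lists and step counts below are placeholders, and the released code may organise this differently.

```python
import torch

# placeholders standing in for the model's weight and delay parameters
weight_params = [torch.nn.Parameter(torch.randn(256, 140))]
delay_params = [torch.nn.Parameter(torch.rand(256, 140) * 25.0)]
n_epochs, steps_per_epoch = 100, 64

opt_w = torch.optim.Adam(weight_params, lr=1e-3)
opt_d = torch.optim.Adam(delay_params, lr=0.1)
sched_w = torch.optim.lr_scheduler.OneCycleLR(
    opt_w, max_lr=1e-3, total_steps=n_epochs * steps_per_epoch)           # step after every batch
sched_d = torch.optim.lr_scheduler.CosineAnnealingLR(opt_d, T_max=n_epochs)  # step after every epoch
```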
### Results
We compare our method in Table 2 to previous works on the SHD, SSC and GSC-35 (35 denoting the 35 classes harder version) benchmark datasets in terms of accuracy, model size and whether recurrent connections or delays were used.
The reported accuracy of our method corresponds to the accuracy on the test set using the best performing model on the validation set. However, since there is no validation set provided for SHD we use the test set as the validation set (similar to [3]). The margins of error are calculated at a 95% confidence level using a t-distribution (we performed ten and five experiments using different random seeds for SHD and SSC/GSC respectively).
Our method outperforms the previous state-of-the-art accuracy on the three benchmarks (with a significant improvement on SSC and GSC) without using recurrent connections, with a substantially lower number of parameters, and using only vanilla LIF neurons. Other methods that use delays do have a slightly lower number of parameters than we do, yet we outperform them significantly on SHD, and they did not report any results on the harder SSC/GSC benchmarks. Finally, by increasing the number of hidden layers, we found that the accuracy plateaued after two hidden layers for SHD, and three for SSC/GSC.
| Dataset | # Hidden Layers | Hidden size | \(\tau\) (ms) | Maximum Delay (ms) | Dropout rate |
| --- | --- | --- | --- | --- | --- |
| SHD | 2 | 256 | 10.05* | 250 | 0.4 |
| SSC/GSC | 2 or 3 | 512 | 15 | 300 | 0.25 |

*We found that a LIF with quasi-instantaneous leak \(\tau=10.05\) (since \(\Delta t=10\)) is better than using a Heaviside function for SHD.

Table 1: Network parameters for different datasets
### Ablation study
In this section, we conduct control experiments aimed at assessing the effectiveness of our delay learning method. The model trained using our full method will be referred to as _Decreasing_\(\sigma\), while _Constant_\(\sigma\) will refer to a model where the standard deviation \(\sigma\) is constant and equal to the minimum value of \(0.5\) throughout the training. Additionally, _Fixed random delays_ will refer to a model where delays are initialized randomly and not learned, while only weights are learned. Meanwhile _Decreasing_\(\sigma\) - _Fixed weights_ will refer to a model where the weights are fixed and only delays are learned with a decreasing \(\sigma\). Finally, _No delays_ denotes a standard SNN without delays. To ensure equal parameter counts across all models (for fair comparison), we increased the number of hidden neurons in the _No delays_ case. Moreover, to make the comparison even fairer, all models have the same initialization for weights, and if required, the same initialization for delays.
We compared the five different models as shown in Figure 5(a). The models with delays (whether fixed or learned) significantly outperformed the _No delays_ model both on SHD (FC) and SSC (FC); for us, this was an expected outcome given the temporal nature of these benchmarks, as achieving a high accuracy necessitates learning long temporal dependencies. However, we did not expect the Fixed random delays model to be almost on par with models where delays were trained, with the Decreasing \(\sigma\) model only slightly outperforming it.
To explain this, we hypothesized that a random uniformly distributed set of delay positions will most likely cover the whole possible temporal range. This hypothesis is plausible given the fact that the number of synaptic connections vastly outnumbers the total possible discrete delay positions for each kernel. Therefore, as the number of synaptic connections within a layer grows, the necessity of moving delay positions away from their initial state diminishes. And only tuning the weights of this set of fixed delays is enough to achieve comparable performance to delay learning.
In order to validate this hypothesis, we conducted a comparison using the same models with a significantly reduced number of synaptic connections. We applied fixed binary masks to the network's synaptic weight parameters. Specifically, for each neuron in the network we reduced the number of its synaptic connections to ten for both datasets (except for the No delays model, which has more connections to ensure equal parameter counts). This corresponds to 96% sparsity for SHD and 98% sparsity for SSC. With the number of synaptic connections reduced, it is unlikely that the random
| Dataset | Method | Recurrent | Delays | # Params | Top1 Accuracy |
| --- | --- | --- | --- | --- | --- |
| **SHD** | EventProp-GeNN [31] | ✓ | ✗ | N/a | 84.80 ± 1.5% |
| | Cuba-LIF [7] | ✗ | ✗ | N/a | 87.80 ± 1.1% |
| | Adaptive SRNN [44] | ✓ | ✗ | N/a | 90.40% |
| | SNN with Delays [2] | ✗ | ✓ | 0.1M | 90.43% |
| | TA-SNN [43] | ✗ | ✗ | N/a | 91.08% |
| | STSC-SNN [46] | ✗ | ✗ | 2.1M | 92.36% |
| | Adaptive Delays [37] | ✗ | ✓ | 0.1M | 92.45% |
| | RadLIF [3] | ✓ | ✗ | 3.9M | 94.62% |
| | **Our work (2 hidden layers)** | ✗ | ✓ | **0.2M** | **95.07 ± 0.24%** |
| **SSC** | Recurrent SNN [6] | ✓ | ✗ | N/a | 50.90 ± 1.1% |
| | Heterogeneous RSNN [32] | ✓ | ✗ | N/a | 57.30% |
| | SNN-CNN [34] | ✗ | ✓ | N/a | 72.03% |
| | Adaptive SRNN [44] | ✓ | ✗ | N/a | 74.20% |
| | SpikGRU [7] | ✓ | ✗ | N/a | 77.00 ± 0.4% |
| | RadLIF [3] | ✓ | ✗ | 3.9M | 77.40% |
| | **Our work (2 hidden layers)** | ✗ | ✓ | **0.7M** | **79.77 ± 0.09%** |
| | **Our work (3 hidden layers)** | ✗ | ✓ | **1.2M** | **80.29 ± 0.06%** |
| **GSC-35** | MSAT [22] | ✗ | ✗ | N/a | 87.33% |
| | RadLIF [3] | ✓ | ✗ | 1.2M | 94.51% |
| | **Our work (2 hidden layers)** | ✗ | ✓ | **0.7M** | **94.91 ± 0.09%** |
| | **Our work (3 hidden layers)** | ✗ | ✓ | **1.2M** | **95.29 ± 0.11%** |

Table 2: Classification accuracy on SHD, SSC and GSC-35 datasets
uniform initialization of delay positions will cover most of the temporal range. Thus, specific long term dependencies will need to be learned by moving the delays.
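A minimal sketch of such a fixed binary mask, with exactly ten randomly chosen afferent connections per post-synaptic neuron, applied multiplicatively to the weight matrix, is:

```python
import torch

def fixed_sparse_mask(n_post, n_pre, k=10, seed=0):
    """Binary (n_post, n_pre) mask with exactly k non-zero inputs per post-synaptic neuron."""
    g = torch.Generator().manual_seed(seed)
    mask = torch.zeros(n_post, n_pre)
    for i in range(n_post):
        mask[i, torch.randperm(n_pre, generator=g)[:k]] = 1.0
    return mask

# during training and inference, (weights * mask) is used in place of the dense weights
```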
The test accuracies corresponding to this control test are shown in Figure 5(b). They illustrate the difference in performance between the Fixed random delays model and the Decreasing/Constant \(\sigma\) models in the sparse case. This supports our hypothesis and shows the need to perform this control test for delay learning methods. Furthermore, it also indicates the effectiveness of our method.
In addition, we also tested a model where only the delays are learned while the synaptic weights are fixed (Decreasing \(\sigma\) - Fixed weights). It can be seen that learning only the delays gives acceptable results in the fully connected case (in agreement with [15]) but not in the sparse case. To summarize, it is always preferable to learn both weights and delays (and decreasing \(\sigma\) helps). If one has to choose, then learning weights is preferable, especially with sparse connectivity.
## 5 Conclusion
In this paper, we propose a method for learning delays in feedforward spiking neural networks using dilated convolutions with learnable spacings (DCLS). Every synaptic connection is modelled as a 1D Gaussian kernel centered on the delay position, and DCLS is used to learn the kernel positions (i.e., delays). The standard deviation of the Gaussians is decreased throughout training, such that at the end of training we obtain an SNN model with one discrete delay per synapse, which could potentially be compatible with neuromorphic implementations. We show that our method outperforms the state-of-the-art on the temporal spiking benchmarks SHD and SSC and the non-spiking benchmark GSC-35, while using fewer parameters than previous proposals. We also perform a rigorous control test that demonstrates the effectiveness of our method. Future work could investigate the use of other kernel functions than the Gaussian, or applying our method to other network architectures like convolutional networks.
#### Limitations
The primary limitations of our work revolve around the compatibility and constraints of our delay learning method. Specifically, our method is limited to offline training conducted in discrete-time simulations, and it cannot handle recurrent connections. Additionally, a maximum delay limit, which corresponds to the size of the kernel, must be predetermined and fixed before the learning process.
Figure 5: Bar plots of test accuracies on the SHD and SSC datasets for different models, with (a) fully connected layers (FC) and (b) sparse synaptic connections (S), where the number of synaptic connections of each neuron is reduced to ten for both SHD and SSC.
#### Computational resources
This project required about 500 GPU hours on a single Nvidia Tesla T4 GPU with two Intel(R) Xeon(R) CPU threads @ 2.20 GHz. Given this hardware configuration, a single training session lasted approximately 1 hour for the SHD runs, while for the SSC/GSC runs, a single training session lasted around 7 hours. The available computing resources allowed us to perform the required calculations efficiently, leading to accurate and competitive outcomes within a reasonable time.
#### Acknowledgment
This research was supported in part by the Agence Nationale de la Recherche under Grant ANR-20-CE45-0005 BRAIN-Net. This work was granted access to the HPC resources of CALMIP supercomputing center under the allocation 2023-[P22021]. Support from the ANR-3IA Artificial and Natural Intelligence Toulouse Institute is gratefully acknowledged. We also want to thank Wei Fang for developing the SpikingJelly framework that we used in this work.
|
2309.13293 | Complete integrability and equilibrium thermodynamics of biaxial nematic
systems with discrete orientational degrees of freedom | We study a discrete version of a biaxial nematic liquid crystal model with
external fields via an approach based on the solution of differential
identities for the partition function. In the thermodynamic limit, we derive
the free energy of the model and the associated closed set of equations of
state involving four order parameters, proving the integrability and exact
solvability of the model. The equations of state are specified via a suitable
representation of the orientational order parameters, which imply two-order
parameter reductions in the absence of external fields. A detailed exact
analysis of the equations of state reveal a rich phase diagram where isotropic
versus uniaxial versus biaxial phase transitions are explicitly described,
including the existence of triple and tricritical points. Results on the
discrete models are qualitatively consistent with their continuum analog. This
observation suggests that, in more general settings, discrete models may be
used to capture and describe phenomena that also occur in the continuum for
which exact equations of state in closed form are not available. | Giovanni De Matteis, Francesco Giglio, Antonio Moro | 2023-09-23T07:27:50Z | http://arxiv.org/abs/2309.13293v2 | Complete integrability and equilibrium thermodynamics of biaxial nematic systems with discrete orientational degrees of freedom
###### Abstract
We study a discrete version of a biaxial nematic liquid crystal model with external fields via an approach based on the solution of differential identities for the partition function. In the thermodynamic limit, we derive the free energy of the model and the associated closed set of equations of state involving four order parameters, proving the integrability and exact solvability of the model. The equations of state are specified via a suitable representation of the orientational order parameters, which imply two-order parameter reductions in the absence of external fields. A detailed exact analysis of the equations of state reveal a rich phase diagram where isotropic versus uniaxial versus biaxial phase transitions are explicitly described, including the existence of triple and tricritical points. Results on the discrete models are qualitatively consistent with their continuum analog. This observation suggests that, in more general settings, discrete models may be used to capture and describe phenomena that also occur in the continuum for which exact equations of state in closed form are not available.
Keywords: Liquid Crystals \(|\) Integrability \(|\) Phase Transitions \(|\) Biaxiality
## 1 Introduction
Mean-field models in Statistical Mechanics and Thermodynamics are a powerful tool to explore general qualitative properties of thermodynamic systems that, otherwise, would not be analytically treatable. The conceptual and historical importance of mean-field models is attested by the celebrated van der Waals and Curie-Weiss models [1], complemented by Maxwell's equal areas rule (see e.g. [2]), which provided the first qualitative description of the mechanisms for the occurrence of phase transitions in fluids and magnetic systems. It is also well established that, in order to obtain accurate quantitative predictions, mean-field models need to be replaced by models with finite range interactions, which are generally more challenging, and solvable cases require the use of sophisticated techniques such as, for example, the transfer matrix and the renormalisation group, see e.g. [3].
Spin models are the archetypal example of models aimed at describing the macroscopic and collective behaviour of systems made up of components with internal degrees of freedom (in the simplest case the spin \(\sigma=\pm 1\)) with pairwise (and also higher order) interactions. Such models, although originally introduced in condensed matter physics to explain the magnetic properties of materials, are of universal importance, as testified by applications in other disciplines such as Biology, Economics, Social Sciences, see e.g. [4, 5, 6] and references therein. It is also worth noting that a resurgence of interest in spin-like mean-field models over the last decade is due to studies concerning their deployment for information processing, classification, memory retrieval and, more generally, machine learning purposes [7]. These studies, originally inspired by the pioneering work of Hopfield [8], led to the definition of models for neural networks, such as the Boltzmann machines and their variations, based on spin glasses and statistical inference algorithms for training and learning [9]. The key idea in this context is that spin particles sit at the nodes of a graph and possess internal degrees of freedom, i.e. their spin values are interpreted as node states of the neural network associated to the graph. The spin-spin interaction constant corresponds to the weight associated to the links on the network.
although originally introduced in condensed matter physics to explain magnetic properties of materials are, however, of universal importance, as testified by applications in other disciplines such as Biology, Economics, Social Sciences, see e.g. [4, 5, 6] and references therein. It is also worth noting that a resurgence of interest, in the last decade, for spin-like mean-field models is due to the studies concerning their deployment for information processing, classification, memory retrieval and, more generally, machine learning purposes [7]. These studies, originally inspired by the pioneering work of Hopfield [8], led to the definition of models for neural networks, such as the Boltzmann machines and their variations, based on spin glasses and statistical inference algorithms for training and learning [9]. The key idea in this context is that spin particles sit at a node of a graph and possess internal degrees of freedom, i.e. their spin values are interpreted as node states of the neural network associated to the graph. The spin-spin interaction constant corresponds to the weight associated to the links on the network.
In this paper, we consider a biaxial version of the discrete Maier-Saupe model for nematic liquid crystals (LCs) as studied in [10], whose structure resembles a multi-partite spin model with spin components subject to suitable constraints. The model consists of a system of particles endowed with an internal assigned geometry and symmetries with only orientational degrees of freedom. Not surprisingly, the exact analytical description of their macroscopic thermodynamic behaviour, phase transitions and emergent properties is, in general, not available and therefore alternative approaches and approximation techniques need to be adopted [11]. Numerical simulations [12], Landau's expansion of the free energy [13], group representation and bifurcation theory [14] are approaches that allow to explore, at least locally, i.e. in the neighbourhood of specified values for the thermodynamic parameters, the possible occurrence of criticalities and phase transitions, and estimate relevant thermodynamic quantities such as orientational order, specific heat, critical exponents. Mean-field models are effective in providing insights that complement and support the aforementioned methodologies and all together help achieve accurate qualitative description and predictions on key properties of LCs including those that are paramount for technological applications [15].
From a physical viewpoint, in the last few decades, the biaxial nematic liquid crystal phase has been the object of much intense study. The story of this phase has its roots back to 1970 [16], when the theoretical physicist Marvin Freiser noted that rather than possessing a rod-like shape (i. e. \(D_{\infty h}\) symmetry), as usually assumed, most thermotropic, mesogenic molecules were in fact closer to being board-like, thus intrinsically biaxial (i. e. endowed with \(D_{2h}\) symmetry). Usually, they produce uniaxial nematic phases as a consequence of the rotational disorder around the long molecular axis, which eventually yields the definition of a single macroscopic director. This rotational disorder can be overcome by molecular mutual interactions favoring the molecules to align parallel to one another, thus leading to a thermotropic biaxial nematic phase at sufficiently low temperatures. Accordingly, Freiser understood that mesogens should be expected to exhibit a biaxial nematic phase, in addition to the usual uniaxial one. The prediction of a second nematic phase possessing novel properties and promising potential applications, stimulated considerable interest, as well as not little debate. In fact, on the experimental side, stable biaxial phases have been observed in lyotropic systems since the pioneering work of Yu and Saupe [17]. In contrast, the experimental proof in favour of their existence in thermotropic systems has been subject of scrutiny and criticism in [18, 19, 20]. In the period 1986 to 2003, the matter remained controversial with no widely accepted results [18, 21, 22]. However, since 2004, clearer experimental evidence was provided for a few classes of compounds, such as polar bent-core or V-shaped molecules [23, 24, 25], and organosiloxane tetrapodes or their counterparts with a germanium core [26, 27, 28, 29, 30]. These compounds have been investigated by several techniques which led to measurements of biaxial order parameters [31]. According to these experimental results, an alternative picture of biaxial nematic order has emerged [32, 33, 34, 35, 36], based on the idea of biaxial domains reoriented by surface anchoring or external fields. Other researchers
[37, 38, 39] have also pointed out that the biaxial nematic order is related to the onset of smectic fluctuations. Moreover, it has also been remarked that biaxial nematics may be formed from molecules possessing a lower symmetry than the usually assumed \(D_{2h}\) one, as for istance the \(C_{2h}\) symmetry [35, 40, 41, 42, 43, 44]. In addition, quite recently [45, 46], low symmetry interaction models have been addressed, involving dipolar contribution, so as to describe polar bent-core molecules. The study of biaxial nematics is not only of theoretical origin, it is also connected with their potential technological applications in displays [32, 47, 48, 49, 50, 51]: orientation of the secondary director in response to external perturbations is expected to be significantly faster than the primary one [33, 40]. Biaxial nematic phases have also been produced in colloidal suspensions of inorganic compounds [52, 53, 54]. More recently, in [55], Smalyuhk et al. have considered a hybrid molecular-colloidal soft-matter system with orthorhombic biaxial orientational order and fluidity. This molecular-colloidal complex fluid is made up of only uniaxial rod-like building blocks. In contrast, this complex fluid exhibits a surprising self-assembly into a biaxial nematic liquid crystal with the \(D_{2h}\) point group symmetry. Finally, let us mention that very recently, the emergence of biaxial order upon mechanical strain has been proved experimentally in a nematic liquid crystal elastomer, the first synthetic auxetic material at a molecular level [56]. By measuring the order parameters during deformation, the deviation from Maier-Saupe theory was detected for the uniaxial order parameters and the biaxial order parameters were deduced, suggesting the occurrence of biaxiality in the initially uniaxial system.
On the theoretical side, after Freiser's first prediction [16], investigations were actively carried on along different approaches such as molecular-field or Landau theories, and later on by computer simulations. By the end of the past century, this collection of theoretical methodologies has shown that single-component models consisting of molecules possessing \(D_{2h}\) symmetry, and interacting via various continuous or hard-core potentials, are capable of producing a biaxial nematic phase under appropriate thermodynamic conditions [32, 57, 58, 59]. Theoretical studies usually predict a low-temperature biaxial phase, undergoing a transition to the uniaxial one, which, in turn, finally turns into the isotropic phase. In some cases, the transition takes place directly from the biaxial nematic to the isotropic phase. In the former cases, the ratio between the two transition temperatures (biaxial-to-uniaxial and uniaxial-to-isotropic) often turns out to be rather small in comparison with experimentally known stability ranges of the nematic phase. Both the isotropic-to-biaxial and uniaxial-to-biaxial phase transitions can be either first- or second-order, and, accordingly, the phase diagram exhibits _triple_ and _tricritical points_. However, in the low temperature range, other phases, such as smectic or solid ones, may become more likely to occur. On the other hand, most theoretical frameworks only allow for isotropic and nematic phases [33], being the positional order not accounted for. Over the years, a rather simple, continuous, biaxial mesogenic pair interaction model has been proposed and investigated by several authors and via several types of techniques. In the literature, this model is known as the _generalised Straley interaction_[60] and it finds its roots in the celebrated Maier-Saupe model for interacting uniaxial nematic molecules [61, 62, 63]. Actually, over the last two decades, several properties of this model have emerged, such as possible simplifications, additional symmetries and versatility in applications. More precisely, in 2003, new experimental findings on biaxial nematics boosted a renewed theoretical interest by some authors [64, 65, 66, 67, 68, 69]. More precisely, the generalised Straley pair potential model was studied by mean-field, as well as Monte Carlo simulation in the simple-cubic lattice-model version and, correspondingly, the effects produced on the resulting macroscopic behaviour were analysed [57, 58, 66, 67, 69, 70]. Moreover, motivated by the new experimental facts, the single-tensor Landau-de Gennes theory of biaxial nematics has been carefully revisited and a double-tensor Landau theory was put forward and studied [71, 72]. The hidden link between mean-field and Landau-de Gennes-type treatments has also been studied [73, 74, 75]. The Straley potential model involves three independent parameters, and the aforementioned studies have shown that the model is rather
versatile and capable of producing both biaxial and purely uniaxial order. In addition, the effect of strong antinematic terms, i.e. terms promoting misalignment, in the pair potential onto the resulting orientational order has been investigated [58, 76]. As shown in [76, 77], these antinematic terms in the Straley model may destroy biaxiality, producing only uniaxial orientational order, and in some cases show evidence of the existence of a continuous ordering transition, in contrast with the discontinuous phase transition predicted by the simple Maier-Saupe model. Moreover, in [78], the Straley potential only contains antinematic terms, and it is found to produce biaxial order via a mechanism of order by disorder. In [60] the authors investigated the effect of two predominant antinematic couplings of equal strength perturbed by a comparatively weaker calamitic one. The resulting phases are a pure calamitic uniaxial phase, accompanied by an intermediate antinematic uniaxial phase.
In this work, we consider a _discrete_ version of the celebrated Maier-Saupe model for nematic LCs as the one considered in [10] and study its _biaxial_ generalisation, i.e. the Straley model, further extended to account for the effects of external fields. More specifically, molecules are assumed to be rigid cuboids, with two individual orientational degrees of freedom associated with two of the three principal axes of inertia, as the position of the third axis is automatically determined. It is also assumed that homologous principal axes of inertia interact pairwise for any pair of molecules in the system. This assumption specifically characterises mean-field models, where indeed any pair of molecules interacts equally, independently of their distance, and therefore positional degrees of freedom are not relevant. A further assumption is that orientational degrees of freedom are discrete, namely principal axes can only be parallel to the directions of a pre-defined Cartesian reference frame. The discretisation of orientational degrees of freedom for nematic liquid crystal models was firstly introduced by Zwanzig in [79] and successfully employed in various works, including recent papers [10, 80]. Although this assumption may seem at a glance restrictive, it captures, as observed in [10], with striking accuracy, properties of the continuum model. We show, via explicit examples, that the predictions obtained under specific symmetry reductions are consistent with the ones present in the literature for the corresponding continuum models.
It is also important to note that, although, on one hand, the above assumptions restrict the model and allow us to derive explicit global equations for the thermodynamic order parameters, on the other hand, the model is more general than its continuum analogues and the methodology adopted naturally incorporates external fields interacting with each orientational degree of freedom. Therefore, to the best of our knowledge, we provide the first theoretical study of the equilibrium statistical mechanics of a molecular field theory for biaxial liquid crystals subject to external fields.
To solve the model, we show that the partition function \(Z_{N}\) of the \(N-\)molecule discrete Straley biaxial model with external fields satisfies a remarkable differential identity as a function of the temperature and coupling constants. Using suitably re-scaled independent variables, the differential identity for the partition function of the finite-size model is equivalent, up to a linear change of variables, to the heat equation. The required solution is therefore obtained by solving a linear equation with a specific initial condition, fixed by the value of the partition function of the non-interacting model, whose evaluation is straightforward. The properties of the system in the thermodynamic regime are obtained by studying the behaviour of the free energy
\[\mathcal{F}_{N}:=\frac{1}{N}\log Z_{N}\]
in the limit as \(N\to\infty\), which corresponds to the _semi-classical_, or low-diffusion, limit of the heat equation, via a suitable asymptotic expansion of the free energy in powers of \(N^{-1}\). At the leading order, the problem is solved via a Hamilton-Jacobi equation, which can be explicitly integrated; the solution is given in terms of the orientational order parameters, from which
the equations of state follow as stationary points of the free energy functional. We study in detail the solution of the Hamilton-Jacobi equation and, specifically, a related system of quasi-linear PDEs for the orientational order parameters. The solution reveals a complex singularity structure describing transitions between isotropic, uniaxial and biaxial phases, both in the presence and in the absence of external fields. We tackle the problem in its generality and classify all admissible reductions in the absence of external fields. We show that our results on phase diagrams and stability of specific phases are consistent with those already present in the literature, found via different techniques.
As pointed out in a number of papers [81, 82, 83, 84, 85, 86, 87, 10, 88, 89, 90], the nature of the PDEs derived for the orientational order parameters suggests a natural interpretation of the singularities as classical shocks propagating in the space of thermodynamic variables. This allows one to explain and, qualitatively, predict some features of the phase diagram based on the general properties of shock waves, as for example the occurrence of _tricritical_ points via a collision and merging mechanism of two shock waves. This example demonstrates how such an interpretation is at the same time intriguing and of practical use.
The paper is organised as follows. In Section 2 we introduce the physical model under study, we derive differential identities for the statistical partition function and we provide exact solutions for the model in the finite-size regime. In Section 3, we perform the thermodynamic limit and derive exact equations of state for the full model. Two-parameter reductions are also obtained in the cases of i) zero fields and ii) non-zero fields under special constraints. In Section 4 we present the phase diagram of the model in the absence of external fields, and discuss criticality and the behaviour of the corresponding order parameters. Section 5 is devoted to concluding remarks.
## 2 The discrete \(\lambda\)-model for biaxial nematics
Let us consider a system of \(N\) interacting liquid crystal molecules with \(D_{2h}\) symmetry, whose molecular directors \(\vec{m}\), \(\vec{e}\) and \(\vec{e}_{\perp}\) are mutually orthogonal unit vectors parallel to their principal axes. The orientational state of a given molecule is identified by the directions of its molecular axes. Introducing the tensors (see e.g. [91])
\[{\bf q}=\vec{m}\otimes\vec{m}-\frac{1}{3}{\bf I}\qquad,\qquad{\bf b}=\vec{e} \otimes\vec{e}-\vec{e}_{\perp}\otimes\vec{e}_{\perp} \tag{2.1}\]
where \({\bf I}\) is the \(3\times 3\) identity matrix, we consider the Hamiltonian of the form
\[H_{0}=-\frac{\mu}{2N}\sum_{i,j}\left({\bf q}_{i}\cdot{\bf q}_{j}+\lambda\,{ \bf b}_{i}\cdot{\bf b}_{j}\right)\,, \tag{2.2}\]
where \({\bf q}_{i}\) and \({\bf b}_{i}\) specify the orientational state of the \(i-\)th molecule and the scalar product is \({\bf a}\cdot{\bf b}:={\rm Tr}\,({\bf a}{\bf b})\), where \({\rm Tr}\,\) is the trace operator. Summation indices \(i\) and \(j\) run from \(1\) to \(N\), \(\mu\) is the non-negative mean-field coupling constant and \(\lambda\) is a parameter weighing the degree of biaxiality. In the present paper, we assume \(\lambda\in[0,1]\). In this range, the ground state for two interacting molecules corresponds to parallel homologous axes, that is \({\mathbf{e}}_{i}\) tend to line up with \({\mathbf{e}}_{j}\), \({\mathbf{m}}_{i}\) with \({\mathbf{m}}_{j}\) and \({\mathbf{e}}_{\perp,i}\) with \({\mathbf{e}}_{\perp,j}\). When \(\lambda=0\), the above Hamiltonian reduces to the classical Maier-Saupe model. The specific choice \(\lambda=1/3\) corresponds to the MMM model for liquid crystals with equally nematic interaction among corresponding molecular axes [66], i.e.
\[H_{1}=-\frac{\mu}{2N}\sum_{i,j}\left({\bf q}_{i}\cdot{\bf q}_{j}+\frac{1}{3}\, {\bf b}_{i}\cdot{\bf b}_{j}\right)\,=-\frac{\mu}{2N}\frac{2}{3}\sum_{i,j} \left[({\mathbf{m}}_{i}\cdot{\mathbf{m}}_{j})^{2}+({\mathbf{e}}_{i}\cdot{\mathbf{e}}_{j})^{2}+({\mathbf{e}}_ {\perp,i}\cdot{\mathbf{e}}_{\perp,j})^{2}-\frac{1}{2}\right]\,. \tag{2.3}\]
For convenience, we have included self-interaction terms corresponding to \(i=j\). This choice does not affect the result, as it merely shifts the energy by a constant. Assuming that the allowed configurations are such that the molecular directors are parallel to the axes of a fixed Cartesian reference frame, the Hamiltonian (2.2) can be written as follows
\[H_{0}=-\frac{\mu}{2N}\sum_{i,j}\sum_{l,k\in\{1,2\}}c_{kl}\left(\Lambda_{i}^{l} \Lambda_{j}^{k}+\lambda\,\Lambda_{i}^{l+2}\Lambda_{j}^{k+2}\right)\,,\]
where \(c_{kl}=1+\delta_{kl}\) for \(k,l=1,2\), and \(\Lambda_{i}^{l}\), with \(i=1,\cdots,N\), and \(l=1,2,3,4\), parametrise the components of \(\mathbf{q}_{i}\) and \(\mathbf{b}_{i}\) as follows
\[\mathbf{q}_{i}=\text{diag}(\Lambda_{i}^{1},\Lambda_{i}^{2},-\Lambda_{i}^{1}- \Lambda_{i}^{2})\qquad,\qquad\mathbf{b}_{i}=\text{diag}(\Lambda_{i}^{3}, \Lambda_{i}^{4},-\Lambda_{i}^{3}-\Lambda_{i}^{4}) \tag{2.4}\]
giving six possible orientational states of each molecule. In particular, we have that for the \(i-\)th molecule, \(\Lambda_{i}=\left(\Lambda_{i}^{1},\Lambda_{i}^{2},\Lambda_{i}^{3},\Lambda_{i}^ {4}\right)\in\{\Lambda^{(1)},\Lambda^{(2)},\cdots,\Lambda^{(6)}\}\), where
\[\Lambda^{(1)} =\left(\frac{2}{3},-\frac{1}{3},0,-1\right)\qquad\Lambda^{(2)}= \left(\frac{2}{3},-\frac{1}{3},0,1\right)\qquad\qquad\Lambda^{(3)}=\left(- \frac{1}{3},\frac{2}{3},1,0\right) \tag{2.5a}\] \[\Lambda^{(4)} =\left(-\frac{1}{3},\frac{2}{3},-1,0\right)\qquad\Lambda^{(5)}= \left(-\frac{1}{3},-\frac{1}{3},-1,1\right)\qquad\Lambda^{(6)}=\left(-\frac{1 }{3},-\frac{1}{3},1,-1\right)\,. \tag{2.5b}\]
Upon introducing the quantities \(M^{l}=\sum_{i}\Lambda_{i}^{l}/N\) with \(l=1,2,3,4\), the Hamiltonian \(H_{0}\) reads as follows
\[H_{0}=-\mu N\left[(M^{1})^{2}+M^{1}M^{2}+(M^{2})^{2}+\lambda\left((M^{3})^{2} +M^{3}M^{4}+(M^{4})^{2}\right)\right]. \tag{2.6}\]
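For readers who wish to experiment with the discrete model numerically, the following short Python sketch (ours, not part of the original derivation; it assumes numpy is available, and the values of \(\mu\), \(\lambda\) and the sampled configuration are purely illustrative) checks on a random configuration that the pairwise Hamiltonian (2.2), restricted to the six states (2.5a)-(2.5b), coincides with the mean-field form (2.6).

```python
# Minimal numerical check that the pairwise Hamiltonian (2.2), restricted to the six
# discrete states (2.5), reduces to the mean-field form (2.6).
import numpy as np

# The six admissible quadruples (Lambda^1, Lambda^2, Lambda^3, Lambda^4) of Eqs. (2.5a)-(2.5b)
STATES = np.array([
    [ 2/3, -1/3,  0, -1],
    [ 2/3, -1/3,  0,  1],
    [-1/3,  2/3,  1,  0],
    [-1/3,  2/3, -1,  0],
    [-1/3, -1/3, -1,  1],
    [-1/3, -1/3,  1, -1],
])

def tensors(lam_vec):
    """Return the diagonal order tensors q and b of Eq. (2.4) for one molecule."""
    l1, l2, l3, l4 = lam_vec
    return np.diag([l1, l2, -l1 - l2]), np.diag([l3, l4, -l3 - l4])

def H0_pairwise(config, mu, lam):
    """Hamiltonian (2.2), self-interactions included; config is an array of state indices."""
    N, H = len(config), 0.0
    for i in config:
        for j in config:
            qi, bi = tensors(STATES[i]); qj, bj = tensors(STATES[j])
            H += np.trace(qi @ qj) + lam * np.trace(bi @ bj)
    return -mu / (2 * N) * H

def H0_meanfield(config, mu, lam):
    """Equivalent mean-field form (2.6) in terms of M^l = (1/N) sum_i Lambda_i^l."""
    M = STATES[config].mean(axis=0)
    quad = lambda a, b: a**2 + a*b + b**2
    return -mu * len(config) * (quad(M[0], M[1]) + lam * quad(M[2], M[3]))

rng = np.random.default_rng(0)
cfg = rng.integers(0, 6, size=20)
print(H0_pairwise(cfg, mu=1.0, lam=0.3), H0_meanfield(cfg, mu=1.0, lam=0.3))  # should agree
```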
We now proceed with modelling the interaction between the liquid crystal and external fields. Consistently with previous studies on uniaxial [92, 93, 94, 95] and biaxial nematics [96], we assume that the interaction between an individual biaxial liquid crystal molecule and external fields produces a term that is linear in the molecular tensors.
Let \(\boldsymbol{\epsilon}=\text{diag}\left(\epsilon_{1},\epsilon_{2},\epsilon_{3}\right)\) and \(\boldsymbol{\chi}=\text{diag}\left(\chi_{1},\chi_{2},\chi_{3}\right)\) be two tensors associated with a general external field and let \(H_{ex}\) be the Hamiltonian modelling the interaction between the external field and the liquid crystal molecules. Our assumption implies that \(H_{ex}\) is of the form
\[H_{ex} =-\sum_{i}\left(\boldsymbol{\epsilon}\cdot\mathbf{q}_{i}+ \boldsymbol{\chi}\cdot\mathbf{b}_{i}\right) \tag{2.7}\] \[=-N\left[(\epsilon_{1}-\epsilon_{3})M^{1}+(\epsilon_{2}-\epsilon _{3})M^{2}+(\chi_{1}-\chi_{3})M^{3}+(\chi_{2}-\chi_{3})M^{4}\right]\,. \tag{2.8}\]
By introducing the notation \(\epsilon_{k3}=\epsilon_{k}-\epsilon_{3}\) and \(\chi_{k3}=\chi_{k}-\chi_{3}\) with \(k=1,2\), we can write
\[H_{ex}=-N\left(\epsilon_{13}M^{1}+\epsilon_{23}M^{2}+\chi_{13}M^{3}+\chi_{23} M^{4}\right)\,. \tag{2.9}\]
Hence, the full Hamiltonian for the mean-field model under study in this work is \(H=H_{0}+H_{ex}\). The associated partition function for the Gibbs distribution is given by the expression
\[Z_{N}=\sum_{\{(\mathbf{q},\mathbf{b})\}}\exp(-\beta H),\]
where the summation refers to all possible configurations of \((\mathbf{q}_{i},\mathbf{b}_{i})\) and \(\beta=1/T\) with \(T\) denoting the absolute temperature. Upon introducing the rescaled coupling constants \(t:=\beta\mu\), \(x:=\beta\epsilon_{13}\), \(y:=\beta\epsilon_{23}\), \(z:=\beta\chi_{13}\) and \(w:=\beta\chi_{23}\), the partition function reads as
\[Z_{N}=\sum_{\{(\mathbf{q},\mathbf{b})\}}e^{N\left\{t\left[(M^{1})^{2}+M^{1}M^{ 2}+(M^{2})^{2}+\lambda\left((M^{3})^{2}+M^{3}M^{4}+(M^{4})^{2}\right)\right]+xM ^{1}+yM^{2}+zM^{3}+wM^{4}\right\}}\,. \tag{2.10}\]
In the following, similarly to the case of van der Waals type models [84, 86], spin systems [97] and the generalisation of the Maier-Saupe model in [10], we look for a differential identity satisfied by the partition function and calculate the associated initial condition. We observe that the partition function (2.10) satisfies the \((4+1)\)-dimensional linear PDE
\[\frac{\partial Z_{N}}{\partial t}=\frac{1}{N}\left[\frac{\partial^{2}Z_{N}}{ \partial x^{2}}+\frac{\partial^{2}Z_{N}}{\partial x\;\partial y}+\frac{ \partial^{2}Z_{N}}{\partial y^{2}}+\lambda\left(\frac{\partial^{2}Z_{N}}{ \partial z^{2}}+\frac{\partial^{2}Z_{N}}{\partial z\;\partial w}+\frac{ \partial^{2}Z_{N}}{\partial w^{2}}\right)\right]. \tag{2.11}\]
Note that, for \(\lambda>0\), equation (2.11) can be transformed via a linear transformation of the spatial coordinates into the heat equation
\[\frac{\partial Z_{N}}{\partial t}=\sigma\left(\frac{\partial^{2}Z_{N}}{ \partial x^{\prime 2}}+\frac{\partial^{2}Z_{N}}{\partial y^{\prime 2}}+\frac{ \partial^{2}Z_{N}}{\partial z^{\prime 2}}+\frac{\partial^{2}Z_{N}}{\partial w^{ \prime 2}}\right)\,,\]
where \(x^{\prime}\),\(y^{\prime}\), \(z^{\prime}\), \(w^{\prime}\) denote the new coordinates and \(\sigma=1/N\) is the analogue of the heat conductivity. More precisely, the transformation of coordinates is given by \(\mathbf{u}^{\prime}=\mathbf{P}_{\lambda}\mathbf{u}\), where
\[\mathbf{u}^{\prime}=(x^{\prime},y^{\prime},z^{\prime},w^{\prime})^{T},\ \mathbf{u}=(x,y,z,w)^{T}\ \text{and}\ \mathbf{P}_{\lambda}=\left(\begin{array}{cccc}2&-2&0&0\\ 2/\sqrt{3}&2/\sqrt{3}&0&0\\ 0&0&2/\sqrt{\lambda}&-2/\sqrt{\lambda}\\ 0&0&2/\sqrt{3\lambda}&2/\sqrt{3\lambda}\end{array}\right)\,.\]
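As a sanity check of the change of variables, the following symbolic computation (ours; it relies on the sympy library) verifies that \(\mathbf{P}_{\lambda}\) maps the mixed second-order operator of Eq. (2.11) into a multiple of the four-dimensional Laplacian. The overall constant produced by this particular normalisation of \(\mathbf{P}_{\lambda}\) is simply absorbed into the conductivity \(\sigma\).

```python
# Symbolic check: under u' = P_lambda u the operator with coefficient matrix A becomes
# sum_{il} (P A P^T)_{il} d_{u'_i} d_{u'_l}; a multiple of the identity means the
# transformed equation is a standard heat equation in the new variables.
import sympy as sp

lam = sp.symbols('lambda', positive=True)

# Coefficient matrix of  d_xx + d_xy + d_yy + lambda*(d_zz + d_zw + d_ww)
A = sp.Matrix([[1, sp.Rational(1, 2), 0, 0],
               [sp.Rational(1, 2), 1, 0, 0],
               [0, 0, lam, lam/2],
               [0, 0, lam/2, lam]])

P = sp.Matrix([[2, -2, 0, 0],
               [2/sp.sqrt(3), 2/sp.sqrt(3), 0, 0],
               [0, 0, 2/sp.sqrt(lam), -2/sp.sqrt(lam)],
               [0, 0, 2/sp.sqrt(3*lam), 2/sp.sqrt(3*lam)]])

print(sp.simplify(P * A * P.T))   # proportional to the 4x4 identity matrix
```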
The associated initial condition, \(Z_{0,N}(x,y,z,w):=Z_{N}(x,y,z,w,t=0)\), corresponds to the value of the partition function of the model for non-interacting molecules. Given that the exponent is linear in the variables \(M^{1}\), \(M^{2}\), \(M^{3}\) and \(M^{4}\), the initial condition can be evaluated by recursion and gives the following formula
\[Z_{0,N}=\left(\sum_{i=1}^{6}e^{x\Lambda^{1,i}+y\Lambda^{2,i}+z\Lambda^{3,i}+w \Lambda^{4,i}}\right)^{N}, \tag{2.12}\]
where the index \(i\) labels the quadruples \(\Lambda^{(i)}=\left(\Lambda^{1,i},\Lambda^{2,i},\Lambda^{3,i},\Lambda^{4,i}\right)\) defined in Eqs. (2.5a)-(2.5b).
The exact solution to the equation (2.11) for a given number of molecules \(N\) can be formally obtained by separation of variables using as a basis the set of exponential functions obtained by expanding the \(N-\)th power at the r.h.s. of equation (2.12). The solution reads as
\[Z_{N}=\sum_{\{\vec{k}\}}B_{\vec{k}}\ A_{\vec{k}}(t;\lambda)\exp\left(x\,\omega _{\vec{k}}^{1}+y\,\omega_{\vec{k}}^{2}+z\,\omega_{\vec{k}}^{3}+w\,\omega_{ \vec{k}}^{4}\right) \tag{2.13}\]
where \(\vec{k}=(k_{1},\ldots,k_{6})\) is a multi-index such that \(k_{i}=0,\ldots,N_{i}\) with \(N_{1}=N\), \(N_{i}=N_{i-1}-k_{i-1}\) for \(i=2,\ldots,5\), \(k_{6}=N-\sum_{i=1}^{5}k_{i}\), \(\omega_{\vec{k}}^{l}=\sum_{i=1}^{6}\Lambda^{l,i}k_{i}\), \(l=1,2,3,4\) and
\[B_{\vec{k}}=\prod_{i=1}^{6}\binom{N_{i}}{k_{i}},\quad A_{\vec{k}}=\exp\left\{ \frac{t}{N}\left[\left(\omega_{\vec{k}}^{1}\right)^{2}+\omega_{\vec{k}}^{1} \omega_{\vec{k}}^{2}+\left(\omega_{\vec{k}}^{2}\right)^{2}+\lambda\left(\left( \omega_{\vec{k}}^{3}\right)^{2}+\omega_{\vec{k}}^{3}\omega_{\vec{k}}^{4}+ \left(\omega_{\vec{k}}^{4}\right)^{2}\right)\right]\right\}.\]
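The following Python sketch (ours, not from the paper; it assumes numpy, and all parameter values are arbitrary) illustrates the finite-size solution by comparing the closed-form sum (2.13) with a brute-force evaluation of (2.10) over all \(6^{N}\) configurations for a small \(N\). The coefficient \(B_{\vec{k}}\), written in the text as a product of binomials, is computed here as the equivalent multinomial coefficient.

```python
# Brute-force check of the closed-form finite-N solution (2.13) against a direct sum of
# Eq. (2.10) over all 6^N configurations, for a small N.
import numpy as np
from itertools import product
from math import factorial, exp

STATES = np.array([
    [ 2/3, -1/3,  0, -1], [ 2/3, -1/3, 0, 1],
    [-1/3,  2/3,  1,  0], [-1/3,  2/3, -1, 0],
    [-1/3, -1/3, -1,  1], [-1/3, -1/3, 1, -1],
])

def quad(a, b):                     # a^2 + a*b + b^2
    return a*a + a*b + b*b

def Z_bruteforce(N, t, lam, x, y, z, w):
    Z = 0.0
    for cfg in product(range(6), repeat=N):
        M = STATES[list(cfg)].mean(axis=0)
        Z += exp(N * (t * (quad(M[0], M[1]) + lam * quad(M[2], M[3]))
                      + x*M[0] + y*M[1] + z*M[2] + w*M[3]))
    return Z

def Z_closedform(N, t, lam, x, y, z, w):
    Z = 0.0
    # sum over occupation numbers (k1,...,k6) with k1+...+k6 = N, as in Eq. (2.13)
    for k in product(range(N + 1), repeat=5):
        k6 = N - sum(k)
        if k6 < 0:
            continue
        ks = list(k) + [k6]
        B = factorial(N)
        for ki in ks:               # multinomial coefficient N!/(k1!...k6!) = B_k
            B //= factorial(ki)
        om = STATES.T @ np.array(ks)          # omega^l = sum_i Lambda^{l,i} k_i
        A = exp(t / N * (quad(om[0], om[1]) + lam * quad(om[2], om[3])))
        Z += B * A * exp(x*om[0] + y*om[1] + z*om[2] + w*om[3])
    return Z

args = dict(N=4, t=0.7, lam=0.3, x=0.1, y=-0.2, z=0.05, w=0.0)
print(Z_bruteforce(**args), Z_closedform(**args))   # the two values should coincide
```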
Let us define the scalar order parameters \(m_{N}^{1}\), \(m_{N}^{2}\), \(m_{N}^{3}\) and \(m_{N}^{4}\) as the expectation values of, respectively, \(M^{1}\), \(M^{2}\), \(M^{3}\) and \(M^{4}\), i.e.
\[m_{N}^{l}:=\langle M^{l}\rangle=\frac{1}{Z_{N}}\sum_{\{(\mathbf{q},\mathbf{b}) \}}M^{l}e^{-\beta H},\qquad l=1,2,3,4. \tag{2.14}\]
Upon introducing the free-energy density as \(\mathcal{F}_{N}:=(1/N)\log Z_{N}\), the order parameters can be calculated by direct differentiation as follows
\[m_{N}^{1}=\frac{\partial\mathcal{F}_{N}}{\partial x}\quad,\quad m_{N}^{2}= \frac{\partial\mathcal{F}_{N}}{\partial y}\quad,\quad m_{N}^{3}=\frac{\partial \mathcal{F}_{N}}{\partial z}\quad,\quad m_{N}^{4}=\frac{\partial\mathcal{F}_{N }}{\partial w}\,. \tag{2.15}\]
Equation (2.11) implies that the free-energy density satisfies the following differential identity
\[\frac{\partial\mathcal{F}_{N}}{\partial t} =\left(\frac{\partial\mathcal{F}_{N}}{\partial x}\right)^{2}+\frac{\partial\mathcal{F}_{N}}{\partial x}\frac{\partial\mathcal{F}_{N}}{\partial y}+\left(\frac{\partial\mathcal{F}_{N}}{\partial y}\right)^{2}+\lambda\left[\left(\frac{\partial\mathcal{F}_{N}}{\partial z}\right)^{2}+\frac{\partial\mathcal{F}_{N}}{\partial z}\frac{\partial\mathcal{F}_{N}}{\partial w}+\left(\frac{\partial\mathcal{F}_{N}}{\partial w}\right)^{2}\right]\] \[+\frac{1}{N}\left[\frac{\partial^{2}\mathcal{F}_{N}}{\partial x^{2}}+\frac{\partial^{2}\mathcal{F}_{N}}{\partial x\;\partial y}+\frac{\partial^{2}\mathcal{F}_{N}}{\partial y^{2}}+\lambda\left(\frac{\partial^{2}\mathcal{F}_{N}}{\partial z^{2}}+\frac{\partial^{2}\mathcal{F}_{N}}{\partial z\ \partial w}+\frac{\partial^{2}\mathcal{F}_{N}}{\partial w^{2}}\right)\right]. \tag{2.16}\]
In Section 3, we derive the equations of state in the thermodynamic (large \(N\)) regime via a direct asymptotic approximation of the solution to equation (2.16). Before proceeding, it is worth emphasising that the case \(\lambda=0\) implies a reduction of the model (2.16) to the one studied in [10], although the initial condition considered in that work depends on the intrinsic molecular biaxiality parameter \(\Delta\), differently from the present case in which the degree of biaxiality of the interaction is entirely contained in the internal energy term. The differences in the two treatments arise because in this paper we work with two order tensors, while in [10] the so-called geometric approximation on the interaction potential made it possible to work with a single order tensor, that is a linear combination of \(\mathbf{q}\) and \(\mathbf{b}\).
## 3 Thermodynamic limit and equations of state
The thermodynamic limit is defined as the regime where the number of particles \(N\) is large, i.e. \(N\to\infty\). Under the assumption that the free-energy admits the expansion of the form \(\mathcal{F}_{N}=F+O\left(1/N\right)\) and by using Eq. (2.16) we obtain, at the leading order, the following Hamilton-Jacobi type equation
\[\frac{\partial F}{\partial t}=\left(\frac{\partial F}{\partial x}\right)^{2}+ \frac{\partial F}{\partial x}\frac{\partial F}{\partial y}+\left(\frac{ \partial F}{\partial y}\right)^{2}+\lambda\left[\left(\frac{\partial F}{ \partial z}\right)^{2}+\frac{\partial F}{\partial z}\frac{\partial F}{ \partial w}+\left(\frac{\partial F}{\partial w}\right)^{2}\right]. \tag{3.1}\]
A similar asymptotic expansion for the order parameters \(m_{N}^{l}=m^{l}+O(1/N)\) implies the relations
\[m^{1}=\frac{\partial F}{\partial x}\quad,\quad m^{2}=\frac{\partial F}{ \partial y}\quad,\quad m^{3}=\frac{\partial F}{\partial z}\quad,\quad m^{4} =\frac{\partial F}{\partial w}\,.\]
Equation (3.1) is completely integrable and can be solved via the method of characteristics. In particular, the solution can be expressed via the free-energy functional
\[\begin{split} F&=xm^{1}+ym^{2}+\,zm^{3}+\,wm^{4}\\ &+t\left[(m^{1})^{2}+m^{1}m^{2}+(m^{2})^{2}+\lambda\left((m^{3})^ {2}+m^{3}m^{4}+(m^{4})^{2}\right)\right]\\ &+S(m^{1},m^{2},m^{3},m^{4})\,,\end{split} \tag{3.2}\]
where \(m^{1}\), \(m^{2}\), \(m^{3}\) and \(m^{4}\) are stationary points of the free-energy, i.e.
\[\frac{\partial F}{\partial m^{l}}=0\quad\text{for }l=1,2,3,4\,.\]
Equivalently, order parameters are solutions to the following system of equations
\[\begin{split}\Psi_{1}&:=x+(2m^{1}+m^{2})t+\frac{ \partial S}{\partial m^{1}}=0,\qquad\Psi_{2}:=y+(m^{1}+2m^{2})t+\frac{\partial S }{\partial m^{2}}=0\,,\\ \Psi_{3}&:=z+(2m^{3}+m^{4})\lambda t+\frac{\partial S }{\partial m^{3}}=0,\qquad\Psi_{4}:=w+(m^{3}+2m^{4})\lambda t+\frac{\partial S }{\partial m^{4}}=0\,.\end{split} \tag{3.3}\]
The term \(S(m^{1},m^{2},m^{3},m^{4})\) represents the entropy of the system and, as discussed below, is uniquely fixed via the initial condition \(F_{0}=F(x,y,z,w,t=0)\).
The system (3.3) represents the set of equations of state for the \(\lambda\)-model. Hence, phase transitions can be studied through the analysis of critical points of the equations (3.3). Similarly to the thermodynamic models studied in [84, 85, 86, 89], the order parameters \(m^{l}\) can be viewed as solutions to a nonlinear integrable system of hydrodynamic type, where the coupling constants \(x\), \(y\), \(z\), \(w\) and \(t\) play the role of, respectively, space and time variables. In this framework, state curves within the critical region of a phase transition are the analogue of shock waves of the hydrodynamic flow. In order to completely specify the equations of state (3.3), we have to determine the function \(S(m^{1},m^{2},m^{3},m^{4})\). We proceed by evaluating Eqs. (3.3) at \(t=0\), that is
\[x(m_{0}^{1},m_{0}^{2},m_{0}^{3},m_{0}^{4})= -\left.\frac{\partial S}{\partial m^{1}}\right|_{m^{l}=m_{0}^{l}}, \quad y(m_{0}^{1},m_{0}^{2},m_{0}^{3},m_{0}^{4})= -\left.\frac{\partial S}{\partial m^{2}}\right|_{m^{l}=m_{0}^{l}},\] \[z(m_{0}^{1},m_{0}^{2},m_{0}^{3},m_{0}^{4})= -\left.\frac{\partial S}{\partial m^{3}}\right|_{m^{l}=m_{0}^{l} },\quad w(m_{0}^{1},m_{0}^{2},m_{0}^{3},m_{0}^{4})= -\left.\frac{\partial S}{\partial m^{4}}\right|_{m^{l}=m_{0}^{l}}, \tag{3.4}\]
where \(m_{0}^{l}=m^{l}(x,y,z,w,t=0)\), with \(l=1,2,3,4\). Equations (3.4) show that the function \(S(m^{1},m^{2},m^{3},m^{4})\) can be obtained, locally, by expressing \(x\), \(y\), \(z\) and \(w\) as functions of the order parameters \(m^{l}\) evaluated at \(t=0\) and then integrating Eqs. (3.4). Indeed, observing that the initial condition for \(F\) is \(F_{0}={\cal F}_{N,0}=(1/N)\log Z_{0,N}\), where \(Z_{0,N}\) is given in (2.12), the required functions can be obtained by inverting the system
\[m_{0}^{1}=\frac{\partial F_{0}}{\partial x}(x,y,z,w)\,,\,m_{0}^{2}=\frac{ \partial F_{0}}{\partial y}(x,y,z,w)\,,\,m_{0}^{3}=\frac{\partial F_{0}}{ \partial z}(x,y,z,w)\,,\,m_{0}^{4}=\frac{\partial F_{0}}{\partial w}(x,y,z,w)\,. \tag{3.5}\]
More explicitly, equations (3.5) read as follows
\[\sum_{i=1}^{6}\left(m_{0}^{l}-\Lambda^{l,i}\right)X^{\Lambda^{1,i}}Y^{ \Lambda^{2,i}}Z^{\Lambda^{3,i}}W^{\Lambda^{4,i}}=0\quad,\,l=1,2,3,4\,, \tag{3.6}\]
where we have introduced the notation \(X=\exp(x)\), \(Y=\exp(y)\), \(Z=\exp(z)\), \(W=\exp(w)\). Hence, equations of state (3.3) for the model with external fields are completely determined in terms of the roots of system of equations (3.6). We should also emphasise that system (3.6) is algebraic with respect to the variables \(X\), \(Y\), \(Z\) and \(W\).
**Remark.** The order parameters introduced here are related to the scalar order parameters adopted in [64] by the following linear transformation
\[m^{1}=T-S/3\,,\quad m^{2}=-T-S/3\,,\quad m^{3}=T^{\prime}-S^{\prime}/3\,,\quad m ^{4}=-T^{\prime}-S^{\prime}/3\,, \tag{3.7}\]
where \(S,T,S^{\prime},T^{\prime}\) are the scalar order parameters characterising the tensors \({\bf Q}:=\langle{\bf q}\rangle\) and \({\bf B}:=\langle{\bf b}\rangle\) in their common eigenframe, once the thermodynamic limit is performed. Specifically, by considering the eigenframe \((\vec{e}_{x},\vec{e}_{y},\vec{e}_{z})\), the order tensors can be written as
\[{\bf Q} =S\left(\vec{e}_{z}\otimes\vec{e}_{z}-\frac{1}{3}{\bf I}\right)+T \left(\vec{e}_{x}\otimes\vec{e}_{x}-\vec{e}_{y}\otimes\vec{e}_{y}\right) \tag{3.8}\] \[{\bf B} =S^{\prime}\left(\vec{e}_{z}\otimes\vec{e}_{z}-\frac{1}{3}{\bf I} \right)+T^{\prime}\left(\vec{e}_{x}\otimes\vec{e}_{x}-\vec{e}_{y}\otimes\vec{ e}_{y}\right)\,. \tag{3.9}\]
The inverse of the linear transformation (3.7) is
\[S=-\frac{3}{2}(m^{1}+m^{2})\,,\quad T=\frac{1}{2}(m^{1}-m^{2})\,,\quad S^{ \prime}=-\frac{3}{2}(m^{3}+m^{4})\,,\quad T^{\prime}=\frac{1}{2}(m^{3}-m^{4})\,. \tag{3.10}\]
In [66] it is claimed that, in the absence of external fields, the reductions \(T=S^{\prime}=0\), or \(T=\pm S\) and \(S^{\prime}=\pm 3T^{\prime}\), hold, the latter being obtained by swapping the axes of the reference frame \(\vec{e}_{x},\vec{e}_{y},\vec{e}_{z}\). The conditions \(T=S^{\prime}=0\) read as \(m^{1}=m^{2}=-S/3\) and \(m^{3}=-m^{4}=T^{\prime}\).
In the next section, we will introduce a new parametrisation based on the introduction of the molecular Gibbs weights, which leads to the explicit solutions of the model.
### Equations of state
A convenient approach to the evaluation of the entropy of the discrete model and the corresponding equations of state starts from the statistical analysis of the 'initial condition', namely the evaluation of the partition function (2.12) as a function of the external fields at \(t=0\). Indeed, at \(t=0\), liquid crystal molecules are mutually independent and expectation values can be evaluated by looking at the one-molecule partition function,
\[Z_{0,1}=\sum_{i=1}^{6}e^{x\Lambda^{1,i}+y\Lambda^{2,i}+z\Lambda^{3,i}+w\Lambda^ {4,i}}\,. \tag{3.11}\]
The molecular Gibbs weights [2] at \(t=0\) and as functions of the external fields take the following form
\[p_{0,i}(x,y,z,w):=\frac{e^{x\Lambda^{1,i}+y\Lambda^{2,i}+z\Lambda^{3,i}+w \Lambda^{4,i}}}{Z_{0,1}}\quad,\quad i=1,\ldots,6\,.\]
Notice that the partition function (3.11) ensures that the Gibbs weights fulfil the standard normalisation condition,
\[\sum_{i=1}^{6}p_{0,i}=1\,. \tag{3.12}\]
The configurational entropy of the model is given by the standard expression \(S=-\sum_{k=1}^{6}p_{k}\log p_{k}\). At \(t=0\), this reads \(S_{0}=-\sum_{k=1}^{6}p_{0,k}\log p_{0,k}\). By inspection, the following relations hold at \(t=0\)
\[x=\frac{1}{2}\log\frac{p_{0,1}\,p_{0,2}}{p_{0,5}\,p_{0,6}}\quad,\quad y=\frac{ 1}{2}\log\frac{p_{0,3}\,p_{0,4}}{p_{0,5}\,p_{0,6}}\quad,\quad z=\frac{1}{2} \log\frac{p_{0,3}}{p_{0,4}}\quad,\quad w=\frac{1}{2}\log\frac{p_{0,2}}{p_{0,1} }\,. \tag{3.13}\]
In the specific case of the model under study, one can verify that only four out of the six Gibbs weights are functionally independent. Indeed, in addition to the normalisation constraint (3.12), one can readily verify the following
\[\prod_{k=1}^{3}p_{0,2k-1}=\prod_{k=1}^{3}p_{0,2k}\,. \tag{3.14}\]
By using Eqs. (3.12) and (3.14) one can express \(p_{0,6}\) and \(p_{0,5}\) in terms of \(p_{0,1}\), \(p_{0,2}\), \(p_{0,3}\) and \(p_{0,4}\) as follows
\[p_{0,5}=\frac{p_{0,2}p_{0,4}(1-\sum_{i=1}^{4}p_{0,i})}{p_{0,1}\,p_{0,3}+p_{0,2} \,p_{0,4}}\quad,\quad p_{0,6}=\frac{p_{0,1}p_{0,3}(1-\sum_{i=1}^{4}p_{0,i})}{p _{0,1}\,p_{0,3}+p_{0,2}\,p_{0,4}}\,. \tag{3.15}\]
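A quick numerical check (ours, not from the paper; it assumes numpy and the field values are arbitrary) of the product identity (3.14) and of the expressions (3.15) for \(p_{0,5}\) and \(p_{0,6}\):

```python
# Consistency check of the zero-interaction Gibbs weights: identity (3.14) and Eqs. (3.15).
import numpy as np

STATES = np.array([
    [ 2/3, -1/3,  0, -1], [ 2/3, -1/3, 0, 1],
    [-1/3,  2/3,  1,  0], [-1/3,  2/3, -1, 0],
    [-1/3, -1/3, -1,  1], [-1/3, -1/3, 1, -1],
])

def gibbs_weights(x, y, z, w):
    e = np.exp(STATES @ np.array([x, y, z, w]))
    return e / e.sum()

p = gibbs_weights(0.3, -0.1, 0.2, 0.4)
print(np.prod(p[[0, 2, 4]]), np.prod(p[[1, 3, 5]]))     # identity (3.14): equal products
d = p[0]*p[2] + p[1]*p[3]
print(p[4], p[1]*p[3]*(1 - p[:4].sum())/d)              # p5 from Eq. (3.15)
print(p[5], p[0]*p[2]*(1 - p[:4].sum())/d)              # p6 from Eq. (3.15)
```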
Note that the entropy density, as well as the Gibbs weights, depend on the temperature and the fields via the scalar order parameters only (see Eq. (3.2)). Therefore, the identities in Eqs. (3.15) hold at every \(t\). The Gibbs weights are related to the order parameters \(m^{l}\) via the transformation \(\varphi:(p_{1},p_{2},p_{3},p_{4})\in[0,1]^{4}\to(m^{1},m^{2},m^{3},m^{4})\in \mathcal{D}\subset\mathbb{R}^{4}\) where
\[\begin{split} m^{1}=&\,p_{1}+p_{2}-\frac{1}{3}\quad,\quad m^{3}=\frac{(p_{1}\,p_{3}-p_{2}\,p_{4})(1-p_{1}-p_{2})+2p_{3}\,p_{4}(p_{2}-p_{1})}{p_{1}\,p_{3}+p_{2}\,p_{4}},\\ m^{2}=&\,p_{3}+p_{4}-\frac{1}{3}\quad,\quad m^{4}=\frac{(p_{2}\,p_{4}-p_{1}\,p_{3})(1-p_{3}-p_{4})+2p_{1}\,p_{2}(p_{3}-p_{4})}{p_{1}\,p_{3}+p_{2}\,p_{4}}\,.\end{split} \tag{3.16}\]
The domain \(\mathcal{D}\) is identified by the following constraints
\[\begin{split}-2/3\leq m^{1}+m^{2}\leq 1/3\;\;,\;\;-(2/3+m^{1}+m^{2 })\leq m^{1}-m^{2}\leq 2/3+m^{1}+m^{2}\\ -2\leq m^{3}-m^{4}\leq 2\;\;,\;\;-(2/3+m^{1}+m^{2})\leq m^{3}+m^{4} \leq 2/3+m^{1}+m^{2}\,.\end{split}\]
Using the relations (3.13), (3.15) and (3.16), and the observation (3.4) one obtains the following set of equations for \(p_{1}\), \(p_{2}\), \(p_{3}\) and \(p_{4}\) in terms of the fields and the temperature
\[x+(2p_{1}+2p_{2}+p_{3}+p_{4}-1)t-\frac{1}{2}\log\left(\frac{(p_{1}p_{3}+p_{2}p_{4})^{2}}{p_{3}p_{4}(1-p_{1}-p_{2}-p_{3}-p_{4})^{2}}\right) =0 \tag{3.17a}\] \[y+(2p_{3}+2p_{4}+p_{1}+p_{2}-1)t-\frac{1}{2}\log\left(\frac{(p_{1}p_{3}+p_{2}p_{4})^{2}}{p_{1}p_{2}(1-p_{1}-p_{2}-p_{3}-p_{4})^{2}}\right) =0\] (3.17b) \[z+\left(\frac{p_{1}p_{3}(1-2p_{1}+p_{3}-3p_{4})-p_{2}p_{4}(1-2p_{2}-3\,p_{3}+p_{4})}{p_{1}p_{3}+p_{2}p_{4}}\right)\lambda t-\frac{1}{2}\log\left(\frac{p_{3}}{p_{4}}\right) =0\] (3.17c) \[w+\left(\frac{p_{2}p_{4}(1-3p_{1}+p_{2}-2p_{4})-p_{1}p_{3}(1+p_{1}-3p_{2}-2p_{3})}{p_{1}p_{3}+p_{2}p_{4}}\right)\lambda t-\frac{1}{2}\log\left(\frac{p_{2}}{p_{1}}\right) =0\,. \tag{3.17d}\]
Equations (3.17a)-(3.17d) can be viewed as the equations of state of the discrete \(\lambda\)-model subject to external fields, parametrised by \(p_{i}\) and intensive thermodynamic variables \(x\), \(y\), \(z\), \(w\) and \(t\), which are the control parameters of the model. Notice that Eqs. (3.17a)-(3.17d) are the critical points of the free-energy which can now be given the form
\[F= \,x\,m^{1}+\,y\,m^{2}+\,z\,m^{3}+w\,m^{4}+\frac{t}{2}\left[\operatorname{ Tr}\mathbf{Q}^{2}+\lambda\operatorname{Tr}\mathbf{B}^{2}\right]-\sum_{k=1}^{6}p_{k} \log p_{k}, \tag{3.18}\]
where \(\mathbf{Q}=\operatorname{diag}(m^{1},m^{2},-m^{1}-m^{2})\) and \(\mathbf{B}=\operatorname{diag}(m^{3},m^{4},-m^{3}-m^{4})\), and \(m^{l}=m^{l}(p_{1},p_{2},p_{3},p_{4})\), with \(l=1,2,3,4\), and \(p_{5,6}=p_{5,6}(p_{1},p_{2},p_{3},p_{4})\) are given by Eqs. (3.16) and Eqs. (3.15), respectively.
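As an illustration (ours, not part of the paper; it assumes numpy and scipy, and the parameter values and initial guess are arbitrary), the equations of state (3.17a)-(3.17d) can be solved numerically for the weights \((p_{1},\ldots,p_{4})\) with a standard root finder and then mapped to the order parameters through (3.16). Depending on the control parameters and on the guess, the solver may return the isotropic, a uniaxial or a biaxial branch (or fail to converge); the physically stable state must then be selected by comparing the free energy (3.18) of the different branches.

```python
# Numerical solution of the equations of state (3.17a)-(3.17d) in the Gibbs-weight variables.
import numpy as np
from scipy.optimize import fsolve

def equations_of_state(p, t, lam, x=0.0, y=0.0, z=0.0, w=0.0):
    p1, p2, p3, p4 = p
    s = 1.0 - p1 - p2 - p3 - p4            # = p5 + p6
    d = p1*p3 + p2*p4
    eq1 = x + (2*p1 + 2*p2 + p3 + p4 - 1)*t - 0.5*np.log(d**2 / (p3*p4*s**2))
    eq2 = y + (2*p3 + 2*p4 + p1 + p2 - 1)*t - 0.5*np.log(d**2 / (p1*p2*s**2))
    eq3 = z + (p1*p3*(1 - 2*p1 + p3 - 3*p4) - p2*p4*(1 - 2*p2 - 3*p3 + p4))/d * lam*t \
            - 0.5*np.log(p3/p4)
    eq4 = w + (p2*p4*(1 - 3*p1 + p2 - 2*p4) - p1*p3*(1 + p1 - 3*p2 - 2*p3))/d * lam*t \
            - 0.5*np.log(p2/p1)
    return [eq1, eq2, eq3, eq4]

def order_parameters(p):
    p1, p2, p3, p4 = p
    d = p1*p3 + p2*p4
    m1 = p1 + p2 - 1/3
    m2 = p3 + p4 - 1/3
    m3 = ((p1*p3 - p2*p4)*(1 - p1 - p2) + 2*p3*p4*(p2 - p1)) / d
    m4 = ((p2*p4 - p1*p3)*(1 - p3 - p4) + 2*p1*p2*(p3 - p4)) / d
    return m1, m2, m3, m4

# Example: a low-temperature (large t) state at lambda = 0.2 with no external fields.
p_guess = [0.30, 0.15, 0.30, 0.15]
p_sol = fsolve(equations_of_state, p_guess, args=(4.0, 0.2))
print("weights:", p_sol, "order parameters:", order_parameters(p_sol))
```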
#### 3.1.1 Two-parameter reductions
In this subsection, we will focus on the derivation of two-parameter reductions of the equations of state (3.17a)-(3.17d). Such reductions arise naturally when the liquid crystal system is subject to suitable forms of external fields, including the case in which external fields are not present and the phase behaviour is entirely regulated by the mutual interactions among the liquid crystal molecules and by the temperature. The following holds in the absence of external fields.
**Lemma 3.1**.: In the absence of external fields, that is at \(x=y=z=w=0\), solutions to the system (3.17a)-(3.17d) are given by one of the following 2-parameter reductions:
* i) a) \(p_{4}=p_{2}\) and \(p_{3}=p_{1}\), with \[\left(1-3\,p_{1}-3\,p_{2}\right)t=\frac{1}{2}\log\left(\frac{p_{1}\,p_{2}\left(1-2\,p_{1}-2\,p_{2}\right)^{2}}{\left(p_{1}^{2}+p_{2}^{2}\right)^{2}}\right)\] (3.19) \[\left(p_{1}-p_{2}\right)\left(1+\frac{4\,p_{1}\,p_{2}-p_{1}-p_{2}}{p_{1}^{2}+p_{2}^{2}}\right)\lambda\,t=\frac{1}{2}\log\left(\frac{p_{2}}{p_{1}}\right)\,.\] (3.20)
* i) b) \(p_{3}=\left[\left(1-2\,p_{1}-2\,p_{2}\right)p_{2}^{2}\right]/(p_{1}^{2}+p_{2}^{2})\) and \(p_{4}=\left[\left(1-2\,p_{1}-2\,p_{2}\right)p_{1}^{2}\right]/(p_{1}^{2}+p_{2}^{2})\), where \(p_{1}\) and \(p_{2}\) satisfy Eqs. (3.19)-(3.20).
* i) c) \(p_{1}=\left[\left(1-2\,p_{3}-2\,p_{4}\right)p_{4}^{2}\right]/(p_{3}^{2}+p_{4}^{2})\) and \(p_{2}=\left[\left(1-2\,p_{3}-2\,p_{4}\right)p_{3}^{2}\right]/(p_{3}^{2}+p_{4}^{2})\), where \(p_{3}\) and \(p_{4}\) satisfy \[\left(1-3\,p_{3}-3\,p_{4}\right)t=\frac{1}{2}\log\left(\frac{p_{3}\,p_{4}\left(1-2\,p_{3}-2\,p_{4}\right)^{2}}{\left(p_{3}^{2}+p_{4}^{2}\right)^{2}}\right)\] (3.21) \[\left(p_{4}-p_{3}\right)\left(1+\frac{4\,p_{3}\,p_{4}-p_{3}-p_{4}}{p_{3}^{2}+p_{4}^{2}}\right)\lambda\,t=\frac{1}{2}\log\left(\frac{p_{3}}{p_{4}}\right)\,.\] (3.22)
* ii) a) \(p_{3}=p_{2}\) and \(p_{4}=p_{1}\), with \[\left(1-3\,p_{1}-3\,p_{2}\right)t=\frac{1}{2}\log\left(\frac{(1-2\,p_{1}-2\,p_{2})^{2}}{4\,p_{1}\,p_{2}}\right)\] (3.23) \[3\left(p_{1}-p_{2}\right)\lambda\,t=\frac{1}{2}\log\left(\frac{p_{1}}{p_{2}}\right)\,.\] (3.24)
* ii) b) \(p_{3}=p_{4}=1/2-p_{1}-p_{2}\), with Eqs. (3.23)-(3.24) holding for \(p_{1}\) and \(p_{2}\).
* ii) c) \(p_{2}=p_{1}=1/2-p_{3}-p_{4}\), with \[\left(1-3\,p_{3}-3\,p_{4}\right)t=\frac{1}{2}\log\left(\frac{(1-2\,p_{3}-2\,p_{4})^{2}}{4\,p_{3}\,p_{4}}\right)\] (3.25) \[3\left(p_{4}-p_{3}\right)\lambda\,t=\frac{1}{2}\log\left(\frac{p_{4}}{p_{3}}\right)\,.\] (3.26)
Proof.: Let us consider Eqs. (3.17a)-(3.17d) restricted to the condition \(x=y=z=w=0\). Observing that Eq. (3.17a) has special solutions such that
\[2p_{1}+2p_{2}+p_{3}+p_{4}-1= 0 \tag{3.27}\] \[(p_{1}p_{3}+p_{2}p_{4})^{2}-p_{3}p_{4}(1-p_{1}-p_{2}-p_{3}-p_{4})^ {2}= 0, \tag{3.28}\]
the above system admits two solutions for \(p_{3}\) and \(p_{4}\) as functions of \(p_{1}\) and \(p_{2}\): one is \(p_{3}=\left[(1-2\,p_{1}-2\,p_{2})p_{2}^{2}\right]/(p_{1}^{2}+p_{2}^{2})\), \(p_{4}=\left[(1-2\,p_{1}-2\,p_{2})p_{1}^{2}\right]/(p_{1}^{2}+p_{2}^{2})\), and the other is \(p_{3}=p_{4}=1/2-p_{1}-p_{2}\). Substituting the first into Eq. (3.17b) we obtain Eq. (3.19), while the same constraints imply that Eqs. (3.17c)-(3.17d) reduce to Eq. (3.20), thus proving reduction i) b). If we consider the second solution instead, we obtain (3.23) from Eq. (3.17b) and (3.24) from Eqs. (3.17c)-(3.17d), thus proving reduction ii) b).
Similarly, Eq (3.17b) admits solutions such that
\[2p_{3}+2p_{4}+p_{1}+p_{2}-1= 0 \tag{3.29}\] \[(p_{1}p_{3}+p_{2}p_{4})^{2}-p_{1}p_{2}(1-p_{1}-p_{2}-p_{3}-p_{4}) ^{2}= 0 \tag{3.30}\]
which provide two solutions: one is \(p_{1}=\left[(1-2\,p_{3}-2\,p_{4})p_{4}^{2}\right]/(p_{3}^{2}+p_{4}^{2})\), \(p_{2}=\left[(1-2\,p_{3}-2\,p_{4})p_{3}^{2}\right]/(p_{3}^{2}+p_{4}^{2})\), and the other is \(p_{2}=p_{1}=1/2-p_{3}-p_{4}\). Substituting the first solution into Eq. (3.17a), one obtains (3.21), while the same constraints imply that Eqs. (3.17c)-(3.17d) reduce to Eq. (3.22), that is the reduction i) c). If we consider the second solution instead, we obtain (3.25) from Eq. (3.17a) and (3.26) from Eqs. (3.17c)-(3.17d), thus yielding reduction ii) c).
When \(2\,p_{1}+2\,p_{2}+p_{3}+p_{4}-1\neq 0\) and \(2\,p_{3}+2\,p_{4}+p_{1}+p_{2}-1\neq 0\), we can eliminate \(t\) from Eqs. (3.17a)-(3.17b) to get the following
\[\log\left(\frac{p_{1}\,p_{3}+p_{2}\,p_{4}}{p_{3}\,p_{4}}\right) =\frac{p_{1}+p_{2}-p_{3}-p_{4}}{1-p_{1}-p_{2}-2\,p_{3}-2\,p_{4}} \log\left(\frac{(1-p_{1}-p_{2}-p_{3}-p_{4})^{2}}{p_{1}\,p_{3}+p_{2}\,p_{4}}\right)\] \[+\frac{1-2\,p_{1}-2\,p_{2}-p_{3}-p_{4}}{1-p_{1}-p_{2}-2\,p_{3}-2\, p_{4}}\,\log\left(\frac{p_{1}\,p_{3}+p_{2}\,p_{4}}{p_{1}\,p_{2}}\right)\,. \tag{3.31}\]
Let \(k>0\) be an arbitrary constant and \(\vartheta\) be the scale transformation defined by \(\vartheta:p_{i}\to k\,p_{i}\) for \(i=1,\ldots,6\). The l.h.s. of Eq. (3.31) is invariant under the action of \(\vartheta\), and is therefore independent of \(k\). For consistency, the r.h.s. must retain the same property. By applying \(\vartheta\) to
Eq. (3.31) and requiring that the r.h.s. does not depend on \(k\), one obtains that solutions satisfy
\[p_{1}+p_{2}=p_{3}+p_{4}\,. \tag{3.32}\]
We proceed by eliminating the factor \(\lambda\,t\) from Eqs. (3.17c)-(3.17d), obtaining
\[\log\left(\frac{p_{1}}{p_{2}}\right)\,=\frac{(1-3\,p_{1}+p_{2}-2\,p_{4})\,p_{2 }\,p_{4}-(1+p_{1}-2\,p_{3}-3\,p_{2})\,p_{1}\,p_{3}}{(1-2\,p_{1}+p_{3}-3\,p_{4}) \,p_{1}\,p_{3}-(1-2\,p_{2}-3\,p_{3}+p_{4})\,p_{2}\,p_{4}}\log\left(\frac{p_{3}} {p_{4}}\right). \tag{3.33}\]
The generic solution is obtained by the same scaling argument. More precisely, invariance of both sides of Eq (3.33) under the action of \(\vartheta\) gives \((p_{1}\,p_{3}-p_{2}\,p_{4})(p_{1}\,p_{3}+p_{2}\,p_{4})(p_{1}+p_{4}-p_{2}-p_{3 })\log\left(\frac{p_{4}}{p_{3}}\right)=0\), which can be realised in the two following cases
\[p_{1}+p_{4} =p_{2}+p_{3} \tag{3.34}\] \[p_{1}\,p_{3} =p_{2}\,p_{4}\,. \tag{3.35}\]
The system of Eqs. (3.32)-(3.34) has solution \(p_{3}=p_{1}\) and \(p_{4}=p_{2}\), while the system of Eqs. (3.32)-(3.35) has solution \(p_{3}=p_{2}\) and \(p_{4}=p_{1}\). By imposing the first of the two sets of constraints on Eqs. (3.17a)-(3.17d), one obtains the system of equations (3.19)-(3.20), hence proving the reduction i) a), while the second set of constraints gives Eqs. (3.23)-(3.24), thus proving the reduction ii) a).
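As a numerical illustration of the reduction i) a) (ours, not from the paper; it assumes numpy and scipy, and the values of \(\lambda\), \(t\) and the initial guess are arbitrary), Eqs. (3.19)-(3.20) can be solved for \((p_{1},p_{2})\) at fixed \((\lambda,t)\) and mapped to the order parameters \(m^{1}\) and \(m^{3}\). The branch returned by the root finder depends on the guess and on the control parameters.

```python
# Zero-field reduction i) a): solve Eqs. (3.19)-(3.20) and recover the order parameters.
import numpy as np
from scipy.optimize import fsolve

def reduction_ia(p, t, lam):
    p1, p2 = p
    eq1 = (1 - 3*p1 - 3*p2)*t - 0.5*np.log(p1*p2*(1 - 2*p1 - 2*p2)**2 / (p1**2 + p2**2)**2)
    eq2 = (p1 - p2)*(1 + (4*p1*p2 - p1 - p2)/(p1**2 + p2**2))*lam*t - 0.5*np.log(p2/p1)
    return [eq1, eq2]

def branch(t, lam, guess=(0.35, 0.10)):
    p1, p2 = fsolve(reduction_ia, guess, args=(t, lam))
    m1 = p1 + p2 - 1/3                                          # uniaxial order parameter (m^2 = m^1)
    m3 = -(p1 - p2)*(1 + (4*p1*p2 - p1 - p2)/(p1**2 + p2**2))   # biaxial order parameter (m^4 = -m^3)
    return m1, m3

for t in (3.0, 3.5, 4.0):
    print(t, branch(t, lam=0.25))
```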
As we prove in Theorem 3.1, Lemma 3.1 has a remarkable implication on the structure of the two order tensors of the theory. In order to proceed, it may be convenient to recall a criterion to characterise the degree of biaxiality of a given order tensor \(\mathbf{\Omega}\). This will be based on the _biaxiality parameter_ \(\beta^{2}(\mathbf{\Omega}):=1-6\frac{\mathrm{Tr}^{2}(\mathbf{\Omega}^{3})}{\mathrm{Tr}^{3}(\mathbf{\Omega}^{2})}\), satisfying \(0\leq\beta^{2}\leq 1\) [98].
**Definition 3.1**.: A tensor \(\mathbf{\Omega}\) is said to be uniaxial if \(\beta^{2}(\mathbf{\Omega})=0\) and biaxial if \(0<\beta^{2}(\mathbf{\Omega})\leq 1\). Furthermore, in the extreme case \(\beta^{2}(\mathbf{\Omega})=1\), \(\mathbf{\Omega}\) is said to be maximally biaxial.
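In practice, the biaxiality parameter is straightforward to evaluate numerically; the small helper below (ours, not from the paper; it assumes numpy, and the test tensors are arbitrary) implements Definition 3.1 and checks it on a uniaxial and on a maximally biaxial diagonal tensor.

```python
# Biaxiality parameter of Definition 3.1: beta^2 = 1 - 6 Tr(Omega^3)^2 / Tr(Omega^2)^3.
import numpy as np

def beta2(Omega):
    t2 = np.trace(Omega @ Omega)
    t3 = np.trace(Omega @ Omega @ Omega)
    return 1.0 - 6.0 * t3**2 / t2**3

print(beta2(np.diag([0.4, 0.4, -0.8])))   # uniaxial tensor: 0
print(beta2(np.diag([0.3, -0.3, 0.0])))   # maximally biaxial tensor: 1
```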
The following theorem characterises the allowed forms of the two order tensors of the model.
**Theorem 3.1**.: In the absence of external fields, at all temperatures and values of \(\lambda\), the order tensors take one of the following two forms
1. \(\mathbf{Q}\) uniaxial and \(\mathbf{B}\) maximally biaxial;
2. \(\mathbf{Q}\) and \(\mathbf{B}\) both uniaxial.
Proof.: The result is readily obtained by considering Lemma 3.1 and the transformation \(\varphi\) specified by Eqs. (3.16). The subcases a), b) and c) of Lemma 3.1, in each of the two cases i) and ii), correspond to a particular choice of the principal axes. For instance, the transformation \(\varphi\) evaluated along case i) a) implies \(\mathbf{Q}=\operatorname{diag}\,(m^{1},m^{1},-2\,m^{1})\) and \(\mathbf{B}=\operatorname{diag}\,(m^{3},-m^{3},0)\), with
\[m^{2}=m^{1}=-\frac{1}{3}+p_{1}+p_{2}\quad\text{ and }\quad m^{4}=-m^{3}=(p_{1}-p_ {2})\left(1+\frac{4\,p_{1}\,p_{2}-p_{1}-p_{2}}{p_{1}^{2}+p_{2}^{2}}\right)\,,\]
while case ii) a) leads to \(\mathbf{Q}=\operatorname{diag}\,(m^{1},m^{1},-2\,m^{1})\) and \(\mathbf{B}=\operatorname{diag}\,(m^{3},m^{3},-2m^{3})\), with
\[m^{2}=m^{1}=-\frac{1}{3}+p_{1}+p_{2}\quad\text{ and }\quad m^{4}=m^{3}=p_{2}-p_{1}\,.\]
Similarly, case i) b) corresponds to \(\mathbf{Q}=\operatorname{diag}\,(m^{1},-2\,m^{1},m^{1})\) and \(\mathbf{B}=\operatorname{diag}\,(m^{3},0,-m^{3})\), with
\[m^{2}=-2m^{1}=2\left(\frac{1}{3}-p_{1}-p_{2}\right)\quad\text{ and }\quad m^{3}=(p_{1}-p_{2})\left(1+\frac{4\,p_{1}\,p_{2}-p_{1}-p_{2}}{p_{1}^{2}+ p_{2}^{2}}\right)\,,\ m^{4}=0\,,\]
and case ii) b) corresponds to \({\bf Q}={\rm diag}\,(m^{1},-2\,m^{1},m^{1})\) and \({\bf B}={\rm diag}\,(m^{3},-2\,m^{3},m^{3})\), with
\[m^{2}=-2m^{1}=2\left(\frac{1}{3}-p_{1}-p_{2}\right)\quad\mbox{ and }\quad m^{4}=-2\,m^{3}=-2(p_{1}-p_{2})\,.\]
Finally, case i) c) corresponds to \({\bf Q}={\rm diag}\,(-2m^{2},\,m^{2},m^{2})\) and \({\bf B}={\rm diag}\,(0,m^{4},-m^{4})\), with
\[m^{1}=-2m^{2}=2\left(\frac{1}{3}-p_{3}-p_{4}\right)\quad\mbox{ and }\quad m^{3}=0\,,\,m^{4}=(p_{4}-p_{3})\left(1+\frac{4\,p_{3}\,p_{4}-p_{3}-p_{4}}{ p_{3}^{2}+p_{4}^{2}}\right)\,\]
and case ii) c) leads to \({\bf Q}={\rm diag}\,(-2m^{2},\,m^{2},m^{2})\) and \({\bf B}={\rm diag}\,(-2m^{4},m^{4},m^{4})\), with
\[m^{1}=-2m^{2}=2\left(\frac{1}{3}-p_{3}-p_{4}\right)\quad\mbox{ and }\quad m^{3}=-2m^{4}=1-2\,p_{1}-4\,p_{4}\,.\]
The statement is proven by evaluating the biaxiality parameter \(\beta^{2}\) for \({\bf Q}\) and \({\bf B}\) in all cases. Due to the invariance under exchange of principal axes, \(\beta^{2}\) for \({\bf Q}\) and \({\bf B}\) takes the same values in all subcases a), b) and c) of a given case. Without loss of generality, we can consider cases i) a) and ii) a) to get, respectively,
1. \(\beta^{2}({\bf Q})=1-6\frac{{\rm Tr}^{2}({\rm diag}\,((m^{1})^{3},(m^{1})^{3},-8\,(m^{1})^{3}))}{{\rm Tr}^{3}({\rm diag}\,((m^{1})^{2},(m^{1})^{2},4\,(m^{1})^{2}))}=0\) and \(\beta^{2}({\bf B})=1-6\frac{{\rm Tr}^{2}({\rm diag}\,((m^{3})^{3},-(m^{3})^{3},0))}{{\rm Tr}^{3}({\rm diag}\,((m^{3})^{2},(m^{3})^{2},0))}=1\), that is \({\bf Q}\) uniaxial and \({\bf B}\) maximally biaxial;
2. \(\beta^{2}({\bf Q})=1-6\frac{{\rm Tr}^{2}({\rm diag}\,((m^{1})^{3},(m^{1})^{3},-8\,(m^{1})^{3}))}{{\rm Tr}^{3}({\rm diag}\,((m^{1})^{2},(m^{1})^{2},4\,(m^{1})^{2}))}=0\) and \(\beta^{2}({\bf B})=1-6\frac{{\rm Tr}^{2}({\rm diag}\,((m^{3})^{3},(m^{3})^{3},-8\,(m^{3})^{3}))}{{\rm Tr}^{3}({\rm diag}\,((m^{3})^{2},(m^{3})^{2},4\,(m^{3})^{2}))}=0\), hence \({\bf Q}\) and \({\bf B}\) are both uniaxial.
A direct consequence of the reductions ii) in Theorem 3.1 and the transformation \(\varphi\) is the following corollary.
**Corollary 3.1**.: The equations of state for the model in the case of \({\bf Q}\) and \({\bf B}\) both uniaxial, cases ii) in Theorem (3.1), can be written explicitly in terms of the eigenvalues \(m^{l}\). In particular, we have that reductions ii) a), b) and c) can be written as follows
* a) \(m^{2}=m^{1}\) and \(m^{4}=m^{3}\) with \[6\,m^{1}t =\log\left(\frac{(1+3\,m^{1}+3\,m^{3})(1+3\,m^{1}-3\,m^{3})}{(1-6 \,m^{1})^{2}}\right)\,,\] (3.36) \[6\,m^{3}\,\lambda\,t =\log\left(\frac{1+3\,m^{1}+3\,m^{3}}{1+3\,m^{1}-3\,m^{3}}\right)\,;\] (3.37)
* b) \(m^{2}=-2\,m^{1}\) and \(m^{4}=-2\,m^{3}\) with \(m^{1}\) and \(m^{3}\) specified by Eqs. (3.36)-(3.37);
* c) \(m^{1}=-2\,m^{2}\) and \(m^{3}=-2\,m^{4}\) with
\[6\,m^{2}t =\log\left(\frac{(1+3\,m^{2}+3\,m^{4})(1+3\,m^{2}-3\,m^{4})}{(1-6 \,m^{2})^{2}}\right)\,, \tag{3.38}\] \[6\,m^{4}\,\lambda\,t =\log\left(\frac{1+3\,m^{2}+3\,m^{4}}{1+3\,m^{2}-3\,m^{4}}\right)\,. \tag{3.39}\]
Proof.: As shown in the proof of Theorem (3.1), the transformation \(\varphi\) is linear when restricted to the case of \({\bf Q}\) and \({\bf B}\) both uniaxial. Hence, the transformation can be easily inverted to get the projection \(\varphi^{-1}:(m^{1},m^{2},m^{3},m^{4})\longmapsto(p_{1},p_{2},p_{3},p_{4})\) for each particular 2-parameter reduction. The equations in terms of the eigenvalues are then obtained by application of the
inverse transformation for the specific reduction to the corresponding set of equations in the \(p-\)variables. Taking the case ii) a) as an example, the application of \(\varphi_{a}^{-1}:=\{\varphi\,|_{p_{3}=p_{2},\,p_{4}=p_{1}}\}^{-1}\) explicitly given by
\[\varphi_{a}^{-1}:\ (m^{1},m^{3})\longmapsto(p_{1},p_{2})=\left(\frac{1+3\,m^{1}-3 \,m^{3}}{6},\frac{1+3\,m^{1}+3\,m^{3}}{6}\right)\,,\]
to Eqs. (3.23)-(3.24) gives Eqs. (3.36)-(3.37). Equations (3.36)-(3.37) and (3.38)-(3.39) for b) and c), respectively, are obtained in a similar fashion.
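For completeness, the explicit uniaxial-uniaxial equations (3.36)-(3.37) can be solved directly in the eigenvalue variables, as in the following sketch (ours, not from the paper; it assumes numpy and scipy, and the parameter values and initial guess are arbitrary). As discussed in Section 4, these branches turn out not to be stable maxima of the free energy, so this is purely an illustration of the closed-form reduction.

```python
# Uniaxial-uniaxial reduction ii) a): Eqs. (3.36)-(3.37) in the eigenvalue variables (m^1, m^3).
import numpy as np
from scipy.optimize import fsolve

def uniaxial_uniaxial(m, t, lam):
    m1, m3 = m
    eq1 = 6*m1*t - np.log((1 + 3*m1 + 3*m3)*(1 + 3*m1 - 3*m3)/(1 - 6*m1)**2)
    eq2 = 6*m3*lam*t - np.log((1 + 3*m1 + 3*m3)/(1 + 3*m1 - 3*m3))
    return [eq1, eq2]

print(fsolve(uniaxial_uniaxial, x0=[0.1, 0.05], args=(4.0, 0.3)))
```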
Unlike uniaxial-uniaxial reductions discussed above, uniaxial-maximally biaxial reductions cannot be written in explicit simple form in terms of the \(m^{l}\) variables.
In this paper, we will focus our discussion on the phase behaviour in the absence of external fields. We note, however, that the above reductions are also compatible with non-zero field values subject to suitable constraints. Indeed, the following proposition identifies the constraints on the external fields under which the system admits _uniaxial-maximally biaxial_ and _uniaxial-uniaxial_ solutions for \(\mathbf{Q}\) and \(\mathbf{B}\). In such cases we can still consider 2-parameter reductions of the system, with the equations of state also accounting for the action of the fields.
**Proposition 3.1.1**.: In the presence of external fields, the system (3.17a)-(3.17d) admits the following _uniaxial-maximally biaxial_ two-parameter reductions:
* i) a) \(p_{3}=p_{1}\) and \(p_{4}=p_{2}\), provided that the external fields satisfy \(y=x\) and \(w=-z\), that is \(\epsilon_{1}=\epsilon_{2}\) and \(\chi_{3}=\frac{\chi_{1}+\chi_{2}}{2}\), specified by \[x+(3\,p_{1}+3\,p_{2}-1)\,t=\frac{1}{2}\log\left(\frac{\left(p_{1}^{2}+p_{2}^{2}\right)^{2}}{p_{1}\,p_{2}\,(1-2\,p_{1}-2\,p_{2})^{2}}\right)\] (3.40) \[z+(p_{2}-p_{1})\left(1+\frac{4\,p_{1}\,p_{2}-p_{1}-p_{2}}{p_{1}^{2}+p_{2}^{2}}\right)\lambda\,t=\frac{1}{2}\log\left(\frac{p_{1}}{p_{2}}\right)\,;\] (3.41)
* i) b) \(p_{3}=\frac{(1-2\,p_{1}-2\,p_{2})p_{2}^{2}}{p_{1}^{2}+p_{2}^{2}}\) and \(p_{4}=\frac{(1-2\,p_{1}-2\,p_{2})p_{1}^{2}}{p_{1}^{2}+p_{2}^{2}}\), provided that \(x=0\) and \(z=2w\), that is \(\epsilon_{1}=\epsilon_{3}\) and \(\chi_{2}=\frac{\chi_{1}+\chi_{3}}{2}\), specified by \[y+(1-3\,p_{1}-3\,p_{2})\,t=\frac{1}{2}\log\left(\frac{p_{1}\,p_{2}\,(1-2\,p_{1}-2\,p_{2})^{2}}{\left(p_{1}^{2}+p_{2}^{2}\right)^{2}}\right)\] (3.42) \[w+(p_{1}-p_{2})\left(1+\frac{4\,p_{1}\,p_{2}-p_{1}-p_{2}}{p_{1}^{2}+p_{2}^{2}}\right)\lambda\,t=\frac{1}{2}\log\left(\frac{p_{2}}{p_{1}}\right)\,;\] (3.43)
* i) c) \(p_{1}=\frac{(1-2\,p_{3}-2\,p_{4})p_{4}^{2}}{p_{3}^{2}+p_{4}^{2}}\) and \(p_{2}=\frac{(1-2\,p_{3}-2\,p_{4})p_{3}^{2}}{p_{3}^{2}+p_{4}^{2}}\), provided that \(y=0\) and \(w=2z\), that is \(\epsilon_{2}=\epsilon_{3}\) and \(\chi_{1}=\frac{\chi_{2}+\chi_{3}}{2}\), and specified by \[x+(1-3\,p_{3}-3\,p_{4})\,t=\frac{1}{2}\log\left(\frac{p_{3}\,p_{4}\,(1-2\,p_{3}-2\,p_{4})^{2}}{\left(p_{3}^{2}+p_{4}^{2}\right)^{2}}\right)\] (3.44) \[z+(p_{4}-p_{3})\left(1+\frac{4\,p_{3}\,p_{4}-p_{3}-p_{4}}{p_{3}^{2}+p_{4}^{2}}\right)\lambda\,t=\frac{1}{2}\log\left(\frac{p_{3}}{p_{4}}\right)\,;\] (3.45) and the following _uniaxial-uniaxial_ two-parameter reductions:
* ii) a) \(p_{3}=p_{2}\) and \(p_{4}=p_{1}\), provided that \(x=y\) and \(z=w\), that is \(\epsilon_{1}=\epsilon_{2}\) and \(\chi_{1}=\chi_{2}\), specified by \[x+\left(3\,p_{1}+3\,p_{2}-1\right)t=\frac{1}{2}\log\left(\frac{4\,p_{1}\,p_{2}}{\left(1-2\,p_{1}-2\,p_{2}\right)^{2}}\right)\] (3.46) \[z+3\left(p_{2}-p_{1}\right)\lambda\,t=\frac{1}{2}\log\left(\frac{p_{2}}{p_{1}}\right)\,;\] (3.47)
* ii) b) \(p_{3}=p_{4}=\frac{1}{2}-p_{1}-p_{2}\), provided that \(x=0\) and \(z=0\), that is \(\epsilon_{1}=\epsilon_{3}\) and \(\chi_{1}=\chi_{3}\), specified by \[y+\left(1-3\,p_{1}-3\,p_{2}\right)t=\frac{1}{2}\log\left(\frac{\left(1-2\,p_{1}-2\,p_{2}\right)^{2}}{4\,p_{1}\,p_{2}}\right)\] (3.48) \[w+3\left(p_{2}-p_{1}\right)\lambda\,t=\frac{1}{2}\log\left(\frac{p_{2}}{p_{1}}\right)\,;\] (3.49)
* ii) c) \(p_{2}=p_{1}=\frac{1}{2}-p_{3}-p_{4}\), provided that \(y=0\) and \(w=0\), that is \(\epsilon_{2}=\epsilon_{3}\) and \(\chi_{2}=\chi_{3}\), specified by \[x+\left(1-3\,p_{3}-3\,p_{4}\right)t=\frac{1}{2}\log\left(\frac{\left(1-2\,p_{3}-2\,p_{4}\right)^{2}}{4\,p_{3}\,p_{4}}\right)\] (3.50) \[z+3\left(p_{3}-p_{4}\right)\lambda\,t=\frac{1}{2}\log\left(\frac{p_{3}}{p_{4}}\right)\,.\] (3.51)
Proof.: The constraints on the fields follow from looking for either uniaxial-maximally biaxial or uniaxial-uniaxial reductions of the whole set of equations of state, Eqs. (3.17a)-(3.17d). For instance, the uniaxial-maximally biaxial reduction i) a) requires \(p_{3}=p_{1}\) and \(p_{4}=p_{2}\). Eqs. (3.17a)-(3.17d) restricted to this constraint imply that compatibility of the first two equations restricts the fields to \(y=x\), while compatibility of the third and fourth requires \(w=-z\). Elementary algebraic manipulations lead to Eqs. (3.40)-(3.41). The sets of field-dependent equations of state in the cases i) b) and c), and ii) a), b) and c), together with the associated constraints on the fields, are obtained following the same procedure.
## 4 Order parameters in the 2-parameter reductions
The equations of state (3.17a)-(3.17d) are the critical points of the free-energy functional (3.18). According to the definition of free-energy adopted in this paper, global maxima identify stable states of the system and the associated phases. Coexistence curves (hypersurfaces, in general) arise as sets of control parameters for which two or more local maxima are resonant, hence identifying the coexistence of the corresponding phases. In order to proceed, we should therefore first identify all local maxima for each choice of control parameters \(x,y,z,w,\lambda\) and \(t\). In this section, we focus on the complete characterisation of the system in the absence of external fields, i.e. \(x=y=z=w=0\), hence relying on Theorem 3.1 and its implications. An immediate consequence is that the onset of phase transitions is determined by analysing the singularities of 2-dimensional maps defined by Eqs. (3.19)-(3.26) (see [10] for an exhaustive treatment). Notably, this is a more affordable task compared to the full 4-dimensional problem governing the system when external fields are present, Eqs. (3.17a)-(3.17d).
By evaluating the Hessian matrix of the free energy density (3.18), it turns out that none of the critical points in the uniaxial-uniaxial reductions, cases ii) a)-c) in Lemma 3.1, are stable.
This result is consistent with what is known from mean-field theories based on a continuum of molecular orientational states [66]. Therefore, the uniaxial-maximally biaxial reductions, cases i) a)-c) in Lemma 3.1, are the only ones relevant from the equilibrium thermodynamics viewpoint. The following subsections focus on the analysis of the resulting phase diagram and the associated behaviour of the order parameters, with case i) a) being considered for this purpose. Cases i) b) and i) c) can be straightforwardly obtained from case i) a) via suitable linear transformations of \(m^{1}\) and \(m^{3}\), which merely correspond to permutations of the axes.
### Phase behaviour in the absence of external fields
The phase diagram of the model in the absence of external fields is shown in Fig. 1. The left panel shows the phase diagram in the \(\lambda-t\) plane, while the right panel shows the phase diagram in the \(\lambda-T^{*}\) plane, where \(T^{*}\) is the dimensionless temperature defined by \(T^{*}:=1/t=(k_{B}T)/\mu\). The \(\lambda\)-\(t\) plane is divided into three regions identifying three distinct macroscopic phases, namely the _isotropic_ (I), the _uniaxial nematic_ (U) and the _biaxial nematic_ (B). The lines separating the different regions are either dotted black lines or solid black lines. The former identify the so-called second-order transition lines, that is the lines associated with phase changes characterised by continuous order parameters but discontinuous derivatives. The latter are instead associated with first-order lines, across which the order parameters and their derivatives experience a discontinuity. Similarly to the analysis performed in [10], second-order lines are identified by cusp points of two-dimensional maps. The cusp points of the model (red lines in Fig. 1) are given explicitly in terms of the transcendental curve
\[\mathcal{C}=\Big{\{}(\lambda,t)\in[0,1]\times[0,+\infty)\mid e^{t-\frac{1}{ \lambda}}\,(2-\lambda\,t)+1-2\lambda\,t=0\Big{\}}\,.\]
Notice that the cusp set can be seen as the union of two curves intersecting at the point \((\lambda,t)=(1/3,3)\). The model admits two tricritical points, \((\lambda_{\rm tc}^{(UB)},t_{\rm tc}^{(UB)})=(0.217,2.854)\) (red circle) and \(\Big{(}\lambda_{\rm tc}^{(IB)},t_{\rm tc}^{(IB)}\Big{)}=(2/3,3/2)\) (blue circle). The three phases coexist at the triple point, \((\lambda_{\rm tp},t_{\rm tp})=(0.234,2.773)\) (green circle) identified by the resonance condition for the corresponding maxima. A closer look in the region surrounding the triple point and the uniaxial-biaxial tricritical point is provided in the top-right corner. The cusp points in the \(\lambda-T^{*}\) plane are given by the set
\[\mathcal{C}^{*}=\Big{\{}(\lambda,T^{*})\in[0,1]^{2}\mid e^{\frac{1}{T^{*}}- \frac{1}{\lambda}}\left(2-\frac{\lambda}{T^{*}}\right)+1-\frac{2\lambda}{T^{* }}=0\Big{\}}.\]
The constraint \(\lambda=T^{*}\) identifies the subset of cusp points associated to second-order lines for \(\lambda\geq 2/3\).
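Numerically, the cusp set \(\mathcal{C}\) is easy to trace: the branch \(t=1/\lambda\) satisfies the defining equation identically, while the second branch can be located by root bracketing, as in the following sketch (ours, not from the paper; it assumes numpy and scipy, and the bracketing interval at \(\lambda=0.2\) is chosen by inspection).

```python
# Tracing the cusp set C of the model: exp(t - 1/lambda)*(2 - lambda*t) + 1 - 2*lambda*t = 0.
import numpy as np
from scipy.optimize import brentq

def cusp(t, lam):
    return np.exp(t - 1/lam)*(2 - lam*t) + 1 - 2*lam*t

lam = 0.2
print("t = 1/lambda branch, residual:", cusp(1/lam, lam))        # 0 by construction
print("second branch at lambda = 0.2:", brentq(cusp, 2.0, 3.0, args=(lam,)))
print("crossing point (1/3, 3), residual:", cusp(3.0, 1/3))      # the two branches intersect here
```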
### Order parameters in the absence of external fields
In this section, we analyse the behaviour of the order parameters for the reduction \(m^{2}=m^{1}\), \(m^{4}=-m^{3}\) in the absence of external fields as the temperature changes. Following our discussion of the phase diagram displayed in Fig. 1, we proceed by showing the expectation values \(m^{1}\) and \(m^{3}\) at increasing values of \(\lambda\). The values chosen for \(\lambda\) aim at displaying the whole phenomenology predicted by the phase diagram.
Fig. 2 shows the behaviour of the order parameters in the absence of external fields for small values of \(\lambda\). The case \(\lambda=0\) (left column) reproduces the phenomenology of the standard Maier-Saupe model, with a vanishing biaxial order parameter and a discontinuous isotropic-to-uniaxial nematic phase transition at \(t_{c}^{NI}=4\log 2\). For small values of \(\lambda\) (central column), that is \(0<\lambda<\lambda_{tc}^{UB}\approx 0.217\), in addition to the isotropic-to-nematic phase transition, a continuous phase change from the uniaxial to the biaxial phase appears at lower temperatures.
Consistently with the phase diagram in Fig. 1, the uniaxial-to-biaxial phase transition becomes first-order at the uniaxial-biaxial tricritical point (right column), where both order parameters experience a gradient catastrophe at \(t=t_{tc}^{(UB)}=2.854\).
The behaviour for values of \(\lambda\) in the interval \(\left(\lambda_{tc}^{(UB)},\lambda_{tp}\right)\) is displayed in Fig. 3. For values of \(\lambda\) in this range the model predicts two first-order phase transitions, an isotropic-to-uniaxial phase transition at high temperature (low values of \(t\)) followed by a uniaxial-to-biaxial phase transition at lower temperatures (higher values of \(t\)). While the former is associated with a shock that is static in \(\lambda\), the latter originates at the uniaxial-biaxial tricritical point and is identified by a classical shock whose location moves from lower to higher temperatures as the biaxiality parameter \(\lambda\) increases.
As shown in Figs. 1 and 4 (left column), for \(\lambda=\lambda_{tp}=0.234\) the system displays a triple point at which all three phases coexist. This situation is realised as the two shocks associated with the isotropic-to-uniaxial and uniaxial-to-biaxial phase transitions merge at zero external fields, giving rise to a single shock having an amplitude given by the sum of the amplitudes of the two individual shocks. Consistently with the phase diagram in Fig. 1, the uniaxial phase is not energetically accessible for \(\lambda>\lambda_{tp}\). For instance, for \(\lambda=0.284\) (centre column of Figure 4), the order parameters jump from the isotropic phase to the biaxial phase as the temperature is lowered. This is also the case when \(\lambda=1/3\) (right column). The case \(\lambda=1/3\), as also discussed in [66], leads to proportionality between the stable branches of the order parameters. Precisely, the relation reads \(3m^{1}\pm m^{3}=0\), corresponding to \(T^{\prime}=\pm S\) in the convention adopted by Virga and co-authors in [66].
For values of \(\lambda\) exceeding \(1/3\), the isotropic-to-biaxial phase transition remains first-order until a second tricritical point is reached. In Fig. 5, the change in order of the isotropic-to-biaxial phase transition is displayed. For \(\lambda<\lambda_{tc}^{(IB)}\) (left column) both order parameters undergo a discontinuous jump from the isotropic solution to the biaxial one. The
Figure 1: Zero-field phase diagram. _Left_: phase diagram in the \(\lambda-t\) plane. _Right_: phase diagram in the \(\lambda-T^{*}\) plane. The insets in the two figures show a magnification of the phase diagram in the region surrounding the triple point (green circle) and the uniaxial-biaxial tricritical point (red circle). The line associated with uniaxial cusp points is indicated in red.
shock disappears when \(\lambda=\lambda_{tc}^{(IB)}=2/3\) is considered, and consequently both order parameters experience a gradient catastrophe at \(t=t_{tc}^{(IB)}=3/2\). Larger values of \(\lambda\) lead to a direct second-order transition from the isotropic to the biaxial phase, with the transition value given by \(t^{(IB)}=1/\lambda\).
Our results are, both qualitatively and quantitatively, consistent with previous studies [57, 66, 59]. In particular, according to the Monte Carlo simulation results reported in [57, 66, 59], the values of \(\lambda\) at the first and second tricritical points are \(\simeq 0.24\) and \(\simeq 2/3\), respectively. Moreover, the global Monte Carlo study performed in [59] predicts \(\lambda\simeq 0.26\) for the triple point.
## 5 Concluding remarks
In this paper we have analysed in detail a discrete mean-field model for a biaxial nematic liquid crystal subject to external fields, using an approach based on the differential identity (2.11) for the partition function. Upon the introduction of suitable variables, namely the order parameters, the multidimensional linear PDE satisfied by the partition function leads, in the thermodynamic limit, to a set of equations of state involving all four orientational order parameters. The equations are completely solvable by the method of characteristics, proving the integrability of the model.
Via the introduction of a novel set of order parameters corresponding to orientational Gibbs weights, we have obtained the equations of state in explicit form. We proved that, in the absence of external fields, the system is fully characterised by two-parameter reductions, and such reductions persist in the case of non-zero external fields subject to suitable constraints.
Figure 2: Order parameters in the 2-parameter reduction for small values of \(\lambda\). Each column shows both order parameters, \(m^{1}\) (black) and \(m^{3}\) (red) versus \(t\) at a specific value of \(\lambda\). The solutions corresponding to a global maximum of the free energy are displayed with solid lines, while other solutions are indicated with dotted lines. _Left column_: \(\lambda=0\), that is the uniaxial Maier-Saupe interaction potential. _Centre column_: \(\lambda=1/6\). _Right column_: \(\lambda=\lambda_{tc}^{UB}=0.217\).
A detailed analysis demonstrates the existence of a rich phase diagram, which is remarkably consistent with the results known in the literature for the standard Maier-Saupe model and its biaxial extensions. Hence, discrete models of the type studied in this paper capture, at least qualitatively, the most important features of continuum models with external fields, for which explicit analytic formulae are not available. These results indeed encourage further studies on integrable biaxial models where the Hamiltonian contains a more general nonlinear dependence on the tensors \(\mathbf{q}\) and \(\mathbf{b}\). Such cases are currently under investigation and results will be reported in due course.
## Acknowledgements
We would like to thank the Isaac Newton Institute for Mathematical Sciences for the hospitality during the six-month programme 'Dispersive hydrodynamics: mathematics, simulation and experiments, with applications in nonlinear waves', Cambridge July-December 2022, under the EPSRC Grant Number EP/R014604/1, where this work has been partly developed, and GNFM - Gruppo Nazionale per la Fisica Matematica, INdAM (Istituto Nazionale di Alta Matematica). F.G. also acknowledges the hospitality of the Department of Mathematics, Physics and Electrical Engineering of Northumbria University Newcastle. A.M. is supported by the Leverhulme Trust Research Project Grant 2017-228, the Royal Society International Exchanges Grant IES-R2-170116 and London Mathematical Society.
Figure 3: Order parameters in the two-parameter reduction for values of \(\lambda\) in the interval \(\lambda_{tc}^{(UB)}\leq\lambda<\lambda_{tp}\). The left column shows both order parameters versus \(t\) for \(\lambda=0.226\in\left(\lambda_{tc}^{(UB)},\lambda_{tp}\right)\) as an example. The right column displays a magnification of the order parameters around the tricritical and triple point temperatures.
Figure 4: Order parameters in the 2-parameter reduction for values of \(\lambda\) in the interval \(\lambda_{tp}\leq\lambda\leq 1/3\). _Left column_: \(\lambda=\lambda_{tp}=0.234\). _Centre column_: \(\lambda=0.284\). _Right column_: \(\lambda=1/3\). |
2309.04937 | LONER: LiDAR Only Neural Representations for Real-Time SLAM | This paper proposes LONER, the first real-time LiDAR SLAM algorithm that uses
a neural implicit scene representation. Existing implicit mapping methods for
LiDAR show promising results in large-scale reconstruction, but either require
groundtruth poses or run slower than real-time. In contrast, LONER uses LiDAR
data to train an MLP to estimate a dense map in real-time, while simultaneously
estimating the trajectory of the sensor. To achieve real-time performance, this
paper proposes a novel information-theoretic loss function that accounts for
the fact that different regions of the map may be learned to varying degrees
throughout online training. The proposed method is evaluated qualitatively and
quantitatively on two open-source datasets. This evaluation illustrates that
the proposed loss function converges faster and leads to more accurate geometry
reconstruction than other loss functions used in depth-supervised neural
implicit frameworks. Finally, this paper shows that LONER estimates
trajectories competitively with state-of-the-art LiDAR SLAM methods, while also
producing dense maps competitive with existing real-time implicit mapping
methods that use groundtruth poses. | Seth Isaacson, Pou-Chun Kung, Mani Ramanagopal, Ram Vasudevan, Katherine A. Skinner | 2023-09-10T05:45:36Z | http://arxiv.org/abs/2309.04937v3 | # Loner: LiDAR Only Neural Representations for Real-Time SLAM
###### Abstract
This paper proposes _LONER_, the first real-time LiDAR SLAM algorithm that uses a neural implicit scene representation. Existing implicit mapping methods for LiDAR show promising results in large-scale reconstruction, but either require groundtruth poses or run slower than real-time. In contrast, LONER uses LiDAR data to train an MLP to estimate a dense map in real-time, while simultaneously estimating the trajectory of the sensor. To achieve real-time performance, this paper proposes a novel information-theoretic loss function that accounts for the fact that different regions of the map may be learned to varying degrees throughout online training. The proposed method is evaluated qualitatively and quantitatively on two open-source datasets. This evaluation illustrates that the proposed loss function converges faster and leads to more accurate geometry reconstruction than other loss functions used in depth-supervised neural implicit frameworks. Finally, this paper shows that LONER estimates trajectories competitively with state-of-the-art LiDAR SLAM methods, while also producing dense maps competitive with existing real-time implicit mapping methods that use groundtruth poses.
SLAM, Mapping, Deep Learning Methods, Implicit Representations, NeRF
## I Introduction
Neural implicit scene representations, such as Neural Radiance Fields (NeRFs), offer a promising new way to represent maps for robotics applications [1]. Traditional NeRFs employ a Multi Layer Perceptron (MLP) to estimate the radiance and volume density of each point in space, enabling dense scene reconstruction and novel view synthesis. The learned scene representation has several advantages over conventional map representations, such as point clouds and occupancy grids. First, because the domain of the NeRF is continuous and does not enforce discretization, any point in the scene can be queried for occupancy. The continuity of the scene can be exploited to solve a variety of robotics problems. For example, as demonstrated in [2], a motion planner can integrate the volume density along a proposed trajectory to evaluate the likelihood of a collision. Other benefits include the ability to produce realistic renders of the scene [1]. Further, NeRFs can be used to estimate uncertainty of renders to enable view selection for active exploration [3]. This paper advances neural implicit scene representations for robotics applications. Specifically, we introduce the first real-time LiDAR-only SLAM algorithm that achieves accurate pose estimation and map reconstruction and learns a neural implicit representation of a scene.
Several recent papers have proposed real-time NeRF-based visual SLAM systems using monocular or RGB-D cameras [5, 6, 7]. These systems demonstrate impressive performance on indoor scenes. For outdoor environments, prior work has focused on using neural implicit representations for LiDAR to enable dense 3D reconstruction and novel view synthesis for large-scale scenes [8, 9, 10]. Recent methods have even shown promising results for LiDAR localization and mapping with neural implicit frameworks in large-scale outdoor scenes [11, 12]. Still, these LiDAR-supervised algorithms do not operate in real-time, which is necessary for robotics applications. The contributions of this paper are as follows:
1. We propose the first real-time neural implicit LiDAR SLAM method, which adapts to outdoor environments and provides accurate online state estimation.
2. We introduce a novel loss function that leads to faster convergence and more accurate reconstruction than existing loss functions.
We demonstrate that our proposed method, LONER, runs in real-time and estimates both trajectories and maps more accurately than baselines. Figure 1 shows the reconstruction results on the Fusion Portable dataset [4]. A project page is available at https://umautobots.github.io/loner.
The remainder of this paper is organized as follows: In Section II, we review related work. In Section III, we describe
Fig. 1: LONER reconstruction on a courtyard scene [4]. The top-right is a mesh reconstruction with the estimated trajectory in red. The surrounding images are rendered depth images from novel views outside of the training trajectory, demonstrating LONER’s ability to reconstruct dense novel views of an environment.
LONER. In Section IV, we evaluate LONER, and in Section V, we conclude and discuss both limitations and future work.
## II Related Works
### _LiDAR SLAM_
LiDAR SLAM has been an active research area over the past several decades [13, 14, 15, 16, 17]. The primary goal of these methods is to estimate the trajectory of the ego vehicle. Modern methods such as LeGO-LOAM estimate motion by aligning features extracted from consecutive scans, then accumulate LiDAR scans to build a map [13, 15]. These works prioritise accurate trajectory estimation; creating dense, realistic maps is not a goal of these approaches. In contrast, our method aims to achieve similar or better trajectory estimation while also estimating dense maps.
### _Real-time NeRF-based SLAM_
NeRFs use images of a scene captured from known camera poses to train an MLP to predict what the scene will look like from novel poses [1]. While originally developed for offline use with known camera poses, NeRFs have recently been used to learn an implicit scene representation in RGB and RGB-D SLAM frameworks [5, 6, 7]. By representing the scene with a NeRF, these algorithms perform both tracking and mapping via gradient descent on an MLP or related feature vectors. For example, iMAP represents the scene as a single MLP [5]. Each RGB-D frame is first tracked by fixing the MLP weights and optimizing the camera poses. Then, the new information is incorporated into the map by jointly optimizing the MLP and pose estimates. iMAP shows promising results in small scenarios but does not scale to larger or outdoor scenes. NICE-SLAM replaces iMAP's simple MLP with a hierarchical feature grid combined with MLP decoders [6]. This approach demonstrates better scalability than the single MLP used in iMAP, but NICE-SLAM still only works in indoor scenarios. Additionally, NeRF-SLAM uses DROID-SLAM [18] as the tracking front-end, which allows them to use a probabilistic volumetric NeRF to perform uncertainty-aware mapping and pose refinement [7]. Recently, several more papers have introduced architectures and encodings to improve neural-implicit SLAM's memory efficiency, computation speed, and accuracy [19, 20, 21, 22]. Our method extends these recent advances to leverage implicit scene representation for real-time LiDAR-only SLAM, which allows operation in large, outdoor environments.
### _Neural Implicit Representations for LiDAR_
While neural implicit representations were initially developed for visual applications, several works have introduced neural implicit representations for LiDAR to improve outdoor 3D reconstruction performance [8, 23, 9]. Urban Radiance Fields (URF) is an early example of LiDAR-integrated NeRF [8]. URF uses a novel Line-of-Sight (LOS) loss to improve LiDAR supervision. CLONeR uses LiDAR and camera data to train two decoupled MLPs, one of which learns scene structure and the other of which learns scene color [9]. CLONeR combines the decoupled NeRF with occupancy-grid enabled sampling heuristics and URF's Line-of-Sight loss to enable training with as few as two input views [9]. Both URF and CLONeR require known sensor poses and assume offline training. In contrast, our proposed method performs real-time LiDAR SLAM that both reconstructs 3D environments and estimates sensor poses for sequential input data.
In [12], a method is introduced that inputs LiDAR scans and approximate poses, then uses a novel occlusion-aware loss function to jointly optimize the poses and a NeRF. This work assumes a-priori availability of all data. Thus, it can be effectively viewed as a LiDAR-based structure-from-motion algorithm, whereas we present a full SLAM algorithm. Recently, SHINE Mapping presented a LiDAR mapping method based on neural signed distance function (SDF) and sparse feature embedding [10]. While this embedding helps scale to large scenes, in a real-time configuration, it presents a trade-off between hole-filling and overall map resolution. Our method instead uses a dense feature embedding, which enables improved performance across both hole-filling capability and map resolution. NeRF-LOAM extends this to a LiDAR SLAM system and proposes a dynamic voxel embedding generation strategy to adapt to large-scale scenarios [11]. However, it does not operate in real-time.
### _Loss for Depth-supervised NeRF_
Depth-supervised NeRF frameworks, such as those that use RGB-D sensors, typically use the difference between rendered and sensed depth as a loss to learn geometry from 2D images by volumetric rendering [5, 6]. Other works use depth measurements directly in 3D space to perform depth-supervision [23, 8, 9, 12]. The Binary Cross-Entropy (BCE) loss proposed in [12] reasons about occluded objects, but does not consider measurement uncertainty. The KL divergence loss presented by DS-NeRF [23] and Line-Of-Sight (LOS) loss introduced by URF [8] approximate each LiDAR ray's termination depth as a normal distribution centered at the measured depth. The variance of the distribution is correlated with a margin parameter \(\epsilon\). The loss functions encourage the network to predict weights along a ray equal to the PDF of the normal distribution. While the KL loss leaves the variance fixed during training, [8] shows that decaying \(\epsilon\) during training improves reconstruction accuracy when using the LOS loss.
While uniformly decaying a margin is successful offline, using a single margin for all rays is unsuitable for real-time SLAM, which has incremental input and limited training samples. Using a uniform margin can force the NeRF model to forget learned geometry when adding a new LiDAR scan and can cause slower convergence. Therefore, this paper proposes a novel dynamic margin loss that applies a different margin for each ray. We demonstrate the proposed loss function leads to better 3D reconstruction than previous loss functions with fewer training samples, and enables real-time performance.
## III Method
This section provides a high-level overview of our proposed system, LONER, before explaining each component in detail.
### _System Overview_
An overview of LONER is shown in Fig. 2. As is common in the SLAM literature [5, 6, 24], the system comprises parallel threads for tracking and mapping. The tracking thread processes incoming scans and estimates odometry using ICP. LONER is designed for use without an IMU, so ICP uses the identity transformation as an initial guess. In parallel and at a lower rate, the mapping thread uses the current scan and selected prior scans as KeyFrames, which are used to update the training of the neural scene representation.
### _Tracking_
Incoming LiDAR scans are decimated to a fixed frequency of 5Hz. The relative transform \(P_{i-1,i}\in SE(3)\) from the previous scan to the current scan is estimated using Point-to-Plane ICP [25]. We adopt ICP rather than inverse-NeRF tracking because of its strong performance and because it reserves GPU resources for the mapping module, which helps maintain real-time performance. The ICP estimate is later refined in our mapping optimization. The LiDAR pose \(\mathbf{x}_{i}\in SE(3)\) is then estimated as \(\hat{\mathbf{x}}_{i}=\hat{\mathbf{x}}_{i-1}\cdot P_{i-1,i}\). Given the previous and current pose, LiDAR scans are motion-compensated by assuming constant velocity motion between scans.
### _Implicit Map Representation_
The scene is represented as an MLP with the hierarchical feature grid encoding from [26]. During online training, the parameters \(\Theta\) of the MLP and the feature grid are updated to predict the volume density \(\sigma\) of each point in space. To train the network and estimate depths, we follow the standard volumetric rendering procedure [1]. In particular, for a LiDAR ray \(\vec{r}\) with origin \(\vec{o}\) and direction \(\vec{d}\), we choose distances \(t_{i}\in[t_{near},t_{far}]\) to create \(N_{S}\) samples \(s_{i}=\vec{o}+t_{i}\vec{d}\). LiDAR intrinsics dictate \(t_{near}\), while \(t_{far}\) depends on the scale of the scene. The feature grid and MLP, collectively \(\mathcal{F}(s_{i};\Theta)\), are queried to predict the density \(\sigma_{i}\). Then, transmittances \(T_{i}\) and weights \(w_{i}\) are computed according to:
\[T_{i}=\exp\Big(-\sum_{j=1}^{i-1}\sigma_{j}\delta_{j}\Big) \tag{1}\] \[w_{i}=T_{i}\sigma_{i} \tag{2}\]
where \(\delta_{j}=t_{j+1}-t_{j}\), and \(\sigma_{i}\) is the density at sample \(s_{i}\) predicted by the MLP. The weights \(w_{i}\) are used by the loss function and represent the probability that the ray terminates at each point. Therefore, the expected termination depth of a ray \(\hat{D}(\vec{r})\) can be estimated as
\[\hat{D}(\vec{r})=\sum_{i=1}^{N_{S}}w_{i}t_{i}. \tag{3}\]
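For concreteness, a minimal NumPy sketch of the rendering step in Eqs. (1)-(3) is given below; the function name and the padding of the final bin width are our own choices and are not taken from LONER's implementation, which presumably uses differentiable PyTorch operations.

```python
import numpy as np

def render_depth(sigma, t):
    """Expected ray termination depth via Eqs. (1)-(3).

    sigma : (N_S,) densities predicted at samples s_i = o + t_i * d
    t     : (N_S,) ascending sample distances along the ray
    """
    delta = np.diff(t, append=t[-1])                 # delta_j = t_{j+1} - t_j (last bin padded)
    acc = np.cumsum(sigma * delta)                   # running sum of sigma_j * delta_j
    T = np.exp(-np.concatenate(([0.0], acc[:-1])))   # T_i excludes the i-th term, Eq. (1)
    w = T * sigma                                    # Eq. (2)
    depth = float(np.sum(w * t))                     # Eq. (3)
    return w, depth
```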
### _Mapping_
The mapping thread receives LiDAR scans from the tracking thread and determines whether to form a KeyFrame. If the scan is accepted, the map is jointly optimized with the poses.
#### Iii-D1 KeyFrames
KeyFrames are selected temporally: if \(t_{KF}\) has passed since the previous KeyFrame, a new KeyFrame is added. Each time a KeyFrame is accepted, the optimizer is updated. \(N_{W}\) total KeyFrames are used in the update, including the current KeyFrame and \(N_{W}-1\) randomly selected past KeyFrames.
#### Iii-D2 Optimization
Once the window of KeyFrames has been selected, the map is jointly optimized with the poses of KeyFrames in the optimization window. For a KeyFrame \(\mathbf{KF}_{i}\) with estimated pose \(\hat{\mathbf{x}}_{i}\) in the world frame, a twist vector \(\hat{\xi}_{i}\in\mathbb{R}^{6}\) is formed to be used as the optimization variable. Specifically, \(\hat{\xi}_{i}=(\hat{\omega}_{i},\hat{v}_{i})\) where \(\hat{\omega}_{i}\) is the axis-angle representation of the rotation component of \(\hat{\mathbf{x}}_{i}\), and \(\hat{v}_{i}\) is the translation component. In the forward pass, this vector is converted back into a pose \(\hat{\mathbf{x}}_{i}\) and used to compute the origin of rays. \(N_{R}\) rays are sampled at random from the LiDAR scan, and \(N_{S}\) depth samples are taken from each ray using the occupancy grid heuristic introduced by [9].
In the backward pass, gradients are computed for MLP and feature grid parameters \(\Theta\) and twist vectors \(\hat{\xi}_{i}\). At the end of the optimization, the optimized twist vectors \(\xi_{i}^{*}\) are converted into \(SE(3)\) transformation matrices \(\mathbf{x}_{i}^{*}\). The tracking thread is informed of this change, such that future tracking is performed relative to the optimized poses.
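As a rough illustration of the pose parameterization described above, the following sketch converts between the twist vector \(\hat{\xi}_{i}=(\hat{\omega}_{i},\hat{v}_{i})\) and a \(4\times 4\) homogeneous transform; the SciPy-based helper names are ours, and an actual implementation would use differentiable operations so that gradients can flow to \(\hat{\xi}_{i}\).

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def twist_to_pose(xi):
    """Convert xi = (omega, v) back into a 4x4 SE(3) matrix (Sec. III-D2)."""
    omega, v = xi[:3], xi[3:]
    T = np.eye(4)
    T[:3, :3] = R.from_rotvec(omega).as_matrix()   # axis-angle -> rotation matrix
    T[:3, 3] = v                                   # translation component
    return T

def pose_to_twist(T):
    """Inverse map used when forming the optimization variable."""
    omega = R.from_matrix(T[:3, :3]).as_rotvec()
    return np.concatenate([omega, T[:3, 3]])
```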
### _JS Dynamic Margin Loss Function_
The primary loss function in our system is a novel dynamic margin loss. This is combined with terms for depth loss and sky loss as follows:
\[\mathcal{L}(\Theta)=\mathcal{L}_{JS}+\lambda_{1}\mathcal{L}_{depth}+\lambda_ {2}\mathcal{L}_{sky}. \tag{4}\]
Fig. 2: LONER system overview. Incoming scans are decimated then tracked with ICP, after which the sky is segmented from the scene geometry. Selected scans are chosen as KeyFrames. Each map update includes the current KeyFrame and randomly selected past KeyFrames. Our novel loss function is used to update the poses and MLP weights. The resulting implicit map can be rendered offline to a variety of formats including depth images and meshes.
Each of these terms is explained below.
#### Iii-B1 JS Loss Formulation
The LOS loss used by [8, 9] uses a single margin for all rays; we use a similar formulation but introduce a novel strategy based on the Jensen-Shannon Divergence [27] to assign a unique margin to each ray. For a given LiDAR ray \(\vec{r}\), the samples along the ray are \(s_{i}=\vec{o}+t_{i}\vec{d}\), and \(z^{*}\) denotes the measured depth along the ray. \(t_{i}\) denotes the distance of individual training samples along the ray, and \(w_{i}\) represents a corresponding weight prediction from an MLP, as defined in Equation 2. We define a truncated Gaussian distribution \(\mathcal{K}_{\epsilon}\) that has a bounded domain parameterized by margin \(\epsilon\), with \(\mathcal{K}_{\epsilon}=\mathcal{N}(0,(\epsilon/3)^{2})\) as the training distribution. Thus, target weights are given by \(w_{i}^{*}=\mathcal{K}_{\epsilon}(t_{i}-z^{*})\). The JS loss is defined as
\[\mathcal{L}_{JS}(\Theta)=\underbrace{\|w_{i}^{*}-w_{i}\|_{1}}_{\text{ Primary Loss}}+\underbrace{\|1-\sum_{i}w_{i}\|_{1}}_{\text{Opacity Loss}}, \tag{5}\]
where the opacity loss (explained in more detail by [9]) ensures weights along each ray sum to one and thus form a probability distribution. Note that while URF [8] uses an L2 loss to compute the LOS loss, we follow [9] and use an L1 loss. The effect of this is discussed in Section IV-D.
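A minimal sketch of how the target weights and the two terms of Eq. (5) might be evaluated for one ray is shown below; discretising the truncated Gaussian at the sample locations and renormalising it is our own choice, not a detail stated in the paper.

```python
import numpy as np

def los_loss(w, t, z_star, eps):
    """L1 line-of-sight loss of Eq. (5) with a truncated-Gaussian target.

    w      : (N_S,) predicted weights along the ray (Eq. (2))
    t      : (N_S,) sample distances
    z_star : measured LiDAR depth z*
    eps    : margin (a single value in the LOS loss, per-ray in the JS loss)
    """
    sigma = eps / 3.0
    target = np.exp(-0.5 * ((t - z_star) / sigma) ** 2)
    target[np.abs(t - z_star) > eps] = 0.0    # truncate outside the margin
    target /= max(target.sum(), 1e-12)        # discretise/renormalise (our choice)
    primary = np.abs(target - w).sum()        # "Primary Loss" term
    opacity = abs(1.0 - w.sum())              # "Opacity Loss" term
    return primary + opacity
```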
In [8, 9], the margin decays exponentially throughout training and, at each iteration, a single margin is shared by all of the rays. In contrast, we present a JS divergence-based dynamic margin that computes a unique margin for each ray to improve the training convergence and reconstruction accuracy.
In a SLAM application, continuous optimization, sparse sampling, and incremental input lead to different regions of the map being learned to varying degrees during online training. As shown in Fig. 3, using a uniform \(\epsilon\) in the LOS loss causes forgetting in regions that have already been learned. The idea of the JS dynamic margin is to use a larger margin for rays pointing toward regions of the map with unknown geometry while using a smaller margin for rays pointing toward well-learned regions. This allows the system to learn new regions while preserving and refining learned geometry. We use the JS divergence to measure the dissimilarity between the goal distribution and the sample distribution for each ray, which represents how well the map has learned along the ray. Learned regions have similar goal and sample distributions, which lead to smaller JS divergence. We define a goal distribution \(G=\mathcal{N}(z^{*},\sigma^{*})\), where \(\sigma^{*}=\epsilon_{min}/3\). Further, we define the sample distribution \(S=\mathcal{N}(\bar{\mu}_{w},\bar{\sigma}_{w})\), where \(\bar{\mu}_{w}\) and \(\bar{\sigma}_{w}\) denote mean and standard deviation of the predicted weights along a particular ray. The dynamic margin is then defined as
\[\epsilon_{dyn}=\epsilon_{min}(1+\alpha\mathbf{J}^{*}) \tag{6}\]
\[\mathbf{J}^{*}=\begin{cases}0&JS(G||S)<JS_{min}\\ JS_{max}&JS(G||S)>JS_{max}\\ JS(G||S)&\text{otherwise,}\end{cases} \tag{7}\]
where \(\alpha\) is a constant scaling parameter. \(JS_{max}\) denotes the upper bound of the JS score, and \(JS_{min}\) denotes a threshold for scaling. Once the JS score is smaller than \(JS_{min}\), \(\epsilon_{dyn}\) is equal to \(\epsilon_{min}\).
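Given a per-ray JS score, the margin update of Eqs. (6)-(7) reduces to a clipped affine scaling; a minimal sketch, with default values taken from Table I, is:

```python
def dynamic_margin(js_score, eps_min=0.5, alpha=1.0, js_min=1.0, js_max=10.0):
    """Per-ray margin of Eqs. (6)-(7); default parameter values follow Table I."""
    if js_score < js_min:
        j_star = 0.0          # well-learned region: keep the minimum margin
    elif js_score > js_max:
        j_star = js_max       # clip very dissimilar (unobserved) regions
    else:
        j_star = js_score
    return eps_min * (1.0 + alpha * j_star)
```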
#### Iii-B2 Depth Loss
As in [8], we use the depth loss as an additional term in the loss function. The depth loss is the error between rendered depth and LiDAR-measured depth along each ray. The loss is defined as
\[\mathcal{L}_{depth}(\Theta)=\|\hat{D}(\vec{r})-z^{*}\|_{2}^{2} \tag{8}\]
We found the depth loss contributes to blurry reconstruction with limited training time, but still provides good hole-filling, as shown in Fig. 6. Hence, unlike [8] which weights depth loss and LOS loss equally, we down-weight the depth loss by setting \(\lambda_{1}=5\times 10^{-6}\).
#### Iii-B3 Sky Loss
Similar to [8], we add an additional loss to force weights on rays pointing at the sky to be zero. While [8] segments the sky with camera-based semantic segmentation, we determine sky regions by observing holes in the LiDAR scans. First, each scan is converted to a depth image. This is then filtered via a single dilate and erode. Any points which remain empty reflect regions of the LiDAR scan where no return was received. If the ray corresponding to each of these points has a positive elevation angle in the global frame, it is determined to point to the sky. Thus, this heuristic works as long as the LiDAR is approximately level during initialization. For sky rays, the opacity loss is not enforced. Then, for all sky rays, the following loss is computed:
\[\mathcal{L}_{sky}(\Theta)=\|w\|_{1}. \tag{9}\]
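A possible implementation of this sky heuristic, assuming the scan has been projected to a range image and per-ray elevation angles are available, is sketched below; the kernel size and image layout are our assumptions.

```python
import numpy as np
import cv2

def sky_ray_mask(depth_image, elevation, kernel_size=3):
    """Heuristic sky segmentation described above.

    depth_image : (H, W) range image from one LiDAR scan; 0 marks no return
    elevation   : (H, W) elevation angle of each ray in the global frame
    Returns a boolean mask of rays treated as sky.
    """
    valid = (depth_image > 0).astype(np.uint8)
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    # a single dilate followed by erode (morphological closing) fills small holes
    closed = cv2.erode(cv2.dilate(valid, kernel), kernel)
    still_empty = closed == 0
    return still_empty & (elevation > 0)
```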
### _Meshing_
To form a mesh from the implicit geometry, a virtual LiDAR is placed at estimated KeyFrame poses. We compute weights along LiDAR rays, then bucket the weights into a 3D grid. When multiple weights fall within the same grid cell, the
Fig. 3: Illustration of the difference between the JS loss and the LOS loss. The LOS loss sets a uniform margin \(\epsilon\) for rays pointing to both learned and unobserved regions. This strategy corrupts the learned information by forcing learned regions to predict higher variances. In contrast, the proposed JS loss sets the dynamic margin \(\epsilon\) for each ray depending on the similarity between goal distribution and predicted sample distribution. The JS loss sets higher margins for rays in unobserved regions to improve convergence, and sets lower margins for rays in learned regions to refine learned geometry.
maximum value is kept. Marching cubes is then used to form a mesh from the result. This process runs offline for visualization and evaluation, and is not a part of online training.
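A minimal sketch of this offline meshing step is given below; the isosurface level and the use of scikit-image's marching cubes are our own choices for illustration.

```python
import numpy as np
from skimage import measure

def weights_to_mesh(points, weights, voxel_size, level=0.5):
    """Bucket per-sample weights into a 3-D grid and extract a mesh.

    points  : (N, 3) world-space sample positions along virtual LiDAR rays
    weights : (N,)   rendering weights of those samples
    Keeps the maximum weight per cell, then runs marching cubes.
    """
    mins = points.min(axis=0)
    idx = np.floor((points - mins) / voxel_size).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1)
    np.maximum.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), weights)
    verts, faces, _, _ = measure.marching_cubes(grid, level=level)
    return verts * voxel_size + mins, faces
```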
## IV Experiments
This section evaluates the trajectory estimation and mapping accuracy of LONER against state-of-the-art baselines. We further evaluate the choice of loss function and perform ablation studies over key features.
### _Implementation Details_
Table I provides parameters used for evaluation of LONER, which we found to generalize well across the tested datasets. All values were tuned experimentally to maximize performance while maintaining real-time operation. For all experiments, each method and configuration was run 5 times and we report the median result, as in [24]. The complete data from our evaluations is available on the project webpage.
### _Baselines_
We evaluate against NICE-SLAM [6] and LeGO-LOAM [15], which represent state-of-the-art methods in neural-implicit SLAM and LiDAR SLAM respectively. Additionally, we evaluate our SLAM pipeline with the loss functions from CLONeR [9] and URF [8]. We refer to these approaches as "LONER w/ \(\mathcal{L}_{\text{CLONeR}}\)" and "LONER w/ \(\mathcal{L}_{\text{URF}}\)" respectively. Finally, mapping performance is compared to SHINE Mapping, which is run with groundtruth poses [10]. Since NeRF-LOAM [11] is a recent work and code is not yet available, it is excluded from this evaluation in favor of SHINE. Note that NeRF-LOAM does not operate in real-time.
### _Datasets_
We evaluate performance on two open source datasets, Fusion Portable [4] and Newer College [28]. Collectively, the chosen sequences represent a range of scales and difficulties. From Fusion Portable, we select three scenes. The first sequence is MCR Slow 01, which is a small indoor lab scene collected on a quadruped. The others are Canteen Day and Garden Day, which are medium-scale semi-outdoor courtyard areas collected on a handheld platform. Both sequences contain few dynamic objects, as handling dynamic objects is left to future work. From Newer College, we evaluate on the Quad Easy sequence, which consists of two laps of a large outdoor college quad area. Because Newer College has monochrome fisheye cameras, it is incompatible with NICE-SLAM. Hence, NICE-SLAM is excluded from the Newer College results.
Note that the sequences used in testing do not have RGB-D sensors. Hence, we instead simulate RGB-D from stereo offline using RAFT optical flow estimation [29], and run NICE-SLAM on the result. NICE-SLAM can fail to converge in these semi-outdoor scenarios in a real-time configuration, so we increased the number of samples and iterations to improve performance. We ran NICE-SLAM for 350 iterations per KeyFrame, used \(2^{14}\) samples per ray, and selected a KeyFrame every 5 frames. This results in offline runtime performance. To bound the computational complexity, we set the middle and fine grid sizes to 0.64m and 0.32m respectively.
### _Performance Analysis_
#### Iv-D1 Trajectory Tracking Evaluation
Trajectory estimates from each algorithm are evaluated according to the procedure described in [30]. We use an open-source package for these evaluations1. Trajectories are first aligned; then the root-mean-squared absolute pose error is computed and referred to as \(t_{APE}\).
Footnote 1: [https://github.com/MichaelGrupp/evo](https://github.com/MichaelGrupp/evo)
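For reference, the sketch below computes an RMS APE on position-only trajectories after a rigid (Kabsch/Umeyama, no scale) alignment; the evo package performs an analogous alignment and error computation on full poses.

```python
import numpy as np

def rms_ape(gt, est):
    """RMS absolute pose error of time-associated positions after rigid alignment.

    gt, est : (N, 3) corresponding trajectory positions.
    """
    mu_g, mu_e = gt.mean(axis=0), est.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)                 # cross-covariance (Kabsch)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R_align = Vt.T @ S @ U.T                         # best rotation est -> gt
    aligned = (R_align @ (est - mu_e).T).T + mu_g
    return float(np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1))))
```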
Table II compares trajectory performance to state-of-the-art methods [15, 6]. Our method offers performance competitive with or better than existing state-of-the-art LiDAR SLAM. On the evaluated scenes, we outperform LeGO-LOAM except on Newer College Quad, which is the largest and most open sequence. Even on Quad, our estimated trajectory is within millimeters of the LeGO-LOAM result. Additionally, unlike LeGO-LOAM, our method creates dense implicit maps of the scene. On the MCR sequence, NICE-SLAM successfully estimated a trajectory in four of five runs. The resulting trajectories were reasonable, but not competitive with the LiDAR-based methods. On the other larger sequences, NICE-SLAM failed to track the scene. This reflects that NICE-SLAM was developed for small indoor scenarios and does not scale to more challenging scenes.
LONER with the CLONeR loss achieves trajectory accuracy similar to LONER on some sequences. However, it consistently performs worse, especially in Quad. We found LONER using the URF loss is less competitive. We also tested LONER with the KL loss proposed by DS-NeRF [23]. However, it crashes when the initial pose estimation is poor because using fixed uncertainty for the goal distribution can cause numerical instability in the SLAM context.
#### Iv-D2 Reconstruction Evaluation
To evaluate maps, point clouds are created by first generating a mesh, then sampling
\begin{table}
\begin{tabular}{l|c c c c} & **MCR** & **Canteen** & **Garden** & **Quad** \\ \hline
**LeGO-LOAM** & 0.052 & 0.129 & 0.161 & **0.126** \\
**NICE-SLAM** & 0.248 & \(\mathbf{\chi}\) & \(\mathbf{\chi}\) & \(\mathbf{\chi}\) \\ \hline
**LONER w/ \(\mathcal{L}_{\text{URF}}\)** & 0.047 & 0.952 & 0.928 & 0.931 \\
**LONER w/ \(\mathcal{L}_{\text{CLONER}}\)** & 0.034 & 0.071 & 0.073 & 0.306 \\ \hline
**LONER** & **0.029** & **0.064** & **0.056** & 0.130 \\ \hline \end{tabular}
\end{table} TABLE II: Pose tracking results on Fusion Portable and Newer College sequences. Reported metric is RMS APE (m). An \(\mathbf{\chi}\) indicates the algorithm failed.
\begin{table}
\begin{tabular}{l c c} \hline Description & Symbol & Value \\ \hline Time per KeyFrame & \(t_{KF}\) & 3 \\ KeyFrame Window Size & \(N_{W}\) & 8 \\ Rays per KeyFrame & \(N_{R}\) & 512 \\ Samples per Ray & \(N_{S}\) & 512 \\ Min Depth Margin & \(\epsilon_{min}\) & 0.5 \\ JS Scale Hyperparameter & \(\alpha\) & 1 \\ Min/Max JS Divergence & \(JS_{min},JS_{max}\) & 1, 10 \\ Loss Coefficients & \(\lambda_{1},\lambda_{2}\) & \(5\times 10^{-6}\), 1 \\ \hline \end{tabular}
\end{table} TABLE I: Parameters for LONER.
a point cloud from the mesh. To bound the size of the generated point clouds, all maps (estimated and groundtruth) are downsampled to a voxel grid size of 5cm, except for the small MCR sequence, which uses a voxel grid size of 1cm. Finally, because groundtruth maps may extend beyond the field-of-view of the sensor used to collect each sequence, we crop each groundtruth map to the geometry observed by the sensor during data collection.
Map metrics include accuracy (mean distance from each point in the estimated map to the nearest point in the groundtruth map) and completion (mean distance from each point in the groundtruth map to the nearest point in the estimated map) [5, 6]. Additionally, precision and recall are computed with a 0.1m threshold. Table III shows quantitative evaluation for map reconstruction performance. LONER performs competitively with or better than the baselines in all tests. LONER and SHINE Mapping outperform the other baselines. Qualitatively, Fig. 5 shows that SHINE and LONER estimate the most accurate maps. SHINE estimates more complete geometry, while LONER recovers finer detail and produces maps with fewer artifacts.
### _Runtime_
Runtime performance was evaluated on a computer with an AMD Ryzen 5950X CPU and an NVidia A6000 GPU, which is similar to the platform used to benchmark NICE-SLAM [6]. Each tracking step takes an average of 14ms to compute, which is faster than is needed by the 5Hz configuration. The map is updated continuously throughout operation, with 50 iterations allocated per KeyFrame and one KeyFrame added every 3 seconds. When run in parallel with the tracker, the average time to perform these 50 iterations is 2.79 seconds, or approximately 56ms per iteration. Hence, the map is updated at approximately 18Hz, and the system finishes processing a KeyFrame in under the 3 seconds allotted per KeyFrame. This ensures the system can keep up with input sensor data.
### _Loss Function Performance_
We evaluate each component of the loss function in isolation. In addition to the JS dynamic margin loss, we evaluate an LOS loss with three different exponential decay rates: 0.99 (Slow), 0.95 (Medium), and 0.85 (Fast). Finally, we consider the depth loss in isolation, as is used in [6, 5]. As a qualitative comparison of mapping quality, depth images rendered from each configuration are shown in Fig. 6. The proposed JS loss shows the most complete and detailed reconstruction of the tested configurations.
Additionally, Fig. 4 demonstrates that JS loss converges faster than other losses. In this experiment, we evaluate the convergence of each function when training on a single scan in simulated data. We use the CARLA simulator, where we obtain groundtruth depth images for evaluations. We compute the mean squared error of rendered depth images throughout training to show convergence performance. The results show that our JS loss converges faster than other losses.
### _Ablation Study_
This section describes the ablation studies over key components of the SLAM framework and the loss function. To compare maps in the ablations, we evaluate the L1 depth loss by comparing rendered depth to depth measured by the LiDAR. This is analogous to the L1 Depth metric commonly used in NeRF frameworks [5, 6, 7]. We compute the L1 depth across 25 randomly selected scans and report the mean value.
#### Iv-G1 SLAM Framework
In Table IV, we compare the impact of three changes to the SLAM framework. Disabling pose optimization is confirmed to strongly impact localization and mapping. Replacing the random KeyFrame selection with either the \(N_{W}\) most recent or \(N_{W}/2\) recent KeyFrames and \(N_{W}/2\) randomly selected KeyFrames generally reduces performance. Finally, on the outdoor dataset, disabling sky segmentation has little effect on localization but degrades reconstruction accuracy.
#### Iv-G2 Loss Function
Finally, we consider disabling features of the proposed loss function, which includes both the JS loss and the depth loss. In Table V, we evaluate using only depth loss, depth loss, and LOS loss with the fixed medium decay rate, LOS loss with dynamic margin and no depth loss, and the full system. The results demonstrate that the proposed system performs best in all metrics on all datasets.
Fig. 4: Training on a single LiDAR scan from the CARLA simulator indicates that the JS loss function converges faster than alternatives. The left images show simulated camera and LiDAR data. The plot on the right compares MSE (m\({}^{2}\)) between groundtruth depth and estimated depth throughout training.
\begin{table}
\begin{tabular}{c c|c c|c c|c} & \multicolumn{2}{c|}{**NICE**} & \multicolumn{2}{c|}{**LONER w/**} & \multicolumn{1}{c|}{**LONER w/**} & \multicolumn{1}{c}{**LONER**} \\ & & \multicolumn{2}{c|}{**SLAM**} & \multicolumn{2}{c|}{\(\mathcal{C}_{\text{LONER}}\)} & \multicolumn{1}{c|}{\(\mathcal{C}_{\text{LER}}\)} & \multicolumn{1}{c}{\(\mathcal{C}_{\text{LER}}\)} \\ \hline \multirow{6}{*}{**LONER**} & Acc. & 0.621 & 0.164 & **0.110** & 0.153 & 0.186 \\ & Cmp. & 0.419 & 0.075 & 0.080 & 0.102 & **0.069** \\ & Prec. & 0.124 & 0.624 & **0.665** & 0.449 & 0.473 \\ & Rec. & 0.476 & 0.757 & **0.940** & 0.884 & 0.932 \\ \cline{2-7} & Acc. & - & - & - & - & - \\ & Cmp. & \multirow{2}{*}{**✗**} & \multirow{2}{*}{0.116} & \multirow{2}{*}{0.220} & \multirow{2}{*}{0.190} & \multirow{2}{*}{**0.105**} \\ & Prec. & & - & - & - & - \\ & Rec. & & 0.753 & 0.524 & 0.846 & **0.878** \\ \hline \multirow{6}{*}{**LONER**} & Acc. & \multirow{2}{*}{**✗**} & \multirow{2}{*}{0.330} & \multirow{2}{*}{0.533} & \multirow{2}{*}{0.539} & \multirow{2}{*}{0.157} \\ & Cmp. & & & - & - & - \\ \cline{1-1} & Prec. & & - & - & - & - \\ \cline{1-1} & Rec. & & - & - & - & - \\ \cline{1-1} \cline{2-7} & Acc. & \multirow{2}{*}{**✗**} & \multirow{2}{*}{0.657} & \multirow{2}{*}{0.469} & \multirow{2}{*}{0.623} & \multirow{2}{*}{**0.784**} \\ \cline{1-1} & Prec. & & - & - & - & - \\ \cline{1-1} & Rec. & & 0.657 & 0.469 & 0.623 & **0.784** \\ \hline \multirow{6}{*}{**LONER**} & Acc. & - & **0.301** & 0.663 & 0.552 & 0.380 \\ \cline{1-1} & Cmp. & - & **0.148** & 0.543 & 0.895 & 0.373 \\ \cline{1-1} & Prec. & - & **0.453** & 0.150 & 0.127 & 0.327 \\ \cline{1-1} & Rec. & - & 0.717 & 0.602 & 0.484 & **0.809** \\ \end{tabular}
\end{table} TABLE III: Comparison of map Accuracy (m), Completion (m), Precision, and Recall between proposed and baseline algorithms. Unlike the others, SHINE used ground truth poses. A ‘-’ indicates invalid configurations, while ✗ indicates that the algorithm failed.
We perform an ablation study over the SLAM framework by disabling key features, and show the proposed system outperforms alternatives.
\begin{table}
\begin{tabular}{c c c c|c c c|c c|c c}
**Depth** & **LOS** & **Dynamic** & \multicolumn{2}{c|}{**MCR**} & \multicolumn{2}{c|}{**Canteen**} & \multicolumn{2}{c|}{**Garden**} & \multicolumn{2}{c}{**Quad**} \\
**Loss** & **Loss** & **Margin** & \(t_{APE}\) & L1 Depth & \(t_{APE}\) & L1 Depth & \(t_{APE}\) & L1 Depth & \(t_{APE}\) & L1 Depth \\ \hline \(\bullet\) & \(\mathsf{O}\) & N/A & 0.046 & 0.355 & 1.236 & 3.014 & 0.788 & 2.447 & 0.779 & 2.265 \\ \(\bullet\) & \(\mathsf{\phi}\) & \(\mathsf{O}\) & 0.033 & 0.338 & 0.075 & 1.453 & 0.076 & 1.304 & 0.570 & 1.747 \\ \(\mathsf{O}\) & \(\mathsf{\phi}\) & \(\mathsf{\phi}\) & 0.030 & 0.358 & 0.068 & 1.907 & **0.056** & 1.490 & 0.154 & 2.262 \\ \hline \(\bullet\) & \(\mathsf{\phi}\) & \(\mathsf{\phi}\) & **0.029** & **0.284** & **0.064** & **1.296** & **0.056** & **1.198** & **0.130** & **0.880** \\ \end{tabular}
\end{table} TABLE V: We perform an ablation study over the loss. The first row uses only depth loss. The second uses depth loss and LOS loss with no dynamic margin. The third row uses LOS Loss with dynamic margin. The final row is the proposed system.
Fig. 5: Reconstruction of meshes on each sequence with the benchmarked algorithms. LONER and SHINE offer the most complete and detailed results. SHINE has slightly more complete geometry, noticeable in the top-left of the Quad images where LONER omits pillars captured by SHINE. However, LONER captures details better and has fewer artifacts.
Fig. 6: The depth images rendered from the MLP trained by LONER with different loss functions. The depth loss provides blurry geometry with limited training samples. The LOS loss with a fast decay rate provides more detailed geometry but worse hole-filling. In contrast, the LOS loss with a slow decay rate estimates the untrained region better but results in blurry geometry. The proposed JS loss combines the advantages of both fast and slow decay rates, which provides good hole-filling results while preserving geometry details.
## V Conclusions and future work
This paper proposed LONER, the first real-time LiDAR SLAM algorithm with an implicit neural map representation. To achieve SLAM in real-time, we presented a novel loss function for depth-supervised training. Results demonstrated that the JS loss outperforms current loss functions in both reconstruction accuracy and hole-filling while maintaining low computational costs. By testing this method on public datasets, we demonstrated that LONER achieves state-of-the-art map and trajectory quality, while providing an implicit geometry representation to support novel view depth rendering.
There are several avenues of future work to continue improving LONER. First, adding RGB data without compromising runtime performance would aid in the realism of reconstructions. Additionally, considering alternate input feature embeddings and ray selection heuristics could improve the ability of LONER to operate in city-scale scenarios. Further, inertial data could help the system track accurately under rapid rotation and in feature-sparse scenarios, where the LiDAR data is less informative. Finally, to function in highly dynamic environments, more work is needed to handle dynamic objects in the scene.
|
2309.15184 | Characterising semi-Clifford gates using algebraic sets | Motivated by their central role in fault-tolerant quantum computation, we
study the sets of gates of the third-level of the Clifford hierarchy and their
distinguished subsets of `nearly diagonal' semi-Clifford gates. The Clifford
hierarchy gates can be implemented via gate teleportation given appropriate
magic states. The vast quantity of these resource states required for achieving
fault-tolerance is a significant bottleneck for the practical realisation of
universal quantum computers. Semi-Clifford gates are important because they can
be implemented with far more efficient use of these resource states.
We prove that every third-level gate of up to two qudits is semi-Clifford. We
thus generalise results of Zeng-Chen-Chuang (2008) in the qubit case and of the
second author (2020) in the qutrit case to the case of qudits of arbitrary
prime dimension $d$.
Earlier results relied on exhaustive computations whereas our present work
leverages tools of algebraic geometry. Specifically, we construct two schemes
corresponding to the sets of third-level Clifford hierarchy gates and
third-level semi-Clifford gates. We then show that the two algebraic sets
resulting from reducing these schemes modulo $d$ share the same set of rational
points. | Imin Chen, Nadish de Silva | 2023-09-26T18:41:57Z | http://arxiv.org/abs/2309.15184v2 | # Characterising semi-Clifford gates using algebraic sets
###### Abstract
Motivated by their central role in fault-tolerant quantum computation, we study the sets of gates of the third-level of the Clifford hierarchy and their distinguished subsets of 'nearly diagonal' semi-Clifford gates. The Clifford hierarchy gates can be implemented via gate teleportation given appropriate magic states. The vast quantity of these resource states required for achieving fault-tolerance is a significant bottleneck for experimental implementations of universal quantum computers. Semi-Clifford gates are important because they can be implemented with far more efficient use of these resource states.
We prove that every third-level gate of up to two qudits is semi-Clifford. We thus generalise results of Zeng-Chen-Chuang (2008) in the qubit case and of the second author (2020) in the qutrit case to the case of qudits of arbitrary prime dimension \(d\).
Earlier results relied on exhaustive computations whereas our present work leverages tools of algebraic geometry. Specifically, we construct two schemes corresponding to the sets of third-level Clifford hierarchy gates and third-level semi-Clifford gates. We then show that the two algebraic sets resulting from reducing these schemes modulo \(d\) share the same set of rational points.
## 1 Introduction
In this article, we are concerned with two classes of fundamental quantum computational operations. The first contains those which are potentially costly to perform but are a standard ingredient in achieving universal quantum computation. The second is a subset of the first: those which can be performed using a protocol that is significantly more efficient than the standard one. We prove that, in common scenarios, these sets coincide. Doing so leads to an application of tools and language of algebraic geometry to quantum information--including some which have yet to be applied to this field.
In quantum computation, elementary units of information are represented by _states_: vectors in \(\mathbb{C}^{d}\) where \(d\in\mathbb{N}\) is a number of discrete degrees of freedom of a physical medium. The \(d=2\) case corresponds to a _qubit_, the quantum analogue of a classical bit. While this is the most common case considered within quantum information, recently, more attention has been paid to the higher-dimensional _qudit_ case, e.g. the qutrit case of \(d=3\). Qudit-based computation promises increased capacity and efficiency [4, 23]. The advantages of qudit-based computation have led to rapidly accelerating development by experimentalists [5, 6, 14, 15, 20, 21, 24]. In the future, once quantum technology has progressed, we expect qudit-based computation to become commonplace. A system of \(n\) qudits is modeled by \((\mathbb{C}^{d})^{\otimes n}\simeq\mathbb{C}^{d^{n}}\).
_Quantum gates_, the elementary physical and computational operations acting on these data, are represented by unitary operators on \(\mathbb{C}^{d^{n}}\). A universal quantum computer must be capable of applying any unitary transformation, to an arbitrarily good approximation, to its input data. However, in practice, only a subgroup of the full unitary group is directly physically implemented. This restriction is necessary for enabling the techniques of fault-tolerance and error correction required for building a large-scale practical quantum computer.
The most common family of quantum error correcting codes are stabiliser codes [10]. In these schemes, data are encoded as eigenvectors of _Pauli gates_. Members of the group of _Clifford gates_ are special in that they can be fault-tolerantly applied to encoded data. Quantum universality, however, further requires the ability to fault-tolerantly perform non-Clifford gates. The standard choices of non-Clifford gates come from a set called the _third level of the Clifford hierarchy_. These can be fault-tolerantly performed via the _gate teleportation_ protocol using only Clifford gates supplemented with ancillary _magic state_ resources [11].
A significant practical barrier to achieving quantum universality via the supplementation of Clifford gates with magic states is the need to prepare such states for every desired application of a non-Clifford gate. The original gate teleportation protocol implemented \(n\)-qubit third-level gates using magic states of \(2n\) qubits. The need to reduce the burden of this substantial resource overhead cost led to the study of more efficient gate teleportation protocols.
Third-level qubit gates that are diagonal in the standard basis can be implemented using magic states of only \(n\) qubits [26]. This more efficient gate teleportation protocol was generalised to allow performing 'nearly diagonal' _semi-Clifford gates_, i.e. those gates \(G\) such that \(G=C_{1}DC_{2}\) for \(C_{1},C_{2}\) being Clifford gates and \(D\) a diagonal gate of the Clifford hierarchy [25]. Research into the Clifford hierarchy and semi-Clifford gates has remained active from their discovery twenty-five years ago to the present [1, 7, 8, 11, 18, 19, 25, 26].
Zeng-Chen-Chuang [25] proved that all Clifford hierarchy gates of one or two qubits are semi-Clifford; as they restricted their results to the single case of \(d=2\), they were able to employ a proof involving exhaustive computations. Beigi-Shor [3] showed that, other than the case of three-qubit third-level gates, this no longer holds in the setting of more than two qubits.
More recent work of the second author [8] initiated the study of qudit semi-Clifford gates via a unifying perspective based on the finite-dimensional Stone-von Neumann theorem. The efficient gate teleportation protocols for qubit semi-Clifford gates were generalised to the qudit case, ensuring that the notion of semi-Clifford gate is still of interest in the qudit setting. In this work, it was proved that all third-level gates of one qudit or two qutrits are semi-Clifford. In the case of two qutrits, the proof relied on exhaustive computations.
Below, we generalise the existing results for two qubits or qutrits to the case of arbitrary odd prime dimension, thus establishing that all third-level gates of up to two qudits are semi-Clifford.
**Theorem 1** (Main theorem).: _For any odd prime dimension \(d\in\mathbb{N}\), every two-qudit third-level gate \(G\in\mathcal{C}_{3}^{2}\) is semi-Clifford._
### Mathematical methods
In order to establish our result for infinitely many dimensions simultaneously, our mathematical arguments are necessarily highly abstract. We anticipate that our methods (and their theoretical justification) are applicable more widely within quantum information and beyond.
We first transform our original question into one about two systems of polynomial equations over the finite field \(\mathbb{Z}_{d}\). That is, we show that it is sufficient to prove our theorem for a subset of two-qudit third-level gates that can be described by solutions to a family \(\mathcal{F}_{1}\) of polynomials over \(\mathbb{Z}_{d}\). Next, we show that those solutions that describe semi-Clifford third-level gates must satisfy an additional family \(\mathcal{F}_{2}\). It is therefore sufficient to establish that the radical ideals of the generated ideals \((\mathcal{F}_{1}),(\mathcal{F}_{1}\cup\mathcal{F}_{2})\subseteq\mathbb{Z}_{d} [x_{1},...x_{n}]\) coincide.
In theory, this can be verified computationally using Gröbner bases. In practice, the polynomial systems \(\mathcal{F}_{1},\mathcal{F}_{2}\) may be too complex to practically compute. This is the case in our problem and we supply the necessary methods for simplifying them. The analysis of our simplified equations naturally benefits from the application of geometric intuition.
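For small examples the Gröbner-basis comparison is immediate in a computer-algebra system; the sketch below assumes a SageMath session (Python syntax with Sage builtins), and the polynomial lists \(F_{1}\) and \(F_{2}\) are placeholders rather than the actual families constructed in this paper.

```python
# Minimal SageMath sketch; F1 and F2 are placeholder polynomial families.
d = 5
R = PolynomialRing(GF(d), 'x,y,z')
x, y, z = R.gens()
F1 = [x**2 - y*z, y**3 - z]        # placeholder "third-level" conditions
F2 = [x*y - z**2]                  # placeholder extra "semi-Clifford" conditions
I = R.ideal(F1)
J = R.ideal(F1 + F2)
# by the Nullstellensatz, the two algebraic sets coincide iff the radicals agree
print(I.radical() == J.radical())
```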
Our result then follows from showing that two geometric spaces (specifically, _algebraic sets_), one arising from the set of third-level gates and the other from its subset of semi-Clifford gates, share the same set of (rational) points. Using _schemes_, an even more abstract notion of geometric space, we are to able establish our result for all prime dimensions with only one series of computations.
## 2 Preliminaries
To render this article accessible to both mathematicians and quantum information theorists, we provide background information on quantum computing and algebraic geometry.
### Quantum computation
Let \(d\in\mathbb{N}\) be an odd prime and \(n\in\mathbb{N}\). Denote by \(\omega=e^{i2\pi/d}\) a primitive \(d\)-th root of unity and by \([n]\) the set \(\{1,...,n\}.\) The _computational basis_ of \((\mathbb{C}^{d})^{\otimes n}\) is the standard basis whose members are denoted by the kets \(\left|\vec{z}\right\rangle\) for \(\vec{z}\in\mathbb{Z}_{d}^{n}\) where \(\mathbb{Z}_{d}\) is the ring of integers modulo \(d\). The unitary group of \((\mathbb{C}^{d})^{\otimes n}\) is denoted \(\mathcal{U}(d^{n})\). When we refer to a unitary \(U\)_up to phase_ we mean its equivalence class \([U]\) under the equivalence relation \(U\sim V\iff U=e^{i\theta}V\) for some \(\theta\in\mathbb{R}\).
#### 2.1.1 Pauli gates
The Pauli gates form the basis of the stabiliser codes of quantum error correction in that quantum data are encoded as simultaneous eigenvectors of commuting sets of Pauli gates.
The _basic Pauli gates_ for a single qudit are \(Z\in\mathcal{U}(d)\) and \(X\in\mathcal{U}(d)\): \(Z\left|z\right\rangle=\omega^{z}\left|z\right\rangle\) and \(X\left|z\right\rangle=\left|z+1\right\rangle\) where the addition is taken modulo \(d\); these unitaries have order \(d\). For \(n>1\) qudits and \(i\in[n]\), define \(Z_{i}\in\mathcal{U}(d^{n})\) to be a tensor product of \(n-1\) identity matrices of size \(d\times d\) with \(Z\) in the \(i\)-th factor: \(\mathbb{I}\otimes...\otimes Z\otimes...\otimes\mathbb{I}\); \(X_{i}\) is defined similarly. The \(Z_{i},X_{i}\) are the \(n\)_-qudit basic Pauli gates_ and they satisfy the Weyl commutation relations:
\[Z_{i}X_{i}=\omega X_{i}Z_{i} \tag{1}\]
and, for \(i\neq j\), the pairs \(Z_{i},X_{j}\); \(Z_{i},Z_{j}\); and, \(X_{i},X_{j}\) commute.
For each \(\vec{p},\vec{q}\in\mathbb{Z}_{d}^{n}\) we define the following unitaries as products of the basic Pauli gates:
\[Z^{\vec{p}}=Z_{1}^{p_{1}}\cdots Z_{n}^{p_{n}}\qquad X^{\vec{q}}=X_{1}^{q_{1}} \cdots X_{n}^{q_{n}}. \tag{2}\]
**Definition 1**.: _The group of Pauli gates is the subgroup of \(\mathcal{U}(d^{n})\) generated by the basic Pauli gates:_
\[\mathcal{C}_{1}^{n}=\{\omega^{c}Z^{\vec{p}}X^{\vec{q}}\ \ |\ c\in\mathbb{Z}_{d},(\vec{p},\vec{q}) \in\mathbb{Z}_{d}^{2n}\}.\]
The set of pairs \((\vec{p},\vec{q})\in\mathbb{Z}_{d}^{2n}\) form a symplectic vector space over \(\mathbb{Z}_{d}\) with the symplectic product:
\[[(\vec{p_{1}},\vec{q_{1}}),(\vec{p_{2}},\vec{q_{2}})]=\vec{p_{1}}\cdot\vec{q_{ 2}}-\vec{p_{2}}\cdot\vec{q_{1}}. \tag{3}\]
Pauli gates obey the multiplication law:
\[Z^{\vec{p_{1}}}X^{\vec{q_{1}}}\cdot Z^{\vec{p_{2}}}X^{\vec{q_{2}}}=\omega^{[( \vec{p_{1}},\vec{q_{1}}),(\vec{p_{2}},\vec{q_{2}})]}Z^{\vec{p_{2}}}X^{\vec{q_ {2}}}\cdot Z^{\vec{p_{1}}}X^{\vec{q_{1}}}. \tag{4}\]
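Although no numerics are needed for the proofs, the relations (1) and (4) are easy to verify directly; in the small sketch below the dimension \(d=5\) and the exponents are arbitrary choices.

```python
import numpy as np

d = 5                                        # arbitrary odd prime dimension
omega = np.exp(2j * np.pi / d)

Z = np.diag(omega ** np.arange(d))           # Z|z> = omega^z |z>
X = np.roll(np.eye(d), 1, axis=0)            # X|z> = |z+1 mod d>

# order d and the Weyl commutation relation (1)
assert np.allclose(np.linalg.matrix_power(Z, d), np.eye(d))
assert np.allclose(np.linalg.matrix_power(X, d), np.eye(d))
assert np.allclose(Z @ X, omega * X @ Z)

def pauli(p, q):
    return np.linalg.matrix_power(Z, p) @ np.linalg.matrix_power(X, q)

# multiplication law (4) with the symplectic product (3), single-qudit case
p1, q1, p2, q2 = 2, 3, 1, 4
lhs = pauli(p1, q1) @ pauli(p2, q2)
rhs = omega ** ((p1 * q2 - p2 * q1) % d) * pauli(p2, q2) @ pauli(p1, q1)
assert np.allclose(lhs, rhs)
```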
#### 2.1.2 Conjugate tuples
If we conjugate \(Z,X\in\mathcal{U}(d)\) by any unitary gate \(G\), the gates \(G_{Z}=GZG^{*}\) and \(G_{X}=GXG^{*}\) also satisfy the Weyl commutation relations: \({G_{Z}}^{d}={G_{X}}^{d}=\mathbb{I}\) and \(G_{Z}G_{X}=\omega G_{X}G_{Z}\). Remarkably, if \((U,V)\) is any pair of unitaries satisfying these relations, there exists a gate \(G\), unique up to phase, such that \(U=G_{Z}\) and \(V=G_{X}\).
**Definition 2**.: _An ordered pair of unitaries \((U,V)\in\mathcal{U}(d^{n})\times\mathcal{U}(d^{n})\) is a conjugate pair if_
1. \(U^{d}=\mathbb{I}\) _and_ \(V^{d}=\mathbb{I}\)_,_
2. \(UV=\omega VU\)_._
There is a bijective correspondence between one-qudit gates \(G\in\mathcal{U}(d)\) (up to phase) and conjugate pairs \((U,V)\). This is a consequence of the Stone-von Neumann theorem [8, Lemma 3.4]. This notion naturally extends to gates of \(n\) qudits.
**Definition 3**.: _A conjugate tuple \(\{(U_{i},V_{i})\}_{i\in[n]}\) is an ordered set of \(n\) conjugate pairs of \(n\)-qudit gates such that any two elements of distinct pairs commute._
There is a bijective correspondence between \(n\)-qudit gates \(G\in\mathcal{U}(d^{n})\) (up to phase) and conjugate tuples [8, Lemma 3.8].
**Definition 4**.: _The conjugate tuple of an \(n\)-qudit gate \(G\in\mathcal{U}(d^{n})\) is \(\{(GZ_{i}G^{*},GX_{i}G^{*})\}_{i\in[n]}\)._
Below, we study gates via their conjugate tuples.
#### 2.1.3 Clifford gates
Clifford gates are those which can be fault-tolerantly performed directly on data encoded using a stabiliser code.
**Definition 5**.: _The group of Clifford gates is the normaliser of the group of Pauli gates as a subgroup of \(\mathcal{U}(d^{n})\):_
\[\mathcal{C}_{2}^{n}=\{C\in\mathcal{U}(d^{n})\mid C\mathcal{C}_{1}^{n}C^{*} \subseteq\mathcal{C}_{1}^{n}\}. \tag{5}\]
**Lemma 1**.: _For any Clifford gate \(C\in\mathcal{C}_{2}^{n}\) and Pauli gate \(P_{1}\in\mathcal{C}_{1}^{n}\), there exists a Pauli gate \(P_{2}\in\mathcal{C}_{1}^{n}\) such that_
\[CP_{1}=P_{2}C. \tag{6}\]
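As a concrete example of Definition 5 and Lemma 1 (illustrative only, not used later), the discrete Fourier gate \(F_{jk}=\omega^{jk}/\sqrt{d}\) is a Clifford gate: it maps \(X\mapsto Z\) and \(Z\mapsto X^{-1}\) under conjugation, which the sketch below checks numerically for \(d=5\).

```python
import numpy as np

d = 5
omega = np.exp(2j * np.pi / d)
Z = np.diag(omega ** np.arange(d))
X = np.roll(np.eye(d), 1, axis=0)

# qudit Fourier gate: F[j, k] = omega^{jk} / sqrt(d)
F = np.array([[omega ** (j * k) for k in range(d)] for j in range(d)]) / np.sqrt(d)
assert np.allclose(F @ F.conj().T, np.eye(d))                              # unitary
assert np.allclose(F @ X @ F.conj().T, Z)                                  # F X F* = Z
assert np.allclose(F @ Z @ F.conj().T, np.linalg.matrix_power(X, d - 1))   # F Z F* = X^{-1}
```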
**Definition 6**.: _A gate \(G_{1}\in\mathcal{U}(d^{n})\) is Clifford-conjugate to \(G_{2}\in\mathcal{U}(d^{n})\) if there exists a Clifford gate \(C\in\mathcal{C}_{2}^{n}\) such that \(G_{1}=CG_{2}C^{*}\)._
The Clifford gates, up to phase, are in correspondence with affine symplectic transformations of \(\mathbb{Z}_{d}^{2n}\). First, we define the group of Clifford gates up to phase: \([\mathcal{C}_{2}^{n}]=\mathcal{C}_{2}^{n}/\mathbb{T}\). The group \(Sp(n,\mathbb{Z}_{d})\ltimes\mathbb{Z}_{d}^{2n}\) of affine symplectic transformations of \(\mathbb{Z}_{d}^{2n}\) consists of pairs of \(2n\times 2n\) symplectic matrices over \(\mathbb{Z}_{d}\) and translations in \(\mathbb{Z}_{d}^{2n}\), with the composition law:
\[(S,v)\circ(T,w)=(ST,Sw+v). \tag{7}\]
There is a (_Weil_ or _metaplectic_) projective representation \(\rho:Sp(n,\mathbb{Z}_{d})\ltimes\mathbb{Z}_{d}^{2n}\rightarrow[\mathcal{C}_{2} ^{n}]\) that is an isomorphism between the groups of affine symplectic transformations and Clifford gates up to phase [12, 17].
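A small sketch of the composition law (7), together with a test of the symplectic condition with respect to the form (3) (with vectors ordered as \((\vec{p},\vec{q})\)), is given below; the array layout and function names are ours.

```python
import numpy as np

def compose(Sv, Tw, d):
    """Composition law (7): (S, v) o (T, w) = (ST, Sw + v) over Z_d."""
    S, v = Sv
    T, w = Tw
    return ((S @ T) % d, (S @ w + v) % d)

def is_symplectic(M, n, d):
    """Check M^T J M = J (mod d) for the standard form (3) on Z_d^{2n}."""
    J = np.block([[np.zeros((n, n), dtype=int), np.eye(n, dtype=int)],
                  [-np.eye(n, dtype=int), np.zeros((n, n), dtype=int)]])
    return np.array_equal((M.T @ J @ M) % d, J % d)
```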
While \(\mathcal{C}_{2}^{n}\) is maximal as a subgroup of \(\mathcal{U}(d^{n})\), it is not dense in \(\mathcal{U}(d^{n})\) and thus, approximately performing an arbitrary computation requires the ability to fault-tolerantly perform a non-Clifford gate.
#### 2.1.4 Third-level gates
The standard choices for non-Clifford gates to supplement the group of Clifford gates in order to achieve universal quantum computation come from the _Clifford hierarchy_. This is a recursively-defined and nested sequence of subsets of \(\mathcal{U}(d^{n})\). The groups of Pauli and Clifford gates form the first and second _levels_ respectively: \(\mathcal{C}_{1}^{n}\subset\mathcal{C}_{2}^{n}\subset\mathcal{C}_{3}^{n} \subset....\).
Non-Clifford gates that are in the third level or higher can be implemented fault-tolerantly on encoded data indirectly via a gate teleportation protocol. Such a protocol takes as input an arbitrary data state and a resource magic state and as output produces, using only Clifford gates, the data state with the desired gate applied to it. We are concerned in this work only with third-level gates as these can be implemented with the simplest gate teleportation protocols and are, as such, the standard choices in practical quantum computational schemes.
The set of third-level gates is defined very similarly to how the group of Clifford gates is defined. It is the set of gates that conjugate any Pauli gate to yield a Clifford gate. The simplest qudit example is the \(T\) gate [13] defined by Howard-Vala.
**Definition 7**.: _The set of third-level gates is the subset of \(\mathcal{U}(d^{n})\):_
\[\mathcal{C}_{3}^{n}=\{G\in\mathcal{U}(d^{n})\mid G\mathcal{C}_{1}^{n}G^{*}\subseteq \mathcal{C}_{2}^{n}\}. \tag{8}\]
Note that \(\mathcal{C}_{3}^{n}\) is closed under multiplication on the left and right by elements of \(\mathcal{C}_{2}^{n}\).
From the above definition, we see that the conjugate tuple of a third-level gate consists of Clifford gates. Conversely, under the correspondence between gates (up to phase) and conjugate tuples, any conjugate tuple of Clifford gates yields a third-level gate. This was first observed in the qubit case by Beigi-Shor [3] and extended to higher dimensions and levels of the Clifford hierarchy in [8].
**Theorem 2** ([8, Theorem 3.12]).: _Gates of the third-level of the Clifford hierarchy, up to phase, are in bijective correspondence with conjugate tuples of Clifford gates._
Of particular interest is the subset of the Clifford hierarchy gates consisting of those whose matrices in the computational basis are diagonal.
**Definition 8**.: _The group of diagonal \(k\)-th-level gates is:_
\[\mathcal{D}_{k}^{n}=\{G\in\mathcal{C}_{k}^{n}\mid\text{G is a diagonal gate}\}. \tag{9}\]
#### 2.1.5 Semi-Clifford gates
Zhou-Leung-Chuang [26] introduced a simplified gate teleportation protocol, based on Bennett-Gottesman's one-bit teleportation, capable of fault-tolerantly implementing certain qubit Clifford hierarchy gates using half the ancillary resources required in the original Gottesman-Chuang protocol. This class of gates includes the diagonal Clifford hierarchy gates. Zeng-Chen-Chuang [25] introduced the notion of semi-Clifford gates which are 'nearly diagonal' in the sense of being within Clifford corrections of diagonal gates:
**Definition 9**.: _A third-level gate \(G\in\mathcal{C}_{3}^{n}\) is semi-Clifford if \(G=C_{1}DC_{2}\) where \(C_{1},C_{2}\in\mathcal{C}_{2}^{n}\) are Clifford gates and \(D\in\mathcal{D}_{3}^{n}\) is a diagonal third-level gate._
A gate teleportation protocol for implementing the qudit semi-Clifford gate \(G=C_{1}DC_{2}\) using the magic state \(\left|M\right\rangle=D\left|+\right\rangle\) was introduced in [8, §5(a)]. It ensures that the notion of semi-Clifford is still relevant in the qudit setting.
The following two lemmas are a direct consequence of Definition 9.
**Lemma 2**.: _If \(G\in\mathcal{C}_{3}^{n}\) is semi-Clifford then, for any Clifford \(C\in\mathcal{C}_{2}^{n}\), \(GC\) and \(CG\) are also semi-Clifford._
**Lemma 3**.: _If \(G_{1}\in\mathcal{U}(d^{n})\) is Clifford-conjugate to \(G_{2}\in\mathcal{U}(d^{n})\), then \(G_{1}\) is semi-Clifford if and only if \(G_{2}\) is._
In [8], an equivalent characterisation is given for the property of a gate being semi-Clifford.
**Definition 10**.: _A Lagrangian semibasis of a symplectic vector space of dimension \(2n\) is a linearly independent set of \(n\) vectors \(\{v_{1},...,v_{n}\}\) satisfying \([v_{i},v_{j}]=0\) for all \(i,j\in[n]\)._
Motivating the definition of a Lagrangian semibasis is the fact that there exists a Clifford gate \(C\in\mathcal{C}_{2}^{n}\) such that \(CZ_{i}C^{*}=\omega^{c_{i}}Z^{\vec{p}_{i}}X^{\vec{q}_{i}}\) for \(i\in[n]\), \(c_{i}\in\mathbb{Z}_{d}\), \(\vec{p}_{i},\vec{q}_{i}\in\mathbb{Z}_{d}^{n}\) if and only if \(\{(\vec{p}_{i},\vec{q}_{i})\}_{i\in[n]}\) is a Lagrangian semibasis.
**Theorem 3** ([8, Theorem 5.4]).: _Suppose \(G\in\mathcal{C}_{k}^{n}\) and denote its conjugate tuple by \((U_{i},V_{i})_{i\in[n]}\). \(G\) is semi-Clifford if and only if there exists a Lagrangian semibasis \(\{(\vec{p}_{i},\vec{q}_{i})\}_{i\in[n]}\subseteq\mathbb{Z}_{d}^{2n}\) such that, for each \(i\in[n]\), \(U^{\vec{p}_{i}}V^{\vec{q}_{i}}\) is a Pauli gate._
#### 2.1.6 Quadratic and almost diagonal Clifford gates
Here we define two special classes of Clifford gates that will arise as members of the conjugate tuples of third-level gates below.
For any \(n\times n\) symmetric matrix \(\Phi\) over \(\mathbb{Z}_{d}\) the \(2n\times 2n\) block matrix
\[S=\begin{pmatrix}\mathbb{I}&0\\ \Phi&\mathbb{I}\end{pmatrix} \tag{10}\]
is symplectic with respect to the symplectic pairing of Equation 3 above. Under the representation \(\rho\) of [17], the image of \((S,0)\in Sp(n,\mathbb{Z}_{d})\ltimes\mathbb{Z}_{d}^{2n}\) is, up to phase, the diagonal Clifford gate \(D_{\Phi}\in\mathcal{D}_{2}^{n}\) defined by \(D_{\Phi}\ket{\vec{z}}=\omega^{\vec{z}^{t}\cdot\Phi\vec{z}}\ket{\vec{z}}\).
**Definition 11**.: _A quadratic Clifford gate \(D_{\Phi}\in\mathcal{D}_{2}^{n}\) is a diagonal gate of the form \(D_{\Phi}\ket{\vec{z}}=\omega^{\vec{z}^{t}\cdot\Phi\vec{z}}\ket{\vec{z}}\) where \(\Phi\) is a \(n\times n\) symmetric matrix over \(\mathbb{Z}_{d}\)._
**Definition 12**.: _A Clifford gate \(C\in\mathcal{C}_{2}^{n}\) is almost diagonal if it is of the form \(D_{\Phi}P\) where \(D_{\Phi}\in\mathcal{D}_{2}^{n}\) is a quadratic Clifford gate and \(P\in\mathcal{C}_{1}^{n}\) is a Pauli gate._
The almost diagonal Clifford gates form a group; we will require only the following lemma.
**Lemma 4**.: _If \(C_{1}=D_{\Phi_{1}}P_{1}\in\mathcal{C}_{2}^{n}\) and \(C_{2}=D_{\Phi_{2}}P_{2}\in\mathcal{C}_{2}^{n}\) are two almost diagonal Clifford gates, then \(C_{1}C_{2}\) is the almost diagonal Clifford gate \(D_{\Phi_{1}+\Phi_{2}}P\) for some Pauli gate \(P\in\mathcal{C}_{1}^{n}\)._
Proof.: \[C_{1}C_{2} =D_{\Phi_{1}}P_{1}\cdot D_{\Phi_{2}}P_{2}\] (11) \[=D_{\Phi_{1}}D_{\Phi_{2}}P_{3}P_{2}\] (12) \[=D_{\Phi_{1}+\Phi_{2}}P\] (13)
The second equality follows from Lemma 1. The last equality follows by choosing \(P=P_{3}P_{2}\) and because
\[D_{\Phi_{1}}D_{\Phi_{2}}\ket{\vec{z}}=\omega^{\vec{z}^{t}\cdot\Phi_{1}\vec{z} }\omega^{\vec{z}^{t}\cdot\Phi_{2}\vec{z}}\ket{\vec{z}}=\omega^{\vec{z}^{t} \cdot\Phi_{1}\vec{z}+\vec{z}^{t}\cdot\Phi_{2}\vec{z}}\ket{\vec{z}}=\omega^{ \vec{z}^{t}(\Phi_{1}+\Phi_{2})\vec{z}}\ket{\vec{z}}=D_{\Phi_{1}+\Phi_{2}}\ket{ \vec{z}}. \tag{14}\]
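As a quick numerical sanity check added here (assuming the standard single-qudit Pauli matrices \(Z\left|j\right\rangle=\omega^{j}\left|j\right\rangle\) and \(X\left|j\right\rangle=\left|j+1\bmod d\right\rangle\)), one can verify for a single qutrit that a quadratic gate \(D_{\Phi}\) conjugates Pauli gates to Pauli gates up to a power-of-\(\omega\) phase, and that \(D_{\Phi_{1}}D_{\Phi_{2}}=D_{\Phi_{1}+\Phi_{2}}\) as in Equation (14):

```python
import numpy as np
from itertools import product

d = 3                                   # single qutrit, so Phi is a 1x1 matrix (phi)
w = np.exp(2j * np.pi / d)

# assumed standard qudit Paulis: Z|j> = w^j |j>,  X|j> = |j+1 mod d>
Z = np.diag([w**j for j in range(d)])
X = np.roll(np.eye(d), 1, axis=0)

def D(phi):
    # quadratic gate D_Phi |z> = w^(phi z^2) |z>
    return np.diag([w**(phi * z * z) for z in range(d)])

def is_pauli_up_to_phase(U):
    # brute-force comparison against w^c Z^a X^b for all c, a, b in Z_d
    return any(np.allclose(U, (w**c) * np.linalg.matrix_power(Z, a)
                                      @ np.linalg.matrix_power(X, b))
               for c, a, b in product(range(d), repeat=3))

phi1, phi2 = 1, 2
for P in (Z, X):                        # D_Phi is Clifford: conjugated Paulis stay Pauli
    assert is_pauli_up_to_phase(D(phi1) @ P @ D(phi1).conj().T)
assert np.allclose(D(phi1) @ D(phi2), D((phi1 + phi2) % d))   # Equation (14)
```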
#### 2.1.7 Characterisation of simplified third-level gates via a polynomial system
In this subsection, we define a subset of the third-level gates that are easily described by variables in \(\mathbb{Z}_{d}\).
**Definition 13**.: _A third-level gate \(G\in\mathcal{C}_{3}^{n}\) is simplified if its conjugate tuple contains only almost diagonal Clifford gates._
By using the symplectic formalism for Clifford gates and applying a classification of the maximal abelian subgroups of the symplectic group over finite fields due to Barry [2], we can 'simplify' any third-level gate.
**Theorem 4** ([8, Lemma 5.7]).: _Every third-level gate \(G\in\mathcal{C}_{3}^{n}\) is Clifford-conjugate to a simplified one._
There is a surjection from simplified two-qudit third-level gates \(G\) to the set of solutions to a certain system of 18 polynomial equations over \(\mathbb{Z}_{d}\) in 28 variables. As we shall see, such a third-level gate is semi-Clifford if and only if its image under this surjection satisfies additional polynomial constraints.
Let \(G\in\mathcal{C}_{3}^{2}\) be a simplified two-qudit third-level gate and denote its conjugate tuple by \(\{(U_{1},V_{1}),(U_{2},V_{2})\}\), i.e.
\[U_{1}=GZ_{1}G^{*},\quad V_{1}=GX_{1}G^{*},\quad U_{2}=GZ_{2}G^{*},\quad V_{2}= GX_{2}G^{*}. \tag{15}\]
Since \(G\) is simplified, \(U_{1}=D_{\Phi_{1}}P_{1}\) is an almost diagonal Clifford gate. Expressing \(P_{1}\) as \(\omega^{c_{1}}Z^{\vec{p}_{1}}X^{\vec{q}_{1}}\), we see that \(U_{1}\) is characterised up to phase by \(\Phi_{1}\) (a \(2\times 2\) symmetric matrix over \(\mathbb{Z}_{d}\)), \(\vec{p}_{1}\in\mathbb{Z}_{d}^{2}\), and \(\vec{q}_{1}\in\mathbb{Z}_{d}^{2}\): that is, by 7 elements of \(\mathbb{Z}_{d}\).
We similarly define \((\Phi_{i},\vec{p}_{i},\vec{q}_{i})\) for \(i\in\{2,3,4\}\) to characterise \(V_{1},U_{2},V_{2}\) up to phase respectively. We can give necessary and sufficient conditions for four septuples of \(\mathbb{Z}_{d}\) to arise in this way from a conjugate tuple of a simplified third-level gate. The following theorem follows directly from the definition of conjugate tuple and [8, Lemma 5.8].
**Theorem 5**.: _The ordered pair of ordered pairs of almost diagonal Clifford gates_
\[((\omega^{c_{1}}D_{\Phi_{1}}Z^{\vec{p}_{1}}X^{\vec{q}_{1}},\,\omega^{c_{2}}D_{ \Phi_{2}}Z^{\vec{p}_{2}}X^{\vec{q}_{2}}),\,\,(\omega^{c_{3}}D_{\Phi_{3}}Z^{\vec{p }_{3}}X^{\vec{q}_{3}},\,\omega^{c_{4}}D_{\Phi_{4}}Z^{\vec{p}_{4}}X^{\vec{q}_{4}})) \tag{16}\]
_is the conjugate tuple of a simplified two-qudit third-level gate if and only if its variables satisfy the following set of polynomial equations for \(1\leq i<j\leq 4\):_
\[\Phi_{i}\vec{q}_{j}=\Phi_{j}\vec{q}_{i} \tag{17}\] \[\vec{q}_{i}^{\,t}\Phi_{j}\vec{q}_{i}-\vec{q}_{j}^{\,t}\Phi_{i}\vec{q}_{j}+\vec{p}_{i}\cdot\vec{q}_{j}-\vec{p}_{j}\cdot\vec{q}_{i}=c_{ij} \tag{18}\]
_where_
\[c_{ij}=\begin{cases}1&\text{if }(i,j)\in\{(1,2),(3,4)\}\\ 0&\text{otherwise}.\end{cases} \tag{19}\]
_Example 26_.: Let \(f_{1},\ldots,f_{m}\in R\). Then
\[I=Rf_{1}+\ldots+Rf_{m} \tag{27}\]
is an ideal of \(R\), called the ideal generated by \(f_{1},\ldots,f_{m}\) and denoted by \((f_{1},\ldots,f_{m})\).
We first describe the dictionary of classical algebraic geometry in order to orient the reader. It is not necessary to replace \(k\) by an algebraic closure of \(k\) in order for the theory to work, but it is technically easier to understand and provides intuition for the more general theory of schemes that we will use to describe our computations.
We define _affine \(n\)-space_ to be the set \(\mathbb{A}^{n}=K^{n}\) where \(K\) is a choice of algebraic closure of \(k\). An algebraic set in \(\mathbb{A}^{n}\) is the _vanishing set_ of an ideal of \(K[x_{1},\ldots,x_{n}]\), i.e. a subset of the form:
\[V(I)=\left\{(x_{1},\ldots,x_{n})\in\mathbb{A}^{n}\mid\forall f\in I\;f(x_{1}, \ldots,x_{n})=0\right\}. \tag{28}\]
For \(B\) being any subset of \(R\), we define
\[V(B)=V(\langle B\rangle) \tag{29}\]
where \(\langle B\rangle\) is the intersection of all ideals of \(R\) containing \(B\), called the ideal generated by \(B\).
_Remark 1_.: An algebraic set in \(\mathbb{A}^{n}\) is just the solution set of a subset of polynomials in \(K[x_{1},\ldots,x_{n}]\).
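For intuition, here is a small toy illustration added to this text (not from the original paper): the \(\mathbb{Z}_{d}\)-rational solutions of a polynomial system can be enumerated by brute force, and finite solution sets of exactly this kind are what the arguments below manipulate.

```python
from itertools import product

d = 5  # small prime field Z_d, chosen only for illustration

# toy system: x^2 + y^2 - 1 = 0 and x*y = 0 in Z_d[x, y]
polys = [lambda x, y: (x * x + y * y - 1) % d,
         lambda x, y: (x * y) % d]

V = [(x, y) for x, y in product(range(d), repeat=2)
     if all(p(x, y) == 0 for p in polys)]
print(V)   # the Z_d-points of the system: [(0, 1), (0, 4), (1, 0), (4, 0)]
```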
**Theorem 7**.: _(Hilbert basis) Let \(I\) be an ideal of \(R\). Then \(I=(f_{1},\ldots,f_{m})\) for a finite number of \(f_{1},\ldots,f_{m}\in R\)._
The _radical_ of an ideal \(I\) in \(R\) is the ideal in \(R\) given by
\[\sqrt{I}=\left\{f\in R\mid\exists k\in\mathbb{N}\;f^{k}\in I\right\}, \tag{30}\]
and we say an ideal \(I\) in \(R\) is a radical ideal if \(\sqrt{I}=I\). The radical of an ideal \(I\) in \(R\) is the smallest radical ideal containing \(I\).
An ideal \(I\) in \(R\) is _prime_ if \(fg\in I\) implies \(f\in I\) or \(g\in I\). A prime ideal is automatically a radical ideal from these definitions.
**Theorem 8**.: _(Nullstellensatz) There is a bijective correspondence between algebraic sets in \(\mathbb{A}^{n}\) and radical ideals in \(K[x_{1},\ldots,x_{n}]\)._
Under this correspondence, a radical ideal \(I\subset K[x_{1},\ldots,x_{n}]\) is mapped to the algebraic set \(V(I)\subset\mathbb{A}^{n}\) and an algebraic set \(S\subset\mathbb{A}^{n}\) is mapped to the radical ideal:
\[I(S)=\{f\in K[x_{1},\ldots,x_{n}]\mid\forall(x_{1},...,x_{n})\in S\;f(x_{1},...,x_{n})=0\}. \tag{31}\]
The _Zariski topology_ on \(\mathbb{A}^{n}\) is given by taking the algebraic sets to be the collection of closed sets. We say an algebraic set \(S\) is _irreducible_ if \(S\) is not a union of two proper closed subsets of \(\mathbb{A}^{n}\). In terms of the Nullstellensatz correspondence, irreducible algebraic sets correspond to prime ideals of \(K[x_{1},\ldots,x_{n}]\): \(I(S)\) is prime whenever \(S\) is an irreducible algebraic set. Irreducible algebraic sets in \(\mathbb{A}^{n}\) are called (affine) varieties.
Every algebraic set \(S\) can be uniquely (up to order) decomposed into a finite number of irreducible algebraic sets \(S_{1},\ldots,S_{m}\), called the _irreducible components_ of \(S\), in the sense that
\[S=S_{1}\cup\ldots\cup S_{m}, \tag{32}\]
where
\[S_{i}\not\subseteq\cup_{j\neq i}S_{j}. \tag{33}\]
By Theorem 8, the decomposition in (32) translates into the so-called _radical decomposition_ of the ideal \(I\)
\[I=I_{1}\cap\ldots\cap I_{m}, \tag{34}\]
where \(I\) is the radical ideal of \(K[x_{1},\ldots,x_{n}]\) corresponding to \(S\) and \(I_{1},\ldots,I_{m}\) are the prime ideals in \(K[x_{1},\ldots,x_{n}]\) corresponding to the irreducible components \(S_{1},\ldots,S_{m}\), respectively, i.e. \(I=I(S)\) and \(I_{i}=I(S_{i})\).
We now describe a more arithmetic version of this theory which is intermediate between classical algebraic geometry and the general theory of schemes and often used in explicit arithmetic geometry.
An algebraic set \(S\subseteq\mathbb{A}^{n}\) is _defined over \(k\)_ if its corresponding ideal \(I(S)\) in \(K[x_{1},\ldots,x_{n}]\) is generated by polynomials in \(k[x_{1},\ldots,x_{n}]\). If we define
\[I(S/k)=I(S)\cap R \tag{35}\]
then an algebraic set \(S\subseteq\mathbb{A}^{n}\) is defined over \(k\) if and only if
\[I(S)=I(S/k)K[x_{1},\ldots,x_{n}] \tag{36}\]
(see [22]). The set of _\(k\)-rational points_ of an algebraic set \(S\) defined over \(k\) is
\[S(k)=\left\{(x_{1},\ldots,x_{n})\in k^{n}\mid\forall f\in I(S/k)\;f(x_{1}, \ldots,x_{n})=0\right\}. \tag{37}\]
If \(S\subseteq\mathbb{A}^{n}\) is defined over \(k\) and \(I\subseteq k[x_{1},\ldots,x_{n}]\) is its corresponding ideal, there is a version of (34) in \(R\) which asserts that
\[I=I^{\prime}_{1}\cap\ldots\cap I^{\prime}_{m^{\prime}}, \tag{38}\]
where the \(I^{\prime}_{i}\) are prime ideals in \(R\). Geometrically, this corresponds to a decomposition
\[S=S^{\prime}_{1}\cup\ldots\cup S^{\prime}_{m^{\prime}} \tag{39}\]
where the \(S^{\prime}_{i}\) are algebraic sets defined over \(k\) which are 'irreducible' over \(k\) (in a sense made precise in the next section). Furthermore we have that
\[S(k)=S^{\prime}_{1}(k)\cup\ldots\cup S^{\prime}_{m^{\prime}}(k). \tag{40}\]
We will not prove (38) \(\implies\) (40) as it is a special case of a more general decomposition proven in the next section. Also note that, in this formulation, commutative algebra is used to prove the corresponding geometric statement about rational points rather than the other direction in (32) \(\implies\) (34).
#### 2.2.2 The language of schemes
The notion of an algebraic set \(S\subset\mathbb{A}^{n}\) defined over \(k\) and its 'irreducibility' over \(k\) is best described using Grothendieck's theory of schemes. The theory is quite general and allows consideration of the analogues of algebraic sets over a commutative ring \(A\). While a thorough introduction to the theory of schemes is beyond the scope of this article, we review the elements required for our proof and refer the reader to [9] for a more complete treatment.
The principal construction in this theory is to associate to a commutative ring \(R\) a topological space: the set
\[\mathrm{Spec}(R)=\left\{\mathfrak{p}:\mathfrak{p}\text{ a prime ideal of }R\right\}, \tag{41}\]
under the Zariski topology. It is endowed with a sheaf of rings (coming from localisations of \(R\)) so that it becomes a locally ringed space called the _affine scheme of \(R\)_. In this construction, ring homomorphisms \(\varphi:R\to S\) induce and correspond to morphisms \(\varphi^{*}:\mathrm{Spec}(S)\to\mathrm{Spec}(R)\) by mapping a prime \(\mathfrak{q}\) of \(\mathrm{Spec}(S)\) to the prime ideal \(\varphi^{-1}(\mathfrak{q})\) of \(\mathrm{Spec}(R)\).
A closed subscheme of \(\mathrm{Spec}(R)\) is given by \(\mathrm{Spec}(R/I)\) for an ideal \(I\) of \(R\) and comes with a morphism \(\mathrm{Spec}(R/I)\to\mathrm{Spec}(R)\). This morphism is a homeomorphism of \(\mathrm{Spec}(R/I)\) with the closed subset of \(\mathrm{Spec}(R)\) given by
\[V(I)=\left\{\mathfrak{p}\in\mathrm{Spec}(R):\mathfrak{p}\supseteq I\right\}.\]
More generally, a _scheme_\(S\) is a locally ringed space possessing an open cover with each set in the cover being isomorphic to an affine scheme.
If \(R\) is an \(A\)-algebra, where \(A\) is a commutative ring, then there is a structure morphism
\[\mathrm{Spec}(R)\to\mathrm{Spec}(A), \tag{42}\]
and we say \(\mathrm{Spec}(R)\) is a _scheme over \(\mathrm{Spec}(A)\)_ (often this is abbreviated as scheme over \(A\)). For example, when \(R=k[x_{1},\ldots,x_{n}]/I\) with \(I\) an ideal of \(k[x_{1},\ldots,x_{n}]\) and \(A=k\), there is a structure morphism
\[\mathrm{Spec}(k[x_{1},\ldots,x_{n}]/I)\to\mathrm{Spec}(k), \tag{43}\]
corresponding to the inclusion
\[k\hookrightarrow k[x_{1},\ldots,x_{n}]/I, \tag{44}\]
where we note \(R\) then has the structure of a commutative \(k\)-algebra. We now define the analogue of a solution contained within \(k^{n}\): a _\(k\)-rational point_ on \(\mathrm{Spec}(R)\) (also called a _\(k\)-section_), is a morphism
\[\mathrm{Spec}(k)\to\mathrm{Spec}(R) \tag{45}\]
that upon composition with the structure morphism in (43) yields the identity morphism on \(\operatorname{Spec}(k)\). In terms of rings, it means a \(k\)-rational point of \(\operatorname{Spec}(R)\) corresponds to a \(k\)-algebra homomorphism \(R\to k\).
In the case that \(k\) is algebraically closed and \(R=k[x_{1},\ldots,x_{n}]/I\) is the \(k\)-algebra that is the quotient by an ideal \(I\) of \(k[x_{1},\ldots,x_{n}]\), the \(k\)-rational points of \(\operatorname{Spec}(R)\) are in bijection with points of the algebraic set \(V(I)\subseteq k^{n}\). More generally, if \(S\) is a scheme over a commutative ring \(A\), we denote by \(S(A)\) the set of \(A\)-sections of \(S\) defined as morphisms
\[\operatorname{Spec}(A)\to\operatorname{Spec}(R)\]
that upon composition with the structure morphism yields the identity on \(\operatorname{Spec}(A)\).
In case that \(S\) is a scheme over a commutative ring \(A\) and we have a ring homomorphism \(A\to A_{0}\) to another commutative ring \(A_{0}\), it is understood that \(S(A_{0})\) means \((S\times_{A}A_{0})(A_{0})\) where \(S\times_{A}A_{0}\) is the 'base change' of \(S\) from \(A\) to \(A_{0}\) described in the next paragraph.
If \(S=\operatorname{Spec}(R)\) is an affine scheme, then the _base change_\(S\times_{A}A_{0}\) is the scheme \(\operatorname{Spec}(R\otimes_{A}A_{0})\) over \(A_{0}\), where \(R\otimes_{A}A_{0}\) is the tensor product of \(R\) and \(A_{0}\) as \(A\)-modules. We can define the base change of a general scheme \(S\) from \(A\) to \(A_{0}\) by using an open cover of \(S\) consisting of affine schemes and patching together the base changes of these open affine schemes. This is a special case of a more general notion of fiber product of two schemes with morphisms to a base scheme [9].
#### 2.2.3 A decomposition of schemes over \(\mathbb{Z}[1/2]\)
We now prove a refinement of (38), (40).
**Theorem 9**.: _Let \(R\) be a commutative ring which is an \(A\)-algebra for a commutative ring \(A\) and suppose \(I,I_{1},\ldots,I_{m}\) are ideals in \(R\) such that_
\[I=I_{1}\cap\ldots\cap I_{m}. \tag{46}\]
_Let \(S=\operatorname{\mathit{Spec}}(R/I),S_{1}=\operatorname{\mathit{Spec}}(R/I_{1 }),\ldots,S_{m}=\operatorname{\mathit{Spec}}(R/I_{m})\) be the corresponding schemes over \(A\). Suppose we have a ring homomorphism \(A\to A_{0}\) where \(A_{0}\) is a field. Then_
\[S(A_{0})=S_{1}(A_{0})\cup\ldots\cup S_{m}(A_{0}). \tag{47}\]
Proof.: An \(A_{0}\)-rational point in \(S(A_{0})\) corresponds to a maximal ideal \(\mathfrak{m}\) of the \(A\)-algebra \(R\). If \(\mathfrak{m}\supseteq I\), then it follows that \(\mathfrak{m}\supseteq I=I_{1}\cap\ldots\cap I_{m}\supseteq I_{1}\cdots I_{m}\). Since \(\mathfrak{m}\) is a maximal ideal, it is a prime ideal, and hence we have that \(\mathfrak{m}\supseteq I_{j}\) for some \(j=1,\ldots,m\), that is, the \(A_{0}\)-rational point occurs in one of the sets \(S_{j}(A_{0})\).
An \(A_{0}\)-rational point in \(S_{j}(A_{0})\) corresponds to a maximal ideal \(\mathfrak{m}\) of the \(A\)-algebra \(R\). Since \(\mathfrak{m}\supseteq I_{j}\), we have that \(\mathfrak{m}\supseteq I=I_{1}\cap\ldots\cap I_{m}\) and hence the \(A_{0}\)-rational point lies in \(S(A_{0})\).
In the language of schemes, the above says that \(S\) is a scheme over \(A\) and decomposes into a union of closed subschemes \(S_{1},\ldots,S_{m}\) over \(A\); hence, there is a corresponding decomposition of the set of its \(A_{0}\)-rational points.
_Remark 2_.: In the above theorem, the ideals \(I,I_{1},\ldots,I_{m}\) need not be prime nor radical.
In our application of interest, \(A=\mathbb{Z}[1/2]\) and \(A_{0}=\mathbb{Z}_{d}\) for \(d\neq 2\). Theorem 9 gives a method to establish (47) for all \(d\neq 2\) with one computation which we now describe.
Suppose we have a scheme \(X\) over \(\mathbb{Z}\) given by an ideal \(I\subseteq\mathbb{Z}[x_{1},\ldots,x_{n}]\), i.e. \(X=\operatorname{Spec}(\mathbb{Z}[x_{1},\ldots,x_{n}]/I)\). We first compute a probable radical decomposition of \(I\otimes\mathbb{Q}\) over \(\mathbb{Q}\) using Magma using the built-in function ProbableRadicalDecomposition. This corresponds to
\[I\otimes\mathbb{Q}=I^{\prime}_{1}\cap\ldots\cap I^{\prime}_{m}, \tag{48}\]
for some ideals \(I^{\prime}_{1},\ldots,I^{\prime}_{m}\subseteq\mathbb{Q}[x_{1},\ldots,x_{n}]\).
_Remark 3_.: In order for the above computation in Magma to complete, we often have to use the probable version of radical decomposition. This means that the components returned may still decompose further, i.e. they may not correspond to prime ideals. However, this does not affect the arguments because of Remark 2.
Let \(I_{1},\ldots,I_{m}\) be the ideals in \(\mathbb{Z}[x_{1},\ldots,x_{n}]\) generated by generators of \(I^{\prime}_{1},\ldots,I^{\prime}_{m}\), respectively, but scaled to lie in \(\mathbb{Z}[x_{1},\ldots,x_{n}]\) by removing the greatest common factor among the coefficients. We next check that
\[I\otimes\mathbb{Z}[1/2]=(I_{1}\otimes\mathbb{Z}[1/2])\cap\ldots\cap(I_{m} \otimes\mathbb{Z}[1/2]), \tag{49}\]
using Magma and the following lemma.
**Lemma 5**.: _If \(I,I_{1},\ldots,I_{m}\subseteq\mathbb{Z}[x_{1},\ldots,x_{n}]\) are ideals then_
\[(I_{1}\cap\ldots\cap I_{m})\otimes\mathbb{Z}[1/2]=(I_{1}\otimes\mathbb{Z}[1/2]) \cap\ldots\cap(I_{m}\otimes\mathbb{Z}[1/2]).\]
Proof.: This follows from [16, Lemma 7.4] as \(\mathbb{Z}[1/2]\) is flat over \(\mathbb{Z}\).
We achieve (49) by first computing the intersection
\[I_{1}\cap\ldots\cap I_{m}, \tag{50}\]
in \(\mathbb{Z}[x_{1},\ldots,x_{n}]\) which is possible as Magma is able to compute intersections and equality of ideals (using the meet and eq operators respectively) in \(\mathbb{Z}[x_{1},\ldots,x_{n}]\). Applying Lemma 5, we obtain
\[(I_{1}\cap\ldots\cap I_{m})\otimes\mathbb{Z}[1/2]=(I_{1}\otimes\mathbb{Z}[1/2 ])\cap\ldots\cap(I_{m}\otimes\mathbb{Z}[1/2]). \tag{51}\]
We then verify using Magma that
\[(I\otimes\mathbb{Z}[1/2])=(I_{1}\otimes\mathbb{Z}[1/2])\cap\ldots\cap(I_{m} \otimes\mathbb{Z}[1/2]) \tag{52}\]
by checking that each generator of the LHS of (52) is a \(\mathbb{Z}[1/2][x_{1},\ldots,x_{n}]\)-linear combination of the generators of the RHS of (52) and vice versa. If the verification succeeds, we can apply Theorem 9 with \(A=\mathbb{Z}[1/2]\) and \(A_{0}=\mathbb{Z}_{d}\) for \(d\neq 2\) to conclude that
\[X(\mathbb{Z}_{d})=X_{1}(\mathbb{Z}_{d})\cup\ldots\cup X_{m}(\mathbb{Z}_{d}) \tag{53}\]
for all \(d\neq 2\), where \(X_{j}=\operatorname{Spec}(\mathbb{Z}[x_{1},\ldots,x_{n}]/I_{j})\) for \(j=1,\ldots,m\). Specifically in applying Theorem 9, we take
\[S =\operatorname{Spec}(\mathbb{Z}[1/2][x_{1},\ldots,x_{n}]/(I\otimes\mathbb{Z}[1/2])), \tag{54}\] \[S_{j} =\operatorname{Spec}(\mathbb{Z}[1/2][x_{1},\ldots,x_{n}]/(I_{j}\otimes\mathbb{Z}[1/2])), \tag{55}\]
for \(j=1,\ldots,m\), and \(S,S_{j}\) are the base changes of \(X,X_{j}\) from \(\mathbb{Z}\) to \(\mathbb{Z}[1/2]\), respectively.
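As an added toy illustration of how Theorem 9 is used (the example is ours, not from the paper): for the ideal \(I=(xy)=(x)\cap(y)\) in \(\mathbb{Z}[x,y]\), the decomposition of \(\mathbb{Z}_{d}\)-points in the style of Equation (53) can be checked by brute force for a small prime \(d\neq 2\).

```python
from itertools import product

d = 7  # any prime other than 2

def points(*polys):
    # Z_d-points of the scheme cut out by the given polynomials
    return {(x, y) for x, y in product(range(d), repeat=2)
            if all(p(x, y) % d == 0 for p in polys)}

X  = points(lambda x, y: x * y)   # X  = Spec(Z[x,y]/(xy))
X1 = points(lambda x, y: x)       # X1 = Spec(Z[x,y]/(x))
X2 = points(lambda x, y: y)       # X2 = Spec(Z[x,y]/(y))

assert X == X1 | X2               # X(Z_d) = X1(Z_d) ∪ X2(Z_d)
```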
## 3 Main Result
### Overview
**Theorem 1** (Main theorem).: _For any odd prime dimension \(d\in\mathbb{N}\), every two-qudit third-level gate \(G\in\mathcal{C}_{3}^{2}\) is semi-Clifford._
Let \(d\in\mathbb{N}\) be an odd prime dimension and let \(G\in\mathcal{C}_{3}^{2}\) be a two-qudit third-level gate. By Theorem 4, \(G\) is Clifford-conjugate to a simplified third-level gate and, so, by Lemma 3, we can assume without loss of generality that \(G\) is a simplified third-level gate.
Therefore, the conjugate tuple \(\{(U_{1},V_{1}),(U_{2},V_{2})\}\) of \(G\) consists of almost diagonal gates:
\[U_{1} =GZ_{1}G^{*}\,=\,\omega^{c_{1}}D_{\Phi_{1}}Z^{\vec{p}_{1}}X^{\vec{ q}_{1}}\] \[V_{1} =GX_{1}G^{*}=\omega^{c_{2}}D_{\Phi_{2}}Z^{\vec{p}_{2}}X^{\vec{q}_{2}}\] \[U_{2} =GZ_{2}G^{*}\,=\,\omega^{c_{3}}D_{\Phi_{3}}Z^{\vec{p}_{3}}X^{\vec{ q}_{3}}\] \[V_{2} =GX_{2}G^{*}=\omega^{c_{4}}D_{\Phi_{4}}Z^{\vec{p}_{4}}X^{\vec{q}_{4}}\]
where \(c_{i}\in\mathbb{Z}_{d}\), \(\vec{p}_{i},\vec{q}_{i}\in\mathbb{Z}_{d}^{2}\), and \(\Phi_{i}\) is a \(2\times 2\) symmetric matrix over \(\mathbb{Z}_{d}\):
\[\Phi_{i}=\begin{pmatrix}\Phi_{i1}&\frac{1}{2}\Phi_{i3}\\ \frac{1}{2}\Phi_{i3}&\Phi_{i2}\end{pmatrix}.\]
Recall that, by Theorem 5 for all \(1\leq i<j\leq 4\):
\[\Phi_{i}\vec{q}_{j}=\Phi_{j}\vec{q}_{i} \tag{17}\] \[\vec{q}_{i}^{\,t}\Phi_{j}\vec{q}_{i}-\vec{q}_{j}^{\,t}\Phi_{i} \vec{q}_{j}+\vec{p}_{i}\cdot\vec{q}_{j}-\vec{p}_{j}\cdot\vec{q}_{i}=c_{ij} \tag{18}\]
where
\[c_{ij}=\begin{cases}1&\text{if }(i,j)\in\{(1,2),(3,4)\}\\ 0&\text{otherwise}.\end{cases} \tag{19}\]
Ultimately, our aim is to show that \(G\) is semi-Clifford. We will do this by showing that the solution in \(\mathbb{Z}_{d}\) of Equations (17)-(18) yielded by \(G\) satisfies an additional three polynomial equations (Equations (57)) that guarantee \(G\) is semi-Clifford (Lemma 6). As the number of variables and the complexity of the systems of polynomial equations involved render the necessary calculations infeasible, we make two simplifications.
First, we show that without loss of generality, we may assume that \(GZ_{1}G^{*}\) is a Pauli gate or, equivalently, that the matrix \(\Phi_{1}=0\) (Theorem 10). Aside from eliminating three variables, making this reduction simplifies the polynomial conditions of Equations (57) that ensure \(G\) is semi-Clifford.
Our last reduction is to replace Equations (18) with a pair of polynomials \(E,F\) such that satisfying Equations (18) implies that either \(E=0\) or \(F=0\) (Lemma 7). The immediate effect of this will be to eliminate eight more variables. As a consequence of weakening some of our original constraints, we enlarge the corresponding spaces of solutions. However, we will find that, of the solutions to either of the two new polynomial systems, those that arise from simplified third-level gates also satisfy the semi-Clifford equations.
#### 3.1.1 Algebraic sets and schemes
For a fixed value of the dimension \(d\), the above strategy is easily summarised in the language of algebraic sets. We will define a set denoted by \(T(\mathbb{Z}_{d})\) (\(T\) for _third-level_) of \(\mathbb{Z}_{d}\)-rational points: the set of \(\mathbb{Z}_{d}\)-solutions of Equations (17)-(18) yielded by simplified third-level gates \(G\) with the reduction \(\Phi_{1}=0\). We will also define a subset denoted by \(S(\mathbb{Z}_{d})\) (\(S\) for _semi-Clifford_): the subset of \(T(\mathbb{Z}_{d})\) corresponding to solutions that also satisfy the semi-Clifford equations. Thus, we aim to prove that \(T(\mathbb{Z}_{d})=S(\mathbb{Z}_{d})\).
In theory, this can be checked computationally from the polynomial equations involved. Each algebraic set over \(\mathbb{Z}_{d}\) corresponds to an ideal in a polynomial ring with coefficients in \(\mathbb{Z}_{d}\). To show two algebraic sets over \(\mathbb{Z}_{d}\) have the same \(\mathbb{Z}_{d}\)-rational points it suffices to show the two corresponding ideals have the same radical ideal since the Nullstellensatz would imply that the two algebraic sets are the same. However, as alluded to above, the size and complexity of the polynomial systems involved means that computationally checking the two ideals have the same radical is not feasible.
This motivates our final reduction, which decreases the number of variables at the expense of enlarging the algebraic set \(T(\mathbb{Z}_{d})\) to a union of two algebraic sets: \(T^{E}(\mathbb{Z}_{d})\cup T^{F}(\mathbb{Z}_{d})\). Here, the superscript \(E\) (resp. \(F\)) indicates that the \(\mathbb{Z}_{d}\)-rational points satisfy \(E=0\) (resp. \(F=0\)) rather than Equations (18). Let \(S^{E}(\mathbb{Z}_{d})\) (resp. \(S^{F}(\mathbb{Z}_{d})\)) be the algebraic subset of \(T^{E}(\mathbb{Z}_{d})\) (resp. \(T^{F}(\mathbb{Z}_{d})\)) of solutions of the polynomial conditions to be semi-Clifford. It now becomes computationally feasible to show that \(T^{E}(\mathbb{Z}_{d})\) decomposes into 5 algebraic subsets \(T^{E}_{1}(\mathbb{Z}_{d}),\ldots,T^{E}_{5}(\mathbb{Z}_{d})\), that \(S^{E}(\mathbb{Z}_{d})\) decomposes into 3 algebraic subsets \(S^{E}_{1}(\mathbb{Z}_{d}),\ldots,S^{E}_{3}(\mathbb{Z}_{d})\), and that \(T^{E}_{i}(\mathbb{Z}_{d})=S^{E}_{i}(\mathbb{Z}_{d})\) for \(i=1,2,3\), but that \(T^{E}_{4}(\mathbb{Z}_{d}),T^{E}_{5}(\mathbb{Z}_{d})\) are disjoint from \(T(\mathbb{Z}_{d})\). These facts will imply that \(T(\mathbb{Z}_{d})=S(\mathbb{Z}_{d})\).
Our ultimate goal is to prove \(T(\mathbb{Z}_{d})\subset S(\mathbb{Z}_{d})\) for _all_ odd prime dimensions \(d\) (Lemma 8). Using the above method to show \(T(\mathbb{Z}_{d})=S(\mathbb{Z}_{d})\), a priori one would have to computationally decompose \(T^{E}(\mathbb{Z}_{d})\) and \(S^{E}(\mathbb{Z}_{d})\) separately for each odd prime \(d\). To enable one computation that deals with all odd primes, we introduce in Section 3.5 the corresponding schemes \(T,S,T^{E},S^{E},T^{F},S^{F}\) over \(\mathbb{Z}[1/2]\). In Section 3.6, we decompose \(T^{E}\) into schemes \(T^{E}_{1},\ldots T^{E}_{5}\) over \(\mathbb{Z}[1/2]\) and \(S^{E}\) into schemes \(S^{E}_{1},\ldots,S^{E}_{5}\) over \(\mathbb{Z}[1/2]\). Reducing these two decompositions modulo \(d\neq 2\) gives the required decompositions to conclude \(T(\mathbb{Z}_{d})=S(\mathbb{Z}_{d})\) as in the argument above.
### Reduction to an easier case
Here, we show that we can assume without loss of generality that \(GZ_{1}G^{*}\) is a Pauli gate.
**Theorem 10**.: _Let \(G\in\mathcal{C}^{2}_{3}\) be a simplified two-qudit third-level gate with conjugate tuple_
\[\{(U_{1}=D_{\Phi_{1}}P_{1},\,V_{1}=D_{\Phi_{2}}P_{2}),\,(U_{2}=D_{\Phi_{3}}P_{ 3},\,V_{2}=D_{\Phi_{4}}P_{4})\}.\]
_There exists a Clifford gate \(C\in\mathcal{C}^{2}_{2}\) such that \(GC\in\mathcal{C}^{2}_{3}\) is a simplified two-qudit third-level gate whose conjugate tuple is_
\[\{(U^{\prime}_{1}=D_{\Phi^{\prime}_{1}}P^{\prime}_{1},\,V^{\prime}_{1}=D_{ \Phi^{\prime}_{2}}P^{\prime}_{2}),\,(U^{\prime}_{2}=D_{\Phi^{\prime}_{3}}P^{ \prime}_{3},\,V^{\prime}_{2}=D_{\Phi^{\prime}_{4}}P^{\prime}_{4})\}\]
_with \(\Phi^{\prime}_{1}\) equal to the \(2\times 2\) zero matrix._
Proof.: Since the matrices \(\Phi_{i}\) are four members of the three-dimensional vector space over \(\mathbb{Z}_{d}\) of symmetric \(2\times 2\) matrices, a nontrivial linear combination \(\rho_{1}\Phi_{1}+\kappa_{1}\Phi_{2}+\rho_{2}\Phi_{3}+\kappa_{2}\Phi_{4}=0\) holds.
There exists \((\rho_{1}^{\prime},\kappa_{1}^{\prime},\rho_{2}^{\prime},\kappa_{2}^{\prime}) \in\mathbb{Z}_{d}^{4}\) such that \(\{(\rho_{1},\kappa_{1},\rho_{2},\kappa_{2}),(\rho_{1}^{\prime},\kappa_{1}^{ \prime},\rho_{2}^{\prime},\kappa_{2}^{\prime})\}\) form a Lagrangian semibasis.
To see this, note that of \(d^{4}\) elements of \(\mathbb{Z}_{d}^{4}\), \(d^{3}\) of these have vanishing symplectic product with \((\rho_{1},\kappa_{1},\rho_{2},\kappa_{2})\) while \(d^{4}-d\) of them are not a scalar multiple of \((\rho_{1},\kappa_{1},\rho_{2},\kappa_{2})\). Since \(d^{3}>d\), \((d^{4}-d)+d^{3}>d^{4}\) and there must exist \((\rho_{1}^{\prime},\kappa_{1}^{\prime},\rho_{2}^{\prime},\kappa_{2}^{\prime})\in\mathbb{Z}_{d}^{4}\) satisfying both of these conditions. Let \(Q_{1}=Z_{1}^{\rho_{1}}X_{1}^{\kappa_{1}}Z_{2}^{\rho_{2}}X_{2}^{\kappa_{2}}\) and \(Q_{2}=Z_{1}^{\rho_{1}^{\prime}}X_{1}^{\kappa_{1}^{\prime}}Z_{2}^{\rho_{2}^{\prime}}X_{2}^{\kappa_{2}^{\prime}}\).
The Pauli gates \(Q_{1},Q_{2}\) are independent (in the sense that no nontrivial product of them is a scalar multiple of the identity) since \(\{(\rho_{1},\kappa_{1},\rho_{2},\kappa_{2}),(\rho_{1}^{\prime},\kappa_{1}^{ \prime},\rho_{2}^{\prime},\kappa_{2}^{\prime})\}\) is linearly independent and they commute using Equation 4 since \([(\rho_{1},\kappa_{1},\rho_{2},\kappa_{2}),(\rho_{1}^{\prime},\kappa_{1}^{ \prime},\rho_{2}^{\prime},\kappa_{2}^{\prime})]=0\). We can thus construct a Clifford gate \(C\in\mathcal{C}_{2}^{2}\) such that \(CZ_{1}C^{*}=Q_{1}\) and \(CZ_{2}C^{*}=Q_{2}\) by applying Lemma 5.3 of [8].
Let \(P\in\mathcal{C}_{1}^{2}\) be a Pauli gate and let \(CPC^{*}\in\mathcal{C}_{1}^{2}\) be the Pauli gate \(\omega^{c}Z^{\vec{p}}X^{\vec{q}}\) for \(c\in\mathbb{Z}_{d},\vec{p},\vec{q}\in\mathbb{Z}_{d}^{2}\). Then,
\[GCPC^{*}G^{*} =\omega^{c}G(Z^{\vec{p}}X^{\vec{q}})G^{*}\] \[=\omega^{c}U_{1}^{p_{1}}U_{2}^{p_{2}}V_{1}^{q_{1}}V_{2}^{q_{2}}\] \[=\omega^{c}(D_{\Phi_{1}}P_{1})^{p_{1}}\cdots(D_{\Phi_{4}}P_{4})^{q_{2}}\] \[=D_{p_{1}\Phi_{1}+q_{1}\Phi_{2}+p_{2}\Phi_{3}+q_{2}\Phi_{4}}S\]
for some Pauli gate \(S\in\mathcal{C}_{1}^{2}\) by Lemma 4; the result is thus an almost diagonal Clifford gate. By taking \(P\) to be each of the basic Pauli gates, we see that \(GC\in\mathcal{C}_{3}^{2}\) is a simplified two-qudit third-level gate.
Taking \(P=Z_{1}\) and noting that \(CPC^{*}=Q_{1}=\omega^{0}Z^{\vec{\rho}}X^{\vec{\kappa}}\) with \(\vec{\rho}=(\rho_{1},\rho_{2})\) and \(\vec{\kappa}=(\kappa_{1},\kappa_{2})\), we see that \(GCZ_{1}C^{*}G^{*}=GQ_{1}G^{*}=U^{\vec{\rho}}V^{\vec{\kappa}}\) which, following the preceding paragraph, has the form \(D_{\Phi_{1}^{\prime}}P_{1}^{\prime}\) with \(\Phi_{1}^{\prime}=\rho_{1}\Phi_{1}+\kappa_{1}\Phi_{2}+\rho_{2}\Phi_{3}+\kappa_{2}\Phi_{4}=0\).
### The semi-Clifford condition as a polynomial system
We derive a set of polynomial constraints such that, under the reductions of Theorem 10, solutions to (17)-(18) that satisfy these additional constraints describe simplified third-level gates that are semi-Clifford.
We will employ the characterisation of two-qudit simplified semi-Clifford gates given by Theorem 6. Recall that in the \(n=2\) case, a Lagrangian semibasis (Definition 10) is a linearly independent pair of vectors with vanishing symplectic inner product.
**Lemma 6**.: _If \(\Phi_{11}=\Phi_{12}=\Phi_{13}=0\), the kernel of the matrix_
\[\begin{pmatrix}\Phi_{11}&\Phi_{21}&\Phi_{31}&\Phi_{41}\\ \Phi_{12}&\Phi_{22}&\Phi_{32}&\Phi_{42}\\ \Phi_{13}&\Phi_{23}&\Phi_{33}&\Phi_{43}\end{pmatrix}. \tag{56}\]
_contains a Lagrangian semibasis if and only if the following three equations are satisfied:_
\[\begin{split}\Phi_{31}\Phi_{42}-\Phi_{32}\Phi_{41}&=0\\ \Phi_{31}\Phi_{43}-\Phi_{33}\Phi_{41}&=0\\ \Phi_{32}\Phi_{43}-\Phi_{33}\Phi_{42}&=0\end{split} \tag{57}\]
Proof.: First, note that \((1,0,0,0)\) belongs to the kernel. To form a Lagrangian semibasis from \((1,0,0,0)\), it is sufficient to have another nonzero vector of the form \((0,0,k_{3},k_{4})\) in the kernel of (56). This is because \([(1,0,0,0),(0,0,k_{3},k_{4})]=1\cdot 0-0\cdot 0+0\cdot k_{4}-0\cdot k_{3}=0\) and, if either \(k_{3}\neq 0\) or \(k_{4}\neq 0\), the two vectors form a linearly independent pair. Thus, \(\{(1,0,0,0),(0,0,k_{3},k_{4})\}\) is a Lagrangian semibasis.
In fact, as we shall now prove, the existence of a nonzero vector \((0,0,k_{3},k_{4})\) in the kernel is also necessary for the kernel to contain a Lagrangian semibasis.
Suppose that \(\vec{a}=(a_{1},a_{2},a_{3},a_{4})\) and \(\vec{b}=(b_{1},b_{2},b_{3},b_{4})\) form a Lagrangian semibasis in the kernel of the above matrix. The span of \(\vec{a}\) and \(\vec{b}\) will also be inside the kernel and, by bilinearity of the symplectic product, any pair inside this span will also have vanishing symplectic product.
We can assume without loss of generality that \(a_{2}\) is \(0\). If \(a_{2}\neq 0\) and \(b_{2}=0\), then we can swap the labels of \(\vec{a},\vec{b}\). If both are nonzero then by replacing \(\vec{a}\) with \(a_{2}^{-1}\vec{a}-b_{2}^{-1}\vec{b}\), we have that \(\{\vec{a},\vec{b}\}\) is a Lagrangian semibasis with \(a_{2}=1-1=0\).
We may further assume \(a_{1}=1\). If it is zero, then the lemma is proved by choosing the vector \((0,0,k_{3},k_{4})\) to be \((0,0,a_{3},a_{4})\). If \(a_{1}\) is not zero, then we can replace \(\vec{a}\) with \(a_{1}^{-1}\vec{a}\). We have that \(\{\vec{a},\vec{b}\}\) is a Lagrangian semibasis with \(\vec{a}=(1,0,a_{3},a_{4})\).
We know that \((1,0,0,0)\) is in the kernel and \([(1,0,0,0),(1,0,a_{3},a_{4})]=0\). Thus \(\{(1,0,0,0),\vec{a}-(1,0,0,0)\}=\{(1,0,0,0),(0,0,a_{3},a_{4})\}\) is a Lagrangian semibasis inside the kernel. Choosing \(k_{3}=a_{3}\) and \(k_{4}=a_{4}\), we have proved the kernel contains a nonzero vector \((0,0,k_{3},k_{4})\).
Combining this with our first observation, we find that the kernel contains a Lagrangian semibasis if and only if it contains a nonzero vector \((0,0,k_{3},k_{4})\).
Finally, we note that the condition that the kernel contains a nonzero vector \((0,0,k_{3},k_{4})\) is equivalent to the condition that the two rightmost columns are linearly dependent which is in turn is equivalent to all the \(2\times 2\) minors of the two rightmost columns vanishing. The polynomial equations (57) capture these conditions.
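The equivalence used in this last step can also be checked exhaustively for a small prime; the following sketch is an added sanity check, not part of the original proof.

```python
from itertools import product
import random

d = 3  # small odd prime, chosen for an exhaustive check

def minors_vanish(col3, col4):
    # the three equations (57): all 2x2 minors of the two rightmost columns vanish
    a, b, c_ = col3
    x, y, z = col4
    return ((a*y - b*x) % d == 0 and (a*z - c_*x) % d == 0 and (b*z - c_*y) % d == 0)

def dependent(col3, col4):
    # does some nonzero (k3, k4) satisfy k3*col3 + k4*col4 = 0 (mod d)?
    return any(all((k3*u + k4*v) % d == 0 for u, v in zip(col3, col4))
               for k3, k4 in product(range(d), repeat=2) if (k3, k4) != (0, 0))

for _ in range(1000):
    col3 = [random.randrange(d) for _ in range(3)]
    col4 = [random.randrange(d) for _ in range(3)]
    assert minors_vanish(col3, col4) == dependent(col3, col4)
print("Equations (57) hold iff columns 3 and 4 of (56) are linearly dependent")
```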
### Weakening a polynomial system via linearisation
Due to the number of variables and the complexity of the polynomial systems (17)-(18) we use to describe simplified third-level gates, we must make some simplifications in order to render the necessary calculations feasible.
Here, we will weaken Equations (18): for all \(1\leq i<j\leq 4\)
\[\vec{q}_{i}^{\,t}\Phi_{j}\vec{q}_{i}-\vec{q}_{j}^{\,t}\Phi_{i}\vec{q}_{j}+ \vec{p}_{i}\cdot\vec{q}_{j}-\vec{p}_{j}\cdot\vec{q}_{i}=c_{ij} \tag{18}\]
where
\[c_{ij}=\begin{cases}1&\text{ if }(i,j)\in\{(1,2),(3,4)\}\\ 0&\text{ otherwise.}\end{cases} \tag{19}\]
We can rearrange (18) by moving the quadratic terms to the right-hand side. The result is an inhomogeneous linear system of equations.
\[\begin{pmatrix}\vec{q}_{2}^{\,t}&-\vec{q}_{1}^{\,t}&0&0\\ \vec{q}_{3}^{\,t}&0&-\vec{q}_{1}^{\,t}&0\\ \vec{q}_{4}^{\,t}&0&0&-\vec{q}_{1}^{\,t}\\ 0&\vec{q}_{3}^{\,t}&-\vec{q}_{2}^{\,t}&0\\ 0&\vec{q}_{4}^{\,t}&0&-\vec{q}_{2}^{\,t}\\ 0&0&\vec{q}_{4}^{\,t}&-\vec{q}_{3}^{\,t}\\ \end{pmatrix}\begin{pmatrix}\vec{p}_{1}\\ \vec{p}_{2}\\ \vec{p}_{3}\\ \vec{p}_{4}\end{pmatrix}=\begin{pmatrix}\vec{q}_{2}^{\,t}\,\Phi_{1}\,\vec{q}_ {2}-\vec{q}_{1}^{\,t}\,\Phi_{2}\,\vec{q}_{1}+1\\ \vec{q}_{3}^{\,t}\,\Phi_{1}\,\vec{q}_{3}-\vec{q}_{1}^{\,t}\,\Phi_{3}\,\vec{q} _{1}\\ \vec{q}_{4}^{\,t}\,\Phi_{1}\,\vec{q}_{4}-\vec{q}_{1}^{\,t}\,\Phi_{4}\,\vec{q} _{1}\\ \vec{q}_{3}^{\,t}\,\Phi_{2}\,\vec{q}_{3}-\vec{q}_{2}^{\,t}\,\Phi_{3}\,\vec{q} _{2}\\ \vec{q}_{4}^{\,t}\,\Phi_{2}\,\vec{q}_{4}-\vec{q}_{2}^{\,t}\,\Phi_{4}\,\vec{q} _{2}\\ \vec{q}_{4}^{\,t}\,\Phi_{3}\,\vec{q}_{4}-\vec{q}_{3}^{\,t}\,\Phi_{4}\,\vec{q} _{3}+1\end{pmatrix}\] (18')
We will relax the polynomial system of Equations 18 and replace them with the disjunction of two weaker and simpler equations. These new constraints no longer involve the variables \(\vec{p}_{i}\), thus significantly reducing the complexity of our system.
As we shall see below in Lemma 7, the consistency of Equation (18') implies that the variables involved satisfy either one of two additional polynomial constraints. That is, a solution to Equations (18) is either a solution of \(E\) or a solution of \(F\).
**Lemma 7**.: _There are polynomials \(E,F\) in the entries of \(\Phi_{i},\vec{q}_{i}\) such that (18') being consistent implies \(E=0\) or \(F=0\)._
Proof.: Using Mathematica, we symbolically compute the order \(6\) minors of the \(6\times 8\) coefficient matrix of Equation 18' and find that they are all zero; thus its rank is always at most \(5\). We then symbolically row reduce the augmented matrix:
\[\begin{pmatrix}\vec{q}_{2}^{\,t}&-\vec{q}_{1}^{\,t}&0&0&\vec{q}_{2}^{\,t}\, \Phi_{1}\,\vec{q}_{2}-\vec{q}_{1}^{\,t}\,\Phi_{2}\,\vec{q}_{1}+1\\ \vec{q}_{3}^{\,t}&0&-\vec{q}_{1}^{\,t}&0&\vec{q}_{3}^{\,t}\,\Phi_{1}\,\vec{q}_ {3}-\vec{q}_{1}^{\,t}\,\Phi_{3}\,\vec{q}_{1}\\ \vec{q}_{4}^{\,t}&0&0&-\vec{q}_{1}^{\,t}&\vec{q}_{4}^{\,t}\,\Phi_{1}\,\vec{q}_ {4}-\vec{q}_{1}^{\,t}\,\Phi_{4}\,\vec{q}_{1}\\ 0&\vec{q}_{3}^{\,t}&-\vec{q}_{2}^{\,t}&0&\vec{q}_{3}^{\,t}\,\Phi_{2}\,\vec{q}_ {3}-\vec{q}_{2}^{\,t}\,\Phi_{3}\,\vec{q}_{2}\\ 0&\vec{q}_{4}^{\,t}&0&-\vec{q}_{2}^{\,t}&\vec{q}_{4}^{\,t}\,\Phi_{2}\,\vec{q}_ {4}-\vec{q}_{2}^{\,t}\,\Phi_{4}\,\vec{q}_{2}\\ 0&0&\vec{q}_{4}^{\,t}&-\vec{q}_{3}^{\,t}&\vec{q}_{4}^{\,t}\,\Phi_{3}\,\vec{q}_ {4}-\vec{q}_{3}^{\,t}\,\Phi_{4}\,\vec{q}_{3}+1\end{pmatrix}\]
and find a (rather complicated) rational function \(E/F\) in the bottom-right corner.
Fix a choice of values for the variables in the matrix above from \(\mathbb{Z}_{d}\) and assume, given these values, that \(F\neq 0\). In this case, the result of first substituting the variables for their values and then row reducing the matrix matches the result of symbolically row reducing the matrix and then substituting the variables. Thus, the bottom right corner is \(E/F\) evaluated at the chosen values. Since the coefficient matrix is not full-rank, its bottom row is all zeroes. So, if the linear system is consistent, \(E/F=0\) implying that \(E=0\).
Thus, consistency of Equation (18') implies that \(E=0\) or \(F=0\).
The Mathematica code generating the polynomials \(E,F\) is available at [https://github.com/ndesilva/semiclifford/](https://github.com/ndesilva/semiclifford/).
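For readers without Mathematica, the rank bound used in the proof can also be checked numerically. The added sketch below builds the \(6\times 8\) coefficient matrix of Equation (18') from random \(\vec{q}_{i}\in\mathbb{Z}_{d}^{2}\) and verifies that its rank over \(\mathbb{Z}_{d}\) never exceeds \(5\); the prime \(d=5\) is an arbitrary choice.

```python
import random

d = 5  # any odd prime; illustrative value

def rank_mod_p(M, p):
    """Gaussian-elimination rank of an integer matrix M (list of rows) over Z_p."""
    M = [row[:] for row in M]
    rank = 0
    for c in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][c] % p), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][c] % p, -1, p)
        M[rank] = [x * inv % p for x in M[rank]]
        for r in range(len(M)):
            if r != rank and M[r][c] % p:
                f = M[r][c]
                M[r] = [(x - f * y) % p for x, y in zip(M[r], M[rank])]
        rank += 1
    return rank

def coeff_matrix(q):
    """Rows of the 6x8 coefficient matrix of (18'): the row for the pair (i, j)
    has q_j^t in block i and -q_i^t in block j (blocks of width 2)."""
    pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    M = []
    for i, j in pairs:
        row = [0] * 8
        row[2*i:2*i+2] = q[j]
        row[2*j:2*j+2] = [(-x) % d for x in q[i]]
        M.append(row)
    return M

for _ in range(2000):
    q = [[random.randrange(d) for _ in range(2)] for _ in range(4)]
    assert rank_mod_p(coeff_matrix(q), d) <= 5
print("coefficient matrix of (18') always has rank <= 5 over Z_d")
```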
### Schemes of interest
Here, we define the key schemes over \(\mathbb{Z}[1/2]\) we will need to consider in our proof. We have already defined \(T\) whose defining equations are the polynomial systems describing simplified third-level gates (with \(\Phi_{1}=0\)). Further, \(S\) is additionally defined by the semi-Clifford condition of Equation (57). For both \(T,S\), we introduce variations for \(E\) (resp. \(F\)) wherein Equations (18) are replaced with \(E=0\) (resp. \(F=0\)). The table below defines schemes via the polynomial generators of their defining ideals.
\[\begin{array}{ll}\textbf{Scheme}&\textbf{Defining polynomial systems}\\ T&\text{(17), (18), }\Phi_{1}=0\\ S&\text{(17), (18), }\Phi_{1}=0\text{, (57)}\\ T^{E}&\text{(17), }E=0\text{, }\Phi_{1}=0\\ S^{E}&\text{(17), }E=0\text{, }\Phi_{1}=0\text{, (57)}\\ T^{F}&\text{(17), }F=0\text{, }\Phi_{1}=0\\ S^{F}&\text{(17), }F=0\text{, }\Phi_{1}=0\text{, (57)}\end{array}\]
Proof.: We achieve this by noting that the generators for \(I(S_{i}^{E})\) and \(I(T_{i}^{E})\) have coefficients in \(\mathbb{Z}[1/2]\), and by checking in Magma that each generator of \(I(S_{i}^{E})\) is a \(\mathbb{Z}[1/2][x_{1},\ldots,x_{n}]\)-polynomial linear combination of the generators of \(I(T_{i}^{E})\) and vice versa.
Thus, we can conclude that:
\[T_{i}^{E}(\mathbb{Z}_{d})=S_{i}^{E}(\mathbb{Z}_{d})\text{ for }i=1,2,3. \tag{63}\]
Next, we will show that the \(\mathbb{Z}_{d}\)-rational points of the last two components \(T_{4}^{E},T_{5}^{E}\) of \(T\) do not correspond to valid third-level gates.
**Lemma 11**.: _The components \(T_{4}^{E},T_{5}^{E}\) are extraneous in the sense that_
\[T_{4}^{E}(\mathbb{Z}_{d})\cap T(\mathbb{Z}_{d})=T_{5}^{E}(\mathbb{Z}_{d})\cap T (\mathbb{Z}_{d})=\emptyset. \tag{64}\]
Proof.: The equations for \(T_{4}^{E}\) are
\[\vec{q}_{1}=\ldots=\vec{q}_{4}=0. \tag{65}\]
However, for \((i,j)\in\{(1,2),(3,4)\}\), we obtain a contradiction to the equation in (18).
The equations for \(T_{5}^{E}\) include the equations
\[\vec{q}_{1} =0 \tag{66}\] \[\lambda_{3}\vec{q}_{3} =\mu_{2}\vec{q}_{2}\] (67) \[\lambda_{4}\vec{q}_{4} =\nu_{2}\vec{q}_{2} \tag{68}\]
for some scalars \(\lambda_{3},\mu_{2}\); \(\lambda_{4},\nu_{2}\), with each pair not both zero.
If either \(\lambda_{3}=0\) or \(\lambda_{4}=0\), then \(\vec{q}_{2}=0\), but then for \((i,j)=(1,2)\), the left hand side of (18) is \(0\), while the right hand side of (18) is \(1\); a contradiction.
Assume \(\lambda_{3}\) and \(\lambda_{4}\) are both nonzero. Then \(\vec{q}_{3}\) and \(\vec{q}_{4}\) are scalar multiples of \(\vec{q}_{2}\). The equations in (18) for \((i,j)=(1,2),(1,3),(1,4)\) read as
\[\vec{p}_{1}\cdot\vec{q}_{2} =1 \tag{69}\] \[\vec{p}_{1}\cdot\vec{q}_{3} =0\] (70) \[\vec{p}_{1}\cdot\vec{q}_{4} =0. \tag{71}\]
The last two equations imply that \(\vec{q}_{3}=\vec{q}_{4}=0\). But then (18) for \((i,j)=(3,4)\) has left hand side being \(0\) but right hand side being \(1\) which is a contradiction.
In summary, we deduce that \(T_{4}^{E}(\mathbb{Z}_{d})\cap T(\mathbb{Z}_{d})=T_{5}^{E}(\mathbb{Z}_{d})\cap T (\mathbb{Z}_{d})=\emptyset\).
Combining (61), (62), (63) and (64), we find that:
\[(T^{E}(\mathbb{Z}_{d})\cap T(\mathbb{Z}_{d}))=(S^{E}(\mathbb{Z}_{d})\cap T( \mathbb{Z}_{d})). \tag{72}\]
Repeating the calculations of the above two lemmas with \(T^{F},S^{F}\), we find that:
\[(T^{F}(\mathbb{Z}_{d})\cap T(\mathbb{Z}_{d}))=(S^{F}(\mathbb{Z}_{d})\cap T( \mathbb{Z}_{d})) \tag{73}\]
Finally, we can use the above two lemmas to conclude that third-level gates are semi-Clifford.
**Theorem 11**.: _For any odd prime dimension \(d\in\mathbb{N}\), \(T(\mathbb{Z}_{d})=S(\mathbb{Z}_{d})\)._
Proof.: Clearly, \(S(\mathbb{Z}_{d})\subseteq T(\mathbb{Z}_{d})\). We also have that
\[T(\mathbb{Z}_{d}) \subseteq(T^{E}(\mathbb{Z}_{d})\cup T^{F}(\mathbb{Z}_{d}))\cap T( \mathbb{Z}_{d}) \tag{74}\] \[=(T^{E}(\mathbb{Z}_{d})\cap T(\mathbb{Z}_{d}))\cup(T^{F}(\mathbb{ Z}_{d})\cap T(\mathbb{Z}_{d}))\] (75) \[=(S^{E}(\mathbb{Z}_{d})\cap T(\mathbb{Z}_{d}))\cup(S^{F}(\mathbb{ Z}_{d})\cap T(\mathbb{Z}_{d}))\] (76) \[=(S^{E}(\mathbb{Z}_{d})\cup S^{F}(\mathbb{Z}_{d}))\cap T(\mathbb{ Z}_{d})\] (77) \[\subseteq S(\mathbb{Z}_{d}). \tag{78}\]
Here, Equation (74) follows from (58): \(T(\mathbb{Z}_{d})\subseteq T^{E}(\mathbb{Z}_{d})\cup T^{F}(\mathbb{Z}_{d})\). Equation (76) follows from (72) and (73). Finally, Equation (78) follows from the facts that every element of \(T(\mathbb{Z}_{d})\) satisfies Equations (17), (18) and \(\Phi_{1}=0\) and that every element of \(S^{E}(\mathbb{Z}_{d}),S^{F}(\mathbb{Z}_{d})\) satisfies the semi-Clifford condition of Equation (57).
Using Lemma 8, Theorem 11 entails Theorem 1.
## 4 Conclusions
A natural follow-up question to our work is to generalise our result to higher levels of the Clifford hierarchy. We can also consider extending our techniques to generalise the counterexamples of Zeng-Chen-Chuang [25] and Gottesman-Mochon to higher dimensions; that is, find examples of \(n\)-qudit \(k\)-th level gates that are not semi-Clifford when \(n>2,k>3\) or \(n>3,k=3\).
This work significantly advances the program of classifying gates of the Clifford hierarchy and semi-Clifford gates. Deeper mathematical understanding of the Clifford hierarchy and semi-Clifford gates will lead to more efficient circuit and gate synthesis. It further bolsters the viability of qudit-based fault-tolerant universal quantum computers by providing complete sets of efficient gate teleportation protocols. This is practically important as qudit magic state distillation has been proposed as a significantly more efficient alternative to the qubit case [4]. This, and other advantages of qudits, are driving current experimental research [5, 6, 14, 15, 20, 21, 24].
The abstract mathematical techniques developed to solve our problem are widely applicable to many more problems within quantum information. We give a blueprint for solving any problem that can be recast in terms of the equivalence of solution sets of polynomial equations over \(\mathbb{Z}_{d}\), for all odd prime \(d\). This is a potentially very broad class of problems given that the dominant stabiliser formalism for quantum error correction is based on the standard representation of the Heisenberg-Weyl group over \(\mathbb{Z}_{d}\).
## 5 Acknowledgments
IC was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC): Discovery Grant RGPIN-2023-03457. ND acknowledges support from the Canada Research Chair program, NSERC Discovery Grant RGPIN-2022-03103, the NSERC-European Commission project FoQaCiA, and the Faculty of Science of Simon Fraser University.
|
2303.17910 | Selective Knowledge Distillation for Non-Autoregressive Neural Machine
Translation | Benefiting from the sequence-level knowledge distillation, the
Non-Autoregressive Transformer (NAT) achieves great success in neural machine
translation tasks. However, existing knowledge distillation has side effects,
such as propagating errors from the teacher to NAT students, which may limit
further improvements of NAT models and are rarely discussed in existing
research. In this paper, we introduce selective knowledge distillation by
introducing an NAT evaluator to select NAT-friendly targets that are of high
quality and easy to learn. In addition, we introduce a simple yet effective
progressive distillation method to boost NAT performance. Experiment results on
multiple WMT language directions and several representative NAT models show
that our approach can realize a flexible trade-off between the quality and
complexity of training data for NAT models, achieving strong performances.
Further analysis shows that distilling only 5% of the raw translations can help
an NAT outperform its counterpart trained on raw data by about 2.4 BLEU. | Min Liu, Yu Bao, Chengqi Zhao, Shujian Huang | 2023-03-31T09:16:13Z | http://arxiv.org/abs/2303.17910v2 | # Selective Knowledge Distillation for Non-Autoregressive Neural Machine Translation
###### Abstract
Benefiting from the sequence-level knowledge distillation, the Non-Autoregressive Transformer (NAT) achieves great success in neural machine translation tasks. However, existing knowledge distillation has side effects, such as propagating errors from the teacher to NAT students, which may limit further improvements of NAT models and are rarely discussed in existing research. In this paper, we introduce selective knowledge distillation by introducing an NAT evaluator to select NAT-friendly targets that are of high quality and easy to learn. In addition, we introduce a simple yet effective progressive distillation method to boost NAT performance. Experiment results on multiple WMT language directions and several representative models show that our approach can realize a flexible trade-off between the quality and complexity of training data for NAT models, achieving strong performances. Further analysis shows that distilling only 5% of the raw translations can help an NAT outperform its counterpart trained on raw data by about 2.4 BLEU.
## 1 Introduction
Non-autoregressive Transformer [1] introduces a promising paradigm of parallel decoding. Unlike an autoregressive model, which predicts words sequentially, NAT models can generate a sentence in parallel based on a conditional independence assumption, improving the inference speed by over 10 times. Besides, such a parallel decoding paradigm also has the potential to avoid the _exposure bias_ that has long been discussed for sequential decoding models [21]. As a result, NAT models have achieved great success in machine translation tasks [15], surpassing many autoregressive models in WMT21.
Footnote 1: [http://statmt.org/wmt21/](http://statmt.org/wmt21/)
Despite the great potential of NAT models, they rely on sequence-level knowledge distillation [16] to achieve success. The introduced conditional independence assumption prevents NAT models from leveraging the inherent structures to overcome the _multi-modality problem_, where each input may correspond to several valid outputs in the training data. Against this background, Gu et al. (2018) introduce sequence-level knowledge distillation to bypass the multi-modality problem of NAT models. They first train an autoregressive Transformer (AT, Vaswani et al., 2017) as a teacher model, and then train the NAT models using the teacher's output as targets. The deterministic outputs generated by the teacher directly avoid the one-to-many situation in raw training data and improve the performance of an NAT model by over 5.0 BLEU [2] in machine translation.
However, there are still several problems in standard knowledge distillation, which may limit the performance of NAT models. First, NAT models learning only from AT teachers may miss some important knowledge in the original data, such as prediction on low-frequency words [10]. Second, the outputs generated by the AT teacher are not necessarily suitable for the training of NAT models, as these architectures have quite different modeling paradigms. It should be noted that existing NAT research [14, 15, 16] only regards knowledge distillation as a necessary data processing technique but lacks a deeper discussion. Therefore, designing knowledge distillation strategies to help NAT models learn better is still an open question.
Figure 1: An illustration of our selective knowledge distillation. Standard knowledge distillation reduces the complexity of raw data at the cost of translation quality. In contrast, we propose combining the merits of raw and KD data, balancing the complexity and quality of training data.
In this paper, we propose a selective knowledge distillation technique for training NAT models to tackle the two issues in standard knowledge distillation. More specifically, we introduce an NAT model trained on distilled data as an evaluator to construct the training data, dynamically replacing the original distilled data with raw data during the learning process. There are two intuitions behind our selective knowledge distillation: First, our approach can access raw data and avoid repeating the mistakes made by the AT teacher. Second, due to its similar modeling paradigm, the NAT evaluator can effectively assess whether the data is suitable for the training of NAT students. The NAT evaluator judges each sentence in the original training set by scoring the predicted tokens. We select the sentences with higher scores as targets; these generally deviate little in modality from the distilled data while retaining the better translation quality of the raw data. Intuitively, such sentences can be safely exposed to NAT students during training. Besides, we introduce a hard-to-easy curriculum learning strategy during training, which has been demonstrated to be effective for automatic speech recognition systems [1].
We conduct experiments on two widely-used machine translation benchmarks, WMT14 En-De and WMT16 En-Ro, using an inference-efficient AT structure [10] and two representative NAT architectures [12, 13]. Experimental results show that our selective knowledge distillation consistently improves the models' performance on each dataset. Further analyses show that a small ratio (5%) of distilled data is sufficient to improve NAT significantly, demonstrating that our method can effectively select NAT-friendly raw translations. As an early attempt to introduce raw data for training NAT models, we hope this work will draw more attention to selecting beneficial examples from authentic data to recover the missing information while keeping the merits of knowledge distillation.
## 2 Background
Neural Machine Translation can be defined as a sequence-to-sequence generation problem: given source sentence \(X=\{x_{1},x_{2},\cdots,x_{N}\}\), to generate target sentence \(Y=\{y_{1},y_{2},\cdots,y_{L}\}\) according to \(P(Y|X,\theta)\), where \(\theta\) denotes the parameters of a network.
### Non-Autoregressive Neural Machine Translation
Non-Autoregressive Transformer [14] imposes the conditional independence assumption among target words while factorizing the probability \(P(Y|X,\theta)\):
\[P(Y|X,\theta)=P(L|X,\theta)\prod_{i=1}^{L}P(y_{i}|X,\theta)\]
where \(L\) is the length of the target sequence.
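To make the factorization concrete, the following is a minimal, purely illustrative sketch of parallel (argmax) decoding given per-position distributions; the toy vocabulary and probabilities below are invented for illustration and do not come from any real model.

```python
import numpy as np

def nat_decode(position_probs):
    # every target position is decoded independently and in parallel,
    # reflecting the conditional independence assumption of NAT
    return position_probs.argmax(axis=-1)

vocab = ["<unk>", "the", "cat", "sat"]
probs = np.array([[0.1, 0.7, 0.1, 0.1],    # P(y_1 | X)
                  [0.1, 0.1, 0.6, 0.2],    # P(y_2 | X)
                  [0.0, 0.1, 0.2, 0.7]])   # P(y_3 | X)
print([vocab[i] for i in nat_decode(probs)])  # ['the', 'cat', 'sat']
```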
The conditional independence assumption allows NAT to significantly outperform autoregressive Transformer (AT) in inference speed, but it also leads to an inferior translation quality compared to AT. A well-recognized explanation is that NAT models suffer from the multi-modality problem [12], where the model fails to capture the highly multimodal distribution of target translations adequately. For example, a source sentence might have several ground-truth translations that differ in wording and structure, and NAT models are likely to get confused since they have to select from multiple choices only through the source sentence. In contrast, an AT model can easily learn these different translations by predicting tokens based on the source sentence and previous tokens.
### Knowledge Distillation
To alleviate the multi-modality problem, sequence-level knowledge distillation [13] is adopted as a preliminary step for training an NAT model, where the original translations are replaced with those generated by a pretrained autoregressive teacher. The distilled data eases the training by introducing more deterministic knowledge and significantly improves the performance of an NAT student. Some previous works propose generating several distilled translations and selecting the most suitable candidate [15, 16] to gain more benefits from knowledge distillation.
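Schematically, sequence-level KD amounts to rebuilding the parallel corpus with teacher outputs as targets. The sketch below is only illustrative: `teacher_translate` is a hypothetical stand-in for beam-search decoding with the pretrained AT teacher, not a real library call.

```python
def build_distilled_corpus(pairs, teacher_translate):
    """pairs: iterable of (source, raw_target); returns the distilled corpus."""
    distilled = []
    for src, _raw_tgt in pairs:
        kd_tgt = teacher_translate(src)   # teacher output replaces the reference
        distilled.append((src, kd_tgt))
    return distilled
```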
\begin{table}
\begin{tabular}{c|c|l} \hline \hline
**\#** & **raw/distilled** & **outputs** \\ \hline \multirow{2}{*}{Translation \#1} & raw & **I entirely** agree with the PPE Group that paragraph 9 is **central**. \\ & distilled & I _fully_ agree with the PPE Group that paragraph 9 is _of key importance_. \\ \hline \multirow{2}{*}{Translation \#2} & raw & That is up to the Heads of Governments to do **this** week. \\ & distilled & That is up to the Heads of Government to do _this this_ week. \\ \hline \multirow{2}{*}{Translation \#3} & raw & **Once you have started reading, you can not put it down**. \\ & distilled & Anyone who starts reading it keeps his breath until the last word. \\ \hline \multirow{2}{*}{Translation \#4} & raw & **Nice and clean hotel in great location, great value for money.** \\ & distilled & Gravino Cinco is fresher and newer than Tryp Ciudad Hotel! \\ \hline \hline \end{tabular}
\end{table}
Table 1: Examples of different situations. Our method mainly improves performance on sentences like Translation 2, where minor mistakes are introduced by the AT teacher. In our experiments, the NAT evaluator generated a translation exactly the same as the distilled one, while the NAT student trained on selected data corrected the mistake by removing the repeated token.
However, knowledge distillation has some side effects like leading to more errors on low-frequency words (Ding et al., 2021). Due to the differences in architectures between AT and NAT, the translations generated by an AT teacher are not always suitable for the learning of NAT. Another obvious limitation of KD is that it propagates mistakes made by the AT teacher to NAT students. Table 1 shows that the translation generated by the AT teacher contains mistakes which might harm the performance of the student. Therefore, how to break such limitations of AT-based KD and utilize authentic data to improve the translation quality remains a question.
## 3 Method
The intuition behind our method is that introducing raw translations which do not significantly increase the complexity will not make training much more challenging, but will free NAT from some of the mistakes made by the AT teacher. Section 3.1 introduces how to select NAT-friendly raw sentences, combining the high translation quality of raw data with the reduced complexity of distilled data. In addition, we introduce a hard-to-easy learning strategy for dynamically configuring the raw data ratio during training, as presented in Section 3.2.
### Selecting NAT-friendly Raw Translations
While raw data are of high quality in most cases, the multi-modality problem prevents an NAT model from capturing the distribution of target translations properly. In contrast, distilled data eases the training of NAT by reducing the complexity of targets, but mistakes made by the AT teacher are easily propagated to the student if only distilled data is exposed. It is natural to expect that NAT models should recover, from the original translations, the information that is missing in the distilled ones to improve translation quality, and a simple solution is to expose part of the raw data to NAT. The remaining question is how to decide whether a raw translation should be exposed.
We propose to evaluate each translation in the raw data through an NAT evaluator trained on distilled data, replacing a raw translation with its distilled version when the NAT evaluator fails to generate outputs similar to the reference. Specifically, given a source sentence \(X\), we first obtain a decoded output \(\hat{Y}=f_{teacher}(X)\) from the NAT evaluator. Then we evaluate the raw translation \(Y\) through the metric \(score(X,Y)=1-d(Y,\hat{Y})/|Y|\), where \(d(Y,\hat{Y})\) measures the difference between the ground-truth translation and the predicted output. Translations with high scores are considered NAT-friendly.
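To make the scoring concrete, here is a minimal base-R sketch of the selection score, assuming tokenized sentences are represented as character vectors and using a simple position-wise (Hamming-style) distance; the padding rule and the nat_evaluator placeholder are illustrative assumptions rather than the exact implementation.

    # Sketch of the selection score: score(X, Y) = 1 - d(Y, Yhat) / |Y|,
    # with a position-wise (Hamming-style) distance between the raw target
    # and the NAT evaluator's prediction.
    hamming_distance <- function(y, y_hat) {
      len <- max(length(y), length(y_hat))
      # pad the shorter sequence so missing/extra tokens count as mismatches
      y <- c(y, rep(NA, len - length(y)))
      y_hat <- c(y_hat, rep(NA, len - length(y_hat)))
      sum(is.na(y) | is.na(y_hat) | y != y_hat)
    }

    selection_score <- function(x, y, nat_evaluator) {
      y_hat <- nat_evaluator(x)  # decoded output of the evaluator (placeholder)
      1 - hamming_distance(y, y_hat) / length(y)
    }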
In the following part we explain why an NAT evaluator can decide whether a raw sentence can be safely exposed. Since we want to keep both the high translation quality of raw sentences and the simplified modes of distilled sentences, a naive answer is that raw sentences with fewer modes can be set as targets for NAT. If the NAT evaluator trained on distilled data can get a prediction close to the raw target, then the raw and distilled translations are probably quite similar in their modes. To illustrate the details, here we list four typical situations where the distilled translations are different from the original ones:
* **Minor Modality Change:** A few words are substituted by their synonyms without introducing great changes to the structure and semantics of the raw sentence. Therefore, both the raw and distilled translation can be used as the target.
* **Minor Mistakes:** While the structure and semantics of the original sentence is preserved, a few mistakes like falsely predicted low-frequency words or word repetition are introduced. Learning from the raw translation can be helpful to correct these mistakes.
* **Dramatic Modality Change:** Despite sharing the same semantics, the raw and distilled translation are expressed in quite different ways. The raw translation contains modes too challenging for an NAT model.
* **Dramatic Mistakes:** The distilled sentence is not well translated, but we cannot be sure whether the raw translation is a better target to learn from, since even the AT teacher fails to model it properly.
Table 1 provides the examples corresponding to each situation. Minor modality changes (Translation #1) can be tolerated since they do not greatly increase the modes of training data, and correcting minor mistakes (Translation #2) is the main goal of our method. The NAT evaluator is not likely to get a close prediction when there exists dramatic differences between raw and distilled data (Translation #3), so when it gives a raw sentence a high score, it is highly likely that the sentence satisfies our requirement of simple and clean translation. Besides, an NAT evaluator can avoid the cases where a distilled sentence is close to the original one but still too challenging for an NAT (Translation #4). Therefore, we can choose to distill only the raw sentences with low scores under the NAT evaluator and keep the rest unchanged. In this way, the dataset displays higher translation quality while keeping the general complexity suitable for NAT.
### Hard-to-Easy Data Selection
Motivated by the success of curriculum learning (Qian et al., 2021; Guo et al., 2020; Liu et al., 2020), we further introduce a hard-to-easy learning strategy to improve the performance. Ding et al. (2021) show that pretraining with raw data can improve the performance of NAT by rejuvenating
low-frequency words. To keep the merits of low complexity, they further train the pretrained model on distilled data. We combine this idea with our data selection method by decreasing the ratio of raw data over the course of training. Specifically, the training data for each update can be formulated as:
\[\{(X,Y)\mid score(X,Y)\geq T_{k}\wedge(X,Y,Y^{KD})\in\mathcal{D}_{k}\}\;\cup\;\{(X,Y^{KD})\mid score(X,Y)<T_{k}\wedge(X,Y,Y^{KD})\in\mathcal{D}_{k}\}\]
where \(T_{k}\) and \(\mathcal{D}_{k}\) denote the threshold and the set of tuples \((X,Y,Y^{KD})\) for the \(k\)th update, respectively. \(T_{k}\) can be determined by a preset function or by feedback from the NAT student. In our experiments, we adopt a linear schedule for \(T_{k}\), computed as \(T_{k}=T_{0}+\frac{k}{K}(T_{1}-T_{0})\), where \(K\) is the total number of updates; the constants \(T_{0}\) and \(T_{1}\) can be chosen according to the distribution of the score \(P(score(X,Y))\) for a given NAT evaluator and the raw training data. The whole data selection process is summarized in Algorithm 1. It is an additional stage on top of the standard training procedure for NAT and is therefore generic across datasets and architectures.
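A small base-R sketch of this schedule and the per-update selection it induces is given below; raw, distilled and score are assumed to be parallel vectors over the same source sentences, and all names and batching details are illustrative.

    # Linear hard-to-easy schedule T_k = T_0 + (k / K) * (T_1 - T_0).
    threshold <- function(k, K, T0, T1) T0 + (k / K) * (T1 - T0)

    # Targets for the k-th update: raw translations whose score reaches T_k,
    # distilled translations otherwise.
    select_targets <- function(raw, distilled, score, k, K, T0, T1) {
      T_k <- threshold(k, K, T0, T1)
      ifelse(score >= T_k, raw, distilled)
    }

    # Early updates (small k) use mostly raw targets, late updates mostly
    # distilled ones, e.g. with T0 = 0.4 and T1 = 1.0 as in the experiments.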
## 4 Experiments
### Experimental Settings
#### Datasets
We conduct experiments on two widely-used machine translation datasets: WMT14 English-German (En-De) and WMT16 English-Romanian (En-Ro), which consist of 3.96M and 0.6M sentence pairs, respectively. Following common practice, we process the datasets with the Moses script [13] and segment the words into subword units using byte-pair encoding (BPE; Sennrich, Haddow, and Birch 2016). The subword embeddings are shared between the source and target language. For sequence-level knowledge distillation, we employ the Transformer with the base settings of Vaswani et al. (2017) as the teacher.
#### Model
We evaluate our selective knowledge distillation on DeepShallow [10], CMLM [1], and GLAT+CTC [1]. DeepShallow is an inference-efficient AT structure with a deep encoder and a single-layer autoregressive decoder, which also benefits from knowledge distillation. We adopt a 6-layer encoder in the experiments. CMLM iteratively generates the target sequence from the masked input. For these two models, we compute \(d(Y,\hat{Y})\) using the Hamming distance \(\sum_{i=1}^{L}[Y_{i}\neq\hat{Y}_{i}]\). GLAT builds word interdependencies to improve the performance of single-pass parallel generation. During training, the decoder is fed with a randomly masked target sequence, and the number of masked tokens depends on the prediction accuracy. The performance of GLAT can be further improved by connectionist temporal classification (CTC, Graves et al. 2006), which utilizes an alignment-based objective. Another advantage of CTC is that it can align the targets according to decoder outputs, so that ground-truth tokens are not required to be predicted at a fixed position, which makes the NAT evaluator more tolerant to minor mistakes when evaluating a raw translation. To compute \(score(X,Y)\) when applying our approach to GLAT+CTC, we use dynamic programming to obtain the aligned path \(Y^{align}\) with the largest alignment score [1] and adopt the Hamming distance as the metric, computed as \(d(Y^{align},\hat{Y})=\sum_{i=1}^{L^{\prime}}[Y_{i}^{align}\neq\hat{Y}_{i}]\).
#### Training Settings
We follow the hyperparameters of models in their original papers. We set the dropout rate to \(0.1\) for WMT14 En-De/De-En and \(0.3\) for WMT16 En-Ro. For the optimizer, we use Adam with \(\beta=(0.9,0.999)\) to train our model. The learning rate warms up to \(5e-4\) within 4k steps and then decays with the inverse square-root schedule. For the sampling ratio \(\lambda\) in GLAT+CTC, we adopt linear annealing from 0.5 to 0.3. As to the hard-to-easy learning strategy, we set \(T_{0}=0.4,T_{1}=1.0\) under En-De/De-En and \(T_{0}=0.6,T_{1}=1.0\) under En-Ro for GLAT+CTC. We set \(T_{0}=0,T_{1}=1.0\) for other models. All the NAT evaluators and students are trained with batches of 64k tokens, lasting 300k updates and 100k updates for En-De/De-En and En-Ro respectively. To better utilize the NAT evaluators, the students are initialized with parameters of the teachers trained after 25k updates for En-De/De-En and 10k updates for En-Ro, when the general knowledge has been acquired. We average the top 5 checkpoints chosen by the validation BLEU scores to create the final model.
#### Baselines
We compare our method with standard KD, which distills the whole training set. Another baseline is Low Frequency Rejuvenation (LFR, Ding et al. 2021), which also exposes raw data to the NAT. They trained NAT models with raw, bidirectional KD and standard KD data in three different stages. We also apply their method to GLAT+CTC with the training updates split approximately in a 2:2:3 ratio across the stages. Their method is trained for 325k updates on En-De/De-En and 110k updates on En-Ro for a fair comparison. Note that their method augments the training data by introducing _(distilled source, raw target)_ sentence pairs, while ours only utilizes raw and standard KD data. We evaluate all the models using tokenized and cased BLEU scores [1], and a learned metric, COMET [14], with the recommended model wmt20-comet-da.
### Main Results
Table 2 and Table 3 present the main results on the benchmarks. Our method outperforms baselines consistently across different language pairs. We enable the model to learn directly from authentic data without greatly increasing the modes by selecting NAT-friendly raw translations using an NAT evaluator. Compared with the previous work [13] which also exposes raw data directly to NAT, we can determine the period of exposure for each sentence by setting the threshold dynamically in the training process. We highlight the empirical advantages of our method:
* Simple, effective and generic. Our method adds a simple data selection procedure to the standard training pipeline, while it can effectively improve the performance of NAT across different datasets. Since the method is architecture-irrelevant, it can be applied to a wide range of architectures while maintaining their advantages, even including inference-efficient AT structures.
* Good balance between translation quality and data complexity. Our method can configure the translation quality and complexity of the training data by setting different thresholds for data selection. As the ratio of raw data increases, the translation quality improves, while the complexity of the training data increases only slightly, since we deliberately select the simple raw translations.
### Analysis
Properties of Selected Raw Data. Our method aims at selecting NAT-friendly raw translations, which contain few modes and show high quality. To validate that our data selection process indeed finds a set of training data with the desired properties, we measure the complexity of our training data using two metrics:
* **Translation Uncertainty**: Zhou et al. (2019) proposed to measure the translation uncertainty of parallel data based on conditional entropy. They simplified conditional entropy to the sum of entropy of target words conditioned on the aligned source words: \[C(d)=\frac{1}{|\mathcal{V}_{x}|}\sum_{x\in\mathcal{V}_{x}}\mathcal{H}(y|x)\] where \(d\) is a given dataset and \(\mathcal{V}_{x}\) is the set of source vocabularies.
* **Alignment Shift**: We measure the change of sentence structure according to the relative distance between aligned words. Specifically, given source sentence \(X\) and its translation \(Y\), we get \[\tau(X,Y)=\frac{1}{|Y|}\sum_{i,j}[X_{i}=\text{align}(Y_{j})]\cdot|\frac{i}{|X |}-\frac{j}{|Y|}|.\]
\(S(d)\) is computed as the average of \(\tau(X,Y)\) over all pairs: \(S(d)=\frac{1}{|d|}\sum_{(X,Y)\in d}\tau(X,Y)\).
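For concreteness, a minimal base-R sketch of the per-sentence statistic \(\tau(X,Y)\) is shown below, assuming word alignments are available as a two-column matrix of (source position, target position) pairs from an external aligner; this input format is an assumption made purely for illustration.

    # Alignment shift tau(X, Y) for one sentence pair: average relative
    # displacement of aligned word positions.
    alignment_shift <- function(n_src, n_tgt, alignment) {
      i <- alignment[, 1]  # source positions of aligned pairs
      j <- alignment[, 2]  # target positions of aligned pairs
      sum(abs(i / n_src - j / n_tgt)) / n_tgt
    }

    # S(d) is then the mean of alignment_shift() over all sentence pairs in d.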
We adopt an alignment model (Dyer et al., 2013) for the metrics above. The metrics are computed over 1M randomly sampled sentence pairs from our processed WMT14 En-De. To display the effects of our method, we compute the metrics for distilled data, selected raw data (using GLAT+CTC), raw data replaced by KD data and the overall training data under different threshold \(T\).
As shown in Figure 2, the translation uncertainty and alignment shifts of replaced raw data (red) exceed those of selected raw data (green) by a large margin, indicating that our method can effectively separate raw data into classes of different complexity. When the threshold \(T\) is high enough, the selected raw data even displays lower complexity than the average level of distilled data. This further proves that
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
\multirow{2}{*}{**Methods**} & \multirow{2}{*}{**Iter**} & \multicolumn{2}{c}{**WMT14**} & **WMT16** & \multirow{2}{*}{**Speed Up**} \\ \cline{3-5}
 & & **En-De** & **De-En** & **En-Ro** & \\ \hline
\multicolumn{6}{c}{**AT Models**} \\
**Transformer** (Vaswani et al., 2017) & T & 27.30 & / & / & 1.0\(\times\) \\
**Transformer** * & T & 27.34 (0.309) & 31.73 (0.388) & 34.68 (0.515) & 1.0\(\times\) \\
**DeepShallow** (Kasai et al., 2020) * & T & 26.00 (0.152) & 30.62 (0.308) & 32.25 (0.401) & 2.4\(\times\) \\ \hline
\multicolumn{6}{c}{**Iterative NAT Models**} \\
**CMLM** (Ghazvininejad et al., 2019) & 10 & 27.03 & 30.53 & 33.08 & 1.7\(\times\) \\
**JM-NAT** (Guo et al., 2020) & 10 & 27.31 & 31.02 & / & 5.7\(\times\) \\ \hline
\multicolumn{6}{c}{**Non-iterative NAT Models**} \\
**NAT-FT** (Gu et al., 2018) & 1 & 17.69 & 21.47 & 27.29 & 15.6\(\times\) \\
**GLAT** (Qian et al., 2021a) & 1 & 25.21 & 29.84 & 31.19 & 15.3\(\times\) \\
**GLAT + CTC** (Qian et al., 2021a) & 1 & 26.39 & 29.54 & 32.79 & 14.6\(\times\) \\
**DA-Transformer** (Huang et al., 2022) & 1 & 27.91 & 31.95 & / & 7.0\(\times\) \\ \hline
\multicolumn{6}{c}{**Our Models**} \\
**DeepShallow w/ Standard KD** * & T & 27.05 (0.246) & 31.36 (0.326) & 32.99 (0.416) & 2.4\(\times\) \\
**DeepShallow w/ Selective KD (ours)** & T & **27.23 (0.252)** & **31.70 (0.352)** & **33.28 (0.438)** & 2.4\(\times\) \\
**CMLM w/ Standard KD** * & 10 & 26.64 (0.137) & 30.24 (0.215) & 32.85 (0.357) & 2.1\(\times\) \\
**CMLM w/ Selective KD (ours)** & 10 & **27.06 (0.170)** & **30.65 (0.226)** & **33.38 (0.374)** & 2.1\(\times\) \\
**GLAT + CTC w/ Standard KD** * & 1 & 26.19 (0.119) & 30.74 (0.274) & 32.73 (0.362) & 14.2\(\times\) \\
**GLAT + CTC w/ Selective KD (ours)** & 1 & **26.82 (0.144)** & **31.30 (0.302)** & **33.34 (0.381)** & 14.2\(\times\) \\ \hline \hline
\end{tabular}
\end{table}
Table 2: BLEU and COMET scores of NAT models on WMT14 En-De/De-En and WMT16 En-Ro benchmarks. COMET scores are listed in parentheses if available. * indicates the results are obtained based on our implementation. To highlight the advantage in efficiency, we did not apply strategies like reranking which improve the performance at the cost of inference speed.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{**Methods**} & \multicolumn{2}{c}{**WMT14**} & **WMT16** \\ \cline{2-4} & **En-De** & **De-En** & **En-Ro** \\ \hline
**LFR** & 26.56 & 31.13 & 33.27 \\
**Selective KD (ours)** & 26.82 & 31.30 & 33.34 \\ \hline \hline \end{tabular}
\end{table}
Table 3: BLEU scores of GLAT+CTC using our method and LFR (Ding et al., 2021a) based on our implementation.
the selected raw data contains fewer modes. Observing the results on training data (blue), we find that the metrics grow smoothly as the ratio of raw data increases, which means that a flexible trade-off between translation quality and complexity of data can be realized.
Our Method Reduces Repetition. We also measure the percentage of repeated tokens to analyze whether our method can reduce the occurrence of repetition, which is a typical mistake caused by the multi-modality problem. As shown in Table 4, exposing raw data during training further reduces the token repetition ratio. Although our data contain more modes than fully distilled data, they still achieve a better result. We attribute the improvement to learning directly from the authentic distribution, which exhibits better word interdependencies and fewer mistakes.
**Long Sentences Benefit More.** Figure 3 presents the BLEU scores on sentences of different lengths. As seen, longer sentences benefit more from our selective knowledge distillation. Intuitively, long sentences may contain more mistakes after distillation; thus, learning from authentic data can help the NAT student avoid or correct these mistakes and strengthen its ability to model long sentences. We also find that the performance drops slightly on sentences with fewer than ten tokens. As shown in Table 5, shorter sentences have higher average scores and are thus exposed to the NAT student for a longer period. In such a case, long-term exposure to raw data may confuse the model's training, as it suffers from the multi-modality of the raw data.
### Ablation Study
**Effects of Threshold \(T\).** We further analyze the effects of threshold \(T\) in Figure 4. We fix the threshold \(T\) so
\begin{table}
\begin{tabular}{c c c c} \hline \hline
**Model** & **En-De** & **De-En** & **En-Ro** \\ \hline
**Standard KD** & 1.06\% & 0.56\% & 0.80\% \\
**Selective KD** & 0.82\% & 0.38\% & 0.64\% \\ \hline \hline \end{tabular}
\end{table}
Table 4: Word repetition ratio of GLAT+CTC on WMT14 En-De/De-En and WMT16 En-Ro.
Figure 3: BLEU scores of GLAT+CTC for examples of different lengths on WMT14 En-De.
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Length** & **Score** & **Exposure Period** \\ \hline \(<10\) & 0.826 & 71.0\% \\ \([10,20)\) & 0.740 & 56.6\% \\ \([20,30)\) & 0.696 & 49.3\% \\ \([30,40)\) & 0.680 & 46.6\% \\ \([40,50)\) & 0.670 & 45.1\% \\ \([50,60)\) & 0.658 & 43.0\% \\ \(\geq 60\) & 0.644 & 40.6\% \\ \hline \hline \end{tabular}
\end{table}
Table 5: Average score and exposure period for raw translations of different lengths on WMT14 En-De with a GLAT+CTC evaluator. The exposure period is the percentage of updates during which the raw translation can be directly learned.
Figure 2: \(C(d)\) and \(S(d)\) on 1M pairs randomly sampled from WMT14 En-De. We set \(T=[0.4,0.5,0.6,0.7,0.8,0.9,1.01]\) for the experiments, and the ratio of raw data=\([0.98,0.91,0.75,0.51,0.28,0.10,0.00]\) respectively with a GLAT+CTC evaluator. _Selected Raw_ is the set of raw sentences selected under \(T\), while _Replaced Raw_ is the set of raw sentences to be distilled. We concatenate _Replaced Raw_ after distillation and _Selected Raw_ to get _Training Data_, which is the data exposed to the NAT student during training. We neglect \(C(d)\) and \(S(d)\) when there is not enough data for analysis.
that the training data remains unchanged during the training process. The model can achieve a significant improvement (+2.4 BLEU) by distilling only 5% of the training data. We attribute this phenomenon to the effectiveness of our data selection process, which can filter out the translations that greatly complicate the training data. The growth in performance becomes much slower as the ratio of distilled translations increases. Another finding is that the model trained on 80%-distilled data slightly outperforms the one trained on fully distilled data. According to Zhou et al. (2019), a potential explanation is that the complexity of the 80%-distilled data is more suitable for the capacity of the GLAT+CTC architecture. The dynamic threshold outperforms all fixed threshold settings, demonstrating the advantage of our hard-to-easy strategy.
Model Initialization. To study how model initialization influences our method, we initialize the GLAT+CTC student with the parameters of the teacher trained after \(t\) updates, where \(t\) ranges from 25k to 300k with a step of 25k. We find that initialization with the teacher trained after only 25k updates, when the improvement on the validation set begins to slow down, achieves the best performance (26.82 BLEU), but the performance gap between these differently initialized models is negligible. This suggests that the improvement of our method does not come from a longer training process (initialization + training). However, removing teacher initialization causes a degradation of 0.47 BLEU. We believe that transferring some basic knowledge from the teacher frees the student from learning everything from scratch on the more challenging raw data, enabling the student to focus on the knowledge that is missing from the distilled data.
## 5 Related Work
Non-autoregressive Machine Translation. Gu et al. (2018) first proposed the Non-Autoregressive Transformer (NAT) for machine translation, which significantly boosts inference speed by generating the outputs in parallel. Despite the efficiency, NAT still lags behind AT in performance. Various methods have been proposed to bridge the performance gap. A line of work proposes to enhance the decoder inputs of NAT (Lee et al., 2018; Wei et al., 2019; Wang et al., 2019). Another branch of work proposes to model the interdependencies between target outputs, which are explicitly missing in vanilla NAT (Ghazvininejad et al., 2019; Qian et al., 2021). In addition, a series of works takes latent variables as inputs to model the target-side information (Kaiser et al., 2018; Ma et al., 2019; Akoury et al., 2019; Bao et al., 2021). These lines of work focus on model architecture and training methods, so they can be easily combined with our model-agnostic method.
Training Data Manipulation. Closer to our work is the thread of studies on manipulating training data for NAT. Zhou et al. (2019) show that sequence-level knowledge distillation (Kim and Rush, 2016) reduces the complexity of training data and propose several methods to adjust the complexity of distilled data in order to match the model's capacity. Sun and Yang (2020) jointly optimize AT and NAT models to remove the multi-modality in target sentences. Shao et al. (2022) generate several high-quality reference translations and select the most suitable candidates by comparing them with the NAT outputs. Some recent studies show that distilled data has side effects, such as leading to more errors in predicting low-frequency words (Ding et al., 2021). To solve this problem, Ding et al. (2021) propose to pretrain NAT models on raw data, which is closely related to our work. Our method follows the idea of exposing raw data to NAT, but differs from theirs by introducing an NAT evaluator to assess each raw translation. By changing the ratio of raw sentences in the training data, we can configure the complexity of data during training and benefit more from raw data by exposing some raw translations for a longer period.
Curriculum Learning. Our work adopts a hard-to-easy strategy for training NAT models by decreasing the ratio of raw data during training, which is contrary to curriculum learning (Bengio et al., 2009) in spirit. Curriculum learning methods train machine learning models from easy to hard data, but Braun et al. (2017) showed that learning from hard to easy can also be effective. They conducted experiments on automatic speech recognition systems and used the signal-to-noise ratio (SNR) to create a hard-to-easy curriculum. Compared with the opposite ordering of the examples from easy to hard, the hard-to-easy strategy provided better results.
## 6 Conclusion
In this paper, we propose selective knowledge distillation to tackle error propagation from the autoregressive teacher in standard knowledge distillation for NAT models. Specifically, we employ an NAT evaluator to progressively replace the targets from distilled data with raw data for training NAT students, enabling them to benefit from both the high-quality raw data and the easy-to-learn distilled data. Experimental results validate that our approach can effectively improve performance on machine translation tasks. Extensive analyses also reveal that an effective data selection strategy has great potential to improve the performance.
Figure 4: Performance of GLAT+CTC on WMT14 En-De with fixed thresholds and a dynamic threshold (0.4\(\rightarrow\)1.0).
## Acknowledgements
We would like to thank the anonymous reviewers for their insightful comments. Shujian Huang is the corresponding author. This work is supported by National Science Foundation of China (No. 62176120), the Liaoning Provincial Research Foundation for Basic Research (No. 2022-KF-26-02).
|
2309.12833 | Model-based causal feature selection for general response types | Discovering causal relationships from observational data is a fundamental yet
challenging task. Invariant causal prediction (ICP, Peters et al., 2016) is a
method for causal feature selection which requires data from heterogeneous
settings and exploits that causal models are invariant. ICP has been extended
to general additive noise models and to nonparametric settings using
conditional independence tests. However, the latter often suffer from low power
(or poor type I error control) and additive noise models are not suitable for
applications in which the response is not measured on a continuous scale, but
reflects categories or counts. Here, we develop transformation-model (TRAM)
based ICP, allowing for continuous, categorical, count-type, and
uninformatively censored responses (these model classes, generally, do not
allow for identifiability when there is no exogenous heterogeneity). As an
invariance test, we propose TRAM-GCM based on the expected conditional
covariance between environments and score residuals with uniform asymptotic
level guarantees. For the special case of linear shift TRAMs, we also consider
TRAM-Wald, which tests invariance based on the Wald statistic. We provide an
open-source R package 'tramicp' and evaluate our approach on simulated data and
in a case study investigating causal features of survival in critically ill
patients. | Lucas Kook, Sorawit Saengkyongam, Anton Rask Lundborg, Torsten Hothorn, Jonas Peters | 2023-09-22T12:42:48Z | http://arxiv.org/abs/2309.12833v4 | # Model-based causal feature selection for general response types
###### Abstract
Discovering causal relationships from observational data is a fundamental yet challenging task. Invariant causal prediction (ICP, Peters et al., 2016) is a method for causal feature selection which requires data from heterogeneous settings and exploits that causal models are invariant. ICP has been extended to general additive noise models and to nonparametric settings using conditional independence tests. However, the latter often suffer from low power (or poor type I error control) and additive noise models are not suitable for applications in which the response is not measured on a continuous scale, but reflects categories or counts. Here, we develop transformation-model (tram) based ICP, allowing for continuous, categorical, count-type, and uninformatively censored responses (these model classes, generally, do not allow for identifiability when there is no exogenous heterogeneity). As an invariance test, we propose tram-GCM based on the expected conditional covariance between environments and score residuals with uniform asymptotic level guarantees. For the special case of linear shift trams, we also consider tram-Wald, which tests invariance based on the Wald statistic. We provide an open-source R package **tramicp** and evaluate our approach on simulated data and in a case study investigating causal features of survival in critically ill patients.
## 1 Introduction
### Motivation
Establishing causal relationships from observational data is a common goal in several scientific disciplines. However, systems are often too complex to allow for recovery of the full causal structure underlying the data-generating process. In this work, we consider the easier task of uncovering the causal drivers of a particular response variable of interest. We present methods, theoretical
results and user-friendly software for model-based causal feature selection, where the response may represent a binary, ordered, count, or continuous outcome and may additionally be uninformatively censored. We propose tramicp for causal feature selection, which is based on invariant causal prediction (ICP, Peters et al., 2016) and a flexible class of regression models, called transformation models (trams, Hothorn et al., 2018). tramicp relies on data from heterogeneous environments and the assumption, that the causal mechanism of the response given its direct causes (direct w.r.t. the considered sets of covariates) is correctly specified by a tram and does not change across those environments (Haavelmo, 1943; Frisch et al., 1948; Aldrich, 1989; Pearl, 2009; Scholkopf et al., 2012). The causal tram will then produce score residuals (residuals defined specifically for trams) that are invariant across the environments. We propose an invariance test based on the expected conditional covariance between the score residuals and the environments given a subset \(S\) of the covariates, called tram-GCM. With this invariance test, tramicp recovers a subset of the direct causes with high probability, by fitting a tram for all subsets of covariates, computing score residuals, testing whether those score residuals are uncorrelated with the residualized environments and lastly, intersecting all subsets for which the null hypothesis of invariance was not rejected. For the special case of additive linear models, we propose another invariance test, tram-Wald, based on the Wald statistic for testing whether main and interaction effects involving the environments are zero.
We illustrate the core ideas of tramicp in the following example with a binary response and the logistic regression model (McCullagh and Nelder, 2019), which is a tram. We defer all details on how trams and score residuals are defined to Section 2 and describe the invariance tests in Section 3.
**Example 1** (Invariance in binary generalized linear models).: Consider the following structural causal model (Pearl, 2009) over \((Y,X^{1},X^{2},E)\):
\[E\coloneqq N_{E},\qquad X^{1}\coloneqq-E+N_{1},\qquad Y\coloneqq\mathds{1}(0.5X^{1}>N_{Y}),\qquad X^{2}\coloneqq Y+0.8E+N_{2},\]
where \(N_{E}\sim\text{Bernoulli}(0.5)\), \(N_{1}\sim\text{N}(0,1)\), \(N_{2}\sim\text{N}(0,1)\), \(N_{Y}\) are jointly independent noise variables and \(N_{Y}\) follows a standard logistic distribution. Here, \(E\) encodes two environments in which the distribution of \(X^{1}\) and \(X^{2}\) differ, but the causal mechanism of \(Y\) given its direct causes \(X^{1}\) does not change.
Let us assume that both the above structural causal model and its implied structure is unknown and that we observe an i.i.d. sample \(\{(e_{i},x_{i}^{1},x_{i}^{2},y_{i})\}_{i=1}^{n}\) from the joint distribution of \((E,X^{1},X^{2},Y)\). We further know that \(Y\) given its direct causes is correctly specified by a logistic regression. All remaining conditionals do not need to satisfy any model assumptions. Our task is now to infer (a subset of) the direct causes of \(Y\).
To do so, for each subset of the covariates \(\mathbf{X}^{S}\), \(S\subseteq\{1,2\}\) (i.e., for \(\emptyset\), \(\{1\}\), \(\{2\}\) and \(\{1,2\}\)), we now (i) fit a binary logistic regression model, (ii) compute the score residuals \(y_{i}-\hat{\mathbb{P}}(Y=1\mid\mathbf{X}^{S}=\mathbf{x}^{S}_{i})\) (from the logistic regression) and residualized environments \(e_{i}-\hat{\mathbb{P}}(E=1\mid\mathbf{X}^{S}=\mathbf{x}^{S}_{i})\) (via a random forest), and (iii) test whether the two residuals are correlated. Figure 1 shows the residuals obtained in step (ii) for each non-empty subset of the covariates.
In this example, even though the model using \(\{X^{1},X^{2}\}\) achieves higher predictive accuracy than the model using the causal parent \(\{X^{1}\}\), only the model \(Y\mid X^{1}\) is stable across the environments. If more than one set is invariant, one can take the intersection of the invariant sets to obtain a subset of the direct causes of \(Y\)(Peters et al., 2016).
With our openly available R package **tramicp** ([https://CRAN.R-project.org/package=tramicp](https://CRAN.R-project.org/package=tramicp)), the analysis in this example can be reproduced with the following code, where df is a data frame with 500 independent observations from the structural causal model above.
R> library("tramicp") R> icp <- glmICP(Y ~ X1 + X2, data = df, env = ~ E, family = "binomial") R> pvalues(icp, which = "set") Empty X1 X2 X1+X2
1.82e-02 5.10e-01 4.54e-09 2.22e-03
### Related work
In structural causal models (Pearl, 2009), several algorithms exist to tackle the problem of causal discovery, i.e., learning the causal graph from data, including constraint-based and score-based
Figure 1: Invariance in binary generalized linear models. By the data generating mechanism in Example 1, we know that the conditional distribution of \(Y\) given its direct cause \(X^{1}\) does not change across the two environments \(E=0\) and \(E=1\). When predicting both \(Y\) and \(E\) from the three sets of covariates \(\{1\}\), \(\{2\}\) and \(\{1,2\}\), the resulting residuals are uncorrelated only when conditioning on the invariant set \(\{1\}\). The \(p\)-values of the invariance test we introduce in Section 3.1.1 are shown in the panel strips for the corresponding subset of covariates (we have also added linear model fits, see blue lines). The empty set is omitted, since the score residuals and residualized environments only take two values.
methods (Spirtes et al., 2000; Chickering, 2002; Pearl, 2009; Glymour et al., 2019). Without further restrictions on the structural assignments and faithfulness, one can hope to recover the causal graph up to the Markov equivalence class (Verma and Pearl, 1990; Andersson et al., 1997; Tian and Pearl, 2001), for which several algorithms have been proposed based on observational data, interventional data, or a combination of both (Spirtes et al., 2000; Chickering, 2002; Castelo and Kocka, 2003; He and Geng, 2008; Hauser and Buhlmann, 2015). However, in many real-world applications learning the full causal graph may be too ambitious or unnecessary for tackling the problem at hand. As opposed to causal discovery, causal feature selection aims to identify the direct causes of a given variable of interest (the response) from potentially many measured covariates, instead of the full graph (Guyon et al., 2007).
Invariant causal prediction (ICP) is an approach to causal feature selection which exploits invariance of the conditional distribution of a response given its direct causes under perturbations of the covariates (Peters et al., 2016). ICP can be formulated from a structural causal modeling as well as a potential outcomes perspective (Hernan and Robins, 2010). In contrast to constraint- and score-based algorithms, ICP requires a specific response variable and data from heterogeneous environments.
ICP builds on the concept of invariance and can generally be formulated as conditional independence between the response and the environments given a candidate set (Heinze-Deml et al., 2018). Thus, nonparametric conditional independence tests (Fukumizu et al., 2007; Zhang et al., 2011; Candes et al., 2018; Strobl et al., 2019; Berrett et al., 2019) can, in principle, always be applied. However, conditional independence testing has been shown to be an arbitrarily hard problem, as there is no test that is simultaneously level and has non-trivial power (Shah and Peters, 2020).
As an alternative to conditional independence testing, model-based formulations of ICP have been formulated for linear (Peters et al., 2016) and non-linear additive noise models ("invariant residual distribution test" proposed in Heinze-Deml et al., 2018). Diaz et al. (2022) use an "invariant target prediction" test from Heinze-Deml et al. (2018) for testing invariance with a binary response by nonparametrically comparing out-of-sample area under the receiver operating characteristic (ROC) curve (AUC). Under correct model specification, model-based ICP can have considerably higher power than its nonparametric alternative. Model-based ICP has been extended to generalized linear models (GLMs, see dicussion in Peters et al., 2016) and sequential data (Pfister et al., 2019). ICP for GLMs and additive and multiplicative hazard models has been investigated in Laksafoss (2020).
Many applications feature complex response types, such as ordinal scales, survival times, or counts, and the data-generating mechanism can seldom be assumed to be additive in the noise. This is reflected in the most common model choices for these responses, namely proportional odds logistic (McCullagh, 1980; Tutz, 2011), Cox proportional hazards (Cox, 1972), and generalized linear models (McCullagh and Nelder, 2019), which do not assume additive noise in general. Together, non-continuous responses and non-additive noise render many causal feature selection algorithms inapplicable. Moreover, proposed extensions to GLMs and hazard-based models rely on case-specific
definitions of invariance and thus a unified view on linear, generalized linear, hazards, and general distributional regression is yet to be established.
In practice, a model-based approach can be desirable, because it leads to interpretable effect estimates, such as odds or hazard ratios. However, there is a trade-off between model intelligibility and misspecification. Many commonly applied regression models are not closed under marginalization or the inclusion or exclusion of covariates that are associated with the response (collapsibility, Greenland et al., 1999; Didelez and Stensrud, 2022).
### Summary
Formally, we are interested in discovering the direct causes of a response \(Y\in\mathcal{Y}\subseteq\mathbb{R}\) among a potentially large number of covariates \(\mathbf{X}\in\mathcal{X}_{1}\times\cdots\times\mathcal{X}_{d}\subseteq\mathbb{R}^{d}\). Consider a set \(S_{*}\subseteq\{1,\ldots,d\}\) (the reader may think about the "direct causes" of \(Y\)) and assume that \(Y\mid\mathbf{X}^{S_{*}}\) is correctly specified by a tram while all other conditionals remain unspecified. In Section 2.2, we define structural causal trams and there, \(S_{*}\) will be the set of causal parents of \(Y\). Thus, from now on, we refer to \(S_{*}\) as the causal parents of \(Y\). trams characterize the relationship between features and response via the conditional cumulative distribution function (CDF) \(F_{Y\mid\mathbf{X}^{S_{*}}=\mathbf{x}^{S_{*}}}(y)\coloneqq\mathbb{P}(Y\leq y\mid\mathbf{X}^ {S_{*}}=\mathbf{x}^{S_{*}})\) on the quantile-scale of a user-specified CDF \(F_{Z}\). More specifically, when using trams, one models the increasing function \(h(\mathbf{\cdot}\mid\mathbf{x}^{S_{*}})\coloneqq F_{Z}^{-1}\circ F_{Y\mid\mathbf{X}^{S_{*} }=\mathbf{x}^{S_{*}}}(\mathbf{\cdot})\), called a transformation function. The name stems from the fact that for all \(\mathbf{x}^{S_{*}}\) its (generalized) inverse transforms samples of \(Z\) to samples from the conditional distribution \(Y\mid\mathbf{X}^{S_{*}}=\mathbf{x}^{S_{*}}\). Specific choices of \(F_{Z}\) and further modeling assumptions on the functional form of \(h\) give rise to many well-known models (examples below). Throughout the paper we illustrate tramicp with three archetypal responses, namely binary (Ex. 2), count (Ex. 3), and potentially censored survival times (Ex. 4). None of the following examples can be phrased as additive noise models of the form \(Y=f(X)+\varepsilon\) with \(X\perp\!\!\!\perp\varepsilon\). Together with the hardness of conditional independence testing (Shah and Peters, 2020, see also above), this motivates the need for causal feature selection algorithms in more flexible non-additive noise models.
**Example 2** (Binary logistic regression).: The binary logistic regression model (binomial GLM) with \(\mathcal{Y}\coloneqq\{0,1\}\) can be phrased in terms of the conditional distribution \(F_{Y\mid\mathbf{X}^{S_{*}}=\mathbf{x}^{S_{*}}}(0)=\operatorname{expit}(\vartheta-(\mathbf{x}^{S_{*}})^{\top}\mathbf{\beta})\), where \(\operatorname{expit}(\cdot)=\operatorname{logit}^{-1}(\cdot)=(1+\exp(-\cdot))^{-1}\) denotes the standard logistic CDF, and \(\vartheta\) denotes the baseline \((\mathbf{x}^{S_{*}}=0)\) log-odds for belonging to class 0 rather than 1. Here, \(\mathbf{\beta}\) is interpretable as a vector of log odds-ratios. The model can informally be written as \(F_{Y\mid\mathbf{X}^{S_{*}}=\mathbf{x}^{S_{*}}}(y)=\operatorname{expit}(h_{Y}(y)-(\mathbf{x}^{S_{*}})^{\top}\mathbf{\beta})\), where \(h_{Y}(0)\coloneqq\vartheta\) and \(h_{Y}(1)\coloneqq+\infty\). The latter way of writing the model extends to ordered responses with more than two levels \(\mathcal{Y}\coloneqq\{y_{1},y_{2},\ldots,y_{K}\}\) with \(y_{1}<y_{2}<\cdots<y_{K}\), \(h_{Y}(y_{k})\coloneqq\vartheta_{k}\) for all \(k\), \(\vartheta_{k}>\vartheta_{k-1}\) for \(k=2,\ldots,K\), and the convention \(\vartheta_{K}=+\infty\) (see McCullagh, 1980, proportional odds logistic regression).
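To connect this example to standard software output, the following base-R sketch (with simulated data and coefficients chosen purely for illustration) checks that a fitted binomial glm(), which models \(\mathbb{P}(Y=1\mid x)=\operatorname{expit}(a+xb)\), corresponds to the parameterization above via \(\vartheta=-a\) and \(\beta=b\).

    # Relating the binomial GLM fit to the parameterization
    # P(Y = 0 | x) = expit(theta - x * beta).
    set.seed(2)
    x <- rnorm(200)
    y <- rbinom(200, 1, plogis(-0.5 + 1.2 * x))
    fit <- glm(y ~ x, family = binomial)
    theta <- -coef(fit)[1]   # baseline log-odds for class 0
    beta  <- coef(fit)[2]    # log odds-ratio
    # Both expressions give the same estimate of P(Y = 0 | x = 0.3).
    c(plogis(theta - 0.3 * beta),
      1 - predict(fit, data.frame(x = 0.3), type = "response"))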
**Example 3** (Count regression).: Count random variables take values in \(\mathcal{Y}\coloneqq\{0,1,2,\ldots\}\). Without restricting the distribution for \(F_{Y\mid\mathbf{X}^{S_{*}}=\mathbf{x}^{S_{*}}}\) to a parametric family, we can formulate models for the count response via \(F_{Y\mid\mathbf{X}^{S_{*}}=\mathbf{x}^{S_{*}}}(\mathbf{\cdot})=F_{Z}(h_{Y}(\mathbf{\cdot})-(\mathbf{ x}^{S_{*}})^{\top}\mathbf{\beta})\), where \(h_{Y}\) is an increasing step function
(with jumps at points in \(\mathcal{Y}\)) and \(F_{Z}\) a user-specified continuous cumulative distribution function with log-concave density (log-concavity ensures uniqueness of the maximum likelihood estimator, Siegfried and Hothorn, 2020).
**Example 4** (Parametric survival regression).: Understanding the causal relationship between features and patient survival is sought after in many biomedical applications. Let \(Y\) be a (strictly) positive real-valued random variable, i.e., \(\mathcal{Y}\coloneqq\mathbb{R}_{+}\). The Weibull proportional hazards model is defined via \(F_{Y|\mathbf{X}^{S_{*}}=\mathbf{x}^{S_{*}}}(\mathbf{\cdot})=1-\exp(-\exp(\vartheta_{1}+ \vartheta_{2}\log(\mathbf{\cdot})-(\mathbf{x}^{S_{*}})^{\top}\mathbf{\beta}))\), with \(\vartheta_{1}\in\mathbb{R},\vartheta_{2}>0\). Here, \(\mathbf{\beta}\) is interpretable as a vector of log hazard ratios (Kleinbaum and Klein, 2012). The model can be written as \(F_{Y|\mathbf{X}^{S_{*}}=\mathbf{x}^{S_{*}}}(\mathbf{\cdot})=F(h_{Y}(\mathbf{\cdot})-(\mathbf{x}^{S _{*}})^{\top}\mathbf{\beta})\), where \(F(\mathbf{\cdot})\coloneqq 1-\exp(-\exp(\mathbf{\cdot}))\) denotes the standard minimum extreme value distribution and \(h_{Y}(\mathbf{\cdot})\coloneqq\vartheta_{1}+\vartheta_{2}\log(\mathbf{\cdot})\). The Cox proportional hazard model (Cox, 1972) is obtained as an extension of the Weibull model by allowing \(h_{Y}\) to be a step function (with jumps at the observed event times) instead of a log-linear function.
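As a short consistency check (not part of the original derivation, but following directly from the stated CDF and the shift convention \(h_{Y}(\mathbf{\cdot})-(\mathbf{x}^{S_{*}})^{\top}\mathbf{\beta}\)), the implied cumulative hazard factorizes into a baseline term and a covariate term,

\[S(y\mid\mathbf{x}^{S_{*}})=\exp\bigl(-\exp(h_{Y}(y)-(\mathbf{x}^{S_{*}})^{\top}\mathbf{\beta})\bigr),\qquad\Lambda(y\mid\mathbf{x}^{S_{*}})=-\log S(y\mid\mathbf{x}^{S_{*}})=\exp\bigl(h_{Y}(y)\bigr)\exp\bigl(-(\mathbf{x}^{S_{*}})^{\top}\mathbf{\beta}\bigr),\]

so that for any two covariate values \(\mathbf{x}^{S_{*}}\) and \(\tilde{\mathbf{x}}^{S_{*}}\) the hazard ratio \(\lambda(y\mid\mathbf{x}^{S_{*}})/\lambda(y\mid\tilde{\mathbf{x}}^{S_{*}})=\exp\bigl(-(\mathbf{x}^{S_{*}}-\tilde{\mathbf{x}}^{S_{*}})^{\top}\mathbf{\beta}\bigr)\) does not depend on \(y\); the Cox extension only replaces the log-linear \(h_{Y}\) by a step function and leaves this factorization unchanged.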
In all of the above examples, we have assumed that the response given its causal parents is correctly specified by a linear shift tram (see Definition 7 for more details). If conditioning on a set that is not \(S_{*}\) always yielded a model misspecification, one could attempt to identify the set of causal parents by testing, for different sets \(\mathbf{X}^{S}\) of covariates, whether the model for \(Y\mid\mathbf{X}^{S}\) is correctly specified. However, in Proposition 15 below, we prove that, in general, such a procedure does not work. More precisely, there exists a pair of structural causal models such that both induce the same observational distribution, and in both, the response given its causal parents is correctly specified by a (linear shift) tram but the parental sets differ.
In this work, following a line of work in causal discovery (Peters et al., 2016; Meinshausen et al., 2016; Heinze-Deml et al., 2018; Christiansen and Peters, 2020), we instead assume to have access to data from heterogeneous environments. Given such data, we define invariance in trams and propose invariance tests based on the expected conditional covariance between the environments and score residuals (tram-GCM) and an invariance test based on the Wald statistic for linear shift trams in particular (tram-Wald). We prove that the tram-GCM test is uniformly asymptotically level \(\alpha\) for any \(\alpha\in(0,1)\) (Theorem 20) and demonstrate empirically that it has power comparable to or higher than nonparametric conditional independence testing. In the context of the result on the hardness of assumption-free conditional independence testing for continuous distributions (Shah and Peters, 2020), our theoretical results show that, under mild assumptions on the relationship between \(\mathbf{E}\) and \(\mathbf{X}\), the model class of trams can be sufficiently restrictive to allow for useful conditional independence tests.
The rest of this paper is structured as follows. Section 2.1 gives a technical introduction to transformation models which can be skipped at first reading. We introduce structural causal trams in Section 2.2 and show that in this class, the set of causal parents is, in general, not identified (Section 2.3). In Section 3, we present the proposed tram-GCM and tram-Wald invariance tests and their theoretical guarantees. We also describe our readily available R implementation of tramicp. Afterwards, we present a simulation study featuring the examples from above (Section 4) and the
behaviour of tramicp under model misspecification.
## 2 Using transformation models for causal inference
Transformation models, as introduced by Box and Cox (1964) in their earliest form, are models for the conditional cumulative distribution function of a response given covariates (Doksum, 1974; Bickel and Doksum, 1981; Cheng et al., 1995; Hothorn et al., 2014). trams transform the response conditional on covariates such that the transformed response can be modelled on a fixed, continuous latent scale. Given data and a finite parameterization, the transformation can be estimated via maximum likelihood (Hothorn et al., 2018). We formally define trams as a class of non-linear non-additive noise models depending on the sample space of both response and covariates. Our treatment of trams may appear overly mathematical; however, the formalism is needed to formulate and prove the identification result (see Proposition 15 in Section 2.3) and the uniform asymptotic level guarantee for the tram-GCM invariance test (Theorem 20). A more intuitive introduction to trams can be found in Hothorn et al. (2018), for example. We then embed trams into a causal modeling framework, using structural causal models (SCMs, Pearl, 2009; Bongers et al., 2021). We also adapt standard results from parametric (Hothorn et al., 2018) and semi-parametric (McLain and Ghosh, 2013) maximum likelihood estimation which enable us to obtain results on consistency and asymptotic normality, which are exploited by the proposed invariance tests.
### Transformation models
Let \(\overline{\mathbb{R}}\coloneqq\mathbb{R}\cup\{-\infty,+\infty\}\) denote the extended real line. Throughout the paper, let \(\mathcal{Z}\) denote the set of functions \(F_{Z}:\overline{\mathbb{R}}\to[0,1]\) that are (i) strictly increasing with \(\lim_{x\to-\infty}F_{Z}(x)=0\), \(\lim_{x\to\infty}F_{Z}(x)=1\), (ii) three-times differentiable and have a log-concave derivative \(f_{Z}=F_{Z}^{\prime}\) when restricted to \(\mathbb{R}\), and (iii) satisfy \(F_{Z}(-\infty)=0\) and \(F_{Z}(+\infty)=1\). We call \(\mathcal{Z}\) the set of _extended differentiable cumulative distribution functions_. Given that a CDF \(F:\mathbb{R}\to\mathbb{R}\) satisfies (i) and (ii), we may add (iii) and refer to the resulting function as an _extended CDF_. For instance, the extended standard logistic CDF is given by \(F_{\mathrm{SL}}(z)=(1+\exp(-z))^{-1}\) for all \(z\in\mathbb{R}\) and \(F_{\mathrm{SL}}(-\infty)=0\) and \(F_{\mathrm{SL}}(+\infty)=1\). Besides \(F_{\mathrm{SL}}\), in our applications, we consider the extended versions of the standard normal CDF \(\Phi\), and the standard minimum extreme value CDF \(F_{\mathrm{minEV}}:z\mapsto 1-\exp(-\exp(z))\). By slight abuse of notation, we use the same letters \(\Phi,F_{\mathrm{SL}},F_{\mathrm{minEV}}\), for the extended CDFs. In general, specification of a transformation model requires choosing a particular \(F_{Z}\in\mathcal{Z}\). Further, for a symmetric positive semi-definite matrix \(A\), let \(\lambda_{\min}(A)\) denote its smallest eigenvalue and \(\left\|A\right\|_{\mathrm{op}}\) denote its operator norm. For all \(n\in\mathbb{N}\), we write \([n]\) as shorthand for \(\{1,\ldots,n\}\).
We call a function \(h:\mathbb{R}\to\overline{\mathbb{R}}\)_extended right-continuous and increasing_ (ERCI) on \(\mathcal{Y}\subseteq\mathbb{R}\) if (i) it is right-continuous and strictly increasing on \(\mathcal{Y}\) and fulfills \(h(\min\mathcal{Y})>-\infty\) (if \(\min\mathcal{Y}\) exists), (ii) for all \(y<\inf\mathcal{Y}\), we have \(h(y)=-\infty\), (iii) for all \(y>\sup\mathcal{Y}\), we have \(h(y)=+\infty\), (iv) for all \(t\in(\inf\mathcal{Y},\sup\mathcal{Y})\setminus\mathcal{Y}\), we have \(h(t)=h(\underline{t})\), where \(\underline{t}\coloneqq\sup\{v\in\mathcal{Y}:v<t\}\) and (v)
\(\lim_{v\rightarrow-\infty}h(v)=-\infty\) and \(\lim_{v\rightarrow\infty}h(v)=\infty\). Condition (iv) is needed to ensure that \(h\) is piecewise constant outside of \(\mathcal{Y}\). Finally, for a function \(f:\overline{\mathbb{R}}\rightarrow\mathbb{R}\), we denote the derivative \(f^{\prime}:\mathbb{R}\rightarrow\mathbb{R}\) s.t. for all \(x\in\mathbb{R}\), \(f^{\prime}(x)=\frac{\mathrm{d}}{\mathrm{d}u}f(u)|_{u=x}\). We are now ready to define the class of transformation models.
**Definition 5** (Transformation model).: Let \(\mathcal{Y}\subseteq\mathbb{R}\) and \(\mathcal{X}\coloneqq\mathcal{X}_{1}\times\ldots\times\mathcal{X}_{d}\subseteq \mathbb{R}^{d}\), where for all \(i\), \(\mathcal{X}_{i}\subseteq\mathbb{R}\). The set of all _transformation functions_ on \(\mathcal{Y}\times\mathcal{X}\) is defined as
\[\mathcal{H}^{*}_{\mathcal{Y},\mathcal{X}}\coloneqq\bigg{\{}h:\mathbb{R} \times\mathcal{X}\rightarrow\overline{\mathbb{R}}\,\big{|}\,\forall\mathbf{x} \in\mathcal{X},\ h(\mathbf{\cdot}\mid\mathbf{x})\ \text{is ERCI on}\ \mathcal{Y}\bigg{\}}.\]
Then, for a fixed _error distribution_\(F_{Z}\in\mathcal{Z}\) and a set of transformation functions \(\mathcal{H}_{\mathcal{Y},\mathcal{X}}\subseteq\mathcal{H}^{*}_{\mathcal{Y}, \mathcal{X}}\), the _family of_ \(\text{\sc trams}\ \mathcal{M}(F_{Z},\mathcal{Y},\mathcal{X},\mathcal{H}_{\mathcal{Y}, \mathcal{X}})\) is defined as the following set of conditional cumulative distribution functions1 (see also Definition 2 in Hothorn et al., 2018):
Footnote 1: In Proposition 27 in Appendix E2, we show that \(\mathcal{M}\) indeed only contains CDFs.
\[\mathcal{M}(F_{Z},\mathcal{Y},\mathcal{X},\mathcal{H}_{\mathcal{Y}, \mathcal{X}})\coloneqq\big{\{}F_{Y|\mathbf{X}=\cdot}:\mathbb{R}\times\mathcal{X} \rightarrow[0,1]\,\big{|}\] \[\exists h\in\mathcal{H}_{\mathcal{Y},\mathcal{X}}:\forall\mathbf{x} \in\mathcal{X}\ \forall y\in\mathbb{R},\ F_{Y|\mathbf{X}=\mathbf{x}}(y)=F_{Z}(h(y\mid\mathbf{x}))\big{\}}.\]
As such, a single tram is fully specified by \((F_{Z},h)\), \(F_{Z}\in\mathcal{Z},h\in\mathcal{H}_{\mathcal{Y},\mathcal{X}}\). The condition that for all \(\mathbf{x}\in\mathcal{X}\), \(h(\mathbf{\cdot}\mid\mathbf{x})\) is ERCI on \(\mathcal{Y}\) ensures that the support of the induced conditional distribution specified by \(F_{Y|\mathbf{X}=\mathbf{x}}\) is \(\mathcal{Y}\). Further, for all \(\mathbf{x}\in\mathcal{X}\) and \(z\in\overline{\mathbb{R}}\), we write \(h^{-1}(z\mid\mathbf{x})\coloneqq\inf\{y\in\mathcal{Y}:z\leq h(y\mid\mathbf{x})\}\) for the inverse transformation function.
The inverse transformation function \(h^{-1}(\mathbf{\cdot}\mid\mathbf{x})\) at a given \(\mathbf{x}\) can be interpreted analogously to a quantile function: Given some \(\mathbf{X}=\mathbf{x}\), we can obtain an observation from \(F_{Y|\mathbf{X}=\mathbf{x}}\) by sampling an observation from \(F_{Z}\) and passing it through \(h^{-1}(\mathbf{\cdot}\mid\mathbf{x})\).
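The following base-R sketch illustrates this sampling interpretation for a simple linear shift tram with standard logistic \(F_{Z}\), identity baseline transformation \(h_{Y}(y)=y\) and shift \(f(x)=2x\); all of these choices are made purely for illustration.

    # Sampling from a shift tram F_{Y|X=x}(y) = F_Z(h_Y(y) - f(x)) by drawing
    # Z ~ F_Z and applying the inverse transformation h^{-1}(. | x).
    set.seed(1)
    n <- 1e4
    x <- 0.5
    z <- rlogis(n)      # Z ~ standard logistic F_Z
    y <- z + 2 * x      # h^{-1}(z | x) = z + f(x) since h_Y is the identity

    # Check: the empirical CDF of y agrees with plogis(y - 2 * x) up to
    # Monte Carlo error.
    yy <- seq(-5, 7, by = 0.5)
    max(abs(ecdf(y)(yy) - plogis(yy - 2 * x)))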
In statistical modelling, it is common to additionally assume additivity of the effects of \(\mathbf{X}\) on a specific scale. For instance, in linear regression the covariates enter as a linear predictor on the scale of the conditional mean. In this work, we restrict ourselves to the class of shift \(\text{\sc trams}\) in which additivity is assumed on the scale of the transformation function.
**Definition 6** (Shift \(\text{\sc trams}\)).: Let \(\mathcal{Y}\), \(\mathcal{X}\) and \(F_{Z}\in\mathcal{Z}\) be as in Definition 5. Further, let \(\mathcal{F}\coloneqq\{f:\mathcal{X}\rightarrow\mathbb{R}\mid f\ \text{measurable}\}\) and \(\mathcal{H}_{\mathcal{Y}}\coloneqq\{h_{Y}:\mathbb{R}\rightarrow\overline{ \mathbb{R}}\mid h_{Y}\ \text{is ERCI on}\ \mathcal{Y}\}\). Let the set of _shift transformation functions_ be defined as
\[\mathcal{H}^{\text{shift}}_{\mathcal{Y},\mathcal{X}}\coloneqq\left\{h\in \mathcal{H}^{*}_{\mathcal{Y},\mathcal{X}}\mid\exists h_{Y}\in\mathcal{H}_{ \mathcal{Y}},\ f\in\mathcal{F}:\forall\mathbf{x}\in\mathcal{X},\ h(\mathbf{\cdot}\mid \mathbf{x})=h_{Y}(\mathbf{\cdot})-f(\mathbf{x})\right\}.\]
Then, \(\mathcal{M}(F_{Z},\mathcal{Y},\mathcal{X},\mathcal{H}^{\text{shift}}_{ \mathcal{Y},\mathcal{X}})\) denotes the family of _shift_\(\text{\sc trams}\) and a tram\(F_{Z}\circ h\) is called _shift_\(\text{\sc tram}\) iff \(h\in\mathcal{H}^{\text{shift}}_{\mathcal{Y},\mathcal{X}}\). Further, any \(h_{Y}\in\mathcal{H}_{\mathcal{Y}}\) is referred to as a _baseline transformation_.
We next introduce the subset of linear shift \(\text{\sc trams}\) in which the covariates enter as a linear predictor.
**Definition 7** (Linear shift trams).: Consider shift trams specified by \(F_{Z},\mathcal{Y},\mathcal{X},\mathcal{F},\mathcal{H}^{\text{shift}}_{\mathcal{Y},\mathcal{X}}\), as in Definition 6. Let \(\mathbf{b}:\mathcal{X}\to\mathbb{R}^{b}\) be a finite collection of basis transformations and define \(\mathcal{F}_{\mathbf{b}}\coloneqq\{f\in\mathcal{F}\mid\exists\mathbf{\beta}\in\mathbb{ R}^{b}\text{ s.t. }f(\mathbf{\cdot})=\mathbf{b}(\mathbf{\cdot})^{\top}\mathbf{\beta}\}\). The set of _linear shift transformation functions w.r.t. \(\mathbf{b}\)_ is defined as
\[\mathcal{H}^{\text{linear}}_{\mathcal{Y},\mathcal{X}}(\mathbf{b})\coloneqq\left\{h \in\mathcal{H}^{\text{shift}}_{\mathcal{Y},\mathcal{X}}\,\middle|\,\exists h_{ Y}\in\mathcal{H}_{\mathcal{Y}},\ f\in\mathcal{F}_{\mathbf{b}}:\forall\mathbf{x}\in\mathcal{X}:h(\mathbf{\cdot} \mid\mathbf{x})=h_{Y}(\mathbf{\cdot})-f(\mathbf{x})\right\}.\]
Then, \(\mathcal{M}(F_{Z},\mathcal{Y},\mathcal{X},\mathcal{H}^{\text{linear}}_{ \mathcal{Y},\mathcal{X}}(\mathbf{b}))\) denotes the family of _linear shift_ trams_ w.r.t. \(\mathbf{b}\)_. Further, a tram \(F_{Z}\circ h\) is called _linear shift_ tram_ w.r.t. \(\mathbf{b}\)_ iff \(h\in\mathcal{H}^{\text{linear}}_{\mathcal{Y},\mathcal{X}}(\mathbf{b})\). For the special case of \(\mathbf{b}:\mathbf{x}\mapsto\mathbf{x}\), we write \(\mathcal{H}^{\text{linear}}_{\mathcal{Y},\mathcal{X}}\) and refer to the class and its members as _linear shift_ trams.
Estimation and inference in trams can be based on the log-likelihood function - if it exists. The following assumption ensures that this is the case.
**Assumption 1**.: We have \(\mathcal{H}_{\mathcal{Y},\mathcal{X}}\subseteq\mathcal{H}^{\text{shift}}_{ \mathcal{Y},\mathcal{X}}\). Furthermore, if \(\mathcal{Y}\) is uncountable, \(F_{Z},\mathcal{X},\mathcal{H}_{\mathcal{Y},\mathcal{X}}\) are such that for all \(\mathbf{x}\in\mathcal{X}\) and \(h\in\mathcal{H}_{\mathcal{Y},\mathcal{X}}\),
\[f_{Y\mid\mathbf{X}=\mathbf{x}}(\mathbf{\cdot};h)\coloneqq F^{\prime}_{Z}(h(\mathbf{\cdot}\mid \mathbf{x}))h^{\prime}(\mathbf{\cdot}\mid\mathbf{x}), \tag{1}\]
where \(h^{\prime}(y\mid\mathbf{x})\coloneqq\frac{\mathrm{d}}{\mathrm{d}\upsilon}h(\upsilon \mid\mathbf{x})|_{\upsilon=y}\), is well-defined and a density (w.r.t. Lebesgue measure) of the conditional CDF induced by the tram.
Assumption 1 allows us to define (strictly positive) canonical conditional densities with respect to a fixed measure that we denote by \(\mu\): If \(\mathcal{Y}\) is countable, we let \(\mu\) denote the counting measure on \(\mathcal{Y}\) and define the canonical conditional density by \(f_{Y\mid\mathbf{X}=\mathbf{x}}(y;h)\coloneqq F_{Z}(h(y\mid\mathbf{x}))-F_{Z}(h(y^{-}\mid\mathbf{x}))\), where \(y^{-}\coloneqq\sup\{\upsilon\in\mathcal{Y}:\upsilon<y\}\)2. If \(\mathcal{Y}\) is uncountable, we let \(\mu\) denote the Lebesgue measure restricted to \(\mathcal{Y}\) and the canonical conditional density is then defined by (1). In either case, \(\mathcal{H}_{\mathcal{Y},\mathcal{X}}\subseteq\mathcal{H}^{\text{shift}}_{\mathcal{Y},\mathcal{X}}\) ensures that for all \(\mathbf{x}\) and \(y\in\mathcal{Y}\), \(f_{Y\mid\mathbf{X}=\mathbf{x}}(y;h)>0\). Thus, for \((F_{Z},\mathcal{Y},\mathcal{X},\mathcal{H}_{\mathcal{Y},\mathcal{X}})\) satisfying Assumption 1, we can define the tram log-likelihood as \(\ell:\mathcal{H}_{\mathcal{Y},\mathcal{X}}\times\mathcal{Y}\times\mathcal{X}\to\mathbb{R}\) with
Footnote 2: We adopt the convention that the supremum of the empty set is \(-\infty\).
\[\ell(h;y,\mathbf{x})\coloneqq\log f_{Y\mid\mathbf{X}=\mathbf{x}}(y;h).\]
When applying ICP to linear additive noise models, invariance can be formulated as uncorrelatedness between residuals and environments. In trams, however, the response can be categorical, reducing the usefulness of classical residuals. Instead, score residuals (Lagakos, 1981; Korepanova et al., 2020; Kook et al., 2022) are a natural choice for testing invariance of trams. Score residuals were first introduced by Lagakos (1981) for multiplicative hazard models (see also Korepanova et al., 2020, for non-multiplicative hazard models) and extended to linear shift trams by Kook et al. (2022, Definition 2). Score residuals coincide with scaled least-squares residuals in linear regression with normal errors and martingale residuals in the Cox proportional hazards model (Barlow and Prentice, 1988) and directly extend to censored responses (Lagakos, 1981; Farrington, 2000). In this
work, score residuals play a major role in formulating invariance tests (Section 3) and have been used for causal regularization in a distributional version of anchor regression (Rothenhausler et al., 2021; Kook et al., 2022). For defining score residuals, we require the following assumption (which, by definition, is satisfied for \(\mathcal{H}^{\text{shift}}_{\mathcal{Y},\mathcal{X}}\) and \(\mathcal{H}^{\text{linear}}_{\mathcal{Y},\mathcal{X}}\)).
**Assumption 2**.: \(\mathcal{H}_{\mathcal{Y},\mathcal{X}}\) is closed under scalar addition, that is, for all \(h\in\mathcal{H}_{\mathcal{Y},\mathcal{X}}\) and \(\alpha\in\mathbb{R}\), we have3\(h+\alpha\in\mathcal{H}_{\mathcal{Y},\mathcal{X}}\).
Footnote 3: We adopt the convention that for all \(\alpha\in\mathbb{R}\), \(-\infty+\alpha=-\infty\) and \(\infty+\alpha=\infty\).
**Definition 8** (Score residuals, Lagakos, 1981; Kook et al., 2022).: Let \(\mathcal{Y}\), \(\mathcal{X}\), \(F_{Z}\in\mathcal{Z}\) and \(\mathcal{H}_{\mathcal{Y},\mathcal{X}}\subseteq\mathcal{H}^{*}_{\mathcal{Y}, \mathcal{X}}\) be as in Definition 5. Impose Assumptions 1 and 2. Then, considering \(\alpha\in\mathbb{R}\), the _score residual_\(R:\mathcal{H}_{\mathcal{Y},\mathcal{X}}\times\mathcal{Y}\times\mathcal{X}\to \mathbb{R}\) is defined as
\[R:(h;y,\boldsymbol{x})\mapsto\frac{\partial}{\partial\alpha}\ell(h+\alpha;y, \boldsymbol{x})\big{|}_{\alpha=0}.\]
Our invariance tests (Definition 16 in Section 3) are based on score residuals and use the following property.
**Lemma 9**.: Let \(\mathcal{Y}\), \(\mathcal{X}\), \(F_{Z}\in\mathcal{Z}\), \(\mathcal{H}_{\mathcal{Y},\mathcal{X}}\subseteq\mathcal{H}^{*}_{\mathcal{Y}, \mathcal{X}}\) be as in Definition 5. Impose Assumptions 1 and 2. Let \(\boldsymbol{X}\in\mathcal{X}\) follow \(\mathbb{P}_{\boldsymbol{X}}\) and let \((F_{Z},h_{0})\), \(h_{0}\in\mathcal{H}_{\mathcal{Y},\mathcal{X}}\), be a tram such that for \(\mathbb{P}_{\boldsymbol{X}}\)-almost all \(\boldsymbol{x}\), \((Y\mid\boldsymbol{X}=\boldsymbol{x})\) has CDF \(F_{Z}(h_{0}(\boldsymbol{\cdot}\mid\boldsymbol{x}))\). Assume that for \(\mathbb{P}_{\boldsymbol{X}}\)-almost all \(\boldsymbol{x}\),
\[\int_{\mathcal{Y}}\frac{\partial}{\partial\alpha}f_{Y\mid\boldsymbol{X}= \boldsymbol{x}}(v;h_{0}+\alpha)|_{\alpha=0}\,\mathrm{d}\mu(v)=\frac{\partial} {\partial\alpha}\int_{\mathcal{Y}}f_{Y\mid\boldsymbol{X}=\boldsymbol{x}}(v;h_ {0}+\alpha)\,\mathrm{d}\mu(v)|_{\alpha=0}, \tag{2}\]
where \(\alpha\in\mathbb{R}\). Then, we have \(\mathbb{E}[R(h_{0};Y,\boldsymbol{X})\mid\boldsymbol{X}]=0\), and hence \(\mathbb{E}[R(h_{0};Y,\boldsymbol{X})]=0\) and \(\mathbb{E}[\boldsymbol{X}R(h_{0};Y,\boldsymbol{X})]=0\).
A proof is given in Appendix E1.1. One can find regularity conditions on the involved distributions and functions that ensure the validity of interchanging differentiation and integration in (2), similar to Hothorn et al. (2018, Theorems 1-3); see also Propositions 19 and 21.
**Example 10** (Binary logistic regression, cont'd).: The family of binary linear shift logistic regression models is given by \(\mathcal{M}(F_{\text{SL}},\{0,1\},\mathcal{X},\mathcal{H}^{\text{linear}}_{\mathcal{Y},\mathcal{X}})\). We can thus write for all \(\boldsymbol{x}\in\mathcal{X}\), \(h(\boldsymbol{\cdot}\mid\boldsymbol{x})\coloneqq h_{Y}(\boldsymbol{\cdot})-\boldsymbol{x}^{\top}\boldsymbol{\beta}\) with \(h_{Y}(0)\coloneqq\vartheta\) and, by convention, \(h_{Y}(1)\coloneqq+\infty\). The likelihood contribution for a given observation \((y,\boldsymbol{x})\) is \(F_{\text{SL}}(h(0\mid\boldsymbol{x}))^{1-y}(1-F_{\text{SL}}(h(0\mid\boldsymbol{x})))^{y}\). The score residual is given by \(R(h;y,\boldsymbol{x})=1-y-F_{\text{SL}}(h(0\mid\boldsymbol{x}))\). Further, the inverse transformation function is given by \(h^{-1}:(z,\boldsymbol{x})\mapsto\mathds{1}(z\geq\vartheta-\boldsymbol{x}^{\top}\boldsymbol{\beta})\).
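The score residuals of Example 10 can be computed from a standard `glm()` fit; the sketch below assumes the usual glm parameterisation \(\mathbb{P}(Y=1\mid x)=F_{\text{SL}}(\eta)\), so that \(F_{\text{SL}}(h(0\mid x))=F_{\text{SL}}(-\eta)\). The data-generating values are illustrative.

```
## Minimal sketch for Example 10: score residuals of a binary logistic
## regression computed from a glm() fit. With P(Y = 1 | x) = plogis(eta),
## the tram evaluates F_SL(h(0 | x)) = plogis(-eta), so that
## R = 1 - y - F_SL(h(0 | x)) = plogis(eta) - y.
set.seed(1)
n <- 500
x <- rnorm(n)
y <- rbinom(n, size = 1, prob = plogis(0.5 + x))   # illustrative DGP
fit <- glm(y ~ x, family = binomial())
eta <- predict(fit, type = "link")
r   <- 1 - y - plogis(-eta)                        # score residuals
c(mean(r), mean(x * r))                            # both zero at the MLE
```

At the maximum-likelihood estimate these residuals are orthogonal to the covariates by the score equations, consistent with Lemma 9.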
**Example 11** (Count regression, cont'd).: For count responses, we can define a family of linear shift trams with support \(\mathcal{Y}\coloneqq\{0,1,2,\dots\}\), given by \(\mathcal{M}(F_{Z},\mathcal{Y},\mathcal{X},\mathcal{H}^{\text{linear}}_{ \mathcal{Y},\mathcal{X}})\). For all \(\boldsymbol{x}\in\mathcal{X}\), the transformation function \(h(\boldsymbol{\cdot}\mid\boldsymbol{x})\coloneqq h_{Y}(\boldsymbol{\cdot})- \boldsymbol{x}^{\top}\boldsymbol{\beta}\) is a right-continuous step function with steps at the integers and linear shift effects. The log-likelihood contribution for a single observation \((y,\boldsymbol{x})\) is
given by \(\log F_{Z}(h(0\mid\mathbf{x}))\) if \(y=0\) and \(\log\{F_{Z}(h(y\mid\mathbf{x}))-F_{Z}(h(y-1\mid\mathbf{x}))\}\) for \(y\geq 1\). The exact form of the score residual depends on the choice of \(F_{Z}\). The (generalized) inverse transformation function is given by \(h^{-1}:(z,\mathbf{x})\mapsto\lfloor h_{Y}^{-1}(z+\mathbf{x}^{\top}\mathbf{\beta})\rfloor\).
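The log-likelihood contributions above translate directly into code; the sketch below uses the standard logistic \(F_{Z}\), hand-picked illustrative parameters, and a truncated support.

```
## Minimal sketch for Example 11: log-likelihood contribution of a count
## shift tram with standard logistic F_Z. The baseline transformation h_Y is a
## step function with (increasing) value theta[k + 1] at the integer k and
## +Inf beyond the truncation point; all parameter values are illustrative.
loglik_count <- function(y, x, theta, beta) {
  hY <- function(k) {
    if (k < 0) return(-Inf)                 # F_Z(-Inf) = 0 covers the case y = 0
    if (k >= length(theta)) return(Inf)
    theta[k + 1]
  }
  h <- function(k) hY(k) - x * beta         # shift transformation h(k | x)
  log(plogis(h(y)) - plogis(h(y - 1)))      # log P(Y = y | x)
}
theta <- qlogis(seq(0.2, 0.95, length.out = 8))  # increasing step heights
loglik_count(y = 0, x = 0.5, theta = theta, beta = 1)
loglik_count(y = 3, x = 0.5, theta = theta, beta = 1)
```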
**Example 12** (Parametric survival regression, cont'd).: For the Weibull proportional hazards model, we fix \(F_{Z}(z)=1-\exp(-\exp(z))\) and define a family of log-linear transformation functions \(\mathcal{H}^{\log\text{-lin}}_{\mathcal{Y},\mathcal{X}}\coloneqq\{h\in\mathcal{ H}^{\text{linear}}_{\mathcal{Y},\mathcal{X}}\mid\exists(\vartheta_{1}, \vartheta_{2})\in\mathbb{R}\times\mathbb{R}_{+}:\forall y\in\mathcal{Y},\mathbf{x }\in\mathcal{X}\ h(y\mid\mathbf{x})=\vartheta_{1}+\vartheta_{2}\log(y)-\mathbf{x}^{ \top}\mathbf{\beta}\}\). The log-likelihood contribution for an exact response is given by the log-density and an uninformatively right-censored observation at time \(t\) with covariates \(\mathbf{x}\in\mathcal{X}\) contributes the log-survivor function evaluated at \(t\), i.e., \(\log(1-F_{Z}(h(t\mid\mathbf{x})))\), to the log-likelihood. The inverse transformation function is given by \((z,\mathbf{x})\mapsto h_{Y}^{-1}(z+\mathbf{x}^{\top}\mathbf{\beta})\coloneqq\exp(\vartheta _{2}^{-1}(z-\vartheta_{1}+\mathbf{x}^{\top}\mathbf{\beta}))\).
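For Example 12, both sampling via the inverse transformation function and the right-censored log-likelihood can be written out explicitly; the sketch below uses illustrative parameter values and an exponential censoring mechanism chosen purely for illustration.

```
## Minimal sketch for Example 12: simulate from a Weibull proportional
## hazards shift tram with F_Z(z) = 1 - exp(-exp(z)) and evaluate the
## right-censored log-likelihood. Parameters and censoring are illustrative.
set.seed(1)
n   <- 1000
th1 <- -2; th2 <- 1.5; beta <- 0.7
x   <- rnorm(n)
z   <- log(-log(1 - runif(n)))              # Z ~ F_minEV via inverse CDF
t   <- exp((z - th1 + beta * x) / th2)      # Y = h^{-1}(Z | x)
cns <- rexp(n, rate = 0.1)                  # uninformative censoring times
time  <- pmin(t, cns)
event <- as.numeric(t <= cns)
h <- th1 + th2 * log(time) - beta * x
## exact observations contribute the log-density h - exp(h) + log(th2 / time),
## censored observations contribute the log-survivor function -exp(h)
loglik <- ifelse(event == 1, h - exp(h) + log(th2 / time), -exp(h))
sum(loglik)
```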
### Structural causal transformation models
Next, we cast trams into a structural causal modelling framework (Pearl, 2009) and return to our examples from Section 1. For all subsets \(S\subseteq[d]\), define \(\mathcal{X}^{S}\) to be the projection of \(\mathcal{X}\) onto the ordered coordinates in \(S\). For the rest of this paper, we restrict ourselves to shift trams. In this case, any "global" model class \(\mathcal{H}_{\mathcal{Y},\mathcal{X}}\) naturally induces submodel classes \(\mathcal{H}_{\mathcal{Y},\mathcal{X}^{S}}\subseteq\mathcal{H}^{*}_{\mathcal{Y},\mathcal{X}^{S}}\) for all \(S\subseteq[d]\) by the following construction: \(\mathcal{H}_{\mathcal{Y},\mathcal{X}^{S}}\coloneqq\{h\in\mathcal{H}^{*}_{\mathcal{Y},\mathcal{X}^{S}}\mid\exists h^{\text{global}}\in\mathcal{H}_{\mathcal{Y},\mathcal{X}}\text{ s.t. }\forall\mathbf{x}\in\mathcal{X},\ h^{\text{global}}(\mathbf{\cdot}\mid\mathbf{x})=h(\mathbf{\cdot}\mid\mathbf{x}^{S})\}\). If \((F_{Z},\mathcal{Y},\mathcal{X},\mathcal{H}_{\mathcal{Y},\mathcal{X}})\) satisfies Assumption 1, then \((F_{Z},\mathcal{Y},\mathcal{X}^{S},\mathcal{H}_{\mathcal{Y},\mathcal{X}^{S}})\) does too. We are now ready to define structural causal trams.
**Definition 13** (Structural causal tram).: Let \(\mathcal{Y}\), \(\mathcal{X}\), \(F_{Z}\in\mathcal{Z}\) be as in Definition 5. Let \(\mathcal{H}_{\mathcal{Y},\mathcal{X}}\subseteq\mathcal{H}^{*}_{\mathcal{Y}, \mathcal{X}}\) be a class of transformation functions such that Assumption 1 holds. Let \((Z,N_{\mathbf{X}})\) be jointly independent with \(Z\sim F_{Z}\). Then, a _structural causal tram\(C\) over \((Y,\mathbf{X})\)_ is defined as
\[C\coloneqq\begin{cases}\mathbf{X}\coloneqq g_{\mathbf{X}}(\mathbf{X},Y,N_{\mathbf{X}})\\ Y\coloneqq h^{-1}(Z\mid\mathbf{X}^{S_{*}}),\end{cases} \tag{3}\]
where \(S_{*}\subseteq[d]\), \(h\in\mathcal{H}_{\mathcal{Y},\mathcal{X}^{S_{*}}}\) is the _causal transformation function_, \(\text{pa}_{C}(Y)\coloneqq S_{*}\) denotes the set of causal parents of \(Y\) in \(C\), and \(g_{\mathbf{X}}\) is an arbitrary measurable function. By \(\mathbb{P}^{C}_{(Y,\mathbf{X})}\) we denote the observational distribution induced by \(C\). We assume that the induced graph (obtained by drawing directed edges from variables on the right-hand side to variables on the left-hand side) is acyclic. We denote by \(\mathcal{C}(F_{Z},\mathcal{Y},\mathcal{X},\mathcal{H}_{\mathcal{Y},\mathcal{X}})\) the collection of all structural causal trams with error distribution \(F_{Z}\) and causal transformation function \(h\in\mathcal{H}_{\mathcal{Y},\mathcal{X}}\).
### Non-identifiability of the causal parents in transformation models
We now show that performing causal feature selection in structural causal transformation models requires further assumptions. We consider a response variable \(Y\) and a set of covariates \(\mathbf{X}\) and assume that \((Y,\mathbf{X})\) are generated from an (unknown) structural causal tram (defined in (3)) with
(known) \(\mathcal{H}_{\mathcal{Y},\mathcal{X}}\subsetneq\mathcal{H}_{\mathcal{Y},\mathcal{ X}}^{*}\). In our work, the problem of causal feature selection concerns learning the causal parents \(\mathrm{pa}(Y)\) given a sample of \((Y,\mathbf{X})\) and knowledge of \(F_{Z}\), \(\mathcal{Y}\), \(\mathcal{X}\), \(\mathcal{H}_{\mathcal{Y},\mathcal{X}}\) (which specifies the model class \(\mathcal{M}(F_{Z},\mathcal{Y},\mathcal{X},\mathcal{H}_{\mathcal{Y},\mathcal{X}})\)).
In this work, we specify the model class for the conditional of the response, given its causal parents, \(Y\mid\mathbf{X}^{\mathrm{pa}(Y)}\) by a tram; the remaining conditionals are unconstrained. Identifiability of causal structure has been studied for several model classes that constrain the joint distribution \((Y,\mathbf{X})\). When considering the class of linear Gaussian SCMs, for example, the causal parents are in general not identifiable from the observational distribution (as there are linear Gaussian SCMs with a different structure inducing the same distribution). This is different for other model classes: When considering linear Gaussian SCMs with equal noise variances (Peters and Buhlmann, 2013), linear non-Gaussian SCMs (Shimizu, 2014) or nonlinear Gaussian SCMs (Hoyer et al., 2008; Peters et al., 2014), for example, the graph structure (and thus the set of causal parents of \(Y\)) is identifiable under weak assumptions (identification then becomes possible by using goodness-of-fit procedures). To the best of our knowledge identifiability of such model classes has not been studied when constraining only the conditional distribution \(Y\mid\mathbf{X}^{\mathrm{pa}(Y)}\).
trams are generally not closed under marginalization (see Appendix A1 for a detailed discussion on non-collapsibility) and one may hypothesize that this model class allows for identifiability of the parents (e.g., by considering different subsets of covariates and testing for goodness of fit). We now prove that this is not the case: In general, for trams (and even for linear shift trams), the causal parents are not identifiable from the observed distribution. Instead, additional assumptions are needed to facilitate causal feature selection in trams.
Definition 14 formally introduces the notion of identifiability of the causal parents and Proposition 15 provides the non-identifiability result.
**Definition 14** (Subset-identifiability of the causal parents).: Let \(\mathcal{C}\) denote a collection of structural causal models. The set of causal parents is said to be \(\mathcal{C}\)-_subset-identifiable_ if for all pairs \(C_{1},C_{2}\in\mathcal{C}\) it holds that
\[\mathbb{P}_{(Y,\mathbf{X})}^{C_{1}}=\mathbb{P}_{(Y,\mathbf{X})}^{C_{2}}\implies\mathrm{ pa}_{C_{1}}(Y)\subseteq\mathrm{pa}_{C_{2}}(Y)\;\vee\;\mathrm{pa}_{C_{2}}(Y)\subseteq \mathrm{pa}_{C_{1}}(Y).\]
**Proposition 15** (Non-subset-identifiability).: For all \(A\subseteq\mathbb{R}\) that are either an interval or countable, \(F_{Z}\in\mathcal{Z}\), \(\mathcal{Y}\subseteq\mathbb{R}\), there exists a class of transformation functions \(\mathcal{H}_{\mathcal{Y},A\times A}\subseteq\mathcal{H}_{\mathcal{Y},A\times A }^{\text{shift}}\subsetneq\mathcal{H}_{\mathcal{Y},A\times A}^{*}\), such that the set of causal parents is not \(\mathcal{C}(F_{Z},\mathcal{Y},A\times A,\mathcal{H}_{\mathcal{Y},A\times A})\)-subset identifiable.
A proof is given in Appendix E1.2, where we construct a joint distribution over three random variables \((Y,X^{1},X^{2})\), in which the two conditionals \(Y\mid X^{1}\) and \(Y\mid X^{2}\) are trams. This implies that there are two structural causal trams that have identical observational distributions, while \(Y\) has two different (non-empty) sets of causal parents that do not overlap. The proof in E1.2 characterizes how to construct such a joint distribution for shift trams. For illustrative purposes, we present a concrete example in Appendix E1.2 in which \(\mathcal{Y}=A=\{1,2,3\}\) and \(Y\mid X^{1}\) and \(Y\mid X^{2}\) are proportional odds logistic regression models.
We sample from the induced distributions of the two structural causal trams constructed in the proof and apply the naive approach described above, performing goodness-of-fit tests to identify the parents. We see that this approach indeed fails to identify a non-empty subset of the parents in this example.
Instead of subset-identifiability, one can also consider a stronger notion of _full identifiability_, which states that the set of causal parents can be uniquely determined by the observed distribution (formally defined in Appendix A2). Proposition 15 immediately implies that the set of causal parents is not fully identifiable either.
## 3 Transformation model invariant causal prediction
Even if the observational distribution is insufficient to identify causal parents, identifiability can become possible if we have access to data from multiple, heterogeneous environments. Invariant causal prediction (ICP Peters et al., 2016) exploits the invariance of causal mechanisms (Haavelmo, 1943; Frisch et al., 1948; Aldrich, 1989; Pearl, 2009; Scholkopf et al., 2012) under interventions on variables other than the response. Depending on the response variable, multi-center clinical trials, data collected from different countries or different points in time may fall into this category. We then show that under Setting 1, the set of causal parents is subset-identifiable (Proposition 17) and fully identifiable if the environments are sufficiently heterogeneous (Proposition 18).
**Setting 1** (Data from multiple environments).: Let \(\mathcal{Y}\), \(\mathcal{X}\), \(F_{Z}\in\mathcal{Z}\) be as in Definition 5 and let \(\mathcal{H_{Y,X}}\subseteq\mathcal{H_{Y,X}^{*}}\) be a class of transformation functions such that Assumptions 1 and 2 hold. Let \(C_{*}\) be a structural causal tram over \((Y,\mathbf{X},\mathbf{E})\) such that
\[C_{*}\coloneqq\begin{cases}\mathbf{E}\coloneqq\ N_{\mathbf{E}}\\ \mathbf{X}\coloneqq\ g_{\mathbf{X}}(\mathbf{X},\mathbf{E},Y,N_{\mathbf{X}})\\ Y\coloneqq\ h_{*}^{-1}(Z\ |\ \mathbf{X}^{S_{*}}),\end{cases}\]
where \(h_{*}\in\mathcal{H_{Y,X^{S_{*}}}}\) with \(S_{*}\subseteq[d]\) denoting the parents of \(Y\) and \((Z,N_{\mathbf{X}},N_{\mathbf{E}})\) denoting the jointly independent noise variables. In this setup, the random vector \(\mathbf{E}\) encodes the environments and takes values in \(\mathcal{E}\subseteq\mathbb{R}^{q}\) and may be discrete or continuous. By definition, the induced graph \(\mathcal{G}_{*}\) is acyclic. An exemplary DAG contained in this setup is depicted on the right. By \(\mathcal{D}_{n}\coloneqq\{(y_{i},\mathbf{x}_{i},\mathbf{e}_{i})\}_{i=1}^{n}\), we denote a sample of independent observations from \(\mathbb{P}_{(Y,\mathbf{X},\mathbf{E})}^{C_{*}}\).
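For concreteness, the following R sketch simulates one data set compatible with Setting 1; the DAG, all structural equations, and all parameter values are illustrative choices and not taken from the simulation study below.

```
## Minimal sketch of Setting 1 (all structural equations illustrative):
## binary environment E, binary logistic response Y with single parent X1,
## and X2 a child of both Y and E.
set.seed(1)
n  <- 1000
e  <- rbinom(n, 1, 0.5)                     # environment
x1 <- 0.8 * e + rnorm(n)                    # parent of Y, child of E
z  <- rlogis(n)                             # Z ~ F_SL
vartheta <- 0.5                             # baseline h_Y(0)
y  <- as.numeric(z >= vartheta - 1.5 * x1)  # Y = h^{-1}(Z | x1), cf. Example 10
x2 <- y + e + rnorm(n)                      # child of Y and E (not a parent)
dat <- data.frame(y, x1, x2, e)
```

In this illustrative DAG, \(\{X^{1}\}\) is the only invariant set: conditioning on \(X^{2}\) opens the collider \(\boldsymbol{E}\rightarrow X^{2}\leftarrow Y\), and without \(X^{1}\) the path \(\boldsymbol{E}\rightarrow X^{1}\rightarrow Y\) remains open.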
As for ICP, invariance plays a key role for tramicp. A subset of covariates is considered invariant if the corresponding transformation model correctly describes the conditional distribution across the environments \(\mathbf{E}\).
**Definition 16** (\((F_{Z},\mathcal{H_{Y,X}})\)-invariance).: Assume Setting 1. A subset of covariates \(S\subseteq[d]\) is
\((F_{Z},\mathcal{H}_{\mathcal{Y},\mathcal{X}})\)-_invariant_ if there exists \(h^{S}\in\mathcal{H}_{\mathcal{Y},\mathcal{X}^{S}}\), such that for \(\mathbb{P}_{(\boldsymbol{X}^{S},\boldsymbol{E})}\)-almost all \((\boldsymbol{x}^{S},\boldsymbol{e})\),
\[(Y\mid\boldsymbol{X}^{S}=\boldsymbol{x}^{S},\boldsymbol{E}=\boldsymbol{e})\text { and }(Y\mid\boldsymbol{X}^{S}=\boldsymbol{x}^{S})\text{ are identical with conditional CDF }F_{Z}(h^{S}(\boldsymbol{\cdot}\mid\boldsymbol{x}^{S})).\]
If an _invariant transformation function_\(h^{S}\) according to Definition 16 exists, it is \(\mathbb{P}_{\boldsymbol{X}^{S}}\)-almost surely unique (see Lemma 31 in Appendix E2). Proposition 17 shows that the parental set fulfills \((F_{Z},\mathcal{H}_{\mathcal{Y},\mathcal{X}})\)-invariance, which is sufficient to establish coverage guarantees for invariant causal prediction in trams.
**Proposition 17**.: Assume Setting 1. Then the set of causal parents \(S_{*}\) is \((F_{Z},\mathcal{H}_{\mathcal{Y},\mathcal{X}})\)-invariant.
A proof is given in Appendix E1.3.
The set of causal parents \(S_{*}\) together with the causal transformation function \(h_{*}\) in Setting 1 may not be the only set satisfying \((F_{Z},\mathcal{H}_{\mathcal{Y},\mathcal{X}})\)-invariance. In this vein, we define the set of _identifiable causal predictors_ as
\[S_{I}\coloneqq\bigcap_{S\subseteq[d]:S\text{ is }(F_{Z},\mathcal{H}_{ \mathcal{Y},\mathcal{X}})\text{-invariant}}S,\]
and have, by Proposition 17, \(S_{I}\subseteq S_{*}\). This implies that the set of causal parents \(S_{*}\) is subset-identifiable.
If the environments \(\boldsymbol{E}\) are heterogeneous enough, the set of causal parents is fully identified under Setting 1 and the faithfulness assumption (see Spirtes et al., 2000, p. 31).
**Proposition 18**.: Assume Setting 1. Let \(\mathcal{G}\) be the DAG induced by \(C_{*}\) and assume that \(\mathbb{P}_{(Y,\boldsymbol{X},\boldsymbol{E})}^{C_{*}}\) is faithful w.r.t. \(\mathcal{G}\). If \(S_{*}\subseteq\operatorname{ch}(\boldsymbol{E})\), where \(\operatorname{ch}(\boldsymbol{E})\) denotes the children of \(\boldsymbol{E}\), we have
\[S_{I}=S_{*}.\]
A proof is given in Appendix E1.4.
For simple model classes such as linear Gaussian SCMs, sufficient conditions for faithfulness are known (Spirtes et al., 2000). In our setting, analyzing the faithfulness assumption is particularly challenging due to non-collapsibility and non-closure under marginalization of trams (see Appendix A1). Nonetheless, we empirically show in our simulations (see Section 4) that faithfulness is not violated, for example, if the coefficients in linear shift trams are sampled from a continuous distribution (for details see Appendix B3).
### Testing for invariance
We now translate \((F_{Z},\mathcal{H}_{\mathcal{Y},\mathcal{X}})\)-invariance into testable conditions which are applicable to general trams and thus general response types. Here, we propose an invariance condition based on score residuals (Definition 8). The following proposition shows that the score residuals are uncorrelated with the environments (in Setting 1) when conditioning on an invariant set.
**Proposition 19** (Score-residual-invariance).: Assume Setting 1 and that (2) holds. Then, we have the following implication:
\[S\text{ is }(F_{Z},\mathcal{H}_{\mathcal{Y},\mathcal{X}})\text{-invariant}\implies\mathbb{E}[R(h^{S};Y,\mathbf{X}^{S})\mid\mathbf{X}^{S}]=0\ \text{ and }\ \mathbb{E}[\operatorname{Cov}[\mathbf{E},R(h^{S};Y,\mathbf{X}^{S})\mid\mathbf{X}^{S}]]=0, \tag{4}\]
where \(\mathbb{E}[\operatorname{Cov}[\mathbf{E},R(h^{S};Y,\mathbf{X}^{S})\mid\mathbf{X}^{S}]]\coloneqq\mathbb{E}\big[\mathbb{E}[\mathbf{E}\,R(h^{S};Y,\mathbf{X}^{S})\mid\mathbf{X}^{S}]-\mathbb{E}[\mathbf{E}\mid\mathbf{X}^{S}]\,\mathbb{E}[R(h^{S};Y,\mathbf{X}^{S})\mid\mathbf{X}^{S}]\big]\) denotes the expected conditional covariance between the residuals and the environments.
A proof is given in Appendix E1.5. In Appendix A3, we extend tramicp (in particular, Proposition 19) to uninformatively censored observations, where \(Y\) itself is unobserved.
We now turn to the problem of testing invariance from finite data. Section 3.1.1 develops a test of the implication in (4), based on ideas similar to the Generalised Covariance Measure (GCM; Shah and Peters, 2020). As a second, alternative invariance test, we propose in Section 3.1.2 a Wald test for the presence of main and interaction terms involving the environments; we will see in Proposition 21 that, for linear shift trams, such a test is closely related to the implication in Proposition 19.
For all \(S\subseteq[d]\), and sample sizes \(n\), let \(p_{S,n}:(\mathbb{R}\times\mathcal{X}^{S}\times\mathcal{E})^{n}\to[0,1]\) be the \(p\)-value of a test for the null hypothesis that \(S\) is \((F_{Z},\mathcal{H}_{\mathcal{Y},\mathcal{X}})\)-invariant. All proposed invariance tests are embedded in a subset-search over the set of covariates, in which we return the intersection of all non-rejected sets at a given level \(\alpha\in(0,1)\) (ICP; Algorithm 1).
```
0: Data \(\mathcal{D}_{n}\) from Setting 1, significance level \(\alpha\in(0,1)\), and a family of invariance tests \((p_{S,n})_{S\subseteq\{1,\ldots,d\}}\) (outputting a \(p\)-value; see Algorithms 2, and 3 and the comparators in Section 4.1.1)
1:for\(S\subseteq[d]\)do\(\triangleright\) Iterate over all subsets
2: Compute \(p_{S,n}(\mathcal{D}_{n})\)\(\triangleright\) Compute \(p\)-value of invariance test
3:endfor
4:return\(S_{n}\coloneqq\bigcap_{S:p_{S,n}(\mathcal{D}_{n})>\alpha}S\)\(\triangleright\) Intersection over all non-rejected sets
```
**Algorithm 1** Invariant causal prediction (Peters et al., 2016)
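A compact R sketch of this subset search is given below; `pval_fun` is a hypothetical placeholder for any invariance test returning a \(p\)-value (for instance the tram-GCM sketch after Algorithm 2), and the function is an illustration rather than the package implementation described in Section 3.3.

```
## Minimal sketch of Algorithm 1: intersect all subsets whose invariance test
## is not rejected. `pval_fun(S, dat)` is a hypothetical placeholder returning
## a p-value for the set S of covariate names.
icp <- function(dat, covariates, pval_fun, alpha = 0.05) {
  subsets <- list(character(0))                      # start with the empty set
  for (k in seq_along(covariates))
    subsets <- c(subsets, combn(covariates, k, simplify = FALSE))
  pvals    <- vapply(subsets, function(S) pval_fun(S, dat), numeric(1))
  accepted <- subsets[pvals > alpha]
  if (length(accepted) == 0) return(character(0))    # all sets rejected
  Reduce(intersect, accepted)
}
```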
We refer to the combination of ICP (Algorithm 1) with the proposed tram-GCM invariance test (Algorithm 2) as tramicp-GCM, with the proposed tram-Wald invariance test (Algorithm 3) as tramicp-Wald and using a nonparametric conditional independence test (see Section 4.1.1) as nonparametric ICP.
Both tramicp-GCM and tramicp-Wald, under their respective additional assumptions, enjoy the same coverage result as ICP for linear additive noise models, that is, \(\lim_{n\to\infty}\mathbb{P}(S_{n}^{\phi}\subseteq\text{pa}_{C_{*}}(Y))\geq 1-\alpha\) if the tests are level \(\alpha\) (Theorem 1 in Peters et al., 2016).
#### 3.1.1 Invariance tests based on score residuals
We can test the null hypothesis of \((F_{Z},\mathcal{H}_{\mathcal{Y},\mathcal{X}})\)-invariance by testing the implication in (4), i.e., uncorrelatedness between score residuals and residualized environments in a GCM-type invariance test (Algorithm 2). This requires that the maximum likelihood estimator exists and is unique.
**Assumption 3**.: Under Setting 1 and for all \(S\subseteq[d]\), the maximum likelihood estimator, given by
\[\operatorname*{arg\,max}_{h\in\mathcal{H}_{\mathcal{Y},\mathcal{X}^{S}}}\ell(h ;\mathcal{D}_{n}),\]
exists and is unique.
See also the regularity conditions in McLain and Ghosh (2013, Assumptions I-V). Theorem 20 shows that the proposed test is uniformly asymptotically level \(\alpha\) for any \(\alpha\in(0,1)\).
```
0: Data \(\mathcal{D}_{n}\) from Setting 1, \(S\subseteq[d]\), estimator \(\hat{\boldsymbol{\mu}}\) for \(\boldsymbol{\mu}(\boldsymbol{X}^{S})\coloneqq\mathbb{E}[\boldsymbol{E}\mid \boldsymbol{X}^{S}]\).
1: Fit the tram: \[\hat{h}\leftarrow\operatorname*{arg\,max}_{h\in\mathcal{H}_{\mathcal{Y}, \mathcal{X}^{S}}}\ell(h;\mathcal{D}_{n})\]
2: Obtain \(\hat{\boldsymbol{\mu}}\) using data \(\mathcal{D}_{n}\)
3: Compute residual product terms: \[\boldsymbol{L}_{i}\gets R(\hat{h};y_{i},\boldsymbol{x}_{i}^{S})\{ \boldsymbol{e}_{i}-\hat{\boldsymbol{\mu}}(\boldsymbol{x}_{i}^{S})\},i=1, \ldots,n\]
4: Compute residual covariance: \[\hat{\Sigma}\gets n^{-1}\sum_{i=1}^{n}\boldsymbol{L}_{i}\boldsymbol{L}_{i }^{\top}-\left(n^{-1}\sum_{i=1}^{n}\boldsymbol{L}_{i}\right)\left(n^{-1}\sum_ {i=1}^{n}\boldsymbol{L}_{i}\right)^{\top}\]
5: Compute test statistic: \[\boldsymbol{T}_{n}\leftarrow\hat{\Sigma}^{-1/2}\left(n^{-1/2}\sum_{i=1}^{n} \boldsymbol{L}_{i}\right)\]
6: Compute \(p\)-value: \[p_{S,n}(\mathcal{D}_{n})\gets 1-F_{\chi_{q}^{2}}(\|\boldsymbol{T}_{n}\|_{2}^{2})\]
7:return\(p_{S,n}(\mathcal{D}_{n})\)
```
**Algorithm 2** tram-GCM invariance test
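To illustrate Algorithm 2, the sketch below implements the tram-GCM test for the binary logistic shift tram with a single binary environment, using the column names from the Setting 1 sketch above; the logistic regressions used for the score residuals and for residualising the environment are illustrative choices and not the **tramicp** implementation.

```
## Minimal sketch of the tram-GCM invariance test (Algorithm 2) for a binary
## logistic shift tram and one binary environment `e`; column names follow
## the Setting 1 sketch above and are illustrative.
gcm_pval <- function(S, dat) {
  f_y  <- if (length(S) == 0) y ~ 1 else reformulate(S, response = "y")
  fit  <- glm(f_y, family = binomial(), data = dat)
  r    <- 1 - dat$y - plogis(-predict(fit, type = "link"))   # score residuals
  f_e  <- if (length(S) == 0) e ~ 1 else reformulate(S, response = "e")
  mu   <- fitted(glm(f_e, family = binomial(), data = dat))  # estimate of E[E | X^S]
  L    <- r * (dat$e - mu)                                   # residual products
  n    <- length(L)
  stat <- (sum(L) / sqrt(n))^2 / (mean(L^2) - mean(L)^2)     # ||T_n||^2 for q = 1
  1 - pchisq(stat, df = 1)
}
## Example use together with the icp() sketch after Algorithm 1:
## icp(dat, covariates = c("x1", "x2"), pval_fun = gcm_pval)
## expected to return "x1" in the illustrative DGP above
```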
**Theorem 20** (Uniform asymptotic level of the invariance test in Algorithm 2).: Assume Setting 1 together with Assumption 3 and for a fixed \(S\subseteq[d]\) let \(\mathcal{P}\coloneqq\{\mathbb{P}_{(Y,\boldsymbol{X}^{S},\boldsymbol{E})}\mid S\text { is }\mathcal{H}_{\mathcal{Y},\mathcal{X}}\text{-invariant}\}\) denote the set of null distributions for the hypothesis \(H_{0}(S):S\) is \((F_{Z},\mathcal{H}_{\mathcal{Y},\mathcal{X}})\)-invariant (Definition 16). For all \(P\) in \(\mathcal{P}\), we denote by \(h_{P}\) the \(h^{S}\in\mathcal{H}_{\mathcal{Y},\mathcal{X}^{S}}\) in the definition of \((F_{Z},\mathcal{H}_{\mathcal{Y},\mathcal{X}})\)-invariance and \(\boldsymbol{\mu}(\boldsymbol{X}^{S})\coloneqq\mathbb{E}_{P}[\boldsymbol{E}\mid \boldsymbol{X}^{S}]\). Let \(\boldsymbol{\xi}\coloneqq\boldsymbol{E}-\boldsymbol{\mu}(\boldsymbol{X}^{S})\). Assume that
(b) There exists \(\delta>0\), s.t. \(\sup_{P\in\mathcal{P}}\mathbb{E}_{P}[\|R(h_{P};Y,\mathbf{X}^{S})\mathbf{\xi}\|_{2}^{2+\delta}]<\infty\),
(c) \(\sup_{P\in\mathcal{P}}\max\left\{\mathbb{E}_{P}[\|\mathbf{\xi}\|_{2}^{2}\mid\mathbf{X}^{S}],\mathbb{E}_{P}[R(h_{P};Y,\mathbf{X}^{S})^{2}\mid\mathbf{X}^{S}]\right\}<\infty\).
Further, we require the following rate conditions on \(M\coloneqq n^{-1}\sum_{i=1}^{n}\|\hat{\mathbf{\mu}}(\mathbf{X}_{i}^{S})-\mathbf{\mu}(\mathbf{X} _{i}^{S})\|_{2}^{2}\) and \(W\coloneqq n^{-1}\sum_{i=1}^{n}\{R(\hat{h};Y_{i},\mathbf{X}_{i}^{S})-R(h_{P};Y_{i},\mathbf{X}_{i}^{S})\}^{2}\):
(i) \(M=o_{\mathcal{P}}(1)\),
(ii) \(W=o_{\mathcal{P}}(1)\),
(iii) \(MW=o_{\mathcal{P}}(n^{-1})\).
Then \(\mathbf{T}_{n}\) converges to a standard \(q\)-variate normal distribution uniformly over \(\mathcal{P}\). As a consequence, for all \(\alpha\in(0,1)\),
\[\lim_{n\to\infty}\sup_{P\in\mathcal{P}}\mathbb{P}_{P}(p_{S,n}(\mathcal{D}_{n} )\leq\alpha)=\alpha,\]
where \(p_{S,n}(\mathcal{D}_{n})\) is the \(p\)-value computed by Algorithm 2.
A proof is given in Appendix E1.6. Conditions (a)-(c) are mild regularity conditions on the distributions of \((Y,\mathbf{X}^{S},\mathbf{E})\). Of the remaining conditions it is usually (iii) that is the strictest. In the case of a parametric linear shift tram, we would expect \(W=O_{\mathcal{P}}(n^{-1})\) and therefore would only need the regression of \(\mathbf{E}\) on \(\mathbf{X}^{S}\) to be consistent. However, the tram-GCM invariance test can still be correctly calibrated even if the score residuals are learned at a slower-than-parametric rate. Slower rates occur, for instance, in mixed-effects (Tamasi and Hothorn, 2021), penalized linear shift (Kook and Hothorn, 2021), or conditional trams (Hothorn et al., 2014). The robustness property of the tram-GCM invariance test does not necessarily hold without residualization of the environments (Chernozhukov et al., 2017; Shah and Peters, 2020). We illustrate empirically how the naive correlation test may not be level, in case of shift and penalized linear shift trams, in Appendix B4.
#### 3.1.2 Invariance tests based on the Wald statistic
For linear shift trams, \((F_{Z},\mathcal{H}^{\text{linear}}_{\mathcal{Y},\mathcal{X}})\)-invariance implies the absence of main and interaction effects involving the environments if we include the environment variable as a covariate into the model (see Proposition 21 below); this can be tested for using a Wald test for both continuous and discrete responses (Algorithm 3).
We now introduce notation for including main and interaction effects involving the environments. Let \(\otimes\) denote the Kronecker product and define, for all \(S\subseteq[d]\), \(m^{S}:\mathcal{X}^{S}\times\mathcal{E}\to\mathbb{R}^{q(1+|S|)}\), \((\mathbf{x}^{S},\mathbf{e})\mapsto(1,(\mathbf{x}^{S})^{\top})^{\top}\otimes\mathbf{e}\), which sets up first order interaction between environments \(\mathbf{e}\) and covariates \(\mathbf{x}^{S}\) together with environment main effects. Let \(\mathcal{H}^{*}_{\mathcal{Y},\mathcal{X}\times\mathcal{E}}\) denote the class of all transformation functions on \(\mathcal{Y}\times(\mathcal{X}\times\mathcal{E})\subseteq\mathbb{R}\times \mathbb{R}^{d}\times\mathbb{R}^{q}\) (see Definition 5). For a fixed vector of basis functions \(\mathbf{a}:\mathcal{Y}\to\mathbb{R}^{b}\) (see Section 4.2 for typical choices of bases and their correspondence to commonly used regression models), we define \(\mathcal{H}^{\text{Wald}}_{\mathcal{Y},\mathcal{X}\times\mathcal{E}}(\mathbf{a}) \coloneqq\{h\in\mathcal{H}^{*}_{\mathcal{Y},\mathcal{X}\times\mathcal{E}} \mid\exists(\mathbf{\theta},\mathbf{\beta},\mathbf{\gamma})\in\Theta\times\mathbb{R}^{d} \times\mathbb{R}^{q(1+d)}:\forall\mathbf{x}\in\mathcal{X},\mathbf{e}\in\mathcal{E}:\]
\(h(\boldsymbol{\cdot}\mid\boldsymbol{x},\boldsymbol{e})=\boldsymbol{a}(\boldsymbol{\cdot})^{\top}\boldsymbol{\theta}+\boldsymbol{x}^{\top}\boldsymbol{\beta}+m(\boldsymbol{x},\boldsymbol{e})^{\top}\boldsymbol{\gamma}\}\), where \(\Theta\subseteq\overline{\mathbb{R}}^{b}\). Thus, the transformation functions are parameterized by \(\boldsymbol{\vartheta}\coloneqq(\boldsymbol{\theta},\boldsymbol{\beta},\boldsymbol{\gamma})\in\Theta\times\mathbb{R}^{d}\times\mathbb{R}^{q(1+d)}\). For all \(S\subseteq[d]\), this global model class induces subclasses \(\mathcal{H}^{\text{Wald}}_{\mathcal{Y},\mathcal{X}^{S}\times\mathcal{E}}(\boldsymbol{a})\) and \(\mathcal{H}^{\text{Wald}}_{\mathcal{Y},\mathcal{X}^{S}}(\boldsymbol{a})\) as described in Section 2.2. For \((F_{Z},\mathcal{Y},\mathcal{X}\times\mathcal{E},\mathcal{H}^{\text{Wald}}_{\mathcal{Y},\mathcal{X}\times\mathcal{E}}(\boldsymbol{a}))\) satisfying Assumption 1, we define the log-likelihood function \(\ell^{\text{Wald}}:\mathcal{H}^{\text{Wald}}_{\mathcal{Y},\mathcal{X}\times\mathcal{E}}(\boldsymbol{a})\times\mathcal{Y}\times\mathcal{X}\times\mathcal{E}\rightarrow\mathbb{R}\) with \(\ell^{\text{Wald}}:(h,y,\boldsymbol{x},\boldsymbol{e})\mapsto\log f_{Y|\boldsymbol{X}=\boldsymbol{x},\boldsymbol{E}=\boldsymbol{e}}(y;h)\). For all subsets of covariates \(S\subseteq[d]\), we then estimate the tram \(F_{Z}\circ h_{\boldsymbol{\vartheta}^{S}}\), \(h_{\boldsymbol{\vartheta}^{S}}\in\mathcal{H}^{\text{Wald}}_{\mathcal{Y},\mathcal{X}^{S}\times\mathcal{E}}(\boldsymbol{a})\), and consider the hypothesis test \(H_{0}(S):\boldsymbol{\gamma}^{S}=0\).
**Proposition 21**.: Assume Setting 1 with \(\mathcal{H}_{\mathcal{Y},\mathcal{X}}=\mathcal{H}^{\text{Wald}}_{\mathcal{Y}, \mathcal{X}}(\boldsymbol{a})\). Let \(S\subseteq[d]\) be given and suppose that the canonical conditional CDF of \(Y\) given \(\boldsymbol{X}^{S}\) and \(\boldsymbol{E}\) is an element of \(\mathcal{M}(F_{Z},\mathcal{Y},\mathcal{X}^{S}\times\mathcal{E},\mathcal{H}^{ \text{Wald}}_{\mathcal{Y},\mathcal{X}^{S}\times\mathcal{E}})\); that is, there exist \((\boldsymbol{\theta},\boldsymbol{\beta}^{S},\boldsymbol{\gamma}^{S})\in \Theta\times\mathbb{R}^{|S|}\times\mathbb{R}^{q(1+|S|)}\) s.t. the canonical conditional CDF equals \(F_{Z}(\boldsymbol{a}(\boldsymbol{\cdot})^{\top}\boldsymbol{\theta}+( \boldsymbol{x}^{S})^{\top}\boldsymbol{\beta}^{S}+m^{S}(\boldsymbol{x}^{S}, \boldsymbol{e})^{\top}\boldsymbol{\gamma}^{S})\). Then \(S\) is \((F_{Z},\mathcal{H}^{\text{Wald}}_{\mathcal{Y},\mathcal{X}})\)-invariant if and only if \(\boldsymbol{\gamma}^{S}=0\).
A proof is given in Appendix E1.7.
The Wald test uses the quadratic test statistic \((\hat{\boldsymbol{\gamma}}^{S})^{\top}\hat{\Sigma}_{\hat{\boldsymbol{\gamma}}^{S}}^{-1}\hat{\boldsymbol{\gamma}}^{S}\), which converges in distribution to a \(\chi^{2}\)-distribution with \(\operatorname{rank}\hat{\Sigma}_{\hat{\boldsymbol{\gamma}}^{S}}\) degrees of freedom (under further regularity conditions, see Hothorn et al., 2018, Theorems 1-3). Here, \((\hat{\Sigma}_{\hat{\boldsymbol{\gamma}}^{S}})_{ij}\coloneqq[\boldsymbol{I}(h_{\hat{\boldsymbol{\vartheta}}^{S}};\boldsymbol{X}^{S},\boldsymbol{E})]_{ij}^{-1}\), \(i,j\in\{|S|+1,\ldots,|S|+q(1+|S|)\}\), denotes the estimated variance-covariance matrix of the model restricted to the main and interaction effects involving the environments, where \(\boldsymbol{I}(h_{\hat{\boldsymbol{\vartheta}}^{S}};\boldsymbol{X}^{S},\boldsymbol{E})\) denotes an estimate of the Fisher information, which for all \(S\subseteq[d]\) and \(\boldsymbol{\vartheta}^{S}\in\Theta\times\mathbb{R}^{|S|}\times\mathbb{R}^{q(1+|S|)}\), is defined as \(\boldsymbol{I}(h_{\boldsymbol{\vartheta}^{S}};\boldsymbol{X}^{S},\boldsymbol{E})\coloneqq\mathbb{E}\left[-\frac{\partial^{2}}{\partial\boldsymbol{\vartheta}^{S}\,\partial(\boldsymbol{\vartheta}^{S})^{\top}}\ell^{\text{Wald}}(h_{\boldsymbol{\vartheta}^{S}};Y,\boldsymbol{X}^{S},\boldsymbol{E})\mid\boldsymbol{X}^{S},\boldsymbol{E}\right]\).
```
0: Data \(\mathcal{D}_{n}\) from Setting 1, \(S\subseteq[d]\), \(\mathcal{H}_{\mathcal{Y},\mathcal{X}^{S}\times\mathcal{E}}\subseteq\mathcal{H}^{ \text{Wald}}_{\mathcal{Y},\mathcal{X}^{S}\times\mathcal{E}}(\boldsymbol{a})\)
1: Fit the \(\text{\sc{tram}}\): \[h_{\hat{\boldsymbol{\vartheta}}^{S}_{n}}\leftarrow\underset{h_{\boldsymbol{ \vartheta}^{S}}\in\mathcal{H}_{\mathcal{Y},\mathcal{X}^{S}\times\mathcal{E}}}{ \operatorname{arg\,max}}\ell(h_{\boldsymbol{\vartheta}^{S}};\mathcal{D}_{n})\]
2: Compute the variance-covariance matrix and its rank \[\hat{\Sigma}_{\hat{\boldsymbol{\gamma}}^{S}_{n}}\leftarrow[\boldsymbol{I}(h_{ \hat{\boldsymbol{\vartheta}}^{S}_{n}};\mathcal{D}_{n})]_{\hat{\boldsymbol{ \gamma}}^{S}}^{-1},\quad K_{S}\leftarrow\operatorname{rank}\hat{\Sigma}_{\hat{ \boldsymbol{\gamma}}^{S}}\]
3: Compute Wald \(p\)-value: \[p_{S,n}(\mathcal{D}_{n})\gets 1-F_{\chi^{2}(K_{S})}\{(\hat{\boldsymbol{\gamma}}^{S}_{n})^{\top}\hat{\Sigma}_{\hat{\boldsymbol{\gamma}}^{S}_{n}}^{-1}\hat{\boldsymbol{\gamma}}^{S}_{n}\}\]
4:return\(p_{S,n}(\mathcal{D}_{n})\)
```
**Algorithm 3** \(\text{\sc{tram}-Wald}\) invariance test
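A matching sketch of the tram-Wald test for the binary logistic case is given below; it adds environment main and interaction effects to a `glm()` fit and tests \(\boldsymbol{\gamma}^{S}=0\) with the usual Wald statistic. Variable names follow the earlier sketches and are illustrative; this is not the **tramicp** implementation, and for the binary logistic model it reduces to the standard GLM Wald test of the environment terms.

```
## Minimal sketch of the tram-Wald invariance test for a binary logistic
## shift tram with one binary environment `e`: include environment main and
## interaction effects and test H_0(S): gamma^S = 0.
wald_pval <- function(S, dat) {
  trms   <- c(S, "e", if (length(S) > 0) paste0(S, ":e"))
  fit    <- glm(reformulate(trms, response = "y"), family = binomial(), data = dat)
  gnames <- c("e", if (length(S) > 0) paste0(S, ":e"))     # environment terms
  gam    <- coef(fit)[gnames]
  V      <- vcov(fit)[gnames, gnames, drop = FALSE]
  stat   <- drop(t(gam) %*% solve(V) %*% gam)              # Wald statistic
  1 - pchisq(stat, df = length(gam))
}
```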
### Practical aspects
**Plausible causal predictors.** The procedure in Algorithm 1 can be used to compute \(p\)-values for all \(S\subseteq[d]\). Based on Peters et al. (2016) and as implemented in **InvariantCausalPrediction** (Meinshausen, 2019), we can transform the set-specific \(p\)-values into predictor-specific \(p\)-values
via
\[\forall j\in[d]:\hat{p}_{j}\coloneqq\begin{cases}1&\text{if }\max_{S\subseteq[d]}p_{S,n}(\mathcal{D}_{n})<\alpha,\\ \max_{S\subseteq[d]:j\notin S}p_{S,n}(\mathcal{D}_{n})&\text{otherwise},\end{cases}\]
where for \(j\in[d]\), \(\hat{p}_{j}\) is now a valid \(p\)-value for the null hypothesis \(H_{0}(j):X^{j}\notin\text{pa}(Y)\) (assuming that the true parents satisfy \((F_{Z},\mathcal{H}_{\mathcal{Y},\mathcal{X}})\)-invariance). We then refer to \(X^{j}\) with \(\hat{p}_{j}\leq\alpha\), \(j\in[d]\) as _plausible causal predictors_.
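The display above can be implemented in a few lines; the sketch below is an illustration (not the **InvariantCausalPrediction** or **tramicp** code) that takes the list of tested subsets and their \(p\)-values and returns predictor-specific \(p\)-values.

```
## Minimal sketch: predictor-specific p-values from set-specific p-values.
## `subsets` is a list of character vectors, `pvals` the matching p-values
## (e.g., computed with the invariance-test sketches above).
predictor_pvals <- function(subsets, pvals, covariates, alpha = 0.05) {
  all_rejected <- max(pvals) < alpha
  sapply(covariates, function(j) {
    if (all_rejected) return(1)
    max(pvals[!vapply(subsets, function(S) j %in% S, logical(1))])
  })
}
```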
**Reducing computational complexity.** The computational complexity of ICP scales exponentially in the number of predictors due to the need to test all possible subsets. Peters et al. (2016) proposed to reduce the high computational burden by pre-screening the predictors using a variable selection algorithm. Pre-screening of covariates in trams can in principle be done via \(L_{1}\)-penalized likelihood estimation (Kook and Hothorn, 2021) or nonparametric feature selection methods that can handle discrete and censored responses. Given a variable selection procedure with output \(S_{n}^{\text{VS}}\) which guarantees \(\lim_{n\to\infty}\mathbb{P}(S_{n}^{\text{VS}}\supseteq\text{pa}_{C_{*}}(Y))\geq 1-\alpha\) at level \(\alpha\in(0,1)\), we can run tramicp with the potentially reduced set of covariates \(S_{n}^{\text{VS}}\) at level \(\alpha\) and maintain the coverage guarantee of ICP at level \(2\alpha\) (Peters et al., 2016, Section 3.4).
While pre-screening simplifies the application of tramicp in practice, it is difficult to ensure that the screening property holds. Although \(L_{1}\)-penalized maximum likelihood is fast and asymptotically guaranteed to return the Markov boundary for linear additive Gaussian noise models (Meinshausen and Buhlmann, 2006), this no longer holds true for general linear additive noise models (Nandy et al., 2017, Example 1 in the supplement) or (linear shift) trams, since the parametric regression model of the response on all covariates can be misspecified if a child is included in the conditioning set.
**Unmeasured confounding.** In Setting 1, we assume that all confounding variables between covariates and response and all parents of the response have been measured. This assumption can be weakened by instead assuming that there exists a subset of ancestors \(A\subseteq\text{an}(Y)\), such that \(E\perp\!\!\!\perp_{\mathcal{G}^{*}}Y\mid\boldsymbol{X}^{A}\) (where \(\perp\!\!\!\perp_{\mathcal{G}^{*}}\) denotes \(d\)-separation in \(\mathcal{G}^{*}\)) and the model for \(Y\) given \(\boldsymbol{X}^{A}\) is correctly specified by a tram. Such transformation models can be constructed in special cases (Barbanti and Hothorn, 2019; Wienke, 2010), but a characterization of this assumption is, to the best of our knowledge, an open problem. As in ICP in the presence of hidden confounders (Peters et al., 2016, Proposition 5), tramicp, under this assumption, returns a subset of the ancestors of \(Y\) with large probability.
### Implementation: R package tramicp
With **tramicp**, we provide a user-friendly implementation for applying tramicp, which we briefly outline in this section. For every model implementation listed in Table 2 in Section 4.2, there is a
corresponding alias in **tramicp** which appends ICP to the model name (e.g., glmICP in Example 1). As an example, in the below code snippet, we apply tramicp-GCM to data generated from a structural causal tram with DAG \(\mathcal{G}\) (shown below) and a "Cotram" (cf. count tram, Siegfried and Hothorn, 2020) model for the count-valued response with \(F_{Z}=F_{\text{SL}}\) (see Table 2 and Example 3).
R> cotramICP(Y ~ X1 + X2 + X3 + X4, data = dat, env = ~ E, type = "residual", + test = "gcm.test", verbose = FALSE)
The argument type specifies the type of invariance considered, i.e., "wald" for the Wald test or "residual" for any residual-based test (the default is test = "gcm.test").
The corresponding output is shown below. tramicp correctly returns \(\{\texttt{X1},\texttt{X2}\}\) as the set of causal parents of the response. The reported \(p\)-value for a predictor of interest is computed as the maximum \(p\)-value over all tested sets not containing the predictor of interest (in case all sets are rejected, the \(p\)-value is set to 1, see Section 3.2). An illustration of the tram-GCM invariance test can be found in Figure 1.
Model-based Invariant Causal Prediction Discrete Odds Count Transformation Model
Call: cotramICP(formula = Y ~ X1 + X2 + X3 + X4, data = df, env = ~ E, verbose = FALSE, type = "residual", test = "gcm.test")
Invariance test: gcm.test
Predictor p-values: X1 X2 X3 X4
0.001 0.000 0.699 0.699
Set of plausible causal predictors: X1 X2
In its general form, tramicp is implemented in the dicp() function. The function takes the arguments formula, data, env (a formula for the environment variables) and modFUN (the function for fitting the model). Appendix D summarizes all model classes and invariance tests implemented in **tramicp** and explains how model classes that are not directly implemented in **tramicp** (such as shift trams from package **tramME**) can be integrated.
If prior knowledge of the form \(S_{m}\subsetneq S_{*}\) is available, only super-sets of \(S_{m}\) need to be tested. Thus, \(S_{m}\) can be interpreted as a mandatory part of the conditioning set. In **tramicp**, such mandatory predictors can be supplied to the mandatory argument as a formula. In our example above, we could, for instance, specify mandatory = \(\sim\) X1 in the call to cotramICP(). In Section 5, we illustrate how such prior knowledge can be used to reduce computation time by testing fewer sets. If non-parents
are falsely included as mandatory, the output of tramicp may be misleading. Also, the robustness guarantees discussed in Section 3.2 'Unmeasured confounding' can break down.
## 4 Simulation Experiments
We now evaluate the proposed algorithms on simulated data based on randomly chosen graphs, with restrictions on the number of possible descendants and children of the response as well as on how the binary environment indicator affects non-response nodes. In the simulations, the conditional distribution of \(Y\mid\mathrm{pa}(Y)\) is correctly specified by a tram; all other structural equations are linear and additive with Gaussian noise.
### Existing methods
#### 4.1.1 Nonparametric ICP via conditional independence testing
Throughout, we report the oracle version of nonparametric ICP. Under Setting 1, let \(\mathcal{G}\) denote the DAG implied by \(C_{*}\). Assuming the Markov property (see Spirtes et al., 2000, p. 29) and faithfulness of \(\mathbb{P}^{C_{*}}_{(Y,\mathbf{X},\mathbf{E})}\) w.r.t. \(\mathcal{G}\), the oracle is defined as the intersection of sets \(S\) for which \(Y\) is conditionally independent of \(\mathbf{E}\) given \(\mathbf{X}^{S}\) (Mogensen et al., 2022, Proposition 4.1),
\[S^{\mathrm{ICP}}=\mathrm{pa}(Y)\cap(\mathrm{ch}(\mathbf{E})\cup\mathrm{pa}( \mathrm{an}(Y)\cap\mathrm{ch}(\mathbf{E}))), \tag{5}\]
where \(\mathrm{pa}(\mathbf{\cdot})\), \(\mathrm{ch}(\mathbf{\cdot})\), \(\mathrm{an}(\mathbf{\cdot})\) denote parents, children and ancestors of a node in \(\mathcal{G}\), respectively.
A general-purpose algorithm for causal feature selection when having access to data from heterogeneous environments is ICP with nonparametric conditional independence testing (Zhang et al., 2011; Strobl et al., 2019), that is, Algorithm 1 with \(p_{S,n}\) being a conditional independence test for the hypothesis \(\mathbf{E}\perp\!\!\!\perp Y\mid\mathbf{X}^{S}\).
#### 4.1.2 ROC-based test for binary responses
Diaz et al. (2022) use a nonparametric invariance test based on comparing the area under the ROC curves obtained from regressing a binary response \(Y\) on \(\mathbf{X}^{S}\) and \((\mathbf{X}^{S},\mathbf{E})\). In case \(S\) is invariant, \(\mathbf{E}\) contains no further information on \(Y\), hence the AUC should be the same. Following their approach, we fit two logistic regression models: (i) for \(Y\) given \(\mathbf{X}^{S}\) and (ii) for \(Y\) given \(\mathbf{X}^{S}\) and \(\mathbf{E}\) with the interaction term between \(\mathbf{X}^{S}\) and \(\mathbf{E}\). Then, we apply the DeLong test (DeLong et al., 1988) to test for the equality of the resulting ROCs. We apply the ROC invariance test only to the settings with binary logistic regression.
### Simulation setup
We outline the simulation setup for our experiments and give details in Appendix B1, which includes the explicit parameterizations of the transformation function for all models, details on the data generating process and simulation scenarios. We compare tramicp with tram-GCM and tram-Wald invariance tests against nonparametric ICP and oracle ICP.
Models and parameterizationIn Section 4, we consider the models discussed in the examples in Section 1.3, namely, the binary GLM ("Binary"), a discrete-odds count transformation model ("Cotram"), and a parametric survival model ("Weibull"). We also show results for the normal linear model and other transformation models for ordered and continuous outcomes (see Tables 1 and 2) in Appendix B2.1. For our numerical experiments, we parameterize the transformation function \(h\) in terms of basis expansions depending on the type of response \(h_{Y}(\boldsymbol{\cdot})\coloneqq\boldsymbol{a}(\boldsymbol{\cdot})^{\top} \boldsymbol{\theta}\), with \(\boldsymbol{a}:\mathcal{Y}\to\mathbb{R}^{b}\), and appropriate constraints on \(\boldsymbol{\theta}\in\Theta\subseteq\overline{\mathbb{R}}^{b}\). Table 1 contains a summary of the bases used for continuous, bounded continuous, count, and binary/ordered responses.
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Basis** & **Sample space** & **Basis functions** & **Constraints on \(\boldsymbol{\theta}\)** \\ \hline Linear & \(\mathcal{Y}\subseteq\mathbb{R}\) & \(\boldsymbol{a}:y\mapsto(1,y)^{\top}\) & \(\theta_{2}>0\) \\ Log-linear & \(\mathcal{Y}\subseteq\mathbb{R}_{+}\) & \(\boldsymbol{a}:y\mapsto(1,\log y)^{\top}\) & \(\theta_{2}>0\) \\ Bernstein of order \(M\) & \(\mathcal{Y}\subseteq\mathbb{R}\) & \(\boldsymbol{a}:y\mapsto\boldsymbol{a}_{\text{Bs},M}(y)\) & \(\theta_{1}\leq\theta_{2}\leq\cdots\leq\theta_{M+1}\) \\ Discrete & \(\mathcal{Y}=\{y_{1},\ldots,y_{K}\}\) & \(\boldsymbol{a}:y\mapsto\boldsymbol{a}_{\text{dc},K}(y)\) & \(\theta_{1}<\theta_{2}<\cdots<\theta_{K-1}<\theta_{K}\coloneqq+\infty\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Linear shift trams used in the experiments. The error distributions are extended cumulative distribution functions, namely, standard normal (\(\Phi\)), standard logistic (\(F_{\text{SL}}\)), and standard minimum extreme value (\(F_{\text{minEV}}\)). The choices of basis for \(h_{Y}\) with parameters \(\boldsymbol{\theta}\) and constraints, are listed in Table 1. We denote the discrete basis for an outcome with \(K\) classes by \(\boldsymbol{a}_{\text{dc},K}\).
**Data-generating process.** In each iteration, the data are drawn from a random DAG with a pre-specified number of potential ancestors and descendants of the response. In the DAG, the response given its causal parents is correctly specified by one of the linear shift trams in Table 2. All other structural equations are linear with additive noise. The environments are encoded by a single Bernoulli-distributed random variable. Algorithm B1 details the DGP. We generated data for 100 random DAGs and 20 repetitions per DAG. Sample sizes 100, 300, 1000, and 3000 are considered. The DAGs were generated with at most 3 ancestors and 2 descendants of the response (excluding the environment).
**Summary measures.** We summarize simulation results in terms of power, measured by the Jaccard similarity of the tramicp output \(S_{n}\) to the true parents \(S_{*}\), and the family-wise error rate (the proportion of repetitions in which tramicp outputs a non-parent), \(\hat{\mathbb{P}}(S_{n}\not\subseteq S_{*})\). The Jaccard similarity between two sets \(A\) and \(B\) is defined as \(|A\cap B|/|A\cup B|\); it is 1 iff \(A=B\) and 0 if \(A\) and \(B\) do not overlap.
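For completeness, the two summary measures can be computed as in the small sketch below; the convention that two empty sets have Jaccard similarity 1 is our assumption for the edge case.

```
## Jaccard similarity between the tramicp output S_n and the true parents S_*
## (assuming, for the edge case, that two empty sets have similarity 1).
jaccard <- function(A, B) {
  if (length(union(A, B)) == 0) return(1)
  length(intersect(A, B)) / length(union(A, B))
}
jaccard(c("x1", "x2"), "x1")   # 0.5
jaccard(character(0), "x1")    # 0: no overlap
## Family-wise error indicator for one repetition: any non-parent in the output?
fwer_indicator <- function(S_n, S_star) length(setdiff(S_n, S_star)) > 0
```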
**Software.** Code for reproducing the results in this manuscript is openly available on GitHub at [https://github.com/LucasKook/tramicp.git](https://github.com/LucasKook/tramicp.git). For setting up, fitting, and sampling from trams, we rely on packages **tram** (Hothorn et al., 2022) and **cotram** (Siegfried et al., 2021). We generate and sample from random DAGs using package **pcalg** (Kalisch et al., 2022). The ROC-based test described in Section 4.1.2 relies on the **pROC** package (Robin et al., 2021) and its roc.test function.
**Hidden variables.** In the presence of hidden variables, tramicp can still output a subset of the ancestors, for example, if there is a set of ancestors that is invariant (see Section 3.2). Suppose we generate \(Y\) as a tram and omit one of its observed parents. If this parent is directly influenced by \(E\), then we do not expect any set of ancestors to be invariant. More precisely, there is no set of observed covariates such that \(E\) is \(d\)-separated from \(Y\) given this set, which, assuming faithfulness, implies that there is no invariant set. When a test has power equal to one to reject non-invariant sets, the output of ICP is the empty set (and thus still a subset of the ancestors of \(Y\), in fact, even of the parents of \(Y\)). We add a corresponding experiment in Appendix B2.2, where we show results for both tramicp-GCM and tramicp-Wald. While both tests seem to be anti-conservative for finite sample sizes, the results indicate that this may indeed be due to insufficiently large sample sizes.
**Error distribution.** Choosing an error distribution \(F_{Z}\) different from the one generating the data is another way to misspecify the model. In Appendix B2.3, we see in simulations that tramicp shows some robustness against misspecification of the error distribution. In all our simulations, misspecifying \(F_{Z}\) leads to reduced power. In the case where the data were generated with one error distribution
Figure 2: Comparison of tramicp-GCM and tramicp-Wald with existing methods in a binomial GLM (“Binary”), a discrete odds count transformation model (“Cotram”) and a Weibull model (“Weibull”) for different sample sizes. The two proposed methods are compared against the GCM test (a nonparametric conditional independence test), the ROC test (for binary logistic regression only) and the oracle version of ICP, see (5), in terms of Jaccard similarity to the set of parents and fraction of outputs containing non-parents (FWER). ICP is level at the nominal 5% with all proposed invariance tests, while tramicp-Wald is most powerful in this setup (tram-Wald is not level at the nominal 5% under model misspecification, see Appendix B4). The ROC invariance test is on par with tram-GCM and GCM in terms of power. The expression \(S_{n}\not\subset S_{*}\) in the figure should be understood as \(S_{n}\cap S_{*}^{c}\neq\emptyset\).
but modelled using the misspecified \(F_{Z}=F_{\text{SL}}\), and vice versa, there is some evidence that tram-Wald is anti-conservative, while tramicp-GCM still seems to be level.
**Non-linear covariate effects.** We have theoretical guarantees for tramicp-Wald only under the class of linear shift trams and, indeed, the presence of a nonlinear covariate effect can lead to tramicp-Wald being anti-conservative. The tram-GCM invariance test can still be used for the more general shift trams (or when linear shift trams are estimated via penalized maximum likelihood; see the simulations in Appendix B4).
## 5 Case study: Causal drivers of survival in critically ill adults
We apply tramicp to the SUPPORT2 dataset (Knaus et al., 1995) with time-to-death in a population of critically ill hospitalized adults being the response variable. SUPPORT2 contains data from 9105 patients of whom 68.1% died after a maximum follow-up of 5.55 years and the remaining 31.9% of observations were right-censored due to loss of follow-up. We consider the following predictors measured at baseline (determined at most three days after hospital admission): Sex (male/female), race (white, black, asian, hispanic, other), number of comorbidities (0-9; num.co), coma score (0-100, scoma), cancer (no cancer, cancer, metastatic cancer; ca), age (years), diabetes (yes/no), dementia (yes/no), disease group (nine groups, including colon and lung cancer; dzgroup).4 For our analysis, we treat num.co (0, 1,..., 5, 6 or more) and scoma (11 levels) as a factor, square-root transform age and omit 43 patients with missing values in any of the predictors listed above. We apply tramicp using both tram-GCM and tram-Wald. For tram-Wald, we only test the presence of main effects of the environments (without additional first-order interaction effects) due to non-convergence when fitting the models with interaction effects.
Footnote 4: ca is not a deterministic function of dzgroup.
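As a concrete illustration of the preprocessing described above, the following is a minimal sketch in Python, assuming the SUPPORT2 data are available as a pandas data frame; the file name and the exact column labels are assumptions made only for illustration and may differ from the published dataset.

```python
import numpy as np
import pandas as pd

# Hypothetical file name; column labels follow the predictor names used in the text.
df = pd.read_csv("support2.csv")
predictors = ["sex", "race", "num.co", "scoma", "ca", "age",
              "diabetes", "dementia", "dzgroup"]

# Treat num.co (0, 1, ..., 5, "6 or more") and scoma as factors.
df["num.co"] = pd.cut(df["num.co"], bins=[-1, 0, 1, 2, 3, 4, 5, np.inf],
                      labels=["0", "1", "2", "3", "4", "5", "6+"])
df["scoma"] = df["scoma"].astype("category")

# Square-root transform age and drop patients with missing predictor values.
df["age"] = np.sqrt(df["age"])
df = df.dropna(subset=predictors)
```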
### Choice of Environments
When applying oracle tests, tramicp maintains the coverage guarantee as long as the environment variables are non-descendants of the response (Peters et al., 2016, Section 3.3). In our study, all measured predictors precede the response chronologically, so, if all model assumptions are satisfied, all choices of environments come with the correct coverage but may yield differences in power. We choose num.co as the environment because, we believe, it is associated with several other predictors and consequently creates enough heterogeneity. We also apply tramicp when additionally using race as an environment. (For a single choice of a valid environment, no multiple testing correction is needed; however, when applying tramicp to several choices of environments, one would need to apply a multiple testing correction, such as Bonferroni over the number of choices of environments, in order to obtain a family-wise coverage guarantee.)
### Results
**The set of all predictors is not invariant** In the model including all predictors the standard Wald test rejects the null hypothesis of no effect for all predictors except race. A Wald test for the main effect of num.co yields a \(p\)-value \(<0.0001\). This provides strong evidence that the purely predictive model using all predictors is not invariant across num.co and thus uses a set of features that is different from the set of causal parents.
**Evidence of age and cancer being direct causes of time-to-death** We now apply tramicp-GCM and tramicp-Wald to the SUPPORT2 dataset specifying the survival time as the response in a Cox proportional hazard model, using num.co as the environment and including all other predictors. Both algorithms output ca and age as plausible causal predictors (i.e., the intersection of all sets for which the invariance test was not rejected equals \(\{\texttt{ca},\texttt{age}\}\)). This can be seen in Figure 3, where all non-rejected sets include both ca and age. The predictor-specific \(p\)-values (see Section 3.2) are given in Table 3 ('Evidence of age and cancer being direct causes of time-to-death').
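The aggregation step behind this output can be summarized by the following sketch; `invariance_pvalue` is a placeholder for either invariance test (tram-GCM or tram-Wald) applied to the model fitted on a given subset, and is not the actual package interface.

```python
from itertools import chain, combinations

def icp_output(predictors, invariance_pvalue, alpha=0.05):
    """Intersection of all subsets whose invariance hypothesis is not rejected.

    invariance_pvalue(subset) is assumed to return the p-value of an invariance
    test (e.g. tram-GCM or tram-Wald) for the model that uses only `subset`.
    """
    all_subsets = chain.from_iterable(
        combinations(predictors, r) for r in range(len(predictors) + 1))
    accepted = [set(s) for s in all_subsets if invariance_pvalue(s) > alpha]
    if not accepted:
        return set()            # no invariant set found: return the empty set
    out = set(predictors)
    for s in accepted:
        out &= s                # intersect over all non-rejected sets
    return out
```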
**Multiple environments** tramicp allows several environments to be specified. When using both race and num.co as environments in the SUPPORT2 dataset, tramicp-GCM outputs ca, age, diabetes as plausible causal predictors. tramicp-Wald additionally outputs dementia and sex.
Figure 3: Set-specific \(p\)-values of the tram-GCM and tram-Wald invariance test in the SUPPORT2 case study described in Section 5. Sets containing both ca and age are depicted as circles, whereas sets containing neither are depicted as crosses. The blue and red shaded regions mark the rejection region for tram-GCM and tram-Wald, respectively, and the dashed line is the identity function. Left: \(p\)-values on a linear scale. All invariant sets contain cancer and age, which is therefore the output of both tramicp-GCM and tramicp-Wald (see Table 3). Right: \(p\)-values on the \(\log_{10}\)-scale. tram-GCM is more conservative than tram-Wald, as all \(p\)-values fall below the identity (dashed line).
Choosing more variables as environments may yield more heterogeneity but may at the same time decrease the power of statistical tests. The predictor \(p\)-values are given in Table 3 ('Multiple environments').
**Incorporating prior knowledge about direct causes** If a set of predictors is known to cause the outcome, this set can always be included in the conditioning set (which reduces computational complexity, because fewer invariance tests have to be performed). We illustrate this by including age, dementia, and diabetes as 'mandatory' covariates when running tramicp (see Section 3.3). In this case, both tramicp-GCM and tramicp-Wald still output ca as a causal predictor of survival. The predictor \(p\)-values are given in the third row of Table 3 ('Incorporating prior knowledge about direct causes').
**Sensitivity analysis: Informative censoring** When applying tramicp in a survival context, the assumption of uninformative censoring plays an important role. In their original analysis of the SUPPORT dataset, Knaus et al. (1995) have assumed that the censoring is uninformative. As a sensitivity analysis, we can introduce (possibly additional) informative censoring by treating the observed event times of a fraction of all patients who experienced the event as right-censored. For 10% (20%,..., 90%) of randomly chosen patients with exact event times, we repeat the analysis ten times. For low fractions of additionally informatively censored observations (10-20%), the output of tramicp-Wald remains stable (ca, age). For larger fractions (30-80%), age is contained in the output less often. For the largest considered fraction (90%), tramicp-Wald outputs mostly the empty set (see Table C1 in Appendix C).
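A minimal sketch of this perturbation is given below; it assumes the preprocessed data frame from the sketch above and an event indicator column named `status` (1 = death observed), which is an assumption about the column name. The rerun of tramicp on each perturbed copy is only indicated by a comment.

```python
import numpy as np

rng = np.random.default_rng(2023)   # seed chosen for illustration only

def censor_fraction(df, fraction, rng):
    """Treat a random fraction of the observed events as right-censored."""
    out = df.copy()
    events = out.index[out["status"] == 1]
    flipped = rng.choice(events, size=int(round(fraction * len(events))), replace=False)
    out.loc[flipped, "status"] = 0  # event time is kept, but the event is marked as censored
    return out

for fraction in np.arange(0.1, 1.0, 0.1):
    for repetition in range(10):
        perturbed = censor_fraction(df, fraction, rng)
        # ... re-run tramicp on `perturbed` and record which set is returned
```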
| **Invariance test** | scoma | dzgroup | ca | age | diabetes | dementia | sex | race | **Environment** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| _Evidence of age and cancer being direct causes of time-to-death_ | | | | | | | | | |
| tram-GCM | 0.239 | 0.239 | **0.000** | **0.003** | 0.157 | 0.176 | 0.162 | 0.220 | num.co |
| tram-Wald | 0.127 | 0.127 | **0.000** | **0.001** | 0.080 | 0.077 | 0.089 | 0.127 | num.co |
| _Multiple environments_ | | | | | | | | | |
| tram-GCM | 0.089 | 0.089 | **0.000** | **0.003** | **0.040** | 0.056 | 0.085 | - | num.co + race |
| tram-Wald | 0.050 | 0.050 | **0.000** | **0.002** | **0.031** | **0.029** | **0.040** | - | num.co + race |
| _Incorporating prior knowledge about direct causes_ | | | | | | | | | |
| tram-GCM | 0.273 | 0.273 | **0.000** | - | - | - | 0.163 | 0.216 | num.co |
| tram-Wald | 0.127 | 0.127 | **0.000** | - | - | - | 0.089 | 0.127 | num.co |

Table 3: tramicp applied to the SUPPORT2 dataset in the different settings described in Section 5. Predictor-specific \(p\)-values (see Section 3.2) are reported for the tram-GCM and tram-Wald invariance tests, together with the environment variable used. \(p\)-values in bold are significant at the 5% level; in each row, the set of predictors with bold numbers corresponds to the output of tramicp.
## 6 Discussion
In this paper, we generalize invariant causal prediction to transformation models, which encompass many classical regression models and different types of responses including categorical and discrete variables. We show that, despite most of these models being neither closed under marginalization nor collapsible (Appendix A1), tramicp retains the same theoretical guarantees in terms of identifying a subset of the causal parents of a response with high probability. We generalize the notion of invariance to discrete and categorical responses by considering score residuals which are uncorrelated with the environment under the null hypothesis. Since score residuals remain well-defined for categorical responses, our proposal is one way to phrase invariance in classification settings. We demonstrate empirically that tramicp-GCM and tramicp-Wald are level at the nominal \(\alpha\), but also have non-trivial power against the alternatives considered in the simulation setup.
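To make the residual-based notion of invariance concrete, the following is a simplified, univariate sketch of a GCM-type test statistic built from score residuals and environment residuals; it is meant for intuition only and is not the implementation used in tramicp.

```python
import numpy as np
from scipy.stats import norm

def gcm_pvalue(score_resid, env_resid):
    """Two-sided p-value for zero mean of the residual products.

    score_resid: score residuals of the transformation model of Y given X_S.
    env_resid:   residuals of regressing a univariate environment on X_S.
    Under invariance, the products have mean zero and the normalized statistic
    is asymptotically standard normal.
    """
    r = np.asarray(score_resid) * np.asarray(env_resid)
    stat = np.sqrt(len(r)) * r.mean() / r.std()
    return 2 * norm.sf(abs(stat))
```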
The tram-Wald invariance test hinges critically on correct model specification. Despite its high power in the simulation setups shown in Figure 2, the tram-Wald invariance test has size greater than its nominal level under slight model misspecification (for instance, presence of a non-linear effect). The tram-GCM test, however, directly extends to more flexible shift trams which can incorporate the non-linearity, comes with theoretical guarantees, and does not lead to anti-conservative behaviour under the null when testing invariance.
We have applied tramicp to roughly ten real world data sets (which technically would require a multiple testing correction), and have often observed that, depending on the choice of environment, either no subset of covariates is invariant (i.e., all invariance tests are rejected) or all subsets of covariates are invariant (i.e., no invariance test is rejected). In both cases, tramicp outputs the empty set - an output that is not incorrect but uninformative.
## Acknowledgments
We thank Niklas Pfister and Alexander Mangulad Christgau for insightful discussions. The research of LK was supported by the Swiss National Science Foundation (SNF; grant no. 214457). LK carried out part of this work during a research visit at the Department of Mathematical Sciences, University of Copenhagen, funded by a UZH GRC travel grant and SNF grant no. S-86013-01-01 and S-42344-04-01. During parts of this research project, SS and JP were supported by a research grant (18968) from the VILLUM Foundation. ARL is supported by a research grant (0069071) from Novo Nordisk Fonden.
|
2309.07580 | Combining Multiple View Components for Exploratory Visualization | The analysis of structured complex data, such as clustered graph based
datasets, usually applies a variety of visual representation techniques and
formats. The majority of currently available tools and approaches to
exploratory visualization are built on integrated schemes for simultaneous
displaying of multiple aspects of studying objects and processes. Usually, such
schemes partition screen space that is composed of multiple views and adopt
interaction patterns to focus on data-driven items. Widely known concepts as
overview plus-detail and focus-plus-context are ambiguous in interpretation by
means of technical terms. Therefore, their implementation by UI design
practitioners need reviews and a classification of the basic approaches to
visual composition of graphical representation modules. We propose a
description of basic components of the view and focus and an overview of their
multiple combinations. | Vladimir Guchev, Paolo Buono, Cristina Gena | 2023-09-14T10:26:46Z | http://arxiv.org/abs/2309.07580v1 | # Combining Multiple View Components for Exploratory Visualization
###### Abstract
The analysis of structured complex data, such as clustered graph-based datasets, usually applies a variety of visual representation techniques and formats. The majority of currently available tools and approaches to exploratory visualization are built on integrated schemes for simultaneously displaying multiple aspects of the objects and processes under study. Usually, such schemes partition the screen space into multiple views and adopt interaction patterns to focus on data-driven items. Widely known concepts such as overview-plus-detail and focus-plus-context are ambiguous when interpreted in technical terms. Therefore, their implementation by UI design practitioners needs a review and a classification of the basic approaches to the visual composition of graphical representation modules. We propose a description of the basic components of view and focus and an overview of their multiple combinations.
Index Terms: K.5.1 [Human-centered computing]: Visualization--Visualization systems and tools Visualization toolkits; K.5.2 [Human-centered computing]: Visualization--Visualization techniques Graph drawings;
## 1 Elements of view and focus
The interactive visual exploration of complex multidimensional datasets, due to the variety of tasks and related techniques, usually requires different data manipulations at different levels of representation. Therefore, a single-view implementation is often not enough for full-fledged analytical work. The demand for data visualization tools based on multiple views, which represent relations within a dataset through multiple simultaneous representations, is increasing [1, 11]. The research directions in this area include: the organization of static predefined screen spaces for multiple dashboard-like views [4], the development of dynamic user-controlled screen spaces [8], the compressed in-screen and off-screen summaries at viewport borders [3], and the use of whitespace and semi-transparent watermarks to enrich the presentation by providing an auxiliary context for selected data points.
Modern frameworks for data analysis allow flexible data manipulation through multi-layered data layouts. Flexibility during exploration is especially important for graph-based representations, which are characterized by an intrinsically complex structure that must be visualized. Node-link representations are the most obvious way to visualize relationships, but they suffer from readability issues and require a trade-off between the level of detail of the shown graph elements at different zoom levels and their layout on the screen. Exploration involves the selective extraction of content from a set of visual layers and data layout elements, according to the user's choices. A data layout can be considered as a layer with different properties, such as the detail of data points, spatial or structural embedding, level of visual aggregation, semantic annotation, and distribution in space and time. Thus, the view available to the user reflects a part of the data layout in the context of exploration.
This work proposes an approach to form the _base_, _focus_ and _context_ views, built according to the scheme shown in Fig. 1 (left). The _base view_ reflects the scope of dynamic query result from the data layout, which changes during the user exploration: specifically, during zoom and panning [2].
The _focus_ consists of the focus view on relevant items and is built on a data selection inside a focal area. The _context view_ represents the projection of the base view to adjusted general components of the data layout. The main parts of the focus component can be used either in conjunction or independently. The independent use of focal area marking is usually applied for attention management (e.g. system-initiated highlighting of a found data element) or as visual feedback for a user-performed manipulation of a data element; the independent use of focused view elements may be implemented for a fixed scope or structure of related data content.
In the case of a focus component, the captured focal area may have an additional display in order to augment (by annotation or extension), amplify (to change the presentation form) or compress (to reduce the content form) the visual representation.
The geometric combinations of the focal area and focused view elements are shown in Fig. 1 (center). When focal area and focused view are bound tightly (Fig. 1, center, top row), the following patterns are available: both elements are equal in area and position (center: a) --as applied in various semantic lenses (e.g. magic, "see-through", "bring neighbours") [5, 15]; the smaller focal area is placed below a bigger focused view (center: b) --the part of the base view is hidden by the overlaying layer or displayed via partial grid deformation or local layout re-arrangement [9] around the focal area, as it is done in fish-eye lens or tabletions [3]; the adjacent combination of both focal area and focused view forms a single bundled unit in the form of a floating viewframe (center: c). Such a solution gives a better understanding of the scale proportions and is used for details-on-demand.
When focal area and focused view are independent (Fig. 1, center, bottom row), the following patterns are available: the focused view is a fixed nested/built-in frame inside the base view (center: d) that shows the data inside the focal area --this approach is widely applied in geospatial systems [15]; the focused view is a fixed or draggable frame, adjacent and external to the base view (center: e), bound to the scope of the focal area --such a composition is used in the tabletop display i-Loupe [17]. Differently from these examples, Fig. 1 (center, f) implements the concept of a reducing glass that shows the navigation context of the base view in a global layout scope.
Regarding the use of screen space (Fig. 1, right), the information capacity of a view component can be extended by viewport frame areas and by the juxtaposition of data-driven and supplementary layers, through whitespace (to mark graph visualization elements) or semi-transparent watermark-style screens (quick-look or pop-up annotations). The space around viewframes may hold pointers to indicate selections, and it can also support contextual awareness during exploration through off-screen summaries [7] and marginal diagrams [12].
## 2 Combining Multiple Components
Data visualization based on multiple views opens wide |
2303.17938 | Dilational Symmetries of Decomposition and Coorbit Spaces | We investigate the invariance properties of general wavelet coorbit spaces
and Besov-type decomposition spaces under dilations by matrices. We show that
these matrices can be characterized by quasi-isometry properties with respect
to a certain metric in frequency domain. We formulate versions of this
phenomenon both for the decomposition and coorbit space settings.
We then apply the general results to a particular class of dilation groups,
the so-called shearlet dilation groups. We present a general, algebraic
characterization of matrices the are coorbit compatible with a given shearlet
dilation group. We determine the groups of compatible dilations for a variety
of concrete examples. | Hartmut Führ, Reihaneh Raisi Tousi | 2023-03-31T10:05:24Z | http://arxiv.org/abs/2303.17938v1 | # Dilational symmetries of decomposition and coorbit spaces
###### Abstract.
We investigate the invariance properties of general wavelet coorbit spaces and Besov-type decomposition spaces under dilations by matrices. We show that these matrices can be characterized by quasi-isometry properties with respect to a certain metric in frequency domain. We formulate versions of this phenomenon both for the decomposition and coorbit space settings.
We then apply the general results to a particular class of dilation groups, the so-called shearlet dilation groups. We present a general, algebraic characterization of matrices that are coorbit compatible with a given shearlet dilation group. We determine the groups of compatible dilations for a variety of concrete examples.
Key words and phrases: coorbit spaces; decomposition spaces; coarse geometry; symmetry groups. 2020 Mathematics Subject Classification: 42C15 (42C40 43A15 43A65 46C15 51F30)
## 1. Introduction
Coorbit spaces formalize a general idea underlying wavelet approximation theory, namely that smoothness properties are closely related to decay properties of the coefficients with respect to suitable families of elementary building blocks, see [8, 9, 10]. It has been understood in the meantime that this formalism can be applied to a large variety of different groups, thus allowing to efficiently capture the different approximation theoretic properties of the various systems of building blocks. In particular, generalized wavelet transforms, based on building blocks arising from a combination of translations and dilations (the latter taken from a suitably chosen matrix groups) acting on a suitably chosen single vector, provide a rich source of diverse examples. The references [6, 5, 16, 19, 11, 12] provide a small sample of the literature; a more comprehensive list of the existing constructions can be found in the introduction of [17].
While these results illustrate the scope of the initial definitions and results of coorbit space theory, being applicable to a large variety of fundamentally different dilation groups, they also point to the necessity to develop ways to understand the similarities and difference of the various groups. For example, the problem of understanding when two different dilation groups (corresponding to two different ways of constructing a wavelet system from a single mother wavelet) have the same approximation theoretic behaviour, as coded by the associated coorbit spaces, has been clarified only fairly recently [28, 17].
An alternative approach to understanding the various coorbit spaces consists in comparing them to more classical, isotropic smoothness spaces. The recent paper [18] presents a particular case of such an investigation, elucidating the embedding
behaviour of general coorbit spaces into isotropic Sobolev spaces. The more recent paper [2] performs a similar task for anisotropic Besov spaces, which can also be viewed as (generalized) coorbit spaces [20]. A motivating factor for undertaking these investigations was understanding the ways in which the choices entering the construction of generalized wavelet system, in particular the choice of dilation group, influence the approximation-theoretic properties of the associated wavelet systems.
In this paper, we contribute to the understanding of wavelet coorbit spaces by studying their _symmetries_, more specifically those that arise from a simple dilation \(f\mapsto f\circ A\), for an invertible matrix \(A\). We will derive sharp criteria for the matrices \(A\) that allow to extend this dilation operator from \(L^{2}(\mathbb{R}^{d})\) to general Besov-type coorbit spaces, using coarse geometric methods.
For a case study illustrating the applicability and scope of our general results, we will then consider the class of shearlet dilation groups in arbitrary dimensions. This class is particularly suited for such a case study: On the one hand, it is increasingly rich as the dimension \(d\) grows. On the other hand, the structural features of shearlet dilation groups give rise to a rather close interplay between coorbit theory on one end, and the algebraic structures underlying the construction of shearlet dilation groups on the other.
### Overview of the paper
The remainder of the paper is structured as follows: Section 2 contains our general results on symmetries of decomposition and coorbit spaces. The fundamentals concerning decomposition spaces and their matrix symmetries are the subject of Subsection 2.1. As a novelty, we introduce the group \(\mathcal{S}_{\mathcal{P}}\) of _decomposition space compatible matrices_, associated to an admissible covering \(\mathcal{P}\), in Definition 2.2. This matrix group can be understood as a symmetry group of the decomposition spaces associated to the covering.
The chief purpose of this paper is the characterization of decomposition space compatible matrices, for various classes of admissible coverings. Our main general result in this respect is Theorem 2.14, which provides a coarse geometric characterization, in a spirit similar to [17]. Subsection 2.2 then turns to generalized wavelets and their associated coorbit spaces. We introduce the group \(\mathcal{S}_{Co_{H}}\) of _coorbit compatible matrices_, a coorbit space analog to \(\mathcal{S}_{\mathcal{P}}\), in Definition 2.24. Using the decomposition space description of coorbit spaces (established in [21]) allows to prove a general characterization of coorbit compatible matrices in Theorem 2.28, and clarifies that coorbit compatibility is indeed a special case of decomposition space compatibility.
The remainder of the paper serves to illustrate the scope of the general theorems, in particular of Theorem 2.28. Sections 3 and 4 focus on generalized shearlet groups. Section 3 recalls the main ingredients that enter the construction of such groups. The algebraic structures underlying generalized shearlet dilation groups allow to translate the coarse geometric conditions from the previous section to rather stringent algebraic relations; these results are summarized in Corollary 4.9.
The final section contains explicit computations of the groups of coorbit compatible matrices for a variety of examples, namely for all admissible dilation groups in dimension \(2\), as well as the standard and Toeplitz shearlet dilation groups in arbitrary dimensions. The examples underscore the variety of behaviours that the coorbit spaces of different dilation groups can exhibit, e.g. as reflected
in the varying dimensions. In addition, the relative ease with which these symmetry groups are computed serves to highlight the usefulness of Theorem 2.28 and Corollary 4.9.
## 2. Decomposition and coorbit spaces: Relevant definitions and results
In this section we discuss decomposition spaces and wavelet coorbit spaces, and the matrices that leave these spaces invariant. We first consider (and solve) the more general problem of understanding dilation invariance of decomposition spaces, before specializing on wavelet coorbit spaces.
### Decomposition spaces
Decomposition spaces were conceived by Feichtinger and Gröbner in [7], and later revisited (and somewhat updated) by Borup and Nielsen [3] and Voigtlaender [27]. In the following, we mostly rely on the last source, and focus on decomposition spaces of Besov type, associated to weighted mixed \(L^{p}\) norms.
The starting point for the definition of these spaces is the notion of an _admissible covering_\(\mathcal{Q}=(Q_{i})_{i\in I}\) of some open set \(\mathcal{O}\subset\mathbb{R}^{d}\) ([7]), i.e. a family of nonempty sets \(Q_{i}\subset\mathbb{R}^{d}\) such that
1. \(\bigcup_{i\in I}Q_{i}=\mathcal{O}\) and
2. \(\sup_{i\in I}\sharp\{j\in I:Q_{i}\cap Q_{j}\neq\emptyset\}<\infty\).
Throughout this paper, we will concentrate on the class of _(tight) structured admissible covering_, see Definition 2.5 of [28]. This means that \(Q_{i}=T_{i}Q+b_{i}\) with \(T_{i}\in\operatorname{GL}(\mathbb{R}^{d})\), \(b_{i}\in\mathbb{R}^{d}\) with an open, precompact set \(Q\), and the involved matrices fulfill
\[\sup_{i,j\in I:Q_{i}\cap Q_{j}\neq\emptyset}\|T_{i}^{-1}T_{j}\|<\infty. \tag{2.1}\]
The next ingredient is a special partition of unity \(\Phi=(\varphi_{i})_{i\in I}\) subordinate to \(\mathcal{Q}\), also called \(\mathrm{L}^{p}\)-BAPU (bounded admissible partition of unity), with the following properties
1. \(\varphi_{i}\in C_{c}^{\infty}(\mathcal{O})\quad\forall i\in I\),
2. \(\sum_{i\in I}\varphi_{i}(x)=1\quad\forall x\in\mathcal{O}\),
3. \(\varphi_{i}(x)=0\) for \(x\in\mathbb{R}^{d}\setminus Q_{i}\) and \(i\in I\),
4. if \(1\leq p\leq\infty\): \(\sup_{i\in I}\|\mathcal{F}^{-1}\varphi_{i}\|_{\mathrm{L}^{1}}<\infty\), \(\text{if }0<p<1\): \(\sup_{i\in I}|\det(T_{i})|^{\frac{1}{p}-1}\|\mathcal{F}^{-1}\varphi_{i}\|_{ \mathrm{L}^{p}}<\infty\).
Here, \(\mathcal{F}\) denotes the usual Fourier transform of a function in \(\mathrm{L}^{2}(\mathbb{R}^{d})\) defined by
\[\mathcal{F}f(\xi):=\int_{\mathbb{R}^{d}}f(x)e^{-2\pi i(x,\xi)}\mathrm{d}x\]
for \(\xi\in\mathbb{R}^{d}\). We also use the notation \(\widehat{f}:=\mathcal{F}(f)\). The final ingredient is a weight \((u_{i})_{i\in I}\) such that there exists \(C>0\) with \(u_{i}\leq Cu_{j}\) for all \(i,j\in I:Q_{i}\cap Q_{j}\neq\emptyset\). A weight with this property is also called _\(\mathcal{Q}\)-moderate_. The interpretation of this property is that the value of \((u_{i})_{i\in I}\) is comparable for indices corresponding to sets which are "close" to each other. Finally, we define the _(Fourier-side) decomposition space with respect to the covering \(\mathcal{Q}\) and the weight \((u_{i})_{i\in I}\) with integrability exponents \(0<p,q\leq\infty\)_ as
\[\mathcal{D}(\mathcal{Q},\mathrm{L}^{p},\ell_{u}^{q}):=\{f\in\mathcal{D}^{ \prime}(\mathcal{O}):\|f\|_{\mathcal{D}(\mathcal{Q},\mathrm{L}^{p},\ell_{u}^{ q})}<\infty\} \tag{2.2}\]
for
\[\|f\|_{\mathcal{D}(\mathcal{Q},\mathrm{L}^{p},\ell_{u}^{q})}:=\left\|\big{(}u_{i} \cdot\|\mathcal{F}^{-1}(\varphi_{i}f)\|_{\mathrm{L}^{p}(\mathbb{R}^{d})}\big{)} _{i\in I}\right\|_{\ell^{q}(I)}. \tag{2.3}\]
As the notation suggests, the decomposition spaces are independent of the precise choice of \(\Phi\); see [27] Corollary 3.4.11.
A crucial concept for the analysis of decomposition coverings is the definition of the set of neighbors in a covering.
**Definition 2.1** ([7] Definition 2.3).: _For a covering \(\mathcal{Q}=(Q_{i})_{i\in I}\) of \(\mathcal{O}\) with \(Q_{i}\subset\mathcal{O}\) for all \(i\in I\), we define the set of neighbors of a subset \(J\subset I\) as_
\[J^{*}:=\left\{\,i\in I\mid\exists j\in J:\ Q_{i}\cap Q_{j}\neq\emptyset\, \right\}.\]
_By induction, we set \(J^{0*}:=J\) and \(J^{(n+1)*}=\left(J^{n*}\right)^{*}\) for \(n\in\mathbb{N}_{0}\). Moreover, we use the shorthand notations \(i^{k*}:=\left\{\,i\,\right\}^{k*}\) and define \(Q_{i}^{k*}:=\bigcup_{j\in i^{k*}}Q_{j}\) for \(i\in I\) and \(k\in\mathbb{N}_{0}\)._
Recall that the overall aim of this paper is to clarify dilation invariance properties of various function spaces. The following definition formalizes these properties for the setting of decomposition spaces.
**Definition 2.2**.: _Let \(\mathcal{P}=(P_{j})_{j\in J}\) be a structured admissible covering of the open set \(\mathcal{O}\subset\mathbb{R}^{d}\). Given an invertible matrix \(A\in\mathrm{GL}(d,\mathbb{R})\), we call \(A\)**(decomposition space) compatible with \(\mathcal{P}\)** if for all \(0<p,q<\infty\) and all \(\mathcal{P}\)-moderate weights \(v\), one has_
\[\forall f\in C_{c}^{\infty}(\mathcal{O}\cap A\mathcal{O})\ :\ \|f\circ A^{-1}\|_{D(\mathcal{P},L^{p},\ell_{v}^{q})}\asymp\|f\|_{D(\mathcal{P},L^{p},\ell_{v}^{q})}\]
_We let_
\[\mathcal{S}_{\mathcal{P}}=\{A\in\mathrm{GL}(d,\mathbb{R}):A\text{ is compatible with }\mathcal{P}\}\]
The following observation is immediate from the definition.
**Remark 2.3**.: _Let \(\mathcal{O}\subset\mathbb{R}^{d}\) be open and let \(\mathcal{P}=(P_{j})_{j\in J}\) be an admissible covering. Then \(\mathcal{S}_{\mathcal{P}}\subset\mathrm{GL}(d,\mathbb{R})\) is a subgroup. In particular, \(A\in\mathcal{S}_{\mathcal{P}}\) iff \(A^{-1}\in\mathcal{S}_{\mathcal{P}}\)._
_It is currently open whether \(\mathcal{S}_{\mathcal{P}}\) is generally closed._
The results and techniques from [27] suggest that an understanding of \(\mathcal{S}_{\mathcal{P}}\) hinges on the comparison of different coverings, and the next definitions provide the pertinent vocabulary for such a comparison. While our exposition follows [27], most of the definitions can be traced back to [7].
**Definition 2.4** ([27] Definition 3.3.1.).: _Let \(\mathcal{Q}=(Q_{i})_{i\in I}\) and \(\mathcal{P}=(P_{j})_{j\in J}\) be families of subsets of \(\mathbb{R}^{d}\)._
1. _We define the set of_ \(\mathcal{P}\)_-neighbors of_ \(i\in I\) _by_ \(J_{i}:=\left\{\,j\in J\mid Q_{i}\cap P_{j}\neq\emptyset\,\right\}.\) _More generally, we call_ \(J_{i}\) _and_ \(I_{j}\) _intersection sets for the coverings_ \(\mathcal{Q}\) _and_ \(\mathcal{P}\)_._
2. _We call_ \(\mathcal{Q}\) _weakly subordinate to_ \(\mathcal{P}\) _if_ \(N(\mathcal{Q},\mathcal{P}):=\sup_{i\in I}\lvert J_{i}\rvert<\infty\)_. The quantity_ \(N(\mathcal{P},\mathcal{Q})\) _is defined analogously, and we call_ \(\mathcal{Q}\) _and_ \(\mathcal{P}\) _weakly equivalent if_ \(N(\mathcal{P},\mathcal{Q})<\infty\) _and_ \(N(\mathcal{Q},\mathcal{P})<\infty\)_._
3. _We call_ \(\mathcal{Q}\) _almost subordinate to_ \(\mathcal{P}\) _if there exists_ \(k\in\mathbb{N}_{0}\) _such that for every_ \(i\in I\) _there exists a_ \(j_{i}\in J\) _with_ \(Q_{i}\subset P_{j_{i}}^{k*}\)_. If_ \(k=0\) _is a valid choice, then we call_ \(\mathcal{Q}\) _subordinate to_ \(\mathcal{P}\)_._
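The quantities \(N(\mathcal{Q},\mathcal{P})\) and \(N(\mathcal{P},\mathcal{Q})\) from Definition 2.4 are easy to compute for concrete coverings. The following small sketch (not part of the original development) does this for two coverings of an interval by open intervals, and illustrates that a dyadic covering is not weakly equivalent to a uniform one: the neighbour counts in one direction grow with the size of the largest dyadic set.

```python
def intersects(q, p):
    """Open intervals q = (a, b) and p = (c, d) intersect iff a < d and c < b."""
    return q[0] < p[1] and p[0] < q[1]

def max_neighbour_count(Q, P):
    """sup_i #{ j : Q_i meets P_j }, i.e. N(Q, P) restricted to the finite lists."""
    return max(sum(intersects(q, p) for p in P) for q in Q)

# A dyadic and a uniform covering of the interval (1, 1024):
dyadic  = [(2.0 ** j, 2.0 ** (j + 2)) for j in range(0, 9)]       # (1,4), (2,8), ..., (256,1024)
uniform = [(float(k), float(k) + 2.0) for k in range(1, 1023)]

print(max_neighbour_count(uniform, dyadic))   # small: each short interval meets only a few dyadic sets
print(max_neighbour_count(dyadic, uniform))   # large, and unbounded as the covering is extended
```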
The relevance of these notions, in particular of weak equivalence, is spelled out in the next two lemmas. Note that the formulation of the next lemma is a special case of the cited result.
**Lemma 2.5** ([28], Theorem 6.9).: _Let \(\mathcal{Q}=(Q_{i})_{i\in I},\mathcal{P}=(P_{j})_{j\in J}\) be two structured admissible coverings of the open set \(\mathcal{O}\subset\mathbb{R}^{d}\). If \(\mathcal{Q}\) and \(\mathcal{P}\) are not weakly equivalent, then_
\[\mathcal{D}(\mathcal{Q},L^{p},\ell_{u_{1}}^{q})\neq\mathcal{D}(\mathcal{P},L ^{p},\ell_{u_{2}}^{q})\]
_for all \(\mathcal{Q}\)-moderate weights \(u_{1}:I\to(0,\infty)\), for all \(\mathcal{P}\)-moderate weights \(u_{2}:J\to(0,\infty)\) and all \(p,q\in(0,\infty]\) with \((p,q)\neq(2,2)\)._
The exception \((p,q)\neq(2,2)\) is necessary to exclude trivial cases: In the case of \((p,q)=(2,2)\) the associated decomposition spaces are just weighted \(\mathrm{L}^{2}\) spaces, by the Plancherel Theorem. In particular, if \(\mathcal{O}\subset\mathbb{R}^{d}\) is open and of full measure, and the weight \(v\) is constant, one finds that \(\mathcal{D}(\mathcal{P},L^{2},\ell_{v}^{2})=L^{2}(\mathbb{R}^{d})\), for all admissible coverings \(\mathcal{P}\).
Weak subordinateness and equivalence of coverings are important assumptions for a multitude of sufficient criteria for embeddings of decomposition spaces and their equality, as developed in [28]. The following statement is [28, Lemma 6.11]. The statement about the range \(0<p,q\leq\infty\) is justified by the remark following the cited lemma.
**Lemma 2.6**.: _Let \(1\leq p,q\leq\infty\) and let \(\emptyset\neq\mathcal{O}\subset\mathbb{R}^{d}\) be open. Further, let \(\mathcal{Q}=(Q_{i})_{i\in I},\mathcal{P}=(P_{j})_{j\in J}\) be two tight structured admissible coverings of \(\mathcal{O}\), and let \(u_{1}\) be a \(\mathcal{Q}\)-moderate weight and \(u_{2}\) a \(\mathcal{P}\)-moderate weight._
_If \(\mathcal{Q}\) and \(\mathcal{P}\) are weakly equivalent and there exists \(C>0\) such that \(C^{-1}u_{1}(i)\leq u_{2}(j)\leq Cu_{1}(i)\) for all \(i\in I\) and \(j\in J\) with \(Q_{i}\cap P_{j}\neq\emptyset\), then_
\[\mathcal{D}(\mathcal{Q},L^{p},\ell_{u_{1}}^{q})=\mathcal{D}(\mathcal{P},L^{p},\ell_{u_{2}}^{q})\]
_with equivalent norms._
_If all sets in the coverings are connected, the conclusion holds for the range \(0<p,q\leq\infty\)._
The following corollary summarizes the significance of weak equivalence for the subsequent discussion. Note that some implications hold under less stringent assumptions. Part (c) points out an important _rigidity_ property of decomposition spaces: Whenever two scales of decomposition spaces coincide in a single nontrivial case, they coincide everywhere.
**Corollary 2.7**.: _Let \(\mathcal{P}=(P_{j})_{j\in J},\mathcal{Q}=(Q_{i})_{i\in I}\) denote two tight, structured admissible coverings of \(\mathcal{O}\), each consisting of connected sets. Then the following are equivalent:_
1. \(\mathcal{P}\) _and_ \(\mathcal{Q}\) _are weakly equivalent._
2. _For all moderate weights_ \(u_{1}\) _on_ \(I\) _and_ \(u_{2}\) _on_ \(J\) _satisfying_ \(C^{-1}u_{1}(i)\leq u_{2}(j)\leq Cu_{1}(i)\) _whenever_ \(Q_{i}\cap P_{j}\neq\emptyset\)_, with a global constant_ \(C\geq 1\)_, and all_ \(p,q\in(0,\infty]\)_, we have_ \[\mathcal{D}(\mathcal{Q},L^{p},\ell_{u_{1}}^{q})=\mathcal{D}(\mathcal{P},L^{p},\ell_{u_{2}}^{q})\,\] _with equivalent norms._
3. _There exist_ \(p,q\in(0,\infty],(p,q)\neq(2,2)\) _such that_ \[\mathcal{D}(\mathcal{Q},L^{p},\ell^{q})=\mathcal{D}(\mathcal{P},L^{p},\ell^{q })\.\]
Proof.: The implication (a) \(\Rightarrow\) (b) follows from Lemma 2.6, (b) \(\Rightarrow\) (c) is trivial, and \((c)\Rightarrow\) (a) is due to Lemma 2.5.
The next result clarifies the influence that the support set \(\mathcal{O}\) has on the scale of decomposition spaces.
**Theorem 2.8**.: _Let \(\emptyset\neq\mathcal{O},\mathcal{O}^{\prime}\subset\mathbb{R}^{d}\) open. Let \(\mathcal{Q}=(Q_{i})_{i\in I}\) denote an admissible covering of \(\mathcal{O}\), \(\mathcal{P}=(P_{j})_{j\in J}\) denote an admissible covering of \(\mathcal{O}^{\prime}\). Assume that either \(\mathcal{O}^{\prime}\cap\partial\mathcal{O}\neq\emptyset\) or \(\mathcal{O}\cap\partial\mathcal{O}^{\prime}\neq\emptyset\) holds, and that \(\mathcal{O}\cap\mathcal{O}^{\prime}\) is unbounded. Let \(p_{1},p_{2},q_{1},q_{2}\in(0,\infty]\). Then_
\[\forall f\in C_{c}^{\infty}(\mathcal{O}\cap\mathcal{O}^{\prime})\ :\ \left\|f\right\|_{D( \mathcal{Q},L^{p_{1}},\ell_{v}^{q_{1}})}\asymp\left\|f\right\|_{D(\mathcal{P},L^{p_{2}},\ell_{w}^{q_{2}})}\]
_can only hold in the trivial case, i.e., when \((p_{1},q_{1})=(2,2)=(p_{2},q_{2})\) and \(v_{i}\asymp w_{j}\) whenever \(Q_{i}\cap P_{j}\neq\emptyset\)._
Note that the assumptions on \(\mathcal{O},\mathcal{O}^{\prime}\) are fulfilled if they are distinct open and dense subsets. Density and openness of \(\mathcal{O}\) imply \(\partial\mathcal{O}=\mathbb{R}^{d}\setminus\mathcal{O}\), and thus \(\mathcal{O}^{\prime}\cap\partial\mathcal{O}=\emptyset\) can only happen if \(\mathcal{O}^{\prime}\subsetneq\mathcal{O}\). In that case however we get \(\partial\mathcal{O}^{\prime}\cap\mathcal{O}=(\mathbb{R}^{d}\setminus\mathcal{ O}^{\prime})\cap\mathcal{O}\neq\emptyset\). Furthermore, \(\mathcal{O}\cap\mathcal{O}^{\prime}\) is dense, hence unbounded.
Our next aim is a metric characterization of weak equivalence. For this purpose we need to introduce the metric induced by a covering:
**Definition 2.9**.: _Let \(\mathcal{O}\subset\mathbb{R}^{d}\) be open and let \(\mathcal{Q}=(Q_{i})_{i\in I}\) be a covering of \(\mathcal{O}\). For \(x,y\in\mathcal{O}\), we say \(x\) and \(y\) are connected by a \(\mathcal{Q}\)-chain (of length m) if there exist \(Q_{1},\ldots,Q_{m}\in\mathcal{Q}\) such that \(x\in Q_{1}\), \(y\in Q_{m}\) and \(Q_{k}\cap Q_{k+1}\neq\emptyset\) for all \(k\in\{1,\ldots,m-1\}\)._
We next define the metric, which is called \(\mathcal{Q}\)_-chain distance_ in [9, Definition 3.4].
**Definition 2.10**.: _Let \(\mathcal{O}\subset\mathbb{R}^{d}\) be open and let \(\mathcal{Q}=(Q_{i})_{i\in I}\) be a covering of \(\mathcal{O}\). Define the map \(d_{\mathcal{Q}}:\mathcal{O}\times\mathcal{O}\rightarrow\mathbb{N}_{0}\cup\{\infty\}\) by_
\[d_{\mathcal{Q}}(x,y)=\begin{cases}\inf\left\{m\in\mathbb{N}\,\middle|\,x,y\text{ are connected by a }\mathcal{Q}\text{-chain of length }m\right\},&x\neq y\\ 0,&x=y,\end{cases}\]
_where we set \(\inf\emptyset=\infty\)._
**Remark 2.11**.: _The above map in fact defines a metric on \(\mathcal{O}\), without any further restrictions on the covering. Observe that the metrics used in this paper are allowed to take the value \(\infty\), just as in the precursor paper [17]. If the covering \(\mathcal{Q}\) consists of connected sets, the statement \(d_{\mathcal{Q}}(x,y)<\infty\) is equivalent to the fact that \(x,y\) are contained in the same connected component in \(\mathcal{O}\). Ultimately this slight extension of the standard definition of metrics is innocuous in the setting we study in this paper; we refer to [17], specifically to Remark 3.20 therein, for more details._
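For finite coverings, the chain distance of Definition 2.10 can be computed by a breadth-first search on the intersection graph of the covering sets. The sketch below is purely illustrative (it is not part of the paper) and uses open intervals as a stand-in for the sets \(Q_{i}\).

```python
from collections import deque

def chain_distance(x, y, cover, intersects, contains):
    """Length of a shortest Q-chain joining x and y; float('inf') if no chain exists."""
    if x == y:
        return 0
    start = [i for i, q in enumerate(cover) if contains(q, x)]
    goal = {i for i, q in enumerate(cover) if contains(q, y)}
    queue = deque((i, 1) for i in start)   # a single set containing x already gives a chain of length 1
    seen = set(start)
    while queue:
        i, m = queue.popleft()
        if i in goal:
            return m
        for j, q in enumerate(cover):
            if j not in seen and intersects(cover[i], q):
                seen.add(j)
                queue.append((j, m + 1))
    return float("inf")

# Dyadic covering of (0, infinity), as induced by the one-dimensional dilation group {2^j}:
cover = [(2.0 ** j, 2.0 ** (j + 2)) for j in range(-30, 30)]
interval_intersects = lambda q, p: q[0] < p[1] and p[0] < q[1]
interval_contains = lambda q, x: q[0] < x < q[1]
print(chain_distance(1.0, 100.0, cover, interval_intersects, interval_contains))  # grows like log2(100)
```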
**Definition 2.12** (cf. [24] Definition 1.3.4).: _Let \((X,d_{X})\) and \((Y,d_{Y})\) be metric spaces. A map \(f:X\to Y\) is a quasi-isometry if the following conditions are satisfied:_
1. _The map_ \(f\) _is a quasi-isometric embedding. This means there exist constants_ \(L,C>0\) _such that_ \[L^{-1}d_{X}(x,x^{\prime})-C\leq d_{Y}\left(f(x),f(x^{\prime})\right)\leq Ld_{X} (x,x^{\prime})+C\] _for all_ \(x,x^{\prime}\in X\)
2. _The map_ \(f\) _is coarsely surjective. This means that there exists_ \(K>0\) _such that for every_ \(y\in Y\) _exists an_ \(x\in X\) _with_ \(d_{Y}\left(f(x),y\right)\leq K\)_._
The following result provides the metric reformulation of weak equivalence; see Theorem 3.22 of [17].
**Theorem 2.13**.: _Let \(\mathcal{O}\subset\mathbb{R}^{d}\) be open and let \(\mathcal{Q}=(Q_{i})_{i\in I}\) and \(\mathcal{P}=(P_{j})_{j\in J}\) be structured admissible coverings of \(\mathcal{O}\) comprised of open connected subsets of \(\mathcal{O}\). Then the following statements are equivalent:_
1. _The coverings_ \(\mathcal{Q}\) _and_ \(\mathcal{P}\) _are weakly equivalent._
2. _The map_ \(\operatorname{id}:(\mathcal{O},d_{\mathcal{Q}})\to(\mathcal{O},d_{\mathcal{P} }),\ x\mapsto x\) _is a quasi-isometry._
The following characterization of decomposition space compatible dilations is one of the central results of this paper:
**Theorem 2.14**.: _Let \(\mathcal{O}\subset\mathbb{R}^{d}\) be open and dense, and let \(\mathcal{Q}=(Q_{i})_{i\in I}\) be a structured admissible covering consisting of connected subsets. Let \(A\in\operatorname{GL}(d,\mathbb{R})\). Then \(A\in\mathcal{S}_{\mathcal{Q}}\) iff the following two conditions hold:_
1. \(A\mathcal{O}=\mathcal{O}\)_._
2. _The map_ \(\varphi_{A}:(\mathcal{O},d_{\mathcal{Q}})\to(\mathcal{O},d_{\mathcal{Q}})\)_,_ \(z\mapsto Az\) _is a quasi-isometry._
Proof.: We first show necessity of the conditions. Hence assume that \(A\in\mathcal{S}_{\mathcal{Q}}\).
Let \(\mathcal{O}^{\prime}=A\mathcal{O}\). Define the covering \(\mathcal{P}=(P_{i})_{i\in I}\) of \(\mathcal{O}^{\prime}\), with \(P_{i}=AQ_{i}\). Then it is straightforward to see that \(\mathcal{P}\) is a structured admissible covering of \(\mathcal{O}^{\prime}\), consisting of connected subsets. In addition, if \((\psi_{i})_{i\in I}\) is a BAPU subordinate to \(\mathcal{Q}\), then \((\varphi_{i})_{i\in I}:=(\psi_{i}\circ A^{-1})_{i\in I}\) is a BAPU subordinate to \(\mathcal{P}\).
We next observe that
\[(f\circ A^{-1})\cdot\varphi_{i}=(f\cdot\psi_{i})\circ A^{-1}\.\]
As a consequence,
\[\mathcal{F}^{-1}((f\circ A^{-1})\cdot\varphi_{i})=|\det(A)|\left(\mathcal{F}^{-1}(f\cdot\psi_{i})\right)\circ A^{T}\,\]
where \(A^{T}\) denotes the transpose of \(A\) (below we will also use the shorthand \(A^{-T}:=(A^{T})^{-1}\)). This entails
\[\|\mathcal{F}^{-1}((f\circ A^{-1})\cdot\varphi_{i})\|_{p}=|\det(A)|^{1-1/p}\|\mathcal{F}^{-1}(f\cdot\psi_{i})\|_{p}\,\]
which immediately entails the norm equivalence
\[\|f\circ A^{-1}\|_{\mathcal{D}(\mathcal{P},L^{p},\ell^{q})}\asymp\|f\|_{\mathcal{D}(\mathcal{Q},L^{p},\ell^{q})}\, \tag{2.4}\]
for all \(0<p,q<\infty\). On the other hand, the assumption gives rise to
\[\|f\|_{\mathcal{D}(\mathcal{Q},L^{p},\ell^{q})}\asymp\|f\circ A^{-1}\|_{ \mathcal{D}(\mathcal{Q},L^{p},\ell^{q})}\,\]
which combines with the previous norm equivalence to yield
\[\|f\circ A^{-1}\|_{\mathcal{D}(\mathcal{Q},L^{p},\ell^{q})}\asymp\|f\circ A^{ -1}\|_{\mathcal{D}(\mathcal{P},L^{p},\ell^{q})}\]
and finally
\[\|f\|_{\mathcal{D}(\mathcal{Q},L^{p},\ell^{q})}\asymp\|f\|_{\mathcal{D}( \mathcal{P},L^{p},\ell^{q})}. \tag{2.5}\]
Each of these norm equivalences holds for \(f\in C^{\infty}_{c}(\mathcal{O}\cap\mathcal{O}^{\prime})\). Since both \(\mathcal{O}\) and \(\mathcal{O}^{\prime}\) are open and dense, this entails via Theorem 2.8 that \(\mathcal{O}=A\mathcal{O}\), i.e. condition (i).
In order to establish condition (ii), note that the norm equivalence (2.5) on \(C^{\infty}_{c}(\mathcal{O})\) entails equality of the associated decomposition spaces, i.e.,
\[\forall 1\leq p,q<\infty\ :\ \mathcal{D}(\mathcal{Q},L^{p},\ell^{q})=\mathcal{D}(\mathcal{P},L^{p},\ell^{q}). \tag{2.6}\]
To see this, observe that by Theorem 5.6 of [26], there is a countable system \((\gamma_{\kappa})_{\kappa\in K}\subset C^{\infty}_{c}(\mathcal{O})\) such that each \(f\in\mathcal{D}(\mathcal{Q},L^{p},\ell^{q})\) can be written as
\[f=\sum_{\kappa\in K}c_{\kappa}\gamma_{\kappa}\, \tag{2.7}\]
converging unconditionally in \(\|\cdot\|_{\mathcal{D}(\mathcal{Q},L^{p},\ell^{q})}\) as well as in \(\mathcal{D}^{\prime}(O)\). By choice of the \(\gamma_{\kappa}\in C^{\infty}_{c}(\mathcal{O})\), the norm equivalence (2.5) applies to each finite partial sum on the right hand side, resulting in the convergence of (2.7) in \(\mathcal{D}(\mathcal{P},L^{p},\ell^{q})\) as well. Thus we have shown \(\mathcal{D}(\mathcal{Q},L^{p},\ell^{q})\subset\mathcal{D}(\mathcal{P},L^{p}, \ell^{q})\), and the converse follows by symmetry.
But now Corollary 2.7 (c) \(\Rightarrow\) (a) entails weak equivalence of \(\mathcal{P}\) and \(\mathcal{Q}\), whence Theorem 2.13 allows us to conclude that
\[\mathrm{id}:(\mathcal{O},d_{\mathcal{Q}})\to(\mathcal{O},d_{\mathcal{P}})\]
is a quasi-isometry. On the other hand, by construction of \(\mathcal{P}\) from \(\mathcal{Q}\), and of the associated metrics, the map
\[(\mathcal{O},d_{\mathcal{P}})\to(\mathcal{O},d_{\mathcal{Q}})\,\ \xi\mapsto A\xi\,\]
is in fact an isometry. In summary, we get that
\[(\mathcal{O},d_{\mathcal{Q}})\to(\mathcal{O},d_{\mathcal{Q}})\,\ \xi\mapsto A\xi\]
is the composition of a quasi-isometry and an isometry, hence a quasi-isometry, which is condition (ii).
The converse direction is obtained largely by the same arguments: Assuming (i) and (ii), we conclude that \(\mathcal{P}\) is weakly equivalent to \(\mathcal{Q}\) (as defined above), which in turn implies via Corollary 2.7 the equality of the associated decomposition spaces, and the equivalence of the associated norms. Now the norm equivalence (2.4) finishes the proof.
### Admissible dilation groups
For a closed matrix group \(H\leq\mathrm{GL}(\mathbb{R}^{d})\), which we also call _dilation group_ in the following, we define the group \(G:=\mathbb{R}^{d}\rtimes H\), generated by dilations with elements of \(H\) and arbitrary translations, with the group law \((x,h)\circ(y,g):=(x+hy,hg)\). We denote integration with respect to a left Haar measure on \(H\) with \(\mathrm{d}h\), the associated left Haar measure on \(G\) is then given by \(d(x,h)=|\mathrm{det}\,h|^{-1}dxdh\). The Lebesgue spaces on \(G\) are always defined through integration with respect to a Haar measure. The group \(G\) acts on the space \(\mathrm{L}^{2}(\mathbb{R}^{d})\) through the _quasi-regular representation_\(\pi\) defined by \([\pi(x,h)f](y):=|\mathrm{det}\,h|^{-1/2}f(h^{-1}(y-x))\) for \(f\in\mathrm{L}^{2}(\mathbb{R}^{d})\). The _generalized continuous wavelet transform (with respect to \(\psi\in\mathrm{L}^{2}(\mathbb{R}^{d})\))_ of \(f\) is then given as the function \(W_{\psi}f:G\to\mathbb{C}:(x,h)\mapsto\left<f,\pi(x,h)\psi\right>.\) Important properties of the map \(W_{\psi}:f\mapsto W_{\psi}f\) depend on \(H\) and the chosen \(\psi\). If the quasi-regular representation is _square-integrable_, which means that there exists \(\psi\neq 0\) with \(W_{\psi}\psi\in\mathrm{L}^{2}(G)\), and irreducible, then we call \(H\)_admissible_ and the map \(W_{\psi}:\mathrm{L}^{2}(\mathbb{R}^{d})\to\mathrm{L}^{2}(G)\) is a multiple of an isometry, which gives rise to the (weak-sense) inversion formula
\[f=\frac{1}{C_{\psi}}\int_{G}W_{\psi}f(x,h)\pi(x,h)\psi\mathrm{d}(x,h)\, \tag{2.8}\]
i.e., each \(f\in\mathrm{L}^{2}(\mathbb{R}^{d})\) is a continuous superposition of the wavelet system. According to results in [13], [15], the admissibility of \(H\) can be characterized by the _dual action_ defined by \(H\times\mathbb{R}^{d}\to\mathbb{R}^{d},(h,\xi)\mapsto h^{-T}\xi\); for fixed \(\xi\in\mathbb{R}^{d}\) we write \(p_{\xi}(h):=h^{-T}\xi\) for the associated orbit map. In fact, \(H\) is admissible iff the dual action has a single open orbit \(\mathcal{O}:=H^{-T}\xi_{0}\subset\mathbb{R}^{d}\) of full measure for some \(\xi_{0}\in\mathbb{R}^{d}\) and
additionally the isotropy group \(H_{\xi_{0}}:=\{\,h:p_{\xi_{0}}(h)=\xi_{0}\,\}\subset H\) is compact; see e.g. [15].
Every admissible group gives rise to an associated admissible covering. This is done using the _dual action_ by picking a _well-spread_ family in \(H\), i.e. a family of elements \((h_{i})_{i\in I}\subset H\) with the properties
1. there exists a relatively compact neighborhood \(U\subset H\) of the identity such that \(\bigcup_{i\in I}h_{i}U=H\) - we say \((h_{i})_{i\in I}\) is _\(U\)-dense_ in this case - and
2. there exists a neighborhood \(V\subset H\) of the identity such that \(h_{i}V\cap h_{j}V=\emptyset\) for \(i\neq j\) - we say \((h_{i})_{i\in I}\) is _\(V\)-separated_ in this case.
The _dual covering induced by \(H\)_ is then given by the family \(\mathcal{Q}=(Q_{i})_{i\in I}\), where \(Q_{i}=p_{\xi_{0}}(h_{i}U)\) for some \(\xi_{0}\) with \(H^{-T}\xi_{0}=\mathcal{O}\). It can be shown that well-spread families always exist, and that the induced covering is indeed a tight structured admissible covering in the sense defined above. In particular, \(\mathrm{L}^{p}\)-BAPUs exist for this covering, according to Theorems 4.4.6 and 4.4.13 of [27]. Furthermore, there always exist induced coverings consisting of open and connected sets. For ease of reference, we state this as a lemma.
**Lemma 2.15** ([22] Corollary 2.5.9).: _Let \(H\) denote an admissible dilation group, with open dual orbit \(\mathcal{O}\). Then there always exists an induced covering of \(\mathcal{O}\) by \(H\) that is a tight structured admissible covering consisting of (path-) connected open sets._
We call any induced covering that is a structured admissible covering consisting of open and connected sets an _induced connected covering of \(\mathcal{O}\) by \(H\)_. Note that two different induced coverings of the same group are always weakly equivalent, see e.g. [22] Corollary 2.6.5. This fact can be understood as a consequence of the decomposition space description of wavelet coorbit spaces in Theorem 2.18 below.
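As a concrete illustration (not taken from the paper), consider the similitude group \(H=\mathbb{R}^{+}\times SO(2)\) acting on \(\mathbb{R}^{2}\), whose dual orbit is \(\mathbb{R}^{2}\setminus\{0\}\). A well-spread family \(h_{j,k}=2^{j}R_{2\pi k/8}\) induces a covering of the orbit by dilated and rotated annular sectors. The sketch below builds such a covering numerically; the particular choices of neighbourhood, reference frequency and enlargement factor are assumptions made only for illustration.

```python
import numpy as np

K = 8                      # number of angular directions per scale
sectors = []               # each induced set stored in polar coordinates (r_min, r_max, phi_min, phi_max)
for j in range(-10, 10):
    for k in range(K):
        r_min, r_max = 2.0 ** (-j - 1), 1.1 * 2.0 ** (-j)                 # radial part, slightly enlarged
        phi_min, phi_max = 2 * np.pi * k / K, 2 * np.pi * (k + 1.1) / K   # angular part, slightly enlarged
        sectors.append((r_min, r_max, phi_min, phi_max))

def members(xi, sectors):
    """Indices of the sectors containing the nonzero frequency xi."""
    r, phi = np.hypot(*xi), np.arctan2(xi[1], xi[0]) % (2 * np.pi)
    return [i for i, (r0, r1, p0, p1) in enumerate(sectors)
            if r0 < r < r1 and (p0 < phi < p1 or p0 < phi + 2 * np.pi < p1)]

print(members((0.3, -0.7), sectors))   # indices of the few sectors containing this frequency (finite overlap)
```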
### Generalized wavelet coorbit spaces
Coorbit spaces are defined in terms of the decay behavior of the generalized wavelet transform. To give a precise definition, we introduce weighted mixed \(\mathrm{L}^{p}\)-spaces on \(G\), denoted by \(\mathrm{L}^{p,q}_{v}(G)\). By definition, this space is the set of functions
\[\left\{f:G\to\mathbb{C}:\int_{H}\left(\int_{\mathbb{R}^{d}}|f(x,h)|^{p}\,v(x,h )^{p}\mathrm{d}x\right)^{q/p}\frac{\mathrm{d}h}{|\det(h)|}<\infty\right\},\]
with natural (quasi-)norm \(\|\cdot\|_{\mathrm{L}^{p,q}_{v}}\). This definition is valid for \(0<p,q<\infty\), for \(p=\infty\) or \(q=\infty\) the essential supremum has to be taken at the appropriate place instead. The function \(v:G\to\mathbb{R}^{>0}\) is a measurable weight function that fulfills the condition \(v(ghk)\leq v_{0}(g)v(h)v_{0}(k)\) for some submultiplicative measurable weight \(v_{0}\). If the last condition is satisfied, we call \(v\) left- and right moderate with respect to \(v_{0}\). Thus, the expression \(\|W_{\psi}f\|_{\mathrm{L}^{p,q}_{v}}\) can be read as a measure of wavelet coefficient decay of \(f\). We consider weights which only depend on \(H\). The coorbit space \(\mathrm{Co}\left(\mathrm{L}^{p,q}_{v}(\mathbb{R}^{d}\rtimes H)\right)\) is then defined as the space
\[\left\{f\in(\mathcal{H}^{1}_{w})^{\frown}:W_{\psi}f\in W(\mathrm{L}^{p,q}_{v}( G))\right\} \tag{2.9}\]
for a suitable wavelet \(\psi\) fulfilling various technical conditions, and some control weight \(w\) associated to \(v\). The space \((\mathcal{H}^{1}_{w})^{\frown}\) denotes the space of antilinear functionals on \(\mathcal{H}^{1}_{w}:=\left\{f\in\mathrm{L}^{2}(\mathbb{R}^{d}):W_{\psi}f\in\mathrm{L}^{1}_{w}(G)\right\}\). Given a Banach function space \(Y\) on \(G\), we let \(W(Y)\) denote the Wiener amalgam space defined by \(W_{Q}(Y):=\{f\in\mathrm{L}^{\infty}_{\mathrm{loc}}(G)\mid M_{Q}f\in Y\}\) with quasi-norm \(\|f\|_{W_{Q}(Y)}:=\|M_{Q}f\|_{Y}\) for \(f\in W_{Q}(Y)\). Here we used the _maximal function_ \(M_{Q}f\) for some suitable unit neighborhood \(Q\subset G\), given by \(M_{Q}f:G\to[0,\infty],\ x\mapsto\mathrm{ess\ sup}_{y\in xQ}\,|f(y)|\).
The appearance of the Wiener amalgam space in (2.9) is necessary to guarantee consistently defined quasi-Banach spaces in the case \(\{p,q\}\cap(0,1)\neq\emptyset\), see [25] and [27]. In the classical coorbit theory for Banach spaces, which was developed in [9], [10], the Wiener amalgam space is replaced by \(\mathrm{L}^{p,q}_{v}(G)\) and this change leads to the same space for \(p,q\geq 1\), see [25].
Many useful properties of these spaces are known and hold in the quasi-Banach space case as well as in the Banach space case. The most prominent examples of coorbit spaces associated to generalized wavelet transforms are the homogeneous Besov spaces and the modulation spaces. However, each shearlet group, a class of groups we introduce in the next subsection, gives rise to its own scale of coorbit spaces, as well; see [23], [4] and [16].
**Remark 2.16**.: _For \(0<p,q\leq 2\) and constant weights \(v\), the coorbit space \(Co(L^{p,q}_{v})\) has a canonical embedding into \(L^{2}(\mathbb{R}^{d})\); see Remark 2.15 of [17]. Here the appeal to the anti-dual \((\mathcal{H}^{1}_{w})^{\frown}\) can be avoided, i.e. one can simply define_
\[Co(L^{p,q}_{v})=\left\{f\in L^{2}(\mathbb{R}^{d}):W_{\psi}f\in W(\mathrm{L}^{p,q}_{v}(G))\right\}\.\]
_This has the useful implication that coorbit spaces associated to different dilation groups can be compared in a straightforward manner._
The next definition will be useful for the transfer of weights from the coorbit to the decomposition space setting.
**Definition 2.17** ([27] Definition 4.5.3.).: _For \(q\in(0,\infty]\) and a weight \(m:H\to(0,\infty)\), we define the weight \(m^{(q)}:H\to(0,\infty),\ h\mapsto|\mathrm{det}(h)|^{\frac{1}{2}-\frac{1}{q}}m(h).\) Here, we set \(\frac{1}{\infty}:=0\)._
The connection between coorbit spaces and decomposition spaces is given by the next theorem. For the Banach space case, we also refer to [21]. A recent extension beyond the irreducible setting can be found in [20].
**Theorem 2.18** ([27] Theorem 4.6.3).: _Let \(\mathcal{Q}\) be a covering of the dual orbit \(\mathcal{O}\) induced by \(H\), \(0<p,q\leq\infty\) and \(u=(u_{i})_{i\in I}\) a suitable weight, then the Fourier transform \(\mathcal{F}:\mathrm{Co}\left(\mathrm{L}^{p,q}_{v}(\mathbb{R}^{d}\rtimes H) \right)\to\mathcal{D}(\mathcal{Q},\mathrm{L}^{p},\ell^{q}_{u})\) is an isomorphism of (quasi-) Banach spaces. The weight \((u_{i})_{i\in I}\) can be chosen as \(u_{i}:=v^{(q)}(h_{i})\), where \((h_{i})_{i\in I}\) is the well-spread family used in the construction of \(\mathcal{Q}\) and we call such a weight a \(\mathcal{Q}-\)discretization of \(v\)._
**Remark 2.19**.: _In the following, we will mostly concentrate on constant weights, i.e. on the study of coorbit spaces of the type \(Co(L^{p,q}(G))\) corresponding to \(v\equiv 1\). This has the important consequence that the \(\mathcal{Q}\)-discretization \((u_{i})_{i\in I}\) obtained from a dual covering \(\mathcal{Q}=(Q_{i})_{i\in I}=(h_{i}^{-T}Q)_{i\in I}\) fulfills_
\[u_{i}=|\mathrm{det}(h_{i})|^{\frac{1}{2}-\frac{1}{q}}=|Q|^{\frac{1}{2}-\frac{1 }{q}}|Q_{i}|^{\frac{1}{q}-\frac{1}{2}}\asymp|Q_{i}|^{\frac{1}{q}-\frac{1}{2}}\.\]
_In the parlance of [17, Definition 2.7], the induced weight \((u_{i})_{i\in I}\) is **intrinsic** with exponent \(\alpha=\frac{1}{q}-\frac{1}{2}\)._
**Remark 2.20**.: _Note that the domain of the Fourier transform in the previous theorem requires some additional clarification, which can be found in Remark 2.15 of
[17]. _For the following discussion it will be sufficient to recall the already mentioned inclusion of \(Co(L^{p,q}_{v})\subset L^{2}(\mathbb{R}^{d})\) for \(1\leq p,q\leq 2\) and constant weight \(v\). With this identification the Fourier transform from Theorem 2.18 coincides with the Plancherel transform on \(L^{2}(\mathbb{R}^{d})\)._
We next formalize the property that two admissible dilation groups have the same coorbit spaces. We already pointed out that a literal interpretation of this property is not generally available, at least not for all possible choices of coorbit space norms.
**Definition 2.21**.: _Let \(H_{1},H_{2}\leq GL(\mathbb{R}^{d})\) denote admissible matrix groups. We call \(H_{1},H_{2}\) coorbit equivalent if for all \(0<p,q\leq\infty\) and for all \(f\in L^{2}(\mathbb{R}^{d})\) we have_
\[\|f\|_{Co(L^{p,q}(\mathbb{R}^{d}\rtimes H_{1}))}\asymp\|f\|_{Co(L^{p,q}( \mathbb{R}^{d}\rtimes H_{2}))}\.\]
_Here the norm equivalence is understood in the generalized sense that one side is infinite iff the other side is._
**Remark 2.22**.: _An example of distinct groups that are coorbit equivalent can be found in [21], Section 9: If \(H=\mathbb{R}^{+}\times SO(d)\), for \(d>1\), and \(C\in GL(\mathbb{R}^{d})\) is arbitrary, then \(H\) and \(C^{-1}HC\) are coorbit equivalent, but typically distinct._
The question whether two groups are coorbit equivalent can now be answered using the metric criteria for decomposition spaces. The following result is essentially [17, Theorem 4.17]. Note that condition (c) follows by combining the decomposition space description of coorbit spaces from Theorem 2.18 with the rigidity property of decomposition spaces, formulated after Lemma 2.6.
**Theorem 2.23**.: _Let \(H_{1},H_{2}\leq\operatorname{GL}(\mathbb{R}^{d})\) denote admissible matrix groups, and let \(\mathcal{O}_{1},\mathcal{O}_{2}\) denote the associated open dual orbits. Then the following are equivalent:_
1. \(H_{1}\) _and_ \(H_{2}\) _are coorbit equivalent._
2. _For all_ \(1\leq p,q\leq 2\)_:_ \(Co(L^{p,q}(\mathbb{R}^{d}\rtimes H_{1}))=Co(L^{p,q}(\mathbb{R}^{d}\rtimes H _{2}))\)_, as subspaces of_ \(L^{2}(\mathbb{R}^{d})\)_._
3. _There exists_ \(1\leq p,q\leq 2\) _with_ \((p,q)\neq(2,2)\)_, such that_ \(Co(L^{p,q}(\mathbb{R}^{d}\rtimes H_{1}))=Co(L^{p,q}(\mathbb{R}^{d}\rtimes H _{2}))\)_, as subspaces of_ \(L^{2}(\mathbb{R}^{d})\)_._
4. \(\mathcal{O}_{1}=\mathcal{O}_{2}\)_, and the coverings induced by_ \(H_{1}\) _and_ \(H_{2}\) _on the common open orbit are weakly equivalent._
Following the cue of coorbit equivalence, we now introduce a notion of coorbit compatible matrices that focuses on invariance of certain coorbit spaces.
**Definition 2.24**.: _Let \(H<\operatorname{GL}(\mathbb{R}^{d})\) denote an admissible matrix group, and \(A\in\operatorname{GL}(\mathbb{R}^{d})\). We call \(A\)**coorbit compatible with \(H\)**, if for all \(0<p,q\leq\infty\) and for all \(f\in L^{2}(\mathbb{R}^{d})\) we have_
\[\|f\|_{Co(L^{p,q}(\mathbb{R}^{d}\rtimes H))}\asymp\|f\circ A^{-1}\|_{Co(L^{p,q}(\mathbb{R}^{d}\rtimes H))}\.\]
_We let_
\[\mathcal{S}_{Co_{H}}=\{A\in\operatorname{GL}(\mathbb{R}^{d}):\text{$A$ is coorbit compatible with $H$}\}\]
Just as for \(\mathcal{S}_{\mathcal{P}}\), we immediately obtain that \(\mathcal{S}_{Co_{H}}\) is a group:
**Remark 2.25**.: _If \(H<\operatorname{GL}(\mathbb{R}^{d})\) is an arbitrary admissible matrix group, then \(\mathcal{S}_{Co_{H}}\subset\operatorname{GL}(\mathbb{R}^{d})\) is a subgroup. In particular, \(A\in\mathcal{S}_{Co_{H}}\) iff \(A^{-1}\in\mathcal{S}_{Co_{H}}\). Furthermore, given \(A\in H\), we can compute_
\[W_{\psi}(f\circ A)(x,h)=(W_{\psi}f)((0,A^{-1})\cdot(x,h))\,\]
_and now left invariance of the spaces \(L^{p,q}_{v}(\mathbb{R}^{d}\rtimes H)\) entails that \(A\in\mathcal{S}_{Co_{H}}\), showing the inclusion \(H\subset\mathcal{S}_{Co_{H}}\). It is currently not known whether \(\mathcal{S}_{Co_{H}}\) is generally closed._
We next introduce the word metrics needed to formulate criteria for coorbit compatibility at the group level.
**Definition 2.26**.: _Let \(H\) be a locally compact group and let \(W\subset H\) be a unit neighborhood. Define the map \(d_{W}:H\times H\to\mathbb{N}_{0}\cup\{\infty\}\) in the following way_
\[d_{W}(x,y)=\begin{cases}\inf\big{\{}\ m\in\mathbb{N}\ \big{|}\ x^{-1}y\in W^{m}\ \big{\}}&x\neq y\\ 0&x=y,\end{cases}\]
_where we again set \(\inf\emptyset=\infty\)._
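To see how the word metric behaves in a simple case (this example is our own addition and not part of the original text), take the additive group \(H=(\mathbb{R},+)\) and \(W=(-1,1)\). Then \(W^{m}=(-m,m)\), and hence for \(x\neq y\)

\[d_{W}(x,y)=\min\{m\in\mathbb{N}:|x-y|<m\}=\lfloor|x-y|\rfloor+1\,\]

so the word metric is comparable to the Euclidean distance; replacing \(W\) by another relatively compact symmetric unit neighborhood yields a quasi-isometric metric.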
The following results rest on a somewhat subtle technical condition on the dual stabilizers
\[H_{\xi}=\{h\in H:h^{-T}\xi=\xi\}\,\xi\in\mathcal{O}\.\]
We will use \(H_{0}\subset H\) to denote the connected component of the identity element in \(H\). The fact that these notations clash for the zero element \(0\) is immaterial for the following, since \(0\not\in\mathcal{O}\), hence the stabilizer of \(0\) does not enter the discussion.
Throughout the following, the condition \(H_{\xi}\subset H_{0}\) will repeatedly occur, where \(\xi\) is an arbitrary element of \(\mathcal{O}\). This condition is independent of the choice of \(\xi\in\mathcal{O}\). Further observations, and a more detailed discussion of the role of this condition in the context of coorbit equivalence can be found in Section 4 of [17].
We then have the following result, which is Theorem 4.16 from [17].
**Theorem 2.27**.: _Assume \(H_{\xi}\subset H_{0}\). Let \(W\subset H\) be a relatively compact, symmetric unit neighborhood with \(W\subset H_{0}\). Furthermore, let \(\mathcal{Q}=(h_{i}^{-T}Q)_{i\in I}\) be an induced connected covering of \(\mathcal{O}\) by \(H\) with \(\xi\in Q\), for some open, relatively compact \(Q\subset\mathcal{O}\). Then_
\[p_{\xi}:(H,d_{W})\to(\mathcal{O},d_{\mathcal{Q}}),h\mapsto h^{-T}\xi\]
_is a quasi-isometry._
With these observations in place, we can now formulate and prove the following characterization of compatible dilations. The theorem can be viewed as a rather natural analogue of Theorem 2.23, and it is arguably the main result of our paper.
Before we formulate the theorem, we recall the definition of a _word metric_ on a locally compact group: Given an open symmetric neighborhood of the identity \(W\) in such a group \(G\), let \(W^{0}=\{e_{G}\}\), and \(W^{n}=\{x_{1}\cdot\ldots\cdot x_{n}:x_{1},\ldots,x_{n}\in W\}\) for all \(n\in\mathbb{N}\). Finally, define for \(x\neq y\)
\[d_{W}(x,y)=\min\{n\in\mathbb{N}:x^{-1}y\in W^{n}\}\.\]
where by convention, \(d_{W}(x,y)=\infty\) if \(x^{-1}y\not\in\langle W\rangle\), the subgroup generated by \(W\). In particular, if \(G_{0}<G\) denotes the connected component of the identity element of \(G\) and \(W\subset G_{0}\) is an open symmetric neighborhood of the identity element, then \(\langle W\rangle=G_{0}\), and \(d_{W}(x,y)<\infty\) is equivalent to saying that \(x\) and \(y\) are in the same connected component of \(G\).
**Theorem 2.28**.: _Let \(H\) denote an admissible matrix group with open dual orbit \(\mathcal{O}\). Assume that \(H_{\xi}\subset H_{0}\) holds. Let \(A\in\operatorname{GL}(d,\mathbb{R})\). Then the following are equivalent:_
1. \(A\in\mathcal{S}_{Co_{H}}\)_._
2. \(H\) _is coorbit equivalent to_ \(AHA^{-1}\)__
3. _For any covering_ \(\mathcal{P}\) _of_ \(\mathcal{O}\) _induced by_ \(H\)_,_ \(A^{-T}\in\mathcal{S}_{\mathcal{P}}\)_._
4. _Let_ \(W\subset H_{0}\) _denote any open, symmetric unit neighborhood, and_ \(d_{W}\) _denote the associated word metric on_ \(H\)_. Let_ \((p_{\xi})^{-1}:\mathcal{O}\to H\) _denote an arbitrary right inverse to_ \(p_{\xi}\)_. Then_ \(A^{-T}\mathcal{O}=\mathcal{O}\)_, and the map_ \[\varphi_{A}:H\to H\,\ \varphi_{A}:h\mapsto(p_{\xi})^{-1}(A^{-T}h^{-T}.\xi)\] _is a quasi-isometry with respect to_ \(d_{W}\)_._
Proof.: The equivalence (a) \(\Leftrightarrow\) (b) follows from [21, Lemma 44], combined with the canonical embedding \(Co_{H}(L^{p,q}(\mathbb{R}^{d}\rtimes H))\subset L^{2}(\mathbb{R}^{d})\).
Proof of (a) \(\Rightarrow\) (c): Pick \(g\in C_{c}^{\infty}(\mathcal{O}\cap A^{-T}\mathcal{O})\). Then \(\mathcal{F}^{-1}(g)\in L^{2}(\mathbb{R}^{d})\) with \(\mathcal{F}^{-1}(g\circ A^{-T})=|\det(A)|\mathcal{F}^{-1}(g)\circ A\). Combining Theorem 2.18 with the assumption \(A\in\mathcal{S}_{Co_{H}}\) now implies the following chain of norm equivalences, for any covering \(\mathcal{P}\) induced by \(H\)
\[\|g\circ A^{-T}\|_{\mathcal{D}(\mathcal{P},L^{p},\ell^{q})} \asymp \|\mathcal{F}^{-1}(g\circ A^{-T})\|_{Co(L^{p,q}(\mathbb{R}^{d} \rtimes H))}\] \[\asymp \|\mathcal{F}^{-1}(g)\|_{Co(L^{p,q}(\mathbb{R}^{d}\rtimes H))}\] \[\asymp \|g\|_{\mathcal{D}(\mathcal{P},L^{p},\ell^{q})}\]
which shows that \(A^{-T}\in\mathcal{S}_{\mathcal{P}}\).
Proof of (c) \(\Rightarrow\) (a): Assume that \(A^{-T}\in\mathcal{S}_{\mathcal{P}}\), for a covering \(\mathcal{P}\) induced by \(H\). Consider \(1\leq p,q\leq 2\). Then we get, for all \(g\in\mathcal{F}^{-1}(C_{c}^{\infty}(\mathcal{O}))\), the chain of norm equivalences
\[\|g\circ A\|_{Co(L^{p,q}(\mathbb{R}^{d}\rtimes H))} \asymp \|\mathcal{F}(g\circ A)\|_{\mathcal{D}(\mathcal{P},L^{p},\ell^{q})}\] \[\asymp \|\mathcal{F}(g)\circ A^{-T}\|_{\mathcal{D}(\mathcal{P},L^{p}, \ell^{q})}\] \[\asymp \|\mathcal{F}(g)\|_{\mathcal{D}(\mathcal{P},L^{p},\ell^{q})}\] \[\asymp \|g\|_{Co(L^{p,q}(\mathbb{R}^{d}\rtimes H))}\.\]
Here the second-to-last equivalence was due to \(A\in\mathcal{S}_{\mathcal{P}}\), and the remaining ones due to Theorem 2.18. Hence it remains to prove that this norm equivalence holds for all \(g\in L^{2}(\mathbb{R}^{d})\) (in the extended sense), which is achieved by using a density argument similar to the one used in the proof of Theorem 2.14. More precisely, picking any nonzero \(\psi\in\mathcal{F}^{-1}(C_{c}^{\infty}(\mathcal{O}))\), Lemma 2.7 of [16] implies that \(\psi\) is a so-called **better vector** in the sense of [9]. By Theorem 6.1 of the cited paper, there exists a discrete family \(((x_{\kappa},h_{\kappa}))_{\kappa\in K}\subset\mathbb{R}^{d}\rtimes H\) such that the system \((\pi(x_{\kappa},h_{\kappa})\psi)_{\kappa\in K}\) is a Banach frame both for \(L^{2}(\mathbb{R}^{d})\) and for \(Co(L^{p,q}(\mathbb{R}^{d}\rtimes H))\).
This means that every \(f\in L^{2}(\mathbb{R}^{d})\) can be written as
\[f=\sum_{\kappa\in K}c_{\kappa}\pi(x_{\kappa},h_{\kappa})\psi\, \tag{2.10}\]
with unconditional convergence in \(\|\cdot\|_{2}\), but also in \(\|\cdot\|_{Co(L^{p,q}(\mathbb{R}^{d}\rtimes H))}\), as soon as \(f\) is contained in the latter, smaller space. By choice of \(\psi\), the finite partial sums of (2.10) all lie in \(\mathcal{F}^{-1}(C_{c}^{\infty}(\mathcal{O}))\).
On the other hand, \(A^{-T}\in\mathcal{S}_{\mathcal{P}}\) also entails \(A^{-T}\mathcal{O}=\mathcal{O}\), and the finite partial sums of the expansion
\[f\circ A=\sum_{\kappa\in K}c_{\kappa}\pi(x_{\kappa},h_{\kappa})\psi\circ A \tag{2.11}\]
therefore also lie in \(\mathcal{F}^{-1}(C_{c}^{\infty}(\mathcal{O}))\). Hence if \(f\in Co(L^{p,q}(\mathbb{R}^{d}\rtimes H))\), we obtain by the already established norm equivalence that (2.11) converges also in \(\|\cdot\|_{Co(L^{p,q}(\mathbb{R}^{d}\rtimes H))}\), finally leading to \(f\circ A\in Co(L^{p,q}(\mathbb{R}^{d}\rtimes H))\). In addition, the norm equivalence valid for the partial sums extends (with identical constants) to the limit \(f\circ A\).
The converse inclusion follows by symmetry, observing that by Remark 2.25 the argument can be applied with \(A\) systematically replaced by \(A^{-1}\).
Proof of (c) \(\Leftrightarrow\) (d): \(A^{-T}\in\mathcal{S}_{\mathcal{P}}\) implies \(A^{-T}\mathcal{O}=\mathcal{O}\), as well as the quasi-isometry property of
\[\mathcal{O}\to\mathcal{O}\,\ \zeta\mapsto A^{-T}\zeta\.\]
Suitably composing this map with the quasi-isometries \(p_{\xi}\) and \((p_{\xi})^{-1}\) (by Theorem 2.27) gives the map \(\varphi_{A}\).
The converse direction is proved analogously.
The next corollary notes a simple application of \((a)\Longleftrightarrow(b)\). It can be seen as a natural extension of the inclusion \(H\subset\mathcal{S}_{Co_{H}}\) from Remark 2.25.
**Corollary 2.29**.: _Let \(H<GL(\mathbb{R}^{d})\) denote an admissible subgroup, and let_
\[N_{H}=N_{H}(GL(\mathbb{R}^{d}))=\{A\in GL(\mathbb{R}^{d}):AHA^{-1}=H\}\,\]
_the normalizer subgroup of \(H\) in \(GL(\mathbb{R}^{d})\). Then \(N_{H}\subset\mathcal{S}_{Co_{H}}\). In particular, \(\mathbb{R}^{*}\cdot I_{d}\subset\mathcal{S}_{Co_{H}}\)._
## 3. A review of shearlet dilation groups
In this section, we review the main structural features of shearlet dilation groups. In full generality, this class was first introduced and studied in [19]. A more recent overview with additional results can be found in [1].
We start out by presenting the two main examples of shearlet dilation groups, known prior to [19]:
**Definition 3.1** ([1] Example 17. and Example 18.).:
1. _For_ \(\lambda=(\lambda_{1},\ldots,\lambda_{d-1})\in\mathbb{R}^{d-1}\)_, we define the standard shearlet group in_ \(d\)_-dimensions_ \(H^{\lambda}\) _as the set_ \[\left\{\begin{array}{cccc}\epsilon\operatorname{diag}\left(a,a^{\lambda_{1} },\ldots,a^{\lambda_{d-1}}\right)\left(\begin{array}{cccc}1&s_{1}&\ldots&s_ {d-1}\\ &1&0\ldots&0\\ &&\ddots&0\\ &&&1\end{array}\right)\left|\begin{array}{c}a>0,\\ s_{i}\in\mathbb{R},\\ \epsilon\in\{\,\pm 1\,\}\end{array}\right.\end{array}\right\}.\]
2. _For_ \(\delta\in\mathbb{R}\)_, we define the Toeplitz shearlet group in_ \(d\)_-dimensions_ \(H^{\delta}\) _as the set_ \[\left\{\begin{array}{cccc}\epsilon\operatorname{diag}\left(a,a^{1-\delta}, \ldots,a^{1-(d-1)\delta}\right)\cdot T(1,s_{1},\ldots,s_{d-1})&\left|\begin{array} []{c}a>0,\\ s_{i}\in\mathbb{R},\\ \epsilon\in\{\,\pm 1\,\}\end{array}\right.\end{array}\right\},\] _where the matrix_ \(T(1,s_{1},\ldots,s_{d-1})\) _is defined by_ \[T(1,s_{1},\ldots,s_{d-1}):=\begin{pmatrix}1&s_{1}&s_{2}&\ldots&\ldots&s_{d-1} \\ &1&s_{1}&s_{2}&\ldots&s_{d-2}\\ &&\ddots&\ddots&\ddots&\\ &&&1&s_{1}&s_{2}\\ &&&&1&s_{1}\\ \end{pmatrix}.\]
The presentation of more general shearlet dilation groups requires additional elementary notations from the area of (matrix) Lie groups. In the following, \(\mathfrak{gl}(\mathbb{R}^{d})\) denotes the set of all real \(d\times d\)-matrices. We let
\[\exp:\mathfrak{gl}(\mathbb{R}^{d})\to\operatorname{GL}(\mathbb{R}^{d})\]
be the exponential map defined by
\[\exp(A):=\sum_{k=0}^{\infty}\frac{A^{k}}{k!}\.\]
Furthermore, we denote with \(T(\mathbb{R}^{d})\subset\operatorname{GL}(\mathbb{R}^{d})\) the group of upper triangular \(d\times d\)-matrices with one on their diagonals. By definition, the Lie algebra of a closed subgroup \(H\subset\operatorname{GL}(\mathbb{R}^{d})\) is the set \(\mathfrak{h}\) of all matrices \(Y\) in \(\mathfrak{gl}(\mathbb{R}^{d})\) such that \(\exp(tY)\in H\) for all \(t\in\mathbb{R}\).
The following definition was first formulated in [19]; see also [1].
**Definition 3.2**.: _Let \(H\subset\operatorname{GL}(\mathbb{R}^{d})\) be a closed, admissible dilation group. The group \(H\) is called generalized shearlet dilation group if there exist two closed subgroups_
\[S,D\subset\operatorname{GL}(\mathbb{R}^{d})\]
_such that_
1. \(S\) _is a connected abelian subgroup of_ \(T(\mathbb{R}^{d})\)_;_
2. \(D=\left\{\exp(rY)\mid r\in\mathbb{R}\,\right\}\) _is a one-parameter group, where_ \(Y\in\mathfrak{gl}(\mathbb{R}^{d})\) _is a diagonal matrix;_
3. _every_ \(h\in H\) _has a unique representation as_ \(h=\pm ds\) _for some_ \(d\in D\) _and_ \(s\in S\)_._
\(S\) _is called the shearing subgroup of \(H\), \(D\) is called the scaling subgroup of \(H\), and \(Y\) is called the infinitesimal generator of \(D\)._
The article [1] provides a systematic construction process for shearlet groups in arbitrary dimension that works by first selecting a shearing group \(S\) and then determining conditions on the infinitesimal generator \(Y\) of the scaling group \(D\).
We denote the canonical basis of \(\mathbb{R}^{d}\) with \(e_{1},\dots,e_{d}\) and the identity matrix in \(\operatorname{GL}(\mathbb{R}^{d})\) with \(I_{d}\). The next result contains information about the structure of shearing subgroups.
**Lemma 3.3** ([1] Lemma 5. and Lemma 6.).: _Let \(S\) be the shearing subgroup of a generalized shearlet dilation group \(H\subset\operatorname{GL}(\mathbb{R}^{d})\). Then the following statements hold:_
1. _There exists a unique basis_ \(X_{2},\dots,X_{d}\) _of the Lie algebra_ \(\mathfrak{s}\) _of_ \(S\) _with_ \(X_{i}^{T}e_{1}=e_{i}\) _for_ \(2\leq i\leq d\)_, called the canonical basis of_ \(\mathfrak{s}\)_._
2. _We have_ \(S=\left\{\,I_{d}+X\mid X\in\mathfrak{s}\,\right\}\)_._
3. _Let_ \(\mathfrak{s}_{k}=\operatorname{span}\{X_{j}:j\geq k\}\)_, for_ \(k\in\{2,\dots,d\}\)_. These are associative matrix algebras satisfying_ \(\mathfrak{s}_{k}\mathfrak{s}_{\ell}\subset\mathfrak{s}_{k+\ell-1}\)_, where we write_ \(\mathfrak{s}_{m}=\{0\}\) _for_ \(m>d\)_._
4. \(H\) _is the inner semidirect product of the normal subgroup_ \(S\) _with the closed subgroup_ \(D\cup-D\)_._
Every generalized shearlet dilation group \(H\) is admissible, and the next result shows that all of them share the same dual orbit. This ensures that one of the basic conditions for coorbit equivalence is already fulfilled. The following result was also first obtained in [19].
**Lemma 3.4**.: _Let \(S\subset T(\mathbb{R}^{d})\) be a shearing subgroup and \(D\) be a compatible scaling subgroup such that_
\[H=DS\cup(-DS)\]
_is a generalized shearlet dilation group. Then the unique open dual orbit of \(H\) is given by \(\mathcal{O}=\mathbb{R}^{*}\times\mathbb{R}^{d-1}\), and the dual action of \(H\) on \(\mathcal{O}\) is free._
## 4. Characterizing coorbit compatible dilations for general shearlet dilation groups
### The symmetries of the dual orbit
Throughout this section, we let \(\mathcal{O}=\mathbb{R}^{*}\times\mathbb{R}^{d-1}\), the dual orbit of a shearlet dilation group in dimension \(d\). We let
\[\mathcal{S}(\mathcal{O})=\{A\in\operatorname{GL}(d,\mathbb{R}):A^{-T} \mathcal{O}=\mathcal{O}\}\,\]
then Theorem 2.28 implies
\[\mathcal{S}_{Co_{H}}\subset\mathcal{S}(\mathcal{O})\.\]
The next lemma characterizes the matrices in \(\mathcal{S}(\mathcal{O})\) and notes some basic formulae that will be helpful for computations.
**Lemma 4.1**.: _Let \(\mathcal{O}=\mathbb{R}^{*}\times\mathbb{R}^{d-1}\), and \(A\in\operatorname{GL}(d,\mathbb{R})\)._
1. \(A\in\mathcal{S}(\mathcal{O})\) _holds iff_ \[A=\left(\begin{array}{cc}\lambda&z\\ \mathbf{0}&B\end{array}\right)\] _with_ \(\lambda\in\mathbb{R}^{*}\)_,_ \(z\in\mathbb{R}^{1\times(d-1)}\) _and_ \(B\in\operatorname{GL}(d-1,\mathbb{R})\)_._
2. _Given two matrices_ \(A,A^{\prime}\in\mathcal{S}(\mathcal{O})\)_, i.e.,_ \[A=\left(\begin{array}{cc}\lambda&z\\ \mathbf{0}&B\end{array}\right)\,\ A^{\prime}=\left(\begin{array}{cc}\lambda^{ \prime}&z^{\prime}\\ \mathbf{0}&B^{\prime}\end{array}\right)\] _then_ (4.1) \[AA^{\prime}=\left(\begin{array}{cc}\lambda\lambda^{\prime}&zB^{\prime}+ \lambda z^{\prime}\\ \mathbf{0}&BB^{\prime}\end{array}\right)\] _and_ (4.2) \[A^{-1}=\left(\begin{array}{cc}\lambda^{-1}&-\lambda^{-1}zB^{-1}\\ \mathbf{0}&B^{-1}\end{array}\right)\]
Proof.: For part (a), we first observe that the subgroup property of \(\mathcal{S}(\mathcal{O})\) implies that \(A\in\mathcal{S}(\mathcal{O})\) is equivalent to \(A^{T}\mathcal{O}=\mathcal{O}\). With \(\xi_{0}=(1,0,\ldots,0)^{T}\), we note that \(\xi\in\mathcal{O}\) iff \(\langle\xi,\xi_{0}\rangle\neq 0\). Hence we get for \(A\in GL(\mathbb{R}^{d})\) the following equivalences:
\[A\in\mathcal{S}(\mathcal{O}) \Longleftrightarrow A^{T}\mathcal{O}\subset\mathcal{O}\] \[\Longleftrightarrow \forall\xi\in\mathbb{R}^{d}\ :\ \left(\langle\xi,\xi_{0}\rangle\neq 0 \Rightarrow\langle A^{T}\xi,\xi_{0}\rangle\neq 0\right)\] \[\Longleftrightarrow \forall\xi\in\mathbb{R}^{d}\ :\ \left(\langle\xi,\xi_{0}\rangle\neq 0 \Rightarrow\langle\xi,A\xi_{0}\rangle\neq 0\right)\] \[\Longleftrightarrow A\xi_{0}=\lambda\xi_{0}\,\lambda\neq 0\.\]
Here \(\lambda\) is the scalar from part (a), and the existence of \(z,B\) as in (a) follows immediately. Equation (4.1) is a special instance of block matrix calculus, and can be employed to solve the equation \(AA^{\prime}=I_{d}\) to obtain \(A^{\prime}=A^{-1}\) in equation (4.2).
The next lemma reduces the general problem of characterizing \(\mathcal{S}_{Co_{H}}\) within \(\mathcal{S}(\mathcal{O})\) to the characterization within a smaller subgroup. The subgroup in question is defined as
\[\mathcal{S}_{1}(\mathcal{O})=\left\{\left(\begin{array}{cc}1&\mathbf{0}\\ \mathbf{0}&B\end{array}\right):B\in\mathrm{GL}(d-1,\mathbb{R})\right\}\.\]
**Lemma 4.2**.: _Let \(A\in\mathcal{S}(\mathcal{O})\). Let \(S\subset\mathrm{GL}(d,\mathbb{R})\) denote the shearing subgroup of a shearlet dilation group \(H\). Then \(A\) factors uniquely as_
\[A=\lambda\cdot h\cdot A_{1}\,\lambda\in\mathbb{R}^{*}\,\ h\in S\,\ A_{1}\in \mathcal{S}_{1}(\mathcal{O}).\]
_One has the equivalence_
\[A\in\mathcal{S}_{Co_{H}}\Longleftrightarrow A_{1}\in\mathcal{S}_{Co_{H}}\.\]
Proof.: We can clearly write \(A=\lambda\cdot A^{\prime}\) uniquely, where
\[A^{\prime}=\left(\begin{array}{cc}1&z^{\prime}\\ \mathbf{0}&B^{\prime}\end{array}\right)\.\]
Define \(z^{\prime\prime}=-z^{\prime}(B^{\prime})^{-1}\). Since the orbit map \(p_{\xi_{0}}:h\to h^{-T}\xi_{0}\in\mathcal{O}\) is onto, there exists \(h\in H\) with \(h^{-T}\xi_{0}=(1,z^{\prime\prime})^{T}\), which is equivalent to
\[h^{-1}=\left(\begin{array}{cc}1&z^{\prime\prime}\\ \mathbf{0}&B^{\prime\prime}\end{array}\right)\.\]
for a suitable invertible matrix \(B^{\prime\prime}\). Note that the entry \(1\) in the upper left corner entails \(h\in S\). Now the choice of \(z^{\prime\prime}\) and part (b) of Lemma 4.1 entail that
\[h^{-1}A^{\prime}=\left(\begin{array}{cc}1&0\\ \mathbf{0}&B^{\prime\prime}B^{\prime}\end{array}\right)=A_{1}\in\mathcal{S}_{ 1}(\mathcal{O}),\]
and we have shown the desired factorization
\[A=\lambda\cdot h\cdot A_{1}\.\]
By Corollary 2.29 and Remark 2.25 respectively, the first two factors are contained in \(\mathcal{S}_{Co_{H}}\). Hence the product is in \(\mathcal{S}_{Co_{H}}\) iff \(A_{1}\) is.
**Remark 4.3**.: _Let \(S\) denote a shearing subgroup \(S\), and let \(Y=\mathrm{diag}(1,\lambda_{2},\ldots,\lambda_{d})\), and \(D=\exp(\mathbb{R}Y)\). Following [1], we call \(Y\)**compatible with**\(S\) iff \(H=DS\cup-DS\) is a generalized shearlet dilation group. We stress that this notion of compatibility is distinct from coorbit compatibility, as formulated in Definition 2.24. In fact, as observed in [1], compatible matrices are characterized by the condition that \(\exp(\mathbb{R}Y)\) normalizes \(S\). This can be slightly rewritten as follows: If we define \(\mathrm{Diag}(d)<\mathrm{GL}(\mathbb{R}^{d})\) as the diagonal subgroup of \(GL(\mathbb{R}^{d})\), we obtain that_
\[Y\ \text{compatible with}\ S\Leftrightarrow\exp(\mathbb{R}Y)\subset N_{S}\cap \mathrm{Diag}(d)\.\]
_The benefit of this observation is the following: Given two dilation subgroups \(D,D^{\prime}\) that are both compatible with the same shearing subgroup \(S\), then \(D^{\prime}\subset N_{H}\subset\mathcal{S}_{Co_{H}}\), where \(H=DS\cup-DS\). To see this, observe that \(D,D^{\prime}\) commute elementwise, since they both consist of diagonal matrices, hence the elements of \(D^{\prime}\) normalize \(H\)._
_In other words, elements of dilation groups compatible with \(S\) are coorbit compatible with every shearlet dilation group having \(S\) as shearing subgroup. As a consequence, if \(H,H^{\prime}\) are different shearlet dilation groups sharing the same shearing subgroup, then \(H^{\prime}\subset\mathcal{S}_{Co_{H}}\)._
### Characterizing compatible dilations within \(\mathcal{S}_{1}(\mathcal{O})\)
In order to enable concrete computations, we now introduce coordinates to the shearlet dilation group \(H\). Recall that we have \(H=DS\cup(-DS)\). The associated Lie algebra is \(\mathfrak{h}=\mathbb{R}\cdot Y\oplus\mathfrak{s}\), with \(Y\) the infinitesimal generator of the scaling subgroup \(D\), and \(\mathfrak{s}\) the Lie algebra of the shearing subgroup \(S\). By Proposition 7 of [1], we can normalize \(Y\) so that
\[Y=\mathrm{diag}(1,\lambda_{2},\ldots,\lambda_{d})\.\]
We use \(\xi_{0}=(1,0,\ldots,0)\in\mathcal{O}\). We parameterize group elements of \(H\) as follows: Given \(r\in\mathbb{R}\) and \(t=(t_{2},\ldots,t_{d})\in\mathbb{R}^{d-1}\), we let
\[h(r,t) = \exp(-rY)\left(I_{d}+\sum_{j=2}^{d}t_{j}X_{j}\right)^{-1}\in H. \tag{4.3}\]
Note that \(D=\{h(r,0):r\in\mathbb{R}\}\) and \(S=\{h(0,t):t\in\mathbb{R}^{d-1}\}\). Note further that this only parameterizes \(DS\), and not all of \(H\), but this will be sufficient for the coming arguments.
We can then compute
\[p_{\xi_{0}}^{H}(h(r,t))=\exp(rY)(I_{d}+\sum_{j=2}^{d}t_{j}X_{j}^{T})\left( \begin{array}{c}1\\ 0\\ \vdots\\ 0\end{array}\right)=\left(\begin{array}{c}e^{r}\\ e^{\lambda_{2}r}t_{2}\\ \vdots\\ e^{\lambda_{d}r}t_{d}\end{array}\right)\.\]
The next lemma exhibits an explicit formula for the crucial map \(\varphi_{A}:H\to H\), for \(A\in\mathcal{S}_{1}(\mathcal{O})\), with respect to the newly introduced coordinates.
**Lemma 4.4**.: _Let_
\[A=\left(\begin{array}{cc}1&\mathbf{0}\\ \mathbf{0}&B\end{array}\right)\in\mathcal{S}_{1}(\mathcal{O})\,\]
_and_
\[\varphi_{A}:H\to H\,\ \varphi_{A}(h)=(p_{\xi_{0}})^{-1}(A^{-T}h^{-T}\xi_{0})\.\]
_Then one has_
\[\varphi_{A}(h(r,t))=h(r,\tilde{Y}_{-r}B^{-T}\tilde{Y}_{r}t)\,\]
_where we used \(\tilde{Y}_{s}=\exp(s\mathrm{diag}(\lambda_{2},\ldots,\lambda_{d}))\)._
_In particular, the restriction \(\varphi_{A}|_{S}:S\to S\) is a well-defined bijection._
Proof.: Simple calculations show that
\[\varphi_{A}(h(r,t)) = (p_{\xi_{0}})^{-1}(A^{-T}p_{\xi_{0}}(h(r,t)))\] \[= (p_{\xi_{0}})^{-1}\left(\left(\begin{array}{cc}1&\mathbf{0}\\ \mathbf{0}&B^{-T}\end{array}\right)\left(\begin{array}{c}e^{r}\\ e^{\lambda_{2}r}t_{2}\\ \vdots\\ e^{\lambda_{d}r}t_{d}\end{array}\right)\right)\] \[= (p_{\xi_{0}})^{-1}\left(\left(\begin{array}{c}e^{r}\\ B^{-T}\tilde{Y}_{r}t\end{array}\right)\right)\] \[= h(r,\tilde{Y}_{-r}B^{-T}\tilde{Y}_{r}t).\]
The statement about \(\varphi_{A}|_{S}\) is then clear.
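For orientation, we spell out the formula in the smallest case (this worked instance is our own addition): for \(d=2\) the Lie algebra \(\mathfrak{s}\) is one-dimensional, \(Y=\operatorname{diag}(1,\lambda_{2})\), and \(B=b\in\mathbb{R}^{*}\) is a scalar, so that \(\tilde{Y}_{r}=e^{\lambda_{2}r}\) and

\[\varphi_{A}(h(r,t))=h(r,e^{-\lambda_{2}r}b^{-1}e^{\lambda_{2}r}t)=h(r,b^{-1}t)\.\]

In particular, in two dimensions the conjugating factors \(\tilde{Y}_{\pm r}\) cancel for every choice of \(B\); this is consistent with the results for the two-dimensional shearlet groups obtained in Section 5.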
Our characterization of the elements \(A\in\mathcal{S}_{Co_{H}}\cap\mathcal{S}_{1}(\mathcal{O})\) rests on two algebraic conditions on \(A\). The following lemma clarifies one of these conditions:
**Lemma 4.5**.: _Let_
\[A=\left(\begin{array}{cc}1&{\bf 0}\\ {\bf 0}&B\end{array}\right)\in{\mathcal{S}}_{1}({\mathcal{O}})\,\]
_and \(\varphi_{A}:H\to H\) defined as in the previous lemma. Then the following are equivalent:_
1. \(B\) _commutes with_ \(\tilde{Y}=\operatorname{diag}(\lambda_{2},\ldots,\lambda_{d})\)_._
2. \(\varphi_{A}|_{S}:S\to S\) _commutes with the conjugation action of_ \(D\) _on_ \(S\)_, i.e._ \[\forall d\in D\forall s\in S\ :\ d^{-1}\varphi_{A}(s)d=\varphi_{A}(d^{-1}sd)\.\]
3. _For all_ \(h(r,t)\in DS\)_, one has_ \[\varphi_{A}(h(r,t))=h(r,B^{-T}t)=h(r,0)\varphi_{A}(h(0,t))\.\]
4. \(B^{-T}\) _commutes with all matrices_ \[\tilde{Y}_{r}=\exp(r\ \operatorname{diag}(\lambda_{2},\ldots,\lambda_{d}))\.\]
Proof.: For \((a)\Leftrightarrow(d)\) note that \(B\) commutes with \(\tilde{Y}\) iff \(B^{-1}\) commutes with \(\tilde{Y}\), iff \(B^{-T}\) commutes with \(\tilde{Y}^{T}=\tilde{Y}\). It is straightforward to check that the last condition is equivalent to \((d)\).
\((d)\Leftrightarrow(c)\) follows directly from the formula
\[\varphi_{A}(h(r,t))=h(r,\tilde{Y}_{-r}B^{-T}\tilde{Y}_{r}t)\]
established in Lemma 4.4.
In order to show \((d)\Leftrightarrow(b)\), we consider \(d=h(r,0)\) and \(s=h(0,t)\), and define \(\tilde{Z}_{r}=\operatorname{diag}(e^{-r(\lambda_{2}-1)},...,e^{-r(\lambda_{d} -1)})\). Then we have the relation
\[d^{-1}h(0,t)d=h(0,\tilde{Z}_{r}t). \tag{4.4}\]
To see this, we first compute
\[d^{-1}h(0,t)^{-1}d = \begin{pmatrix}e^{r}&&&\\ &e^{r\lambda_{2}}&&\\ &&\ddots&\\ &&&e^{r\lambda_{d}}\end{pmatrix}\begin{pmatrix}1&t_{2}&\ldots&t_{d}\\ &1&&\\ &&\ddots&\\ &&&1\end{pmatrix}\begin{pmatrix}e^{-r}&&&\\ &e^{-r\lambda_{2}}&&\\ &&\ddots&\\ &&&e^{-r\lambda_{d}}\end{pmatrix}\] \[= \begin{pmatrix}1&e^{-r(\lambda_{2}-1)}t_{2}&e^{-r(\lambda_{3}-1)}t_{3}&\ldots&e^{-r(\lambda_{d}-1)}t_{d}\\ &1&&&\\ &&1&&\\ &&&\ddots&\\ &&&&1\end{pmatrix}\] \[= h(0,\tilde{Z}_{r}t)^{-1}\in S.\]
Now inverting both sides yields equation (4.4). With the help of this equation, condition (b) is reformulated as
\[\forall r\in\mathbb{R}\ \forall t\in\mathbb{R}^{d-1}:\ h(0,\tilde{Z}_{r}B^{-T}t)=h (0,B^{-T}\tilde{Z}_{r}t)\,\]
which is equivalent to the condition that \(B^{-T}\) commutes with all \(\tilde{Z}_{r}\). But this last condition is equivalent to (d), because of \(\tilde{Z}_{r}=e^{r}\tilde{Y}_{-r}\)
**Remark 4.6**.: _Since \(Y\) is diagonal, the matrices commuting with \(\tilde{Y}\) are particularly easy to determine: Note that \(\mathbb{R}^{d-1}\) is the direct sum of eigenspaces of \(\tilde{Y}\). Then it is straightforward to check that a matrix \(B\) commutes with \(\tilde{Y}\) iff \(B\) maps each eigenspace into itself. This condition translates directly to a block diagonal structure of \(B\), up to possible permutations of the entries._
_More precisely, let \(B=(b_{i,j})_{2\leq i,j\leq d}\) be any matrix in \(GL(\mathbb{R}^{d-1})\). Here we adjusted the index set for the entries to comply with the indexing conventions for \(t=(t_{2},\ldots,t_{d})\in\mathbb{R}^{d-1}\) adopted above. Then \(B\tilde{Y}=\tilde{Y}B\) is equivalent to requiring \(b_{i,j}=0\) whenever \(\lambda_{i}\neq\lambda_{j}\)._
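For instance (our own illustration, not part of the original text), if \(d=4\) and \(\tilde{Y}=\operatorname{diag}(\lambda,\lambda,\mu)\) with \(\lambda\neq\mu\), then the commutation relation \(B\tilde{Y}=\tilde{Y}B\) forces \(b_{2,4}=b_{3,4}=b_{4,2}=b_{4,3}=0\), i.e.

\[B=\left(\begin{array}{ccc}b_{2,2}&b_{2,3}&0\\ b_{3,2}&b_{3,3}&0\\ 0&0&b_{4,4}\end{array}\right)\,\]

with an arbitrary invertible \(2\times 2\) block acting on the \(\lambda\)-eigenspace and a nonzero scalar acting on the \(\mu\)-eigenspace.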
The following lemma notes an algebraic condition relating certain automorphisms of \(S\) to those of \(\mathfrak{s}\), and exhibits a close connection to the normalizer of \(S\). The following lemma makes use of the various ways in which the shearing group \(S\) can be written, using Lemma 3.3 and the parametrization \(h\), i.e.
\[S=\left\{I_{d}+X:X\in\mathfrak{s}\right\}=\left\{I_{d}+\sum_{i=2}^{d}t_{i}X_{ i}:t\in\mathbb{R}^{d-1}\right\}=\left\{h(0,t):t\in\mathbb{R}^{d-1}\right\}\.\]
**Lemma 4.7**.: _Let_
\[A=\left(\begin{array}{cc}1&\mathbf{0}\\ \mathbf{0}&B\end{array}\right)\in\mathcal{S}_{1}(\mathcal{O})\,\]
_with \(B\in GL(\mathbb{R}^{d-1})\). Then the following are equivalent:_
1. _The map_ \(\psi_{B}:h(0,t)\mapsto h(0,Bt)\) _is a group automorphism of_ \(S\)_._
2. _The linear isomorphism_ \(\Psi_{B}:\mathfrak{s}\to\mathfrak{s}\)_,_ \[\sum_{i=2}^{d}t_{i}X_{i}\mapsto\sum_{i=2}^{d}s_{i}X_{i}\,\ s=Bt\] _is an automorphism of the associative matrix algebra_ \(\mathfrak{s}\)_._
3. \(A\in N_{S}\)_._
_In particular, every algebra automorphism \(\Psi:\mathfrak{s}\to\mathfrak{s}\) gives rise to a unique matrix \(B\) such that \(\Psi=\Psi_{B}\), and vice versa._
Proof.: We start out by noting that \(h(0,t)=(I_{d}+\sum_{i=2}^{d}t_{i}X_{i})^{-1}\), and therefore \(\psi_{B}(I_{d}+X)^{-1}=I_{d}+\Psi_{B}(X)\), for all \(X\in\mathfrak{s}\), by definition of \(\Psi_{B}\). Furthermore, since \(S\) is commutative, \(\psi_{B}\) is a group isomorphism iff \(\tilde{\psi}_{B}\), defined by \(\tilde{\psi}_{B}(h(0,t))=\psi_{B}(h(0,t))^{-1}\), is an isomorphism. Using these observations, one gets via linearity of \(\Psi_{B}\) that
\[\tilde{\psi}_{B}((I_{d}+X)(I_{d}+Y))=\tilde{\psi}_{B}(I_{d}+X) \tilde{\psi}_{B}(I_{d}+Y)\] \[\Leftrightarrow I_{d}+\Psi_{B}(X+Y+XY)=I_{d}+\Psi_{B}(X)+\Psi_{B}(Y)+\Psi_{B}(X) \Psi_{B}(Y)\] \[\Leftrightarrow \Psi_{B}(XY)=\Psi_{B}(X)\Psi_{B}(Y)\.\]
In other words, the multiplicativity properties of \(\tilde{\psi}_{B}\) on \(S\) and of \(\Psi_{B}\) on \(\mathfrak{s}\) are equivalent. Since \(\Psi_{B}\) is by definition a linear bijection, the desired equivalence follows.
The proof of the equivalence \((a)\Leftrightarrow(c)\) requires additional notation. We write
\[h(0,t)^{-1}=I_{d}+\sum_{i=2}^{d}t_{i}X_{i}=\left(\begin{array}{cc}1&t^{T}\\ \mathbf{0}&I_{d-1}+C(t)\end{array}\right)\,\]
with a linear map \(C:\mathbb{R}^{d-1}\to\mathbb{R}^{(d-1)\times(d-1)}\) induced by the entries of the canonical basis \(X_{2},\ldots,X_{d}\) of \(\mathfrak{s}\). With this notation we get the product formula
\[h(0,t_{1})^{-1}h(0,t_{2})^{-1}=h(0,t_{1}+t_{2}+C(t_{2})^{T}\cdot t_{1})^{-1}. \tag{4.5}\]
Hence, we see that the condition (a) is equivalent to the equation
\[h(0,B^{T}\cdot(t_{1}+t_{2}+C(t_{2})^{T}\cdot t_{1}))=h(0,B^{T}\cdot t_{1}+B^{T} \cdot(t_{2}+C(B^{T}\cdot t_{2})^{T}\cdot B^{T}\cdot t_{1}))\,\]
holding for all \(t_{1},t_{2}\in\mathbb{R}^{d-1}\). Using the fact that \(h\) is bijective, we can simplify this condition to
\[\forall t\in\mathbb{R}^{d-1}\ :\ B^{T}\cdot C(t)^{T}=C(B^{T}\cdot t)\cdot B^{T}\.\]
A further slight simplification therefore establishes
\[\psi_{B}\ \text{is a group homomorphism}\Leftrightarrow\forall t\in\mathbb{R}^{d-1} \ :\ B^{-1}\cdot C(t)\cdot B=C(B^{T}\cdot t). \tag{4.6}\]
On the other hand, a direct computation yields
\[A^{-1}\cdot h(0,t)^{-1}\cdot A = \left(\begin{array}{cc}1&\mathbf{0}\\ \mathbf{0}&B\end{array}\right)\cdot\left(\begin{array}{cc}1&t^{T}\\ \mathbf{0}&I_{d-1}+C(t)\end{array}\right)\cdot\left(\begin{array}{cc}1& \mathbf{0}\\ \mathbf{0}&B^{-1}\end{array}\right)\] \[= \left(\begin{array}{cc}1&(B^{T}\cdot t)^{T}\\ \mathbf{0}&I_{d-1}+B^{-1}\cdot C(t)\cdot B\end{array}\right)\.\]
This shows that
\[A^{-1}\cdot h(0,t)^{-1}\cdot A\in S\]
is equivalent to
\[A^{-1}\cdot h(0,t)^{-1}\cdot A=h(0,B^{T}\cdot t)^{-1}\,\]
for all \(t\in\mathbb{R}^{d-1}\). Now a comparison of the lower right block matrices on both sides of the equation reveals that this is in turn equivalent to the right-hand side of the equivalence (4.6). This concludes the proof of (a) \(\Leftrightarrow\) (c).
Finally, given any linear map \(\Psi:\mathfrak{s}\to\mathfrak{s}\), the existence of a matrix \(B\) fulfilling
\[\forall t\in\mathbb{R}^{d-1}\ :\ \Psi\left(\sum_{i=2}^{d}t_{i}X_{i}\right)=\sum_{i =2}^{d}s_{i}X_{i}\,\ s=Bt\]
is a basic fact of linear algebra. The equivalence (a) \(\Leftrightarrow\) (b) therefore establishes the final statement of the lemma.
We can now prove the main result of this section, which is an algebraic characterization of the elements of \(\mathcal{S}_{Co_{H}}\).
**Theorem 4.8**.: _Let \(H=DS\cup-DS\) denote a shearlet dilation group, with infinitesimal generator \(Y=\operatorname{diag}(1,\lambda_{2},\ldots,\lambda_{d})\) of \(D\). Let_
\[A=\left(\begin{array}{cc}1&\mathbf{0}\\ \mathbf{0}&B\end{array}\right)\in\mathcal{S}_{1}(\mathcal{O})\.\]
_Then the following are equivalent:_
1. \(A\in\mathcal{S}_{Co_{H}}\)_._
2. \(A\in N_{S}\)_, and_ \(A\) _commutes with_ \(Y\)_._
3. \(A\in N_{H}\)
Proof.: The implication (b) \(\Rightarrow\) (c) is proved by straightforward computation, whereas (c) \(\Rightarrow\) (a) was observed in Corollary 2.29. The following argument therefore focuses on (a) \(\Rightarrow\) (b).
Assume \(A\in\mathcal{S}_{Co_{H}}\). We aim to use Lemma 4.7, and therefore want to establish that the map \(\psi_{B}=\varphi_{A}|_{S}:S\to S\) is a group homomorphism.
For this purpose, fix \(s\in\mathbb{R}^{d-1}\). Define
\[\alpha_{S}:\mathbb{R}^{d-1}\to H\,\ \alpha_{S}(t)=\varphi_{A}(h(0,t)h(0,s))^{-1} \varphi_{A}(h(0,t)),t\in\mathbb{R}^{d-1}.\]
Then \(\alpha_{S}\) is a composition of polynomial maps, hence it is polynomial (note that inversion is polynomial on the set of unipotent matrices). Since \(A\in\mathcal{S}_{Co_{H}}\), by Theorem 2.28, \(\varphi_{A}\) is a quasi-isometry with respect to a suitable word metric \(d_{H}\) on \(H\). So there exist \(a>0,b\geq 0\) such that
\[d_{H}(\alpha_{S}(t),e_{H}) = d_{H}(\varphi_{A}(h(0,t)h(0,s))^{-1}\varphi_{A}(h(0,t)),e_{H})\] \[= d_{H}(\varphi_{A}(h(0,t)),\varphi_{A}(h(0,t)h(0,s)))\] \[\leq ad_{H}(h(0,t),h(0,t)h(0,s))+b\] \[= ad_{H}(e_{H},h(0,s))+b,\]
using left-invariance of the metric in the second equality. Thus \(\alpha_{S}\) is a bounded polynomial function, hence constant. In particular,
\[\varphi_{A}(h(0,t)h(0,s))^{-1}\varphi_{A}(h(0,t))=\alpha_{S}(t)=\alpha_{S}(0)= \varphi_{A}(h(0,s))^{-1}.\]
Thus
\[\varphi_{A}(h(0,t)h(0,s))=\varphi_{A}(h(0,t))\varphi_{A}(h(0,s)),\]
which implies that \(\psi_{B}=\varphi_{A}|_{S}\) is a group homomorphism. Now Lemma 4.7 yields \(A\in N_{S}\).
To prove the second condition, note that
\[\varphi_{A}(h(r,t))=h(r,\tilde{Y}_{-r}B^{-T}\tilde{Y}_{r}t)\ =h(r,0)h(0, \tilde{Y}_{-r}B^{-T}\tilde{Y}_{r}t),\]
with \(\tilde{Y}_{r}=\exp(r\ {\rm diag}(\lambda_{2},\ldots,\lambda_{d}))\). Fix \(d=h(r,0)\). Define
\[\beta(t)=\varphi_{A}(h(0,t)d)^{-1}\varphi_{A}(h(0,t))=\varphi_{A}(dd^{-1}h(0, t)d)^{-1}\varphi_{A}(h(0,t)).\]
Now we have:
\[\beta(t) = \varphi_{A}(dd^{-1}h(0,t)d)^{-1}\varphi_{A}(h(0,t))\] \[= \varphi_{A}(h(r,0)h(0,\tilde{Z}_{r}t))^{-1}\varphi_{A}(h(0,t))\] \[= [h(r,0)h(0,\tilde{Y}_{-r}\tilde{Z}_{r}B^{-T}\tilde{Y}_{r}t)]^{-1}h(0,B^{-T}t)\] \[= h(0,\tilde{Y}_{-r}\tilde{Z}_{r}B^{-T}\tilde{Y}_{r}t)^{-1}h(-r,0)h(0,B^{-T}t),\]
with \(r\in\mathbb{R}\) fixed, thus \(\beta(t)\) defines a polynomial in \(t\). Furthermore, it is bounded, since, for any word metric \(d_{H}\) on \(H\), we can employ the quasi-isometry property of \(\varphi_{A}\) to estimate
\[d_{H}(\beta(t),e_{H}) = d_{H}(\varphi_{A}(h(0,t)d)^{-1}\varphi_{A}(h(0,t)),e_{H})\] \[= d_{H}(\varphi_{A}(h(0,t)),\varphi_{A}(h(0,t)d))\] \[\leq ad_{H}(h(0,t),h(0,t)d)+b\] \[\leq ad_{H}(e_{H},d)+b\]
It follows that \(\beta\) must be constant, i.e. \(\beta(t)=h(-r,0)=d^{-1}\), for all \(t\in\mathbb{R}^{d-1}\). Using equation 4.4, we get
\[h(-r,0)h(0,B^{-T}t) = h(-r,0)h(0,B^{-T}t)h(r,0)h(-r,0)\] \[= h(0,\tilde{Z}_{r}B^{-T}t)h(-r,0).\]
Here we recall the diagonal matrices \(\tilde{Z}_{r}=\mathrm{diag}(e^{-r(\lambda_{2}-1)},...,e^{-r(\lambda_{d}-1)})\) from the proof of Lemma 4.5. In summary, the equation \(\beta(t)=d^{-1}\) holds if and only if \(h(0,\tilde{Y}_{-r}\tilde{Z}_{r}B^{-T}\tilde{Y}_{r}t)=h(0,\tilde{Z}_{r}B^{-T}t)\), for all \(t\in\mathbb{R}^{d-1}\), if and only if \(\tilde{Y}_{-r}B^{-T}\tilde{Y}_{r}=B^{-T}\), (note that \(\tilde{Z}_{r},\tilde{Y}_{r}\) commute). Now Lemma 4.5 (d) \(\Rightarrow\) (a) yields that \(B\) and \(\tilde{Y}\) commute, and it follows that \(A\) and \(Y\) commute.
Combining Theorem 4.8 with Lemma 4.2 gives rise to the following characterization of \(\mathcal{S}_{Co_{H}}\). Observe that the condition \((a)\), together with the final statement of Lemma 4.7, essentially reduces the task of computing \(\mathcal{S}_{Co_{H}}\) to that of determining a subgroup of algebra automorphisms of \(\mathfrak{s}\) commuting with the conjugation action of \(\exp(\mathbb{R}Y)\) on \(\mathfrak{s}\).
**Corollary 4.9**.: _Let \(A\in\mathcal{S}(\mathcal{O})\). Then \(A\in\mathcal{S}_{Co_{H}}\) holds iff_
\[A=\lambda\cdot h\cdot A_{1}\,\ \lambda\in\mathbb{R}^{*}\,\ h\in S\,\ A_{1}= \left(\begin{array}{cc}1&\mathbf{0}\\ \mathbf{0}&B\end{array}\right)\in\mathcal{S}_{1}(\mathcal{O}),\]
_and \(A_{1}\) fulfills the following equivalent conditions:_
* \(A_{1}\in N_{S}\) _and_ \(A_{1}\) _commutes with_ \(Y\)_._
* \(A_{1}\in N_{H}\)_._
_In particular, \(\mathcal{S}_{Co_{H}}=N_{H}\)._
As a further application, we note
**Corollary 4.10**.: _Let \(H\) denote a shearlet dilation group. Then \(\mathcal{S}_{Co_{H}}\subset GL(\mathbb{R}^{d})\) is a closed matrix group, with_
\[d\leq\dim(\mathcal{S}_{Co_{H}})\leq d^{2}-d+1\.\]
Proof.: Since the factorization is unique, it is not hard to show that the factorization map \((\lambda,h,A_{1})\mapsto\lambda hA_{1}\) is a diffeomorphism onto a closed subgroup. Hence the dimensions for the choices \(\lambda,h,A_{1}\) add up to yield the dimension of \(\mathcal{S}_{Co_{H}}\). The dimensions for the first two factors are \(1\) and \(d-1\), respectively, whereas the dimension of the set of choices for the matrix \(B\) lies between \(0\) and \((d-1)^{2}\).
Section 5 will exhibit examples of shearlet dilation groups showing that the upper bound for the dimension of \(\mathcal{S}_{Co_{H}}\) is sharp, as well as examples \(H\) with dimension \(\dim(\mathcal{S}_{Co_{H}})=d+1\). It is currently open whether there exist shearlet dilation groups \(H\) with \(\dim(\mathcal{S}_{Co_{H}})=d\).
We finally use the corollary to quickly derive a correction to Theorem 5.9 of [17].
**Theorem 4.11**.: _Let \(H_{1},H_{2}\) denote two shearlet dilation groups of equal dimension. Then \(H_{1}\) and \(H_{2}\) are coorbit equivalent iff \(H_{1}=H_{2}\)._
Proof.: Assume that \(H_{1}\) and \(H_{2}\) are coorbit equivalent shearlet dilation groups. Assume that \(H_{i}=D_{i}S_{i}\cup-D_{i}S_{i}\), with scaling subgroups \(D_{i}\) and shearing subgroups \(S_{i}\). While the precise formulation of [17, Theorem 5.9] is incorrect, the arguments establishing the _necessary conditions_ for coorbit equivalence of shearlet dilation groups, in particular Lemmas 5.11 and 5.12 of [17], are correct. This entails \(D_{1}=D_{2}\) by [17, Lemma
5.11]. In addition, [17, Lemma 5.12] provides a matrix \(C\in\mathrm{GL}(\mathbb{R}^{d})\) such that \(S_{2}=C^{-1}S_{1}C\), and in addition, the conjugation actions of \(C\) commutes with the conjugation action of \(D_{1}=D_{2}\). As a consequence, \(H_{2}=C^{-1}H_{1}C\).
But now the fact that \(H_{1}\) and \(H_{2}\) are coorbit equivalent yields via Corollary 4.9 that \(C\in\mathcal{S}_{Co_{H_{1}}}=N_{H_{1}}\). But this means \(H_{2}=H_{1}\).
## 5. Determining \(\mathcal{S}_{Co_{H}}\): Examples
### The full picture in two dimensions
For dimension two, the admissible dilation groups have been classified up to conjugacy and finite index subgroups in [14]; the following is a complete list of representatives, with their open dual orbits:
* **Diagonal group** \[D=\left\{\left(\begin{array}{cc}a&0\\ 0&b\end{array}\right):ab\neq 0\right\}\,\] with \(\mathcal{O}=(\mathbb{R}^{*})^{2}\).
* **Similitude group** \[H=\left\{\left(\begin{array}{cc}a&b\\ -b&a\end{array}\right):a^{2}+b^{2}>0\right\}\] with \(\mathcal{O}=\mathbb{R}^{2}\setminus\{0\}\).
* **Shearlet group(s)** For a fixed parameter \(c\in\mathbb{R}\), \[S_{c}=\left\{\pm\left(\begin{array}{cc}a&b\\ 0&a^{c}\end{array}\right):a>0\right\}\,\] with \(\mathcal{O}=\mathbb{R}^{*}\times\mathbb{R}\).
No two distinct groups from this list are coorbit equivalent [17]. We will now determine the compatible dilations for each:
* In the case of the diagonal group \(D\), the requirement \(A^{T}\mathcal{O}=\mathcal{O}\) already leads to severe restrictions. In fact, one readily sees that \[\mathcal{S}(\mathcal{O})=\{R^{\epsilon}h\ :\ h\in D,\epsilon\in\{0,1\}\}\] with the reflection matrix \(R=\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right)\). Since \(R\) clearly normalizes \(D\), Remark 2.25 and Corollary 2.29 entail \[\mathcal{S}_{Co_{H}}=\mathcal{S}(\mathcal{O})=N(H)\.\]
* The compatible dilations for the similitude group have already been determined in [21], namely as \[\mathcal{S}_{Co_{H}}=\mathcal{S}(\mathcal{O})=GL(\mathbb{R}^{d})\.\] Note that in this case, \(N(H)\subsetneq\mathcal{S}_{Co_{H}}\), and that unlike the diagonal case, the group of compatible dilations has strictly bigger dimension than \(H\) itself.
* Using the results from Section 4, in particular Corollary 4.9 we get \[\mathcal{S}_{Co_{S_{c}}}=\left\{\left(\begin{array}{cc}a&b\\ 0&d\end{array}\right):ad\neq 0\right\}\.\] Again we have \[\mathcal{S}_{Co_{S_{c}}}=\mathcal{S}(\mathcal{O})=N(S_{c})\.\]
Note that \(\mathcal{S}_{Co_{S_{c}}}\) is independent of \(c\). Since we know by [17] that different choices of \(c\) lead to distinct scales of coorbit space, we have found an instance where distinct scales of coorbit spaces can have the same symmetry groups.
### Standard and Toeplitz shearlet dilation groups in arbitrary dimensions
The shearing subgroup of the standard shearlet dilation group has the Lie algebra
\[\mathfrak{s}=\left\{\left(\begin{array}{cccc}0&t_{2}&\ldots&t_{d}\\ 0&\ldots&\ldots&0\\ \vdots&\vdots&\vdots&\vdots\\ 0&\ldots&\ldots&0\end{array}\right):t_{2},\ldots,t_{d}\in\mathbb{R}\right\}\.\]
Recall from Corollary 4.9, that the main challenge lies in characterizing the elements in the intersection \(\mathcal{S}_{1}(\mathcal{O})\cap\mathcal{S}_{Co_{H}}\), i.e. elements of the type
\[A=\left(\begin{array}{cc}1&\mathbf{0}\\ \mathbf{0}&B\end{array}\right)\]
and that this characterization rests on two conditions:
* The map \(\Psi:\mathfrak{s}\ni\sum_{i=2}^{d}t_{i}X_{i}\mapsto\sum_{i=2}^{d}s_{i}X_{i},\ s=B^{-T}t\) is an algebra automorphism of \(\mathfrak{s}\). Clearly, we can replace \(B\) by \(B^{-1}\), and will systematically do so in the subsequent arguments.
* If \(Y=\operatorname{diag}(1,\lambda_{2},\ldots,\lambda_{d})\) is the infinitesimal generator of the scaling subgroup, and \(\tilde{Y}=\operatorname{diag}(\lambda_{2},\ldots,\lambda_{d})\), then \(\tilde{Y}\) and \(B\) commute.
Here it is relevant to note that the first condition is fulfilled for every choice of \(B\): The mapping \(\Psi\) is linear and bijective by construction. Furthermore, the associative algebra structure of \(\mathfrak{s}\) is trivial in the sense that given any two matrices \(a_{1},a_{2}\in\mathfrak{s}\), the product is given by \(a_{1}a_{2}=0\). Hence any _linear_ automorphism \(\mathfrak{s}\to\mathfrak{s}\) is automatically an _algebra_ automorphism. Therefore the first condition is trivially fulfilled, and we get from Corollary 4.9 that
\[A\in\mathcal{S}_{Co_{H}}\Leftrightarrow A=\lambda\cdot h\cdot\left(\begin{array} []{cc}1&\mathbf{0}\\ \mathbf{0}&B\end{array}\right)\,\ B\in GL(\mathbb{R}^{d-1})\,\ B\tilde{Y}=\tilde{Y}B\.\]
For the characterization of the second condition, we refer to Remark 4.6. The dimension of \(\mathcal{S}_{Co_{H}}\) is now determined in terms of the eigenvalue multiplicities: If \(n_{1},\ldots,n_{k}\) are the multiplicities of the distinct eigenvalues of \(\tilde{Y}\), we obtain
\[\dim(\mathcal{S}_{Co_{H}})=d+\sum_{\ell=1}^{k}n_{\ell}^{2}\.\]
Note that any partition of \(d-1\), i.e. every tuple \((n_{\ell})_{\ell=1,\ldots,k}\) of positive integers satisfying
\[\sum_{\ell=1}^{k}n_{\ell}=d-1\]
can occur as multiplicities of a suitable choice of \(Y\). The extreme cases are given by \(\tilde{Y}=\lambda I_{d-1}\), which leads to
\[\dim(\mathcal{S}_{Co_{H}})=d^{2}-d+1\,\]
and the multiplicity-free case, resulting in
\[\dim(\mathcal{S}_{Co_{H}})=2d-1\.\]
**Coorbit compatible dilations for Toeplitz shearlet dilation groups.** In order to describe the coorbit compatible dilations for the Toeplitz shearlet dilation groups, the main datum required to apply Corollary 4.9 is the automorphism group of the Lie algebra \(\mathfrak{s}\) of the shearing subgroup.
Recall that the shearing subgroup for the Toeplitz shearlet group consists of the elements
\[T(1,s_{1},\ldots,s_{d-1}):=\begin{pmatrix}1&s_{1}&s_{2}&\ldots&\ldots&s_{d-1} \\ &1&s_{1}&s_{2}&\ldots&s_{d-2}\\ &&\ddots&\ddots&\ddots&\\ &&&1&s_{1}&s_{2}\\ &&&&1&s_{1}\\ &&&&1\end{pmatrix},\]
with \(s_{1},\ldots,s_{d-1}\in\mathbb{R}\). The associated Lie algebra then consists of the matrices
\[T(0,s_{1},\ldots,s_{d-1}):=\begin{pmatrix}0&s_{1}&s_{2}&\ldots&\ldots&s_{d-1} \\ &0&s_{1}&s_{2}&\ldots&s_{d-2}\\ &&\ddots&\ddots&\ddots&\\ &&&0&s_{1}&s_{2}\\ &&&&0&s_{1}\\ &&&&0\end{pmatrix}.\]
The canonical basis of \(\mathfrak{s}\) (in the sense of Lemma 3.3) is given by
\[X_{2}=T(0,1,0,\ldots,0)\,\ X_{3}=T(0,0,1,0,\ldots,0)\,\ \ldots\,\ X_{d}=T(0,\ldots,0,1)\,\]
and it is not hard to verify the relations
\[\forall j=2,\ldots,d\ :\ X_{j}=X_{2}^{j-1}\.\]
We proceed as in the standard shearlet case, and first determine the relevant algebra automorphisms \(\Psi\) of \(\mathfrak{s}\). Clearly a necessary condition for such maps is that \(\Psi\) maps the generating element \(X_{2}\) onto another generating element. Furthermore, a general element
\[b_{2}=\sum_{j=2}^{d}c_{j}X_{j}\,\ c_{2},\ldots,c_{d}\in\mathbb{R}\]
is easily seen to be generating iff \(c_{2}\neq 0\). In such a case, letting
\[b_{j}=b_{2}^{j-1}\]
defines a second basis \(b_{2},\ldots,b_{d}\) of \(\mathfrak{s}\), and the unique linear map
\[\Psi:\mathfrak{s}\rightarrow\mathfrak{s}\,\ X_{j}\mapsto b_{j}\,\]
is readily seen to be an algebra automorphism. In short, the map
\[\operatorname{Aut}(\mathfrak{s})\rightarrow\left\{b=\sum_{j=2,\ldots,d}c_{j}X _{j}\ :\ c_{j}\in\mathbb{R},c_{2}\neq 0\right\}\,\ \Psi\mapsto\Psi(X_{2})\]
is a bijection. As a consequence, \(\operatorname{Aut}(\mathfrak{s})\) is a \(d-1\)-dimensional matrix group.
Hence, returning to the factorization
\[A=\lambda\cdot h\cdot A_{1}\,\ \lambda\in\mathbb{R}\,\ h\in S\,\ A_{1}=\left( \begin{array}{cc}1&\mathbf{0}\\ \mathbf{0}&B\end{array}\right),\]
from Corollary 4.9, the possible choices for \(B\) can be described as
\[B=\left(\begin{array}{ccccc}c_{2}&c_{3}&\dots&\dots&c_{d}\\ 0&c_{2}^{2}&*&*&*\\ 0&0&c_{2}^{3}&*&*\\ 0&0&0&\ddots&*\\ 0&\dots&\dots&\dots&c_{2}^{d-1}\end{array}\right)\,\ (c_{2},\dots,c_{d})^{T}\in \mathbb{R}^{*}\times\mathbb{R}^{d-1}\.\]
Here the entries above the diagonal and below the first line depend uniquely on \(c_{2},\dots,c_{d}\). They can be determined, either recursively or using the multinomial formula, explicitly from \(c_{2},\dots,c_{d}\); however, we refrain from giving a general formula. To give an idea of the general pattern, we consider the cases \(d=3,4,5\). For \(d=3\), one gets the general form
\[B=\left(\begin{array}{cc}c_{2}&c_{3}\\ 0&c_{2}^{2}\end{array}\right)\,\]
for \(d=4\) one has
\[B=\left(\begin{array}{ccc}c_{2}&c_{3}&c_{4}\\ 0&c_{2}^{2}&2c_{2}c_{3}\\ 0&0&c_{2}^{3}\end{array}\right)\,\]
and finally in the case \(d=5\) the resulting matrices are of the form
\[B=\left(\begin{array}{cccc}c_{2}&c_{3}&c_{4}&c_{5}\\ 0&c_{2}^{2}&2c_{2}c_{3}&2c_{2}c_{4}+c_{3}^{2}\\ 0&0&c_{2}^{3}&c_{2}^{2}c_{3}\\ 0&0&0&c_{2}^{4}\end{array}\right)\.\]
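These matrices can be checked directly; the following short verification for \(d=4\) is our own addition. From \(X_{j}=X_{2}^{j-1}\) one obtains \(X_{j}X_{k}=X_{j+k-1}\), with \(X_{m}=0\) for \(m>d\). Hence, for \(b_{2}=c_{2}X_{2}+c_{3}X_{3}+c_{4}X_{4}\),

\[b_{3}=b_{2}^{2}=c_{2}^{2}X_{3}+2c_{2}c_{3}X_{4}\,\qquad b_{4}=b_{2}^{3}=c_{2}^{3}X_{4}\,\]

which reproduces precisely the nonzero entries \(c_{2},c_{3},c_{4}\), \(c_{2}^{2},2c_{2}c_{3}\) and \(c_{2}^{3}\) of the matrix displayed above for \(d=4\).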
Finally, we need to determine the influence of the scaling subgroup. Recall from [1] that the infinitesimal generators \(Y\) have the form \(Y_{\delta}=\operatorname{diag}(1,1-\delta,1-2\delta,\dots,1-(d-1)\delta)\). The dimension of \(\mathcal{S}_{Co_{H}}\) depends on \(\delta\), as follows:
1. If \(\delta=0\), then \(\tilde{Y}=I_{d-1}\), and we get \[\dim(\mathcal{S}_{Co_{H}})=1+d-1+d-1=2d-1\.\]
2. In the case \(\delta\neq 0\), all eigenvalues of \(\tilde{Y}\) have multiplicity \(1\), hence the set of possible choices for \(B\) consists precisely of the algebra automorphisms of \(\mathfrak{s}\) that are diagonal over the canonical basis. This fixes \(b_{2}=c_{2}X_{2}\), leading to \(b_{j}=c_{2}^{j-1}X_{j}\), and as \(c_{2}\) runs through the positive real numbers, one obtains a one-parameter matrix group. Thus we obtain in this case \[\dim(\mathcal{S}_{Co_{H}})=1+d-1+1=d+1\,\] which misses the lower dimension bound from Corollary 4.10 by 1.
|
2301.13499 | Leveraging the SCION Internet Architecture to Accelerate File Transfers
over BitTorrent | As the needs of Internet users and applications significantly changed over
the last decade, inter-domain routing became more important to fulfill these
needs. The ways how data flows over the Internet are still completely in the
hand of network operators, who optimize traffic according to their own, local
view of the network. We observe two potential limitations from this: Optimizing
according to the local view may a) result in unused capacities in the global
network and b) not meet the actual needs of users and applications. To identify
and overcome these limitations, we present our BitTorrent over SCION approach,
which enables multipath communication and intelligent path selection for
endhosts in global torrent networks. We compare our implementation against
BitTorrent over BGP and BGP-M in a small-scale Internet topology, observing an
increase in goodput of 48% through multipathing compared to BitTorrent over BGP
and 33% compared to the BGP-M candidate. Furthermore, we show that our proposed
disjoint path selection algorithm is able to improve traffic flow in the
network with a low number of outgoing connections to unchoked peers. | Marten Gartner, Thorben Krüger, David Hausheer | 2023-01-31T09:36:23Z | http://arxiv.org/abs/2301.13499v1 | # Leveraging the SCION Internet Architecture to Accelerate File Transfers over BitTorrent
###### Abstract
As the needs of Internet users and applications significantly changed over the last decade, inter-domain routing became more important to fulfill these needs. The ways how data flows over the Internet are still completely in the hand of network operators, who optimize traffic according to their own, local view of the network. We observe two potential limitations from this: Optimizing according to the local view may a) result in unused capacities in the global network and b) not meet the actual needs of users and applications. To identify and overcome these limitations, we present our BitTorrent over SCION approach, which enables multipath communication and intelligent path selection for endhosts in global torrent networks. We compare our implementation against BitTorrent over BGP and BGP-M in a small-scale Internet topology, observing an increase in goodput of 48% through multipathing compared to BitTorrent over BGP and 33% compared to the BGP-M candidate. Furthermore, we show that our proposed disjoint path selection algorithm is able to improve traffic flow in the network with a low number of outgoing connections to unchoked peers.
Peer-to-Peer, Path-aware networking, Path Selection, SCION, BitTorrent
## I Introduction
The Internet was designed decades ago, and some of the original design decisions have had unfortunate side-effects and security implications that persist to this day. So far, from the perspective of a mere host, there has been no general mechanism for splitting traffic across multiple different paths to a specific destination, which would be beneficial for bandwidth-intensive applications.
As the de facto standard protocol for inter-domain routing, BGP [32] is used to disseminate routes between Autonomous Systems (ASes), forming the Internet as we know it today. Within each AS on a given path through the Internet, packet forwarding is affected by (sometimes highly complex) local traffic engineering preferences and policies. In general however, there is just a single packet forwarding path between two hosts in a BGP-based Internet. To also add support for multiple forwarding paths, BGP-M [24] was proposed, which adds support for load sharing across multiple inter-domain links, subject to configurable preferences. However, BGP-M does little to help optimize traffic flow in the global Internet: Firstly, its impact is necessarily of limited, local scope. Secondly, BGP-M is not adaptive and cannot dynamically react to changing network conditions. Given these limitations and given the existence of alternative inter-domain paths in the Internet [2], we hypothesize that the current Internet still has unused capacities that cannot be exploited entirely by using BGP or BGP-M as the inter-domain routing protocol.
Path-aware networking architectures promise to overcome these limitations by offering deeper insight into and better control over packet forwarding in the network. Different approaches to enable path-control in networks have been proposed. Some try to work within the limits of the current Internet architecture [18, 41], other attempts involve a complete redesign of the Internet architecture from scratch. While there are a number of path-aware approaches [34, 42] with their own merits, our work focuses on the (arguably) most mature and most widely-deployed open-source SCION architecture [7, 44].
To leverage unused capacities in the network, especially in the backbone, Peer-to-Peer (_P2P_) applications promise unique opportunities through their globally distributed nature [39, 8, 36]. Since BitTorrent is the most well-researched and best-understood of these protocols, we selected it as the foundation of our work, adding support for active endhost-based path selection via SCION. We compare our augmented BitTorrent implementation with a non-path-aware BitTorrent implementation in comparable BGP and BGP-M-based inter-domain network topologies.
We anticipate that any achieved improvements will easily translate to other P2P networks, e.g. IPFS [38]. While BitTorrent itself is famous for improving bandwidth utilization on the last mile, we hypothesize that in combination with SCION, it could also unlock unused capacities in the network at large. To this end, we contribute the following:
* We discuss the existence of unused capacities in the backbone with BGP or BGP-M as utilized inter-domain routing protocol
* We show why path-aware networking is able to unlock these capacities and discuss the impact of host-based path selection
* We present our BitTorrent over SCION design introducing the notion of path-level peers and propose an algorithm for disjoint path selection
* Finally, we analyze the performance improvements of BitTorrent over SCION aggregating unused capacities in the network compared to BitTorrent over BGP and BGP-M
The remainder of this work is structured as follows: In Section II we provide background for SCION and BitTorrent, followed by a discussion about limitations of BGP-based deployments and impacts of host-based path selection in Section III. We present the design and implementation of multipath support for BitTorrent over SCION in Section IV, followed by a presentation of our disjoint path selection algorithm. Afterwards, we show our virtualized Internet-scale testbed in Section V and the experimental results of comparing Multipath BitTorrent over SCION against an unmodified implementation in BGP and BGP-M-based scenarios in Section VI. Finally, we discuss related work in Section VII, conclude and provide outlook for future work in Section VIII.
## II Background
In the following, we provide a brief overview on BGP, BGP-M and the SCION architecture as well as on BitTorrent.
### _BGP and BGP-M_
BGP plays a unique role in the current Internet. While it can also be used for intra-domain routing (iBGP), we will exclusively refer to BGP's more important role as the Internet's predominant inter-domain routing protocol in the rest of this work. The principal task of a BGP border router is to inform and update its neighbors about specific IP address ranges (i.e., IP prefixes) to which the router's AS is able to forward traffic. The router announces the prefixes that its AS has learned from its other neighbors. Routing loops are prevented by means of the _AS_PATH_, a list of hops to which each router prepends its own AS Number (ASN) before sending it on as part of a route announcement. BGP routers maintain the respective AS_PATHs as well as the announced prefixes in a routing table. The forwarding destination is determined by referencing this table based on the destination address of an incoming packet. By default, BGP selects a single path for each match in the routing table according to configurable policies. These policies may reflect e.g., local forwarding preferences, filters constraining the number of exported routes, etc.
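As a minimal, self-contained sketch of this loop-prevention mechanism (our own illustration, not taken from the paper; function and variable names are invented), a router accepts an announcement only if its own ASN does not already appear in the AS_PATH, and prepends its ASN before re-announcing:

```python
def accept_announcement(my_asn: int, as_path: list) -> bool:
    # Reject any route whose AS_PATH already contains our own ASN;
    # this is what prevents inter-domain forwarding loops in BGP.
    return my_asn not in as_path


def export_route(my_asn: int, as_path: list) -> list:
    # Before announcing the route to a neighbor, prepend our own ASN.
    return [my_asn] + as_path


# Example: AS 65001 learns a prefix with AS_PATH [65010, 65020]
learned_path = [65010, 65020]
if accept_announcement(65001, learned_path):
    announced_path = export_route(65001, learned_path)  # [65001, 65010, 65020]
```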
To account for the fact that there often are multiple matching paths to a particular destination, multipath BGP _(BGP-M)_[15] was introduced. For certain cases, BGP-M adds support for traffic load splitting1 over multiple outgoing AS links. In the simplest case, if several possible forwarding AS_PATHs are of the same length, Equal-Cost Multipath Routing _(ECMP)_ can be applied, with a configurable maximum number of parallel paths2. In addition, BGP-M also allows for more advanced policies on how to combine multiple paths; e.g., some Cisco routers offer Unequal-Cost Load Sharing [21], which allows load splitting over paths of different lengths by using a dedicated configurable weight.
Footnote 1: Load splitting is based on flow hashes of the 5 tuple _(src address, dst address, src port, dst port, layer 4 protocol)_.
Footnote 2: In some border router implementations, the relevant settings are: a) bgp bestpath as-path multipath-relax to enable load splitting for equal length paths, and b) maximum-paths to set the maximum number of paths over which load splitting will be performed.
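For illustration, the following Python sketch shows how flow-based ECMP load splitting, as described in footnote 1, could map a flow's 5-tuple onto one of several equal-length paths; the function and data structures are our own simplified illustration, not an actual border router implementation.

```
import hashlib

def ecmp_next_hop(flow, equal_cost_paths):
    """Pick one of several equal-length paths based on a flow hash.

    flow: (src_addr, dst_addr, src_port, dst_port, l4_proto) 5-tuple.
    equal_cost_paths: candidate next hops whose AS_PATHs have equal length.
    """
    key = "|".join(str(field) for field in flow).encode()
    digest = int.from_bytes(hashlib.sha1(key).digest()[:4], "big")
    # All packets of the same flow hash to the same path, which avoids reordering.
    return equal_cost_paths[digest % len(equal_cost_paths)]
```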
### _Scion_
The SCION architecture [44] has been designed to overcome the limitations of BGP and BGP-M in the future Internet, addressing modern threat models at the fundamental protocol level and endeavouring to avoid many current issues with hijacking attacks and single roots of trust. SCION also promises communication guarantees and path control capabilities, allowing applications to use two or more paths in parallel to a given destination, generally enabling _multipath communication_.
For traditional multipath approaches like MPTCP [29] and MPQUIC [9] to work as intended, a host must provide multiple network interfaces. With these protocols, multipath communication implies sending data over multiple interfaces in parallel, beyond which no further influence on the data forwarding paths is possible. SCION on the other hand is based around "packet carried forwarding state" (PCFS), where every packet contains the complete inter-AS path to the intended destination (in the form of _hop fields_) in its packet header. Packets can thereby be easily directed via different paths, by changing the SCION header alone. Our work heavily relies upon this precise _path awareness_ property of the SCION architecture.
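As a simplified illustration of packet-carried forwarding state (not the actual SCION header layout), a path can be thought of as a sequence of hop fields that the sender places into the packet header; the Python sketch below uses hypothetical field names and example identifiers.

```
from dataclasses import dataclass

@dataclass(frozen=True)
class HopField:
    """One hop of an inter-AS path: the AS plus its ingress/egress interfaces."""
    isd_as: str       # e.g. "1-ff00:0:110" (example ISD-AS identifier)
    ingress_if: int   # interface the packet enters the AS on (0 at the source)
    egress_if: int    # interface the packet leaves the AS on (0 at the destination)

# The sender selects the full inter-AS path and embeds it in the header;
# routers forward the packet along exactly these hop fields.
packet = {
    "path": [HopField("1-ff00:0:110", 0, 2),
             HopField("1-ff00:0:120", 1, 3),
             HopField("1-ff00:0:130", 4, 0)],
    "payload": b"...",
}
```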
Unlike the flatter organizational hierarchy of today's BGP-based Internet, in SCION, collections of ASes are combined into Isolation Domains, _(ISDs)_, which are envisioned to correspond to, e.g., geographical regions or legislative domains (like a single country), but can also be made up of company or research networks. One or multiple ASes in an ISD form the _ISD Core_, which collectively manages a cryptographic trust root (_TRC_) on behalf of the other ISD members, enabling service authentication and other cryptographic functions within the ISD, simultaneously avoiding many of the notorious problems that plague globally centralized cryptographic trust systems with single points of failure that are beyond the control of the ISD. The ISD Core also manages the exchange of path information with other ISDs, while independent peering is also possible among non-Core ASes of different ISDs.
### _BitTorrent_
The BitTorrent protocol specifies file transfer as a distributed mechanism between peers without the need for any central coordination. Some initial way to exchange network addresses among peers is nevertheless required, e.g., by means of a _tracker_ or, alternatively, a distributed hash table (DHT). In BitTorrent, files are typically not transferred in the usual form of a single, continuous byte stream that contains the complete file. Instead, large files are split into equal-sized _pieces_ (typically with a size between 32KB and 256KB). It is a key feature of BitTorrent that exchange of these pieces can be easily parallelized. BitTorrent peers that already have a complete local copy of a file are referred to as _seeders_, as opposed to _leechers_, which still have to obtain some or all of the constituent pieces of the file from other peers. For each new file uploaded to the BitTorrent network, there must be at least one seeding peer that initiates the distribution of the file.
## III Host-Based Path Selection
### _Limitations of the BGP-Based Inter-Domain_
Generally, two critical aspects of BGP and BGP-M for inter-domain routing may lead to unused capacities in the global Internet: Firstly, AS operators inherently only have limited insight into network conditions beyond their local network and can also only manage traffic locally. Secondly, BGP and BGP-M do not provide features to dynamically adapt routing to different needs. In this section, we further characterize these limitations before discussing possible improvements that path-aware networking could bring to the table.
Despite the large benefits of intra-domain traffic engineering by AS operators, the potential for optimizing inter-domain routing is limited in the current Internet. Each operator can only optimize the traffic flow in their own AS until it reaches its local destination or the neighbour AS. This may especially impact performance for flows that traverse multiple hops before they reach their destination, since each hop performs its own, local optimization. In case one of the first hops makes a non-optimal routing decision for the flow (e.g., routing to particular neighbour interfaces that are already under heavy load), the overall performance of the flow is affected.
As discussed in Section II, BGP-M provides several options to perform load sharing on multiple links. However, these options need to be configured statically in the network. Consequently, the network cannot always fulfill the varying needs of different participants, e.g., endhosts who prefer to optimize for different criteria. BGP-M reflects the needs anticipated by AS operators, not the actual needs of endhosts, which may differ significantly.
Path-aware networking promises to overcome many of these limitations and their impacts on performance, and, (in the case of SCION,) gives endhosts the opportunity to freely choose suitable inter-domain paths for their traffic.
### _Implications of Host-Based Path Selection_
Traffic engineering on the Internet is performed by network operators attempting to locally optimize data flows for various factors. With host-based path selection, operators hand over this control to endhosts. While this promises to help endhosts to optimize their traffic, it comes with potential implications for network operators and may be at odds with their own interests, especially with respect to the inter-domain. Operators tend to prefer the use of peering links over that of transit links to avoid costs, while endhosts do not have such an incentive to avoid transit links. Additionally, host-based path selection ideally requires up-to-date insight into network conditions on all hosts. It also needs to ensure that hosts do not change paths too often, which could result in undesirable oscillation. In simulation, Scherrer et al. show that the impact of oscillation through host-based path selection is low [37]. Moreover, in SCION, endhosts can only dictate the ingress and egress router interfaces of ASes, allowing network operators to still optimize their internal traffic within these constraints. Overall, while the benefits, drawbacks and trade-offs of path-awareness are certainly not yet fully understood, there is significant potential for it in the future Internet.
## IV Design and Implementation of BitTorrent over SCION
In this section, we present our approach of path-level peers to enable multipath support for BitTorrent over SCION, followed by our disjoint path selection algorithm.
### _Multipath through Path-Level Peers_
As one of its features, SCION provides path control for inter-AS traffic while guaranteeing that the traffic flows along the chosen paths. This opens up the potential to aggregate capacities in the network, enabling applications to leverage multipath communication and parallel data processing to increase performance. In this work, we choose BitTorrent as a suitable, already parallelized application as a foundation for our experiments on bandwidth aggregation via multiple SCION paths.
By default, a BitTorrent peer is identified solely by its network address and port. This address could be either an IPv4, IPv6 or, in our case, a SCION address. We will refer to peers that are only described by their address as _address-level_ peers for the rest of this work. To distinguish between multiple SCION paths to a particular peer, we introduce a new representation that we will refer to as a _path-level_ peer. They are represented by the tuple \((addr,path)\), consisting of the peer's SCION address (including the port), together with one possible path to this address.
We introduce this notion of path-level peers to a path-aware BitTorrent implementation which we will henceforth refer to as _BitTorrent over SCION_, and which treats different paths to the same peer as several distinct, path-level peers.
Fig. 1: Multipath implementation by downloading a torrent file over multiple paths (connections).

Generally, peers that are returned from a tracker or that are added via static bootstrapping3 to BitTorrent are address-level peers, since they may not be SCION peers to begin with and may not have path information associated with them. Thus, to generate path-level peers from a given SCION address, an additional path lookup is required. For each path that is available to a peer, a corresponding path-level peer can be generated.
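The following Python sketch illustrates the notion of path-level peers and their generation from an address-level peer; the `lookup_paths` helper stands in for a SCION path lookup, and all names are our own illustration rather than the actual implementation.

```
from dataclasses import dataclass

@dataclass(frozen=True)
class PathLevelPeer:
    addr: str      # SCION address of the peer, including the port
    path: tuple    # one concrete sequence of hop fields towards that address

def to_path_level_peers(address_level_peer, lookup_paths):
    """Expand one address-level peer into one path-level peer per known path."""
    paths = lookup_paths(address_level_peer)  # SCION path lookup (hypothetical helper)
    return [PathLevelPeer(address_level_peer, tuple(p)) for p in paths]
```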
BitTorrent over SCION is configured with an upper bound on the number of different path-level peers, just as the default BitTorrent client limits its number of connected peers. Path-level peers are obtained from address-level peers through a path selection algorithm.
After obtaining path-level peers, the usual BitTorrent P2P algorithms operate over each path independently. For each path-level peer, a QUIC4[23] connection is established to ensure reliable transfer of data. Figure 1 shows the piece download of a particular file. Each successfully established connection fetches piece information from a queue, requests the particular piece by sending request messages, and waits for the peer to send back the requested piece. The integrity of each received piece is verified: a hash is computed and checked against the one referenced in the torrent file for that piece. Retrieved pieces are stored in main memory until the file is downloaded completely.
Footnote 4: We choose QUIC, because TCP is currently not yet implemented for SCION.
Since requests and retrievals of pieces over different connections are handled in their own dedicated threads, pieces can easily be downloaded concurrently, which speeds up the process and improves the overall download bandwidth. It is the responsibility of the main thread to iteratively check the result bitmap for missing pieces. Once all pieces are retrieved, the main thread closes all connections to all still connected peers that serve pieces of the current torrent and assembles the complete file to disk.
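A minimal Python sketch of this per-connection download loop with hash verification is shown below; `request_piece` stands in for the QUIC-based piece request and, like the rest of the scaffolding, is hypothetical.

```
import hashlib
from concurrent.futures import ThreadPoolExecutor

def fetch_piece(connection, index, expected_hash):
    """Request one piece over a single (path-level) connection and verify it."""
    data = connection.request_piece(index)            # hypothetical QUIC-based request
    if hashlib.sha1(data).digest() != expected_hash:  # integrity check against the torrent file
        raise ValueError(f"piece {index} failed hash verification")
    return index, data

def download(connections, piece_hashes):
    """Download all pieces concurrently, one worker per established connection."""
    pieces = {}
    with ThreadPoolExecutor(max_workers=len(connections)) as pool:
        futures = [pool.submit(fetch_piece, connections[i % len(connections)], i, h)
                   for i, h in enumerate(piece_hashes)]
        for fut in futures:
            index, data = fut.result()
            pieces[index] = data                       # kept in memory until complete
    return b"".join(pieces[i] for i in range(len(piece_hashes)))
```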
### _Upload-based Disjoint Path Selection_
Although BitTorrent over SCION may simply use all available paths to each connected peer, there are reasons to limit the number of path-level peers. When this limit is set suitably high, all available paths to each peer are considered. However, multiple paths may share the same bottleneck, making it pointless to aggregate them in hope of increasing overall performance. Since each BitTorrent peer has an upper limit of outgoing connections to neighbours, a performance increase could also be expected by simply increasing this upper limit to exchange pieces with more peers. Consequently, using the shortest path or simply aggregating all paths does not promise to have significant impacts on BitTorrent over SCION's performance compared to path-unaware BitTorrent implementations. To address this, we use built-in SCION capabilities to implement an improved, _disjoint_ path selection strategy, aiming to avoid such shared bottlenecks.
In BitTorrent over SCION, our improved path selection strategy relies on two core assumptions: 1) Different paths that share the same hop may also share a bottleneck at this hop and 2) Peers that offer pieces to others are in a good position for path selection decisions with knowledge about all downloading peers, allowing them to strategically distribute their outgoing traffic via disjoint paths.
We implement a disjoint path selection strategy by searching for overlaps in all paths and discarding all but one of the paths that share the same hops, following assumption 1). With respect to assumption 2), we decide to delegate path selection exclusively to the uploading peer. In combination, both promise to outperform naive path selection approaches.
As presented in Section II, a SCION path consists of multiple hops. Each hop contains the ingress and egress interfaces of the respective AS. In SCION, the interfaces are represented as numeric IDs that are unique within the given AS. The combination of the AS identifier with the interface number results in a globally unique _interface ID_. In Figure 2, we show an intuitive mapping of a visual representation of hops and their interfaces to a list of such interface IDs. The interface IDs are used to determine disjointness between paths by counting the number of identical interface IDs.
```
Data: peers, maxOutgoingConns
Result: pathLevelPeers
pathLevelPeers <- [ ]
allPaths <- [ ]
for p in peers do
    paths <- lookupPaths(p)
    allPaths <- append(allPaths, paths)
end for
for path1 in allPaths do
    for path2 in allPaths do
        if path1 != path2 then
            confs <- numConflicts(path1, path2)
            path1.conflicts <- path1.conflicts + confs
        end if
    end for
end for
allPaths <- sortByConflictsAndHops(allPaths)
i <- 0
while i <= maxOutgoingConns do
    pathLevelPeer <- fromPath(allPaths[i])
    pathLevelPeers <- append(pathLevelPeers, pathLevelPeer)
    i <- i + 1
end while
```
**Algorithm 1** Disjoint path selection
Fig. 2: List-based representation of hop interfaces using unique ids to perform conflict detection.
Based on the interface IDs in the SCION paths, the uploading peer is able to perform disjoint path selection to all connected peers. To achieve this, an interested peer connects over the first available path to the uploading peer and waits for it to connect back. The uploading peer then applies its disjoint path selection and connects back to the interested peer over the selected path set. Algorithm 1 depicts the procedure of finding the least conflicting (i.e., most disjoint) paths to all connected address-level peers and returning a proper list of path-level peers. At first, the paths to each address-level peer are determined in a loop and aggregated in the _allPaths_ variable. Afterwards, each path in allPaths is checked against all other paths, calculating the number of conflicts (i.e., conflicting interface IDs), which is saved in the path. Next, the allPaths list is sorted in ascending order by the number of conflicts and the number of hops. Finally, until the number of _maxOutgoingConns_ is reached, the algorithm iterates over allPaths and transforms each path into a path-level peer, which is stored in the return variable _pathLevelPeers_.
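A compact Python sketch of this conflict counting and selection, mirroring Algorithm 1, could look as follows; paths are assumed to be given as lists of global interface IDs as in Figure 2, and the helper names are our own.

```
def num_conflicts(path_a, path_b):
    """Count interface IDs (AS identifier + interface number) shared by two paths."""
    return len(set(path_a) & set(path_b))

def select_disjoint_paths(peer_paths, max_outgoing_conns):
    """peer_paths: {peer_addr: [path, ...]}, each path a list of global interface IDs."""
    all_paths = [(addr, path) for addr, paths in peer_paths.items() for path in paths]
    scored = []
    for i, (addr, path) in enumerate(all_paths):
        conflicts = sum(num_conflicts(path, other)
                        for j, (_, other) in enumerate(all_paths) if i != j)
        scored.append((conflicts, len(path), addr, path))
    # Prefer paths with the fewest shared interfaces, then the fewest hops.
    scored.sort(key=lambda entry: (entry[0], entry[1]))
    return [(addr, path) for _, _, addr, path in scored[:max_outgoing_conns]]
```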
## V Benchmark Setup: A Representative Small-Scale Internet Testbed
To compare our BitTorrent over SCION implementation against a BGP-based setup, we choose to run a virtualized network of multiple ASes that reflects the topology of the current Internet in a small-scale setup. Figure 3 shows the designed topology that we used to evaluate BitTorrent over SCION.
The topology consists of two core layers which represent Tier-1 and Tier-2 ASes in the Internet, connected via peering and transit links. With the growing popularity of IXPs, the number of Tier-3 ASes decreased significantly over time. Consequently, our testbed does not contain any Tier-3 ASes. The topology design follows actual AS and peering data, obtained from CAIDA data [4, 5, 6] and peeringdb [28], with randomized AS numbers. We limit the network link capacities to 15 Mbit/s for Tier-1 links and 10 Mbit/s for all remaining links to make all candidates network bound. Otherwise, the CPU could limit candidates resulting in potentially biased results. We derive two torrent networks from our proposed topology: The first network 5ASes consists of the 5 ASes marked dark green (AS102, AS1002, AS1004, AS103, AS1006) and the second network consists of the 10 ASes marked dark and light green (AS102, AS1002, AS1004, AS103, AS1006, AS1001, AS101, AS04, AS105, AS1009). ASes are connected either with transit or peering links. Our topology follows the _valley-free_ routing [30]. Consequently, peering links can only be used by the peering neighbours and their customers. In BGP and BGP-M, this is implemented with prefix lists on each AS that has peering links. Since filtering support of peering links in SCION is currently in progress, we apply a static path filter to each AS to filter out valley-free violations. Since ISD's are a concept exclusively for SCION, we locate all SCION ASes in the same ISD, to achieve better comparability.
In our evaluations, we run three different BitTorrent candidates: The first one is BitTorrent over BGP, a standard BitTorrent client implemented in Go that relies on inter-domain routing performed by BGP. The second candidate is BitTorrent over BGP-M, which uses the same BitTorrent client but with load splitting configured in BGP in each AS. While other interesting approaches for load splitting exist, we apply ECMP-based load splitting for BGP-M in our testbed, since it is the most widely deployed approach in the current Internet. Our third candidate is BitTorrent over SCION, which implements the disjoint path selection based on the presented idea of path-level peers.
The complete topology is running on a bare-metal server in a virtualized environment based on Docker. Each AS runs one or more containers (routers, hosts). Multiple ASes are connected via Docker Bridge Networks [13]. The server is running an Intel(R) Xeon(R) Gold 6150 CPU @ 2.70GHz with 36 threads and 500GB of main memory.
## VI Evaluation
After presenting our virtualized Internet topology, we evaluate our BitTorrent over SCION approach in this section.
### _Terminology_
The following parameters are used to create and evaluate different BitTorrent experiments:
* _MaxPeers_: Number of available peers in the torrent
* _OutgoingConns_: Number of outgoing connections (to peers) that each peer instantiates
* _NumASes_: Number of involved ASes in the experiment
To design our experiments, we stick to findings from Hamra et al. [19] measuring BitTorrent's performance in real-world torrents. In most cases, MaxPeers is significantly higher than OutgoingConns, meaning each peer exchanges pieces with a subset of all available peers. Hamra et al. show that setting OutgoingConns to half of MaxPeers is a good tradeoff. Furthermore, the MaxPeers parameter changes over time in real-world torrents. This number often increases at the beginning of the torrent, when many peers are interested in downloading the file, and decreases after the majority of peers have finished downloading and leave the torrent. We adopt this behaviour for our experiments.
In the following, we present the results of our two experiments: At first, we evaluate how heavy multipathing implemented in BitTorrent over SCION can aggregate bandwidth in the network that is not available for BitTorrent over BGP/BGP-M. Afterwards, we compare BitTorrent over SCION against BitTorrent over BGP/BGP-M with a varying number of OutgoingConns to evaluate the effect of our disjoint path selection approach.
### _Bandwidth Aggregation_
In our first experiment, we run 20 BitTorrent peers in the torrent networks 5ASes and 10ASes exchanging a 100Mbyte file. By choosing 2 different torrent network sizes with a fixed-size topology, we can evaluate the impact of density
of peers and the number of additional ASes that are not participating in the torrent, and consequently may provide additional capacities. We set the OutgoingConns parameter to infinity in this experiment (in detail it is set to the maximum number of path-level peers that one peer can connect to) to evaluate the maximum possible goodput that all candidates can achieve.
Figure 4a) shows the aggregated goodput in percent of all peers for the three candidates, while BGP serves as baseline with 100%. We decide to compare the goodput instead of the overall bandwidth, since SCION packets have a larger header than plain IP packets.
For the torrent network 5ASes, we observe an aggregated goodput of 111% for BGP-M compared to BGP, while SCION achieves 148% compared to BGP. From these results, we observe that enabling multipath BGP via ECMP increases the overall goodput by 11%. We conclude that in the 5ASes torrent network, the number of equal-length BGP paths is comparatively low, leading to a small increase in goodput. However, with BitTorrent over SCION's approach of path-level peers, a 48% increase in goodput is achieved compared to BGP and a 33% increase compared to BGP-M. In the 10ASes torrent network, we still observe a goodput increase of 38% for BitTorrent over SCION compared to BitTorrent over BGP, even though the 10ASes torrent network has a smaller number of non-participating ASes that may provide additional network capacities. Consequently, BitTorrent over SCION is also able to aggregate heterogeneous paths, and we confirm our hypothesis that BitTorrent over SCION is capable of aggregating bandwidth in the network that is unused in BGP/BGP-M.
To verify that increased goodput has a direct impact on the actual download time of peers, we measure the average download time of all peers in the 5ASes and 10ASes torrent networks, shown in Figure 4b). Reflected by the lowest overall goodput, we observe the highest average download time for peers in BitTorrent over BGP, which is our baseline at 100%. BitTorrent over BGP-M results in 91% average download time for the 5ASes torrent network. In BitTorrent over SCION, the average download time is around 69%. Also for the 10ASes torrent network, we observe that the goodput reflects the average download times, resulting in 73% for BitTorrent over SCION and 88% for BitTorrent over BGP-M. As expected, the increased goodput through aggregating unused capacities in the network directly translates into lower download times for peers, increasing the overall performance of the system.
In addition to the goodput and download times, we also measure the overall CPU usage of all candidates running the 5ASes and 10ASes torrent networks, shown in Figure 4c). Since we observed an equal distribution of load over all threads, we present the CPU usage as average CPU usage per thread. While BitTorrent over BGP and BGP-M only use 5% and 8%, respectively, BitTorrent over SCION uses around 31% of the available CPU for the 5ASes torrent network. We observe an expected increase of resource usage for all candidates running 10ASes, with SCION using around 47% of each thread. Note that the CPU usage does not grow in the same proportion as the higher throughput that BitTorrent over SCION achieves. We attribute this increase to the open-source SCION stack [26] (there also exists a closed-source SCION stack optimized for performance [1]). Especially the SCION Border Router implementation has potential to be optimized for performance, while the framework used to route BGP and BGP-M (frrouter [25]) is heavily tuned. Consequently, we assume that using the closed-source SCION stack, we can decrease the CPU usage to a level comparable to BGP and BGP-M.
From this experiment, we confirm our hypothesis that BitTorrent over SCION can aggregate otherwise unused capacities in the network through multipath usage. We observe a significantly higher CPU usage for BitTorrent over SCION, which can potentially be strongly reduced by using the high-performance SCION stack, which is closed source.
Fig. 3: Virtualized testbed of a representative Internet topology
### _Peer Selection_
Always setting the OutgoingConns parameter sufficiently high may create unfair advantages for BitTorrent over SCION. Therefore, we compare all 3 candidates with a varying upper limit of OutgoingConns in this experiment. We again run 20 peers exchanging pieces of a 100Mbyte file in our 5ASes and 10ASes torrent networks. Since, as discussed before, setting OutgoingConns to half of MaxPeers is a good tradeoff, we decide to vary OutgoingConns between 3 and 10. We expect that BitTorrent over BGP and BGP-M stagnate with a low number of OutgoingConns, meaning simply adding more connected peers to BitTorrent over BGP and BGP-M does not directly lead to improved performance, while BitTorrent over SCION handles an increasing number of OutgoingConns better.
Figure 5 shows the average download time of all peers with a varying number of OutgoingConns for the 5ASes torrent network. We observe that BitTorrent over BGP and BGP-M start to stagnate after 5 OutgoingConns, while BitTorrent over SCION results in decreased download times until 8 OutgoingConns. With OutgoingConns greater than 7, the results are matching the ones presented in the bandwidth aggregation experiment.
We conclude that between 3 and 7 OutgoingConns, BitTorrent over SCION is still able to find disjoint paths between peers, while with more than 7 OutgoingConns, the additionally used path-level peers share bottlenecks. However, we assume that in real-world, Internet-scale torrent networks, the number of disjoint paths is significantly higher, leading to better results for higher numbers of OutgoingConns.
In Figure 6, we show the average download time of all peers with a varying number of OutgoingConns for the 10ASes torrent network. We observe a strong decrease of download times below 5 OutgoingConns for all candidates, while BitTorrent over SCION is still able to decrease the download time for up to 7 OutgoingConns. All three candidates have reached their minimum download times beyond 7 OutgoingConns, in contrast to 5 OutgoingConns for the 5ASes torrent network. This is explained by the higher percentage of participating ASes in the torrent compared to the total number of ASes, and for BitTorrent over SCION especially by the lower amount of additional network capacity.
From this experiment, we observe that BitTorrent over SCION also outperforms BitTorrent over BGP and BGP-M with a limited number of connected peers, disproving the intuitive argument that the path-level peer approach only works for a sufficiently high number of OutgoingConns.
## VII Related Work
The mature P2P mechanisms behind BitTorrent have made the protocol an attractive target for networking research in the past. Ren et al. present _TopBt_[33], an adaptation of BitTorrent that uses proximities in addition to transmission rates to detect peers to collaborate with. Castro et al. propose _BestPeer_[3], a peer selection algorithm that supports multipath in a multi-radio, multi-channel wireless mesh network.

Fig. 4: Comparison of BitTorrent over BGP/BGP-M and SCION in the 5AS and 10AS torrent network with 4 peers per AS. a) shows the aggregated goodput of all peers in % with BGP as 100% baseline, b) shows the download time in % with BGP as 100% and c) the aggregated CPU usage of all threads in %.

Fig. 5: Average download time in seconds for BitTorrent over BGP, BGP-M and SCION in the 5ASes torrent network.
Recent works cover analyses of BitTorrent's locality [8, 36, 39], concluding that the majority of BitTorrent's traffic is still running globally. Furthermore, Decker et al. analyze behavioral patterns and topologies in existing torrent networks [10], while Cuevas et al. [8] analyze how BitTorrent's locality impacts transit costs in existing networks.
A lot of research investigates IP multicast as an efficient way to distribute content to multiple peers without duplicating the traffic [11]. Since IP multicast requires expensive dedicated support in network equipment, it has so far only seen localized deployment [12, 31]. As an alternative to IP multicast, overlay approaches are considered: Bullet by Kostic et al [22] is an overlay approach to efficiently distribute files from a single source to a large number of receivers. Also IPFS [38] shares similarities with BitTorrent through its P2P based nature. Finally, Fujinoki et al. provide an approach to unlock private peering links for inter-domain routing in the Internet [16], which provides interesting potential for our multipath approach.
Next to SCION, several other approaches to enable path control on the host exist. PathLet Routing by Godfrey et al. [17] is an approach based on segmentation of inter-domain routes into path fragments. Establishing multipath data transfer can also be realized completely on the application level: Yu et al. proposed mpath [41], an algorithm and implementation to leverage proxies to create multiple paths to a particular end host.
To detect shared bottlenecks, different approaches have been proposed for MPTCP, some via active measurements [40, 14], others via passive shared bottleneck detection [20]. These approaches are constrained by an inherent lack of information about the actual path that the data uses through the Internet. With Espresso [43], Google presented a BGP-based approach for traffic distribution and bottleneck avoidance at the edge of their network, rather than on the endhosts.
Finally, demonstrating SCION's high-performance capabilities, Neukom et al. propose Hercules [27], a protocol for very high performance bulk data transfer over SCION and de Ruiter et al. present a SCION Border Router implementation in P4 [35].
## VIII Conclusions and Future Work
In this work, we developed the notion of _path-level peers_, which allowed us to enhance BitTorrent with SCION support to add multipath features, with minimal modifications to the underlying file-sharing algorithms. Furthermore, we propose an algorithm for disjoint selection of path-level peers to improve the usage of network capacities. We evaluate BitTorrent over SCION in a virtualized inter-domain testbed, comparing it to BitTorrent over BGP and BGP-M. We observe a goodput improvement of up to 48% for BitTorrent over SCION compared to BGP and up to 33% compared to BGP-M, which is reflected in smaller average download times for participating peers. Furthermore, we show that our proposed disjoint path selection algorithm is able to improve traffic flow in the network with a low number of outgoing connections to unchoked peers. Consequently, we confirm our hypothesis that BitTorrent over SCION is capable of aggregating capacities in the network that are unused when BGP or BGP-M is utilized for inter-domain routing.
As future improvement for BitTorrent over SCION, we plan to extend the tracker implementation to pre-select particular path-level peers. Peers may actively communicate their selected path sets to the tracker, which can improve the location of shared bottlenecks based on the knowledge about path usage of all known peers.
Furthermore, we plan to evaluate the impact of peers actively communicating the selected path set to other peers. We assume that peers can improve the location and avoidance of shared bottlenecks with this approach.
Finally, we plan to extend our disjoint path selection to allow multiple peers to reuse the same hops without creating shared bottlenecks, by observing variation in bandwidth to all connected peers when adding new paths containing shared hops.
|
2306.17658 | ODE Transformations of Nonlinear DAE Power Systems | Dynamic power system models are instrumental in real-time stability,
monitoring, and control. Such models are traditionally posed as systems of
nonlinear differential algebraic equations (DAEs): the dynamical part models
generator transients and the algebraic one captures network power flow. While
the literature on control and monitoring for ordinary differential equation
(ODE) models of power systems is indeed rich, that on DAE systems is
\textit{not}. DAE system theory is less understood in the context of power
system dynamics. To that end, this letter presents two new mathematical
transformations for nonlinear DAE models that yield nonlinear ODE models whilst
retaining the complete nonlinear DAE structure and algebraic variables. Such
transformations make (more accurate) power system DAE models more amenable to a
host of control and state estimation algorithms designed for ODE dynamical
systems. We showcase that the proposed models are effective, simple, and
computationally scalable. | Mohamad H. Kazma, Ahmad F. Taha | 2023-06-30T13:44:12Z | http://arxiv.org/abs/2306.17658v3 | # ODE Transformations of Nonlinear DAE Power Systems
###### Abstract
Dynamic power system models are instrumental in real-time stability, monitoring, and control. Such models are traditionally posed as systems of nonlinear differential algebraic equations (DAEs): the dynamical part models generator transients and the algebraic one captures network power flow. While the literature on control and monitoring for ordinary differential equation (ODE) models of power systems is indeed rich, that on DAE systems is _not_. DAE system theory is less understood in the context of power system dynamics. To that end, this letter presents two new mathematical transformations for nonlinear DAE models that yield nonlinear ODE models whilst retaining the complete nonlinear DAE structure and algebraic variables. Such transformations make (more accurate) power system DAE models more amenable to a host of control and state estimation algorithms designed for ODE dynamical systems. We showcase that the proposed models are effective, simple, and computationally scalable.
Time-domain simulation, transient stability analysis, nonlinear descriptor models, power systems.
## I Introduction
Power systems monitoring, state estimation, control, and transient stability analysis are all reliant on high-fidelity models of multi-machine power systems. In power grids, transient stability analysis determines how the power system maintains synchronicity under time-varying conditions and large uncertainties from load disturbances [1, 2]. Such analysis relies on time-domain simulations of the system that is expressed as a set of differential algebraic equations (DAEs) [3, 4]. The differential-algebraic nature couples the system dynamics with power flow constraints, thus resulting in a more accurate model. Nonlinear power system DAEs are an extreme case of stiff dynamical systems [5], meaning that the system has time constants that span several orders of magnitude--in particular the algebraic constraints exhibit null time constants.
In general, nonlinear DAEs are solved using implicit discretization schemes [6]. Multi-step methods offer stable and efficient schemes when dealing with nonlinear DAEs [7]. Such discrete-time modeling methods include: backward differentiation formulas (BDF) [8, 9], the backward Euler (BE) method [9], and the trapezoidal implicit (TI) method [9, 10]. Simulating discrete-time models requires an integrative time-step algorithm [11], and the solvability of power system DAEs under implicit integration methods is well-established. The Newton-Raphson (NR) method [9, 12] is generally implemented within power system simulation packages to solve discretized DAEs [13]. Despite such well-developed time-domain numerical solution methods, from a systems theory perspective, the literature on nonlinear DAE power networks is limited--unlike that of ODE models [14].
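As a brief illustration of such implicit schemes (standard textbook material in generic notation, not tied to the specific power system models considered in this letter), consider a semi-explicit DAE with differential states \(\mathbf{x}\), algebraic variables \(\mathbf{y}\), dynamics \(\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x},\mathbf{y})\), and constraints \(\mathbf{0}=\mathbf{g}(\mathbf{x},\mathbf{y})\). The backward Euler method with step size \(h\) yields, at every time step, the nonlinear algebraic system

\[\mathbf{x}_{k+1}-\mathbf{x}_{k}-h\,\mathbf{f}(\mathbf{x}_{k+1},\mathbf{y}_{k+1})=\mathbf{0},\qquad\mathbf{g}(\mathbf{x}_{k+1},\mathbf{y}_{k+1})=\mathbf{0},\]

which is solved for \(\mathbf{z}_{k+1}=(\mathbf{x}_{k+1},\mathbf{y}_{k+1})\) by Newton-Raphson iterations \(\mathbf{z}^{(i+1)}=\mathbf{z}^{(i)}-\big[\partial\mathbf{F}/\partial\mathbf{z}\big]^{-1}\mathbf{F}(\mathbf{z}^{(i)})\), where \(\mathbf{F}\) stacks the two discretized residuals above.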
Common modeling for systems' control and estimation is based on an ordinary differential equation (ODE) formulation; this is due to the aforementioned limitation on system theory for DAEs. Typically, the formulation of ODE systems from DAE models is performed by either neglecting the algebraic constraints or by formulating a decoupled modeling approach [15]. The simplified models potentially limit the transient stability simulations and ultimately the estimation and control performance. Time-domain simulations resulting from the full DAE models of a power network can, for instance, give an accurate depiction of the dynamics under topological changes triggered by faults and be modeled to include uncertain non-generator loads from renewable energy resources.
The limitation on model fidelity from a control and estimation perspective is expressed in the form of the following research question: _How do we extend existing systems control theory, developed for ODE dynamical systems, to accurately apply it to DAE models?_
Descriptor systems--arising from DAE models--appear in numerous applications, with a few examples being chemical, electrical and mechanical systems. For this reason, there is rising interest in translating control and stability theory to the analysis of descriptor systems [16]. Recent studies--see [15, 16, 17, 18, 19, 20] and references therein--present literature on developing the state estimation and control theory of DAE systems, in particular that of linear DAEs. However, in this letter we aim to address the limitation on state estimation, control, and transient stability analysis of power systems by giving a new perspective on DAE to ODE system modeling. Therefore, we instead attempt to address the posed research question in the form of: _Is there a methodology to accurately restructure the DAE system into an ODE model without loss of information and thereby exploit existing ODE systems control theory?_
To that end, in this letter we introduce two simple yet effective methods to transform nonlinear DAE power system models into ODEs. The idea is to formulate ODE-structured representations of the network dynamics while retaining the full nonlinear DAE structure along with the algebraic constraints. These transformations then allow the utilization of |
2309.15806 | Lyra: Orchestrating Dual Correction in Automated Theorem Proving | Large Language Models (LLMs) present an intriguing avenue for exploration in
the field of formal theorem proving. Nevertheless, their full potential,
particularly concerning the mitigation of hallucinations and refinement through
prover error messages, remains an area that has yet to be thoroughly
investigated. To enhance the effectiveness of LLMs in the field, we introduce
the Lyra, a new framework that employs two distinct correction mechanisms: Tool
Correction (TC) and Conjecture Correction (CC). To implement Tool Correction in
the post-processing of formal proofs, we leverage prior knowledge to utilize
predefined prover tools (e.g., Sledgehammer) for guiding the replacement of
incorrect tools. Tool Correction significantly contributes to mitigating
hallucinations, thereby improving the overall accuracy of the proof. In
addition, we introduce Conjecture Correction, an error feedback mechanism
designed to interact with prover to refine formal proof conjectures with prover
error messages. Compared to the previous refinement framework, the proposed
Conjecture Correction refines generation with instruction but does not collect
paired (generation, error & refinement) prompts. Our method has achieved
state-of-the-art (SOTA) performance on both miniF2F validation (48.0% -> 55.3%)
and test (45.5% -> 51.2%). We also present 3 IMO problems solved by Lyra. We
believe Tool Correction (post-process for hallucination mitigation) and
Conjecture Correction (subgoal adjustment from interaction with environment)
could provide a promising avenue for future research in this field. | Chuanyang Zheng, Haiming Wang, Enze Xie, Zhengying Liu, Jiankai Sun, Huajian Xin, Jianhao Shen, Zhenguo Li, Yu Li | 2023-09-27T17:29:41Z | http://arxiv.org/abs/2309.15806v4 | # Lyra: Orchestrating Dual Correction in Automated Theorem Proving
###### Abstract
Large Language Models (LLMs) present an intriguing avenue for exploration in the field of formal theorem proving. Nevertheless, their full potential, particularly concerning the mitigation of hallucinations and refinement through prover error messages, remains an area that has yet to be thoroughly investigated. To enhance the effectiveness of LLMs in the field, we introduce the Lyra, a new framework that employs two distinct correction mechanisms: _Tool Correction_ (TC) and _Conjecture Correction_ (CC). To implement _Tool Correction_ in the post-processing of formal proofs, we leverage prior knowledge to utilize predefined prover tools (e.g., Sledgehammer) for guiding the replacement of incorrect tools. _Tool Correction_ significantly contributes to mitigating hallucinations, thereby improving the overall accuracy of the proof. In addition, we introduce _Conjecture Correction_, an error feedback mechanism designed to interact with prover to refine formal proof conjectures with prover error messages. Compared to the previous refinement framework, the proposed _Conjecture Correction_ refines generation with instruction but does not collect paired (generation, error & refinement) prompts. Our method has achieved state-of-the-art (SOTA) performance on both miniF2F validation (\(48.0\%\to 55.3\%\)) and test (\(45.5\%\to 51.2\%\)). We also present 3 IMO problems solved by Lyra. We believe _Tool Correction_ (post-process for hallucination mitigation) and _Conjecture Correction_ (subgoal adjustment from interaction with environment) could provide a promising avenue for future research in this field.
## 1 Introduction
Formal proof automation is a challenging task that has garnered increased attention in recent years (Bansal et al., 2019; Polu and Sutskever, 2020; Lample et al., 2022; Jiang et al., 2022; Wu et al., 2022; Wang et al., 2023b). Unlike other domains where deep learning approaches have shown remarkable success thanks to abundant training data, formal proof data is scarce, and previous studies have therefore proposed techniques to synthesize additional formal training data (Wu et al., 2022; Polu and Sutskever, 2020; Han et al., 2021; Bansal et al., 2019; Polu et al., 2023). Recently, large language models (LLMs) trained on informal mathematical data have showcased impressive quantitative reasoning abilities (Lewkowycz et al., 2022; Welleck et al., 2022).
Draft, Sketch, and Prove (DSP) (Jiang et al., 2023) maps informal proofs to formal proof sketches, and uses the sketches to guide an automated prover by directing its search to easier sub-problems. Following this direction, Subgoal-based Learning (Zhao et al., 2023) replaces the informal proof with subgoal-proof and learns how to optimize subgoal demonstration selection. However, they have not been able to post-process LLM generation or gradually refine previous generations.
In this paper, we seek to build Lyra based on LLM, focusing on formal theorem proving. There are two major challenges for LLM generation: 1) hallucination mitigation; 2) interaction with
the environment. To mitigate LLM hallucination, we propose _Tool Correction_ to leverage prior knowledge and rules to guide incorrect tool replacement. As shown in the observation in Figure 1, prover fails to prove conjecture \(x=19*(x\ div\ 19)+4\) because LLM wrongly believes that by (simp add: div_mult_mod_eq) can prove \(x=19*(x\ div\ 19)+4\), while the conjecture is correct but employed tool simp is not powerful enough. _Tool Correction_ employs predefined tools (e.g. sledgehammer, arith) to guide incorrect tool replacement and finally prove the conjecture. We also propose a general interaction technique with LLM named _Conjecture Correction_. To further improve and modify the conjectures, _Conjecture Correction_ leverages a general framework that can easily integrate feedback from any environment, in this case, the Isabelle prover, to further polish conjectures. We believe the Lyra presents our insights to mitigate LLM hallucination and interact with the environment.
Figure 1: **Our proposed Lyra framework contains two modules. _Tool Correction_: employ the predefined tools to replace the incorrect tools and prove the conjectures. The prover fails because LLM wrongly believes that by (simp add: div_mult_mod_eq) can prove \(x=19*(x\ div\ 19)+4\). Actually, the conjecture is correct and simple, and the prover fails to prove it because it employs an incorrect tool. Hence, the prover successfully proves the conjecture when employing by arith. _Conjecture Correction_: We design an interaction framework that integrates the previous formal sketch and prover error messages for better sketch generation. The steps with the ATPWithTC delimiters are generated by an automated prover with _Tool Correction_.**

The proposed method significantly outperforms competing approaches in formal theorem-proving tasks, achieving a pass rate of \(51.2\%\) on the miniF2F test dataset, a \(5.7\%\) absolute improvement over the previous state-of-the-art. Furthermore, the insights gained from the _Tool Correction_ and _Conjecture Correction_ designs can be applied to other frameworks that need to interact with the environment. In summary, our contributions are as follows:
* We introduce Lyra, a method composed of two components _Tool Correction_ and _Conjecture Correction_, to guide automated provers with formal proof sketches.
* _Tool Correction_ employs the predefined tools to replace the incorrect tools to mitigate hallucination, while _Conjecture Correction_ integrates previous formal sketch and prover error messages to refine proof.
* We establish a new SOTA of 55.3% and 51.2% on miniF2F validation and test, outperform previous best 7.3% and 5.7% respectively. And we newly solve two IMO problems: IMO_1974_p5 and IMO_1981_p6.
## 2 Related Works
**Interactive theorem provers.** Contemporary mathematical verification systems are centered on interactive theorem provers (ITPs), including Isabelle (Paulson, 1994), Lean (de Moura et al., 2015), Coq (Barras et al., 1997), and Metamath (Megill and Wheeler, 2019). ITPs lay the groundwork for mathematical definitions and theorems on a robust logical foundation through their core kernels. The validation of each theorem is kernel-based and takes place within the ITP. To achieve formal proof, a theorem is initially expressed in the programming language of the ITP and systematically refined into simpler subgoals until it aligns with previously established facts. In this paper, the chosen ITP is Isabelle, known for its potent prover tools, including sledgehammer (Paulson, 2010).
**Machine learning for formal proving.** Numerous approaches advocate the integration of machine learning with contemporary interactive theorem provers (ITPs) (Yang and Deng, 2019; Gauthier et al., 2021). They leverage the recent advancements in language models (Polu and Sutskever, 2020; Han et al., 2021; Polu et al., 2023; Jiang et al., 2022; Lample et al., 2022; Mikula et al., 2023). These techniques recommend actions based on the current proving state, and the tree search identifies a sequence of correct steps using actions provided by the language model. Potent methods like MCTS (Silver et al., 2018; Wu et al., 2021; Laurent and Platzer, 2022) or dynamic-tree MCTS (Wang et al., 2023a) are utilized for this purpose. Previous work (Wu et al., 2022) has demonstrated the few-shot statement autoformalization capability of LLMs (Chowdhery et al., 2022). In investigating these findings' applicability to proof autoformalization, DSP (Jiang et al., 2023) conducted an in-depth analysis using Draft, Sketch, and Prove. Subgoal-based Learning (Zhao et al., 2023) further employs a subgoal-based informal proof approach. In an effort to support the open-source community, LeanDojo (Yang et al., 2023) created a Lean playground that includes toolkits, data, models, and benchmarks. While these methods directly use the results generated by LLMs, we adopt a different approach by employing predefined tools to post-process the generations to mitigate hallucination, specifically _Tool Correction_.
**Large language model refinement.** Calibration studies conducted on LLMs reveal that the probabilistic predictions made by current LLMs are closely aligned with the actual frequencies of token occurrences, resulting in well-calibrated predictions for specific tasks (Guo et al., 2017; Kadavath et al., 2022; Jiang et al., 2020). As LLMs exhibit reliable calibration, an increasing number of research studies emphasize using self-evaluation for verification. For instance, Reflexion (Shinn et al., 2023) leverages an agent with dynamic memory and self-reflection capabilities, while Self-Refine (Madaan et al., 2023) proposes a method to generate outputs from LLMs and refine their previously generated outputs based on their own feedback. Taking a similar approach, methods like Self-Debug (Chen et al., 2023) and CRITIC (Gou et al., 2023) interact with code interpreters to further debug. In contrast, Progressive-Hint Prompting (Zheng et al., 2023) iteratively extracts hints from the LLM's previous answers to guide the next answer generation. However, previous works require extensive prompts, including generation prompts and refinement prompts. Our approach _Conjecture Correction_ refines generation with instruction but does not collect paired (generation, error & refinement) prompts.
```
# tactic_list: list of the tactics of the formal proof
# prover: Isabelle Prover
# TC_usage: whether to employ Tool Correction
tool_heuristics = ['by auto', 'by arith', 'by blast', 'by simp', 'by fastforce',
                   'by force', 'by eval', 'by presburger', 'by sos', 'by linarith',
                   'by (auto simp: field_simps)', 'sledgehammer']
for tactic in tactic_list:
    use_heuristics = False
    output = prover.run_tactic(tactic)
    if output['error'] is not None:
        if TC_usage:  # use Tool Correction or not
            if tactic.strip().startswith('by') or tactic.strip() == '.':
                use_heuristics = True
        if ('sledgehammer' in tactic) or use_heuristics:
            for tool_try in tool_heuristics:
                output = prover.run_tactic(tool_try)
                if output['error'] is None:
                    break
    if output['error'] is not None:
        return 'tactic_failed', output
    if output['tactic_state'] == 'no goals':
        return 'success', output
return 'proof_incomplete', output
```
**Algorithm 1** Pseudocode of _Tool Correction_ in a Python-like style.
## 3 Method
This section describes our Lyra for formal proof automation, which leverages _Tool Correction_ and _Conjecture Correction_ to guide automated formal theorem provers with proof sketches.
### Background: Pipeline of DSP
DSP (Jiang et al., 2023) aims to generate a formal sketch from an informal statement, verifiable by an off-the-shelf automated theorem prover. DSP creates \(N\) demonstration examples, denoted as \(E=E_{1},E_{2},...,E_{N}\), each containing informal/formal components (statements, proofs, sketches). The pipeline of DSP has the following three steps.
**Informal proof generation.** There are two scenarios: one with an existing human informal proof and another where a language model generates draft-proof candidates without a human reference. For LLM-generated informal proof, DSP provides the model with a few examples containing both (statement, informal proof) for informal proof generation. Subsequently, DSP presents a problem statement that needs to be translated and the model then generates the subsequent tokens to produce the desired informal proof.
**Formal proof generation.** DSP leverages the few-shot learning capabilities of a large language model. Specifically, DSP provides the model with a few example pairs containing (statement, informal proof, formal sketch) for formal proof generation. Subsequently, DSP presents a (statement, informal proof) that needs to be translated. The model then generates the subsequent tokens to produce the desired formal sketch.
**Prover validation.** In the final phase, off-the-shelf automated provers address sketch gaps. These systems create formally valid proofs. The DSP framework remains agnostic to prover type (symbolic, neural, hybrid). Successful prover results yield verifiable formal proofs.
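Schematically, and with purely illustrative call signatures (the `llm` and `prover` objects and the prompt variables are placeholders, not the actual implementation), the three-stage pipeline can be summarized in Python as:

```
def dsp_attempt(statement, llm, prover, draft_examples, sketch_examples,
                human_informal_proof=None):
    # 1) Draft: use a human informal proof if available, otherwise let the LLM draft one.
    informal_proof = human_informal_proof or llm(draft_examples + statement)
    # 2) Sketch: map (statement, informal proof) to a formal proof sketch with open gaps.
    formal_sketch = llm(sketch_examples + statement + informal_proof)
    # 3) Prove: ask an off-the-shelf automated prover to close the remaining gaps.
    return prover.verify(formal_sketch)
```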
### Tool Correction
_Tool Correction_ leverages prior knowledge to employ predefined tools (e.g., sledgehammer) to guide incorrect tool replacement, as shown in Algorithm 1. We introduce the _Tool Correction_ as a remedy to alleviate the generation errors stemming from Large Language Models (LLMs). Through empirical observation, it becomes evident that despite the factual accuracy of conjectures, LLMs at times adopt misguided tools that do not withstand validation by theorem provers, as shown in Figure 1.
For instance, consider the statement \(x=19*(x\;div\;19)+4\), where the LLM proposes to utilize the tactic by (simp add: div_mult_mod_eq), leading to failure. This is the LLM hallucination,
as by (simp add: div_mult_mod_eq) is suited for proving \(a=a\ div\ b\ *\ b+a\ mod\ b\) but not \(x=19*(x\ div\ 19)+4\). Substituting it with by arith enables the theorem prover to successfully verify \(x=19*(x\ div\ 19)+4\). Hence, in certain instances, LLM might formulate correct conjectures but employ inappropriate tools, resulting in unsuccessful proof attempts. To address this, _Tool Correction_ leverages predefined tools to enhance the success rate.
The _Tool Correction_ approach entails the validation of a given tactic \(t\) using Isabelle. If validation succeeds, we proceed; if not, _Tool Correction_ intervenes to modify the tactic. Specifically, when a tactic is equal to "." or commences with "by" or "sledgehammer" but the tactic fails, we attempt the application of \(t_{tool}\). This \(t_{tool}\) can be either: 1) "sledgehammer"; or 2) by tool with tool belonging to the set (auto, simp, blast, fastforce, force, eval, presburger, sos, arith, linarith, auto simp: field_simps).
By integrating _Tool Correction_, we systematically explore the applicability of "sledgehammer" and 11 heuristic tools. If any of these successfully passes the theorem prover, we progress to the subsequent tactics. However, if the prover still fails to prove the tactic after all \(t_{tool}\) candidates have been tried, the overall proof attempt is deemed unsuccessful.
### Conjecture Correction
For _Conjecture Correction_, we design a framework that can easily integrate previous formal sketches and error messages from the prover to improve sketch generation. LLMs, particularly GPT-4, can leverage prior responses or contextual cues for improved output. Nonetheless, integrating feedback into mathematical proving remains a challenge. This stems from two primary factors: 1) diverse theorem provers employ distinct syntax, complicating the design of varied prompts; 2) prompts often require an extensive token count, incurring a high computational cost and exceeding model length limits. To address these limitations, Lyra uses _Conjecture Correction_, offering a versatile refinement pipeline that can transform a non-refined framework into a refined one. Compared to previous refinement frameworks, such as Self-Refine (Madaan et al., 2023) or Self-Debug (Chen et al., 2023), the proposed _Conjecture Correction_ refines generation with instruction but does not collect paired (generation, error & refinement) prompts. The details are shown in Algorithm 2.

```
# round_count: the current round number
# prompt_sample: the prompt and the proposed question
# previous_response: the previous formal proof
# error_info: error information from the Isabelle prover
input = [{"role": "system",
          "content": "You are an expert in Mathematical Proof and Isabelle Proof "
                     "Assistant. Follow the given examples and complete the proof "
                     "with Isabelle Proof Assistant"},
         {"role": "user", "content": prompt_sample}]
if round_count % 5 != 0:
    # If False: the initial round of a patch. Otherwise: a refine round.
    input.append({"role": "assistant", "content": previous_response})
    input.append({"role": "user",
                  "content": "The last proof has the following errors from Isabelle "
                             "Prover. Therefore,\n"
                             "1) Please Follow the above prompt.\n"
                             "2) And Utilize the Following Errors to redo the last "
                             "formal proof.\n" + error_info})
```
**Algorithm 2** Pseudocode of _Conjecture Correction_ in a Python-like style.
Specifically, Lyra generates a fresh initial-round proof at interaction rounds \(K\), \(2K\), \(3K\) and so on, refining its generation in the remaining rounds. For example, when working with 200 attempts and setting \(K\) to 5, _Conjecture Correction_ partitions the 200 attempts into 40 patches. Each patch consists of the first proof derived from DSP, followed by four subsequent refined proofs that build upon the previous proof and incorporate the error message provided by the prover.
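As a rough illustration of this schedule, the following sketch (our own pseudocode, with hypothetical `generate_initial_proof`, `refine_proof`, and `prover_check` callables) partitions the attempt budget into patches of size \(K\): the first attempt of each patch produces a fresh DSP-style proof, while the remaining attempts refine the previous proof using the prover's error messages.

```
def run_attempts(statement, generate_initial_proof, refine_proof, prover_check,
                 total_attempts=200, K=5):
    """Hypothetical driver: with 200 attempts and K = 5 there are 40 patches,
    each consisting of one fresh DSP-style proof and four refinements."""
    previous_proof, previous_errors = None, None
    for attempt in range(total_attempts):
        if attempt % K == 0:
            # First attempt of a patch: a fresh draft-sketch-prove style proof.
            proof = generate_initial_proof(statement)
        else:
            # Remaining attempts of the patch: refine using prover feedback.
            proof = refine_proof(statement, previous_proof, previous_errors)
        success, errors = prover_check(statement, proof)
        if success:
            return proof
        previous_proof, previous_errors = proof, errors
    return None  # no valid proof found within the attempt budget
```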
## 4 Experiment
### Dataset
In this study, we assess our approach using the miniF2F dataset (Zheng et al., 2021), which is a collection of \(488\) formal mathematical problems derived from high-school competitions and expressed in three formal languages: Lean (de Moura et al., 2015), HOL-Light (Bansal et al., 2019), and Isabelle (Paulson, 1994). The dataset is divided into validation and test sets, each containing \(244\) problems. These problems are sourced from three distinct categories, with \(260\) problems extracted from the MATH dataset (Hendrycks et al., 2021), \(160\) problems taken from actual high school mathematical competitions (AMC, AIME, and IMO), and \(68\) problems specially crafted to mirror the difficulty level of the aforementioned competitions.
**Evaluation.** The objective of our study is to generate formal sketches for the problems in the miniF2F dataset. We consider a proof valid if and only if (a) it does not contain any "cheating" keywords (sorry and oops) that terminate a proof without completion, and (b) Isabelle is able to verify the corresponding formal statement with the proof.
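A minimal sketch of such a validity check is given below; the `isabelle_verify` callable standing in for an actual Isabelle session is our assumption, not an interface defined in this work.

```
CHEATING_KEYWORDS = ("sorry", "oops")

def is_valid_proof(formal_statement, proof, isabelle_verify):
    # (a) Reject proofs containing "cheating" keywords that end a proof early.
    if any(keyword in proof for keyword in CHEATING_KEYWORDS):
        return False
    # (b) The prover must verify the formal statement with the given proof.
    return isabelle_verify(formal_statement, proof)
```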
**Implementation details.** In our research, we utilized GPT-4 as the Large Language Model (LLM) for generating informal drafts and formal sketches. The temperature of GPT-4 was set to 0.7, with 200 attempts. The details of the baselines are shown in the Appendix.
### Main Results
Table 1 presents the distribution of successful formal proofs obtained from the miniF2F dataset using the interactive theorem prover Isabelle. An examination of the results presented in Table 1 reveals a conspicuous enhancement in the efficacy of the Sledgehammer automated prover, owing to the integration of \(11\) supplementary heuristic tactics (Jiang et al., 2023). Noteworthy achievements are also realized through deploying the DSP-based methods (DSP and Subgoal), attaining success rates of \(39.3\%\) and \(45.5\%\), respectively on the miniF2F test set.
\begin{table}
\begin{tabular}{l c c} \hline \hline Success rate & miniF2F-valid & miniF2F-test \\ \hline _Baselines_ & & \\ \hline Sledgehammer (Paulson, 2010) & \(9.9\%\) & \(10.4\%\) \\ Sledgehammer + heuristics (Jiang et al., 2023) & \(18.0\%\) & \(20.9\%\) \\ Thor (Jiang et al., 2022) & \(28.3\%\) & \(29.9\%\) \\ Thor + expert iteration (Wu et al., 2022) & \(37.3\%\) & \(35.2\%\) \\ \hline _Draft_, _Sketch_, _and Prove (100 attempts)_ (Jiang et al., 2023) & & \\ \hline Human informal proof & \(42.6\%\) & \(39.3\%\) \\ \(540\)B Minerva informal proof & \(42.6\%\) & \(38.9\%\) \\ \hline _Subgoal-Learning (100 attempts)_ (Zhao et al., 2023) & \(48.0\%\) & \(45.5\%\) \\ \hline _Lyra (Ours)_ & & \\ \hline GPT-4 informal proof (100 attempts) & \(52.8\%\) & \(44.2\%\) \\ GPT-4 informal proof (200 attempts) & \(54.9\%\) & \(47.9\%\) \\ Human informal proof (100 attempts) & \(52.0\%\) & \(47.1\%\) \\ Human informal proof (200 attempts) & \(\mathbf{55.3\%}\) & \(\mathbf{51.2\%}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Proving success rates on the miniF2F dataset with Isabelle. The table displays the success rates of previous works and the Lyra, using both human and GPT-4 informal proofs. The highest success rates for each set are highlighted in bold.**
By harnessing informal proofs generated by GPT-4, our proposed method achieves success rates of \(54.9\%\) and \(47.9\%\) on the validation and test sets of miniF2F respectively. This performance persists even when the attempt number is set at \(100\), affirming its robustness. When the attempt number is \(100\), compared to 540B Minerva informal proof with DSP, our proposed Lyra improves the performance on miniF2F validation set from \(42.6\%\) to \(52.8\%\) and miniF2F test set from \(38.9\%\) to \(44.2\%\). This outcome can be attributed to the _Tool Correction_ and _Conjecture Correction_.
In instances where human informal proofs are employed, our proposed method demonstrates impressive success rates of \(55.3\%\) and \(51.2\%\) on the validation and test sets of miniF2F. Comparative analysis against DSP reveals an improvement of \(12.7\%\) and \(11.9\%\) on the validation and test sets respectively for miniF2F. Furthermore, when contrasted with the previous state-of-the-art Subgoal-Learning model, our approach showcases an advancement of \(7.3\%\) and \(5.7\%\) on the miniF2F validation and test sets respectively.
The performance of human informal proofs surpasses that of GPT-4 generated counterparts, especially on the test set. This substantiates the notion that precision in informal proofs is important for generating formal sketches.
### Ablation Study
**GPT-4 is better than Codex, especially on the miniF2F validation dataset.** In the absence of _Tool Correction_ and _Conjecture Correction_, our proposed method reduces to DSP. Referring to Table 2, when considering the informal proof generated by an LLM (GPT-4 or 540B Minerva), GPT-4 is better than Codex (Chen et al., 2021). When compared with the deployment of Codex for generating formal sketches, GPT-4 demonstrates improvements of \(5.3\%\) and \(0.4\%\) on the validation and test subsets of miniF2F, respectively, while utilizing the same attempt number \(100\) and human informal proof. This substantiates the notion that GPT-4 indeed enhances performance.
_Tool Correction_: **consistently improves performance.** As evident from Table 2 and Figure 2, the inclusion of _Tool Correction_ yields enhanced performance. When assessing GPT-4-generated informal proofs on the miniF2F test set, _Tool Correction_ elicits improvements of \(4.1\%\) and \(7.0\%\) in the absence and presence of _Conjecture Correction_, respectively. When considering human informal proofs on the miniF2F test set, _Tool Correction_ showcases enhancements of \(3.3\%\) and \(8.2\%\) in scenarios devoid of and accompanied by _Conjecture Correction_, respectively. Therefore, regardless of whether the informal sketch is generated by GPT-4 or created manually by a human, _Tool Correction_ consistently enhances performance and can further benefit from the addition of _Conjecture Correction_.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Attempt & Formal Proof & Informal Proof & TC & CC & miniF2F-valid & miniF2F-test \\ \hline \multirow{4}{*}{100} & Codex & \(540\)B Minerva & ✗ & ✗ & \(42.6\%\) & \(38.9\%\) \\ & GPT-4 & GPT-4 & ✗ & ✗ & \(48.3\%\) & \(38.9\%\) \\ \cline{2-6} & Codex & Human & ✗ & ✗ & \(42.6\%\) & \(39.3\%\) \\ & GPT-4 & Human & ✗ & ✗ & \(47.9\%\) & \(39.7\%\) \\ \cline{2-6} & GPT-4 & GPT-4 & ✓ & ✓ & \(52.8\%\) & \(44.2\%\) \\ & GPT-4 & Human & ✓ & ✓ & \(52.0\%\) & \(47.1\%\) \\ \hline \multirow{4}{*}{200} & GPT-4 & GPT-4 & ✗ & ✗ & \(49.5\%\) & \(40.9\%\) \\ & GPT-4 & GPT-4 & ✓ & ✗ & \(55.3\%\) & \(45.0\%\) \\ & GPT-4 & GPT-4 & ✗ & ✓ & \(48.3\%\) & \(40.9\%\) \\ & GPT-4 & GPT-4 & ✓ & ✓ & \(54.9\%\) & \(47.9\%\) \\ \cline{2-6} & GPT-4 & Human & ✗ & ✗ & \(50.4\%\) & \(42.6\%\) \\ & GPT-4 & Human & ✓ & ✗ & \(52.8\%\) & \(45.9\%\) \\ & GPT-4 & Human & ✗ & ✓ & \(46.7\%\) & \(43.0\%\) \\ & GPT-4 & Human & ✓ & ✓ & \(\mathbf{55.3\%}\) & \(\mathbf{51.2\%}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Ablation results on the miniF2F dataset with Isabelle. There are three important conclusions: 1) GPT-4 is better than Codex for mathematical proving; 2) _Tool Correction_ can consistently improve performance; 3) _Conjecture Correction_ can improve performance but needs more attempts. **Our proposed method degrades to DSP**(Jiang et al., 2023) **when without _Tool Correction_ and _Conjecture Correction_.**
_Conjecture Correction_: **further improves performance, prefers a more powerful prover, and requires more attempts to converge.** The outcomes presented in Table 2 and illustrated in Figure 2 underscore the efficacy of integrating _Conjecture Correction_, albeit at the expense of requiring an increased number of attempts to achieve convergence. When considering human informal proofs on the miniF2F test set, _Conjecture Correction_ showcases enhancements of 0.4% and 5.3% in scenarios devoid of and accompanied by _Tool Correction_, respectively. This suggests that _Conjecture Correction_ improves proof quality, but needs a more powerful prover (e.g., with _Tool Correction_) to fill the formal gaps. _Conjecture Correction_ needs more attempts to converge because it modifies the initial proof to generate subsequent proofs, which strongly hinges on the quality of the initial proof. Specifically, _Conjecture Correction_ partitions the pool of 200 attempts into 40 patches, wherein the first proof originates from DSP, and the subsequent four are based on the initial proof. Furthermore, it is worth noting that, in theory, any problem solvable through DSP remains solvable using our approach, as DSP is equivalent to our initial proof generation without _Tool Correction_.
**Attempt number: Lyra benefits more from attempt number increments.** In the absence of _Tool Correction_ and _Conjecture Correction_, our proposed method reduces to DSP. Within the validation set with human informal proofs, when the number of attempts is escalated from \(100\) to \(200\) (shown in Table 2), the performance of DSP experiences a gain from \(47.9\%\) to \(50.4\%\), achieving a \(2.5\%\) improvement. Conversely, our proposed approach exhibits a performance improvement from \(52.0\%\) to \(55.3\%\), reflecting a more substantial \(3.3\%\) enhancement. For the test set, DSP's performance improves from \(39.7\%\) to \(42.6\%\), marking a \(2.9\%\) increment. In contrast, our method demonstrates an increment from \(47.1\%\) to \(51.2\%\), indicating a larger \(4.1\%\) boost. This divergence implies that our proposed approach effectively surpasses the performance limitations of DSP, highlighting the potential efficacy of expanding the attempt number to further enhance performance differences.
### Case Study
We solve another IMO problem IMO_1959_p1 with GPT-4 informal proof, which is also solved via DSP with 540B Minerva. Furthermore, to present the effectiveness of our method, we provide
Figure 2: **Number of problems solved on miniF2F against the number of autoformalization attempts per problem. On miniF2F validation and test set, we have shown the results of _Tool Correction_ (TC) and _Conjecture Correction_ (CC) on human informal proof and GPT-4 informal proof respectively.**
a formal sketch of an IMO problem named IMO_1974_p5 that remains unproven by earlier state-of-the-art methods. As demonstrated in Figure 3, our Lyra successfully proves IMO_1974_p5 with _Tool Correction_ and _Conjecture Correction_. We have shown the interaction details of IMO_1974_p5 and IMO_1959_p1 in the Appendix.
## 5 Conclusion
In this paper, we introduced Lyra, a novel pipeline that takes advantage of _Tool Correction_ and _Conjecture Correction_. _Tool Correction_ employs prior knowledge to employ predefined tools (e.g. sledgehammer) to guide incorrect tool replacement. _Conjecture Correction_, interacting with the prover environment, integrates previous formal sketch and prover error messages for better sketch generation. We demonstrated the feasibility and effectiveness of Lyra by reaching state-of-the-art performance 55.3% and 51.2% on the miniF2F dataset validation and test, respectively, with the Isabelle theorem prover. Central to our method is the incorporation of prior knowledge and the development of a comprehensive GPT-4 refinement framework. Our ablations showed that both _Tool Correction_ and _Conjecture Correction_ are critical to the success of Lyra.
Figure 3: **A successful formal proof synthesized with human informal proof. With _Tool Correction_ and _Conjecture Correction_, we successfully solve an IMO problem IMO_1974_p5. The steps with the ATPWithTC delimiters are generated by an automated prover with _Tool Correction_. We also solve IMO_1959_p1 with GPT-4 informal proof, which is shown in the Appendix. |
2310.20706 | DDAM-PS: Diligent Domain Adaptive Mixer for Person Search | Person search (PS) is a challenging computer vision problem where the
objective is to achieve joint optimization for pedestrian detection and
re-identification (ReID). Although previous advancements have shown promising
performance in the field under fully and weakly supervised learning fashion,
there exists a major gap in investigating the domain adaptation ability of PS
models. In this paper, we propose a diligent domain adaptive mixer (DDAM) for
person search (DDAP-PS) framework that aims to bridge a gap to improve
knowledge transfer from the labeled source domain to the unlabeled target
domain. Specifically, we introduce a novel DDAM module that generates moderate
mixed-domain representations by combining source and target domain
representations. The proposed DDAM module encourages domain mixing to minimize
the distance between the two extreme domains, thereby enhancing the ReID task.
To achieve this, we introduce two bridge losses and a disparity loss. The
objective of the two bridge losses is to guide the moderate mixed-domain
representations to maintain an appropriate distance from both the source and
target domain representations. The disparity loss aims to prevent the moderate
mixed-domain representations from being biased towards either the source or
target domains, thereby avoiding overfitting. Furthermore, we address the
conflict between the two subtasks, localization and ReID, during domain
adaptation. To handle this cross-task conflict, we forcefully decouple the
norm-aware embedding, which aids in better learning of the moderate
mixed-domain representation. We conduct experiments to validate the
effectiveness of our proposed method. Our approach demonstrates favorable
performance on the challenging PRW and CUHK-SYSU datasets. Our source code is
publicly available at \url{https://github.com/mustansarfiaz/DDAM-PS}. | Mohammed Khaleed Almansoori, Mustansar Fiaz, Hisham Cholakkal | 2023-10-31T17:59:14Z | http://arxiv.org/abs/2310.20706v1 | # DDAM-PS: Diligent Domain Adaptive Mixer for Person Search
###### Abstract
Person search (PS) is a challenging computer vision problem where the objective is to achieve joint optimization for pedestrian detection and re-identification (ReID). Although previous advancements have shown promising performance in the field under fully and weakly supervised learning fashion, there exists a major gap in investigating the domain adaptation ability of PS models. In this paper, we propose a diligent domain adaptive mixer (DDAM) for person search (DDAP-PS) framework that aims to bridge a gap to improve knowledge transfer from the labeled source domain to the unlabeled target domain. Specifically, we introduce a novel DDAM module that generates moderate mixed-domain representations by combining source and target domain representations. The proposed DDAM module encourages domain mixing to minimize the distance between the two extreme domains, thereby enhancing the ReID task. To achieve this, we introduce two bridge losses and a disparity loss. The objective of the two bridge losses is to guide the moderate mixed-domain representations to maintain an appropriate distance from both the source and target domain representations. The disparity loss aims to prevent the moderate mixed-domain representations from being biased towards either the source or target domains, thereby avoiding overfitting. Furthermore, we address the conflict between the two subtasks, localization and ReID, during domain adaptation. To handle this cross-task conflict, we forcefully decouple the norm-aware embedding, which aids in better learning of the moderate mixed-domain representation. We conduct experiments to validate the effectiveness of our proposed method. Our approach demonstrates favorable performance on the challenging PRW and CUHK-SYSU datasets. Our source code is publicly available at [https://github.com/mustansarfiaz/DDAM-PS](https://github.com/mustansarfiaz/DDAM-PS).
## 1 Introduction
Person search aims to optimize two conflicting subtasks: detection and re-identification (ReID) [39, 15, 6]. Detection focuses on localizing pedestrians in a given scene, while ReID is responsible for uniquely identifying individuals. This research problem becomes extremely complex due to the utilization of real-world data sources (such as CCTV), which often contain uncropped images with varying specifications, resolutions, lighting conditions, and other variations. While person search has been extensively explored under the fully supervised learning [39, 15, 1, 42] and weakly supervised learning [40, 22] paradigms, adapting it for unsupervised domain adaptation (UDA) generalization remains challenging, as there is a significant disparity between the distributions of the source and target domains.
Unsupervised domain adaptation (UDA) has demonstrated promising results in various domains, including aerial tracking [45], nighttime semantic segmentation [17],
Figure 1: Demonstration of the impact of domain adaptation with and without our proposed diligent domain adaptive mixer (DDAM) module for the person search problem. Suppose the source and target feature points are localized in hyperspace. In order to better transfer the source knowledge to the target domain, our proposed DDAM finds a moderate mixed-domain distribution to bridge the gap between the source and target distributions. Here, the various shapes and colors denote the different distributions and different person identities, respectively.
visual recognition [44, 25, 48], and person ReID [49, 32, 7]. Unlike fully supervised learning and weakly supervised learning, UDA focuses on bridging the gap between the ideal training set and real-world scenarios by leveraging labeled source data and transferring learned knowledge to unlabeled target domains. Li et al. [28] are the first to apply UDA to person search and proposed DAPS, a method that employs implicit alignment modules and pseudo-labeling to reduce the discrepancy between source and target domains. However, DAPS suffers from a lack of an explicit bridge to determine which critical information, such as similarity or dissimilarity, should be utilized to mitigate the domain discrepancy. Moreover, the implicit alignment modules employed in challenging real-world scenarios, where the person search (PS) model encounters scene challenges like occlusion and pose variations, as well as environmental challenges such as diverse indoor and outdoor scene distributions, may deteriorate the region of interest. Existing PS [15, 6, 30] methods based on Faster-RCNN [35] strive to jointly optimize the conflicting subtasks of detection and ReID. In an effort to address this issue, Chen et al. [6] introduced norm-aware embedding (NAE) to disentangle the two tasks. However, it still utilizes shared weights for both detection and ReID. Therefore, directly utilizing shared NAE representations for domain adaptation may further increase the complexity of person search.
To address the challenges mentioned above, we propose a diligent domain adaptive bridging mechanism to learn domain-invariant feature representations by introducing a bridge that reduces or minimizes the discrepancy between the two domains. Inspired by [9], we aim to enhance knowledge transfer between the source and target domains by learning mixed-domain representations from both domains. As discussed earlier, a significant domain shift exists between the distributions of the two domains. In Fig. 1, we illustrate the region of interest (RoI) proposals from the source and target distributions in hyperspace. Our bridging mechanism introduces hidden representations, referred to as moderate mixed-domain representations, with the objective of smoothly transferring RoI knowledge from the source domain to the target domain. To achieve this, we enforce two bridge losses on the moderate domain representations, minimizing the distance between the source and target domain representations. Additionally, we employ a disparity loss that regularizes the diversity between the two domains by maximizing the standard deviation. This regularization helps to avoid overfitting to either of the domains and facilitates gradual domain adaptation. Depending on the ambient nature of the mixed-domain representations, the source RoI labels can dominate or the inherent distribution of the target domain can be more exposed. The bridge losses and disparity loss work together to learn mixed-domain representations, allowing the model to effectively transfer source RoI knowledge and enhance discriminability in the target domain for the ReID task. Furthermore, we propose to decouple the norm-aware embeddings to mitigate the conflict between detection and ReID, which in turn simplifies the process of domain adaptation. Through experiments, we demonstrate that our approach surpasses the state-of-the-art method DAPS on the PRW and CUHK-SYSU datasets.
**Contribution:** Our contributions can be summarized as follows: (1) We propose an explicit diligent domain adaptive mixing mechanism to reduce the gap between the source and target domains in the person search domain adaptation problem. Specifically, we learn mixed domain representations that bridge the discrepancy between the two domains and facilitate the swift transfer of source information to the target domain, thereby promoting UDA person search tasks. (2) To enhance domain adaptation ability and generate elegant mixed domain representations, we introduce two bridge losses and a disparity loss. (3) To alleviate the conflict between detection and ReID and further improve domain adaptation, we propose the decoupling of the NAE representation. (4) Experimental results demonstrate the promising performance of our method on two datasets, outperforming state-of-the-art methods. These results highlight the merits of our approach.
## 2 Related Work
### Person Search
Person search aims to unify the sub-tasks of localization of pedestrians [2, 34, 35] and re-identification of the person of interest [46, 21, 29] in an end-to-end model. The PS problem has become a popular research topic, and methods have started to focus on the challenges of the two contradictory objectives. The challenge arises because pedestrian detection aims to extract common features to improve localization, while ReID pushes to extract features unique to each individual. PS methods can be classified into two-stage [26, 13, 19, 5] and one-stage [6, 15, 14, 47] methods. In two-stage methods, detection is first performed to locate the pedestrians employing off-the-shelf detectors, and the re-identification task is later performed over the cropped pedestrians for identity discrimination. Although two-stage methods provide promising performance, they incur immense computational costs.
On the contrary, one-stage methods perform both subtasks simultaneously in an end-to-end manner. These one-stage methods exploit the two-stage detector, i.e., Faster RCNN [35], and add an additional ReID loss for pedestrian identity discrimination. For example, OIM [39, 50] utilized Faster RCNN to implement an end-to-end person search model. NAE [6] disentangles detection and ReID into a norm and angle Euclidean representation, which helps minimize the cross-task conflict. Furthermore, the inherited disadvantages of Faster-RCNN affect the gains for PS; thus, sequential models [30, 47, 15, 14, 27] mitigate the low-quality proposals of the RPN. The sequential structure of SeqNet [30] allows the model to focus on reducing the cross-task conflict by obtaining better proposals and letting the final stage focus more on ReID. COAT [47] utilized transformer encoders to shuffle patches of individuals with each other in order to generalize better to unseen images. Studies such as PS-ARM [15] introduce an attention-aware relation mixer to exploit the relations between different local regions within the RoI of a person.
The works [1, 42] are motivated to further disentangle the two subtasks by moving away from the Faster R-CNN structure due to its limitations and computational cost. To address these issues, AlignPS [42] uses an anchor-free approach to eliminate the need for low-quality proposals. In addition, an aligned feature aggregation module mitigates the issues of scale, region, and task alignment. Cao et al. [1] introduce Deformable DETR [53] for PS, which simultaneously predicts detection and ReID. However, these fully supervised learning (FSL) methods suffer from the domain gap issue, which degrades the performance of the model. To reduce the reliance on identity labels, recent studies in weakly supervised person search (WSPS) [40, 22] assume access only to bounding box annotations without ID labels. Although this eases the labeling burden, these methods still require labeled data. Another recent study is DAPS [28], which introduces the concept of UDA in PS. DAPS focuses on the domain alignment between the source and the target domain, and also on a pseudo-labeling framework for the target domain. In contrast, we propose a novel bridging mechanism that enhances the discriminative learning for ReID by bridging the gap between the source and target domains as well as minimizing the cross-task conflict with localization to ease the domain generalization task.
### Domain Adaptation for Person ReID
Unsupervised domain adaptation (UDA) approaches for person ReID are trained on a labeled source domain and transfer the learned knowledge to the underlying target domain in an unsupervised manner. The UDA person ReID approaches are classified into three categories based on their training strategies, including GAN transferring [11, 37], joint training [52, 18], and fine-tuning [8, 16]. GAN transferring approaches utilize GAN models to disentangle the style discrepancy and transfer the learned information from the source to the target domain. Joint-training approaches employ a memory bank that combines the source and target data and train jointly, without building a bridge between the two domains to improve the target domain features. Fine-tuning methods train the model on source data and fine-tune it on target data using pseudo labels; the key challenge is to mitigate the effect of noisy pseudo labels. Nevertheless, UDA person ReID is based on cropped pedestrians and cannot be directly extended to the person search problem. DAPS [28] proposed a clustering mechanism to provide high-quality pseudo labels to expedite the target domain training. However, DAPS implicitly utilizes the source and target data while ignoring an explicit bridging mechanism to alleviate the gap between the two domains. Therefore, inspired by [9], we introduce an explicit mechanism to learn what similar/dissimilar information can be employed to improve the target domain features for ReID.
## 3 Method
The overall framework of our proposed diligent domain adaptive mixer for person search, DDAM-PS, is illustrated in Fig. 2. It jointly takes input from both the source and target domains. The base network of our framework is DAPS [28], which incorporates an implicit domain alignment module (DAM) to reduce the gap between the two domains. The source and target domain images are fed into a ResNet50 [24] backbone network to extract feature embeddings. These embeddings are then input to the region proposal network (RPN) [35] to generate RoI-aligned proposal candidates. To enhance the ReID task within the baseline, we introduce a diligent domain adaptive mixing (DDAM) mechanism. This mechanism aims to smooth out the extreme differences between the source and target domains, allowing for better domain adaptation. To achieve this, we fuse the source and target domain proposals to generate new mixed domain proposal representations. For the detection task, we employ a combination of a box regression head (\(\mathcal{L}_{box}\)) and a person vs. background classification head (\(\mathcal{L}_{Bg/Person\_cls}\)) to compute detection losses. For the ReID task, we impose the OIM [39] ReID loss, denoted as \(\mathcal{L}_{ReID}\), on the source and target domain features. Pseudo-labels for the target domain are generated using a clustering strategy. In order to generate moderate mixed-domain adaptive representations, we introduce two bridge losses, \(\mathcal{L}_{bridge}^{f}\) and \(\mathcal{L}_{bridge}^{\varphi}\). The \(\mathcal{L}_{bridge}^{f}\) loss is applied by utilizing the NAE embedding of the target-domain, source-domain, or mixed-domain representations to evaluate the distance across the domains. On the other hand, \(\mathcal{L}_{bridge}^{\varphi}\) is enforced using a hybrid memory projection module to measure the discrepancy between the mixed domain memory projections and the two domains. Additionally, we employ a disparity loss to regulate the domain mixing mechanism and prevent overfitting to either of the two extreme domains. As mentioned earlier, person search models face challenges in jointly optimizing the two subtasks of object detection and ReID. When adapting these models for UDA, the complexity further increases. To address this issue, we propose to decouple the norm-aware embeddings (NAE). This decoupling not only alleviates the conflict between the two subtasks but also improves the PS domain adaptation framework.
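Putting the pieces together, the overall training objective can be sketched as the sum of these terms; the equal weighting used below is our simplification for illustration, since the exact loss weights are not specified here.

```
def total_loss(l_box, l_cls, l_reid, l_bridge_f, l_bridge_phi, l_disp):
    # Detection terms (box regression, person-vs-background classification),
    # the OIM ReID term, the two bridge losses and the disparity loss.
    # Equal weighting is an assumption made for illustration only.
    return l_box + l_cls + l_reid + l_bridge_f + l_bridge_phi + l_disp
```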
### Diligent Domain Adaptive Mixer (DDAM)
Inspired by [9], we propose an explicit mixed domain representation learning approach to enhance knowledge transfer between source and target domains for UDA PS. The DDAM module takes \(n\) pairs of RoI pooled features from both the source (\(F^{s}\)) and target (\(F^{t}\)) domains and generates domain constraint weights, denoted as \(c^{s}\) and \(c^{t}\), respectively. The source (\(F^{s}\)) and target (\(F^{t}\)) RoI features are processed with average and maximum pooling operations. These pooled features are then concatenated for each domain and passed through a shared fully connected (FC) layer. The features from the FC layer are merged via an element-wise summation operation and input to a multi-layer perceptron (MLP) followed by a Softmax activation function, yielding the domain constraint weights. The overall procedure to obtain the domain constraint weights is illustrated in Figure 2-(b). The two domain constraint weights \(c^{s}\) and \(c^{t}\) are represented as [\(c^{s}\), \(c^{t}\)] = \(c\), where \(c\in\mathbb{R}^{2}\). Finally, the RoI mixed domain representations are obtained by mixing the source RoI features and target RoI features using the two domain constraint weights as follows:
\[F^{mix}=c^{s}\cdot F^{s}+c^{t}\cdot F^{t}. \tag{1}\]
Figure 2: The illustration of our proposed diligent domain adaptive mixer person search (DDAM-PS) framework. The source and target stem features are computed using a backbone and input to the RPN [35] to compute RoI align features. These source and target RoI align features (\(F^{s},F^{t}\)) are fed to the diligent domain adaptive mixer (DDAM) module to generate the mixed domain representations (\(F^{mix}\)), to reduce the domain gap for unsupervised domain adaptation (UDA), as shown in (b). To generate moderate mixed domain representations, we employ two bridges losses (\(\mathcal{L}^{f}_{bridge}\) and \(\mathcal{L}^{\varphi}_{bridge}\)) and a disparity loss (\(\mathcal{L}_{disp}\)). The \(\mathcal{L}^{f}_{bridge}\) loss is applied using the NAE embedding of the target-domain, source-domain, or mixed-domain representations to evaluate the distance across the domains. While \(\mathcal{L}^{\varphi}_{bridge}\) is enforced using a hybrid memory projection module to measure the discrepancy between the mixed domain memory projections and the two domains. The disparity loss is enforced to regulate the mixed domain features, to avoid overfitting using constraint weights (\(c^{s},c^{t}\)), obtained from the DDAM module. In addition, we propose to decouple the NAE module and apply separate NAE for both conflicting subtasks i.e., detection and ReID. This decoupling facilitates to adopt it for the UDA ReID task.
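The following PyTorch sketch illustrates one plausible reading of the DDAM module and the mixing of Eq. (1); layer widths and other implementation details are our assumptions rather than the authors' exact design.

```
import torch
import torch.nn as nn

class DDAM(nn.Module):
    """Sketch of the diligent domain adaptive mixer (dimensions are assumptions)."""
    def __init__(self, channels=2048, hidden=256):
        super().__init__()
        self.shared_fc = nn.Linear(2 * channels, hidden)     # shared between domains
        self.mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2))        # -> [c^s, c^t]

    def forward(self, f_s, f_t):
        # f_s, f_t: RoI-aligned features of shape (n, C, H, W) from source/target.
        def pool(f):
            avg = f.mean(dim=(2, 3))                          # average pooling
            mx = f.amax(dim=(2, 3))                           # max pooling
            return self.shared_fc(torch.cat([avg, mx], dim=1))
        merged = pool(f_s) + pool(f_t)                        # element-wise summation
        c = torch.softmax(self.mlp(merged), dim=1)            # domain constraint weights
        c_s, c_t = c[:, 0:1], c[:, 1:2]
        f_mix = c_s[..., None, None] * f_s + c_t[..., None, None] * f_t   # Eq. (1)
        return f_mix, c_s.squeeze(1), c_t.squeeze(1)
```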
### Moderate Domain Mixing
The effectiveness of domain mixing can be hindered by two factors: (1) The RoI pooled feature samples often exhibit diverse backgrounds, and individuals within both the intra-domain and inter-domain distributions may experience appearance variations. This includes challenges related to environmental factors such as indoor and outdoor scenes. (2) From Equation 1, we can generate an infinite number of mixed domain representations by exposing source and target domain's RoI features to the DDAM module. However, only a limited portion of these mixed domain representations is capable of effectively bridging the gap between the two extreme domains. These factors can potentially degrade the quality of the mixed domain representations.
In order to better learn the mixed domain distribution (\(P_{mix}\)), it should be located on the shortest path [20] between the source distribution (\(P_{s}\)) and the target distribution (\(P_{t}\)) (see Fig. 1). Although the baseline learns domain-invariant representations using the DAM module, this approach does not take into account the extreme classes of each domain, which is likely to affect the class distributions in each domain. Therefore, considering the shortest-distance definition, the mixed domain representations should follow two desired characteristics, which are ensured by enforcing dedicated losses. To bridge the extreme domains in the hyperspace, the distance \(d(\cdot)\) should be split proportionally, where \(c^{s}+c^{t}=1\) (using the softmax function) and \(c^{s},c^{t}\in[0,1]\). Thus, the moderate mixed domain representation can be obtained, utilizing the domain constraint weights, by identifying the points that are close to both \(P^{s}\) and \(P^{t}\) and localized along the shortest path. The problem can be framed as the following loss minimization:
\[\mathcal{L}_{bridge}=c^{s}\cdot d(P_{s},P_{mix}^{(c)})+c^{t}\cdot d(P_{t},P_{ mix}^{(c)}). \tag{2}\]
The enforced loss (Eq. 2) controls the gap between the two domains by minimizing the shift between them. The \(\mathcal{L}_{bridge}\) loss penalizes \(d(P_{t},P_{mix}^{(c)})\) more heavily if \(c^{t}>c^{s}\); otherwise, it penalizes \(d(P_{s},P_{mix}^{(c)})\) more. The domain constraint weights (\(c^{t},c^{s}\)) in DDAM ensure a steady domain adaptive procedure to balance the minimization of the domain shifts from the source to the target domains.
We impose bridge losses on the mixed domain feature representations and feed them to NAE for the ReID task.
\begin{table}
\begin{tabular}{|l|l|c|c|c|c|} \hline \multicolumn{2}{|c|}{} & \multicolumn{2}{c|}{Method} & \multicolumn{2}{c|}{CUHK-SYSU} & \multicolumn{2}{c|}{PRW} \\ \cline{3-6} \multicolumn{2}{|c|}{} & \multicolumn{1}{c|}{mAP} & \multicolumn{1}{c|}{top-1} & \multicolumn{1}{c|}{mAP} & \multicolumn{1}{c|}{top-1} \\ \hline \hline \multirow{4}{*}{Weakly-Supervised} & CLSA [26] & 87.2 & 88.5 & 38.7 & 65.0 \\ & IGPN [13] & 90.3 & 91.4 & 42.9 & 70.2 \\ & DPM [19] & - & - & 20.5 & 48.3 \\ & RDLR [23] & 93.0 & 94.2 & 42.9 & 70.2 \\ & MGTS [5] & 83.0 & 83.7 & 32.6 & 72.1 \\ & TCTS [36] & 93.9 & 95.1 & 46.8 & 87.5 \\ \hline \multirow{4}{*}{Weakly-Supervised} & OIM [39] & 75.5 & 78.7 & 21.3 & 49.9 \\ & RCAA [3] & 79.3 & 81.3 & - & - \\ & NPSM [31] & 77.9 & 81.2 & 24.2 & 53.1 \\ & IAN [38] & 76.3 & 80.1 & 23.0 & 61.9 \\ & QEEPS [33] & 88.9 & 89.1 & 37.1 & 76.7 \\ & CTXGraph [43] & 84.1 & 86.5 & 33.4 & 73.6 \\ & HOIM [4] & 89.7 & 90.8 & 39.8 & 80.4 \\ & BINet [12] & 90.0 & 90.7 & 45.3 & 81.7 \\ & APNet [51] & 88.9 & 89.3 & 41.2 & 81.4 \\ & AlignPS [41] & 93.1 & 93.4 & 45.9 & 81.9 \\ & AlignPS [42] & 94.0 & 94.5 & 46.1 & 85.8 \\ & NAE [42] & 91.5 & 92.4 & 43.3 & 80.9 \\ & SeqNet [30] & 93.8 & 94.6 & 46.7 & 83.4 \\ & PSTR [1] & 93.5 & 95.0 & 49.5 & 87.8 \\ & OIMNet++ [27] & 93.1 & 93.9 & 46.8 & 83.9 \\ \hline UDA & **Ours** & **79.5** & **81.3** & **36.7** & **81.2** \\ \hline \end{tabular}
\end{table}
Table 1: Quantitative comparison of our unsupervised domain adaptive (UDA) method with fully supervised state-of-the-art methods on both the CUHK-SYSU and PRW datasets. The performance is evaluated using mAP and top-1 accuracy. Our method's scores are in bold.
\begin{table}
\begin{tabular}{|l|l|c|c|c|c|} \hline \multicolumn{2}{|c|}{Methods} & \multicolumn{2}{c|}{PRW} & \multicolumn{2}{c|}{CUHK-SYSU} \\ \cline{3-6} \multicolumn{2}{|c|}{} & \multicolumn{1}{c|}{mAP} & \multicolumn{1}{c|}{top-1} & \multicolumn{1}{c|}{mAP} & \multicolumn{1}{c|}{top-1} \\ \hline \multirow{4}{*}{Weakly-Supervised} & CGPS [40] & 16.2 & 68.0 & 80.0 & 82.3 \\ & R-SiamNet [22] & 21.4 & 75.2 & 86.0 & 87.1 \\ & R-SiamNets [22] & 23.5 & 76.0 & 86.2 & 87.6 \\ \hline UDA & DAPS [28] & 34.7 & 80.6 & 77.6 & 79.6 \\ \cline{2-6} & **Ours** & **36.7** & **81.2** & **79.5** & **81.3** \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of our method with weakly supervised methods and domain adaptive state-of-the-art PS methods over PRW and CUHK-SYSU datasets. The * indicates the training of R-SiamNet using both CUHK-SYSU and PRW. The best results are in bold.
Figure 3: Qualitative comparison between DAPS [28] and our approach in three different challenging scenes. Our method predicts correct top-1 matching results. The orange, red, and green colors indicate the query, incorrect matches, and correct matches, respectively.
Online instance matching (OIM) [39] utilizes a memory to keep the features of the labeled and unknown identities using a lookup table (LUT) and a circular queue (CQ). The LUT is defined as \(V\in\mathds{R}^{D\times L}\), where D and L are the feature dimension and the number of IDs, respectively, and the CQ is represented as \(U\in\mathds{R}^{D\times Q}\), where Q is the queue size. It is impractical to directly utilize the OIM for the UDA PS task. Therefore, we extend the OIM for UDA ReID and introduce a hybrid memory projection module that keeps the LUT for the known source IDs and the pseudo-labeled target IDs. However, we keep a single CQ for both source and target unknown identities. Using the hybrid memory for the extended OIM, we calculate the similarity projection \(p_{k}\) for the input feature sample w.r.t. the LUT IDs as follows:
\[p_{k}=V_{i}^{T}f_{k}, \tag{3}\]
where \(k\) indicates the source domain, target domain, or mixed domain, and \(f_{k}\) denotes the feature embeddings from the \(k\)th domain. To quantify the discrepancy in the domain distribution between the mixed domain memory projections and the other two extreme domains, we employ a cross-entropy loss as in Eq. 4. This ensures that the dynamic properties of the hybrid memory projection module are compatible with the bridging method and allows it to work in the person search domain. We employ the \(L2\)-norm loss in the feature space to evaluate the distance across the domains (in Eq. 5), maintaining the shortest path for the mixed feature w.r.t. the source domain and target domain. The proposed two bridge losses are as follows:
\[\mathcal{L}_{bridge}^{\varphi}=-\frac{1}{n}\sum_{i=1}^{n}\sum_{k\in[s,t]}c_{i}^{k}\cdot[y_{k}^{i}\log(p_{mix}^{i})], \tag{4}\]
\[\mathcal{L}_{bridge}^{f}=-\frac{1}{n}\sum_{i=1}^{n}\sum_{k\in[s,t]}c_{i}^{k} \cdot||f_{i}^{k}-f_{i}^{mix}||_{2}, \tag{5}\]
where \(k\) represents the domain (i.e., source or target) and \(i\) indicates the index in the mini-batch. Here, \(y_{k}^{i}\) denotes the source label or target pseudo label, \(f_{i}^{k}\) denotes the \(k\)th domain's representation, \(p_{i}^{mix}\) is the mixed domain similarity projection, and \(f_{i}^{mix}\) denotes the mixed domain features (obtained from \(f_{i}^{s}\) and \(f_{i}^{t}\)) produced by the proposed DDAM module.
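A compact sketch of how the memory projection of Eq. (3) and the two bridge losses of Eqs. (4)-(5) could be computed is given below; the tensor shapes and the use of a log-softmax over the memory similarities are our assumptions, and both terms are written with a positive sign so that minimizing them pulls the mixed representation toward both domains, in the spirit of Eq. (2).

```
import torch
import torch.nn.functional as F

def bridge_losses(f_s, f_t, f_mix, c_s, c_t, y_s, y_t, lut):
    """f_*: (n, D) embeddings; c_s, c_t: (n,) constraint weights;
    y_s, y_t: (n,) source labels / target pseudo-labels; lut: (L, D) hybrid memory."""
    # Similarity projection of the mixed features against the memory (Eq. 3).
    p_mix = f_mix @ lut.t()                                   # (n, L)
    log_p = F.log_softmax(p_mix, dim=1)
    # Eq. (4): constraint-weighted cross-entropy against both domains' labels.
    ce_s = F.nll_loss(log_p, y_s, reduction='none')
    ce_t = F.nll_loss(log_p, y_t, reduction='none')
    l_bridge_phi = (c_s * ce_s + c_t * ce_t).mean()
    # Eq. (5): constraint-weighted L2 distance between mixed and per-domain features.
    l_bridge_f = (c_s * (f_s - f_mix).norm(dim=1)
                  + c_t * (f_t - f_mix).norm(dim=1)).mean()
    return l_bridge_phi, l_bridge_f
```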
Another important property is to ensure that the mixed domain is diverse enough so that neither the source nor the target domain dominates the other. To maximize the diversity of the domain constraint weights, we utilize a disparity loss, in which the standard deviation \(\sigma(\cdot)\) within the mini-batch is used as follows:
\[\mathcal{L}_{disp}=-[\sigma(\{c_{i}^{s}\}_{i=1}^{n})+\sigma(\{c_{i}^{t}\}_{i= 1}^{n})], \tag{6}\]
where \(\sigma\) denotes the computation of standard deviations over a mini-batch. The imposed disparity loss guarantees that the mixed domain representations are as diverse as possible to maintain the shortest geodesic path property, which can better bridge the domain gap between the source and target domains.
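Eq. (6) translates almost directly into code; the short sketch below is a direct rendering under the same notation.

```
import torch

def disparity_loss(c_s, c_t):
    # c_s, c_t: (n,) domain constraint weights within a mini-batch (Eq. 6).
    return -(torch.std(c_s) + torch.std(c_t))
```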
### Decoupled Norm-aware Embedding
As previously discussed, there is a fundamental conflict between the two subtasks, namely detection, and ReID, within the Faster RCNN-based [35] person search frameworks. These subtasks are exposed to the same backbone network, where detection focuses on capturing common features of pedestrians, while ReID aims to discriminate the uniqueness of individuals. In fully supervised person search settings, norm-aware embeddings (NAE) take the feature vector, pass it through a shared projection layer, and decouple it into two components: norm and angle in the polar coordinate system. However, the introduction of domain adaptation adds an additional layer of complexity to the process. Therefore, we intentionally decouple the NAE for both the detection and ReID tasks. This decoupling not only mitigates the cross-task conflict but also facilitates a more efficient handling of the ReID task in the context of UDA person search problems.
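One possible realization of this decoupling is sketched below, with separate projection layers feeding the norm (person-vs-background score) and the angle, i.e., the L2-normalized embedding used for ReID; this is our schematic reading, not the authors' released implementation.

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoupledNAE(nn.Module):
    """Sketch: separate projections for detection (norm) and ReID (angle)."""
    def __init__(self, in_dim=2048, emb_dim=256):
        super().__init__()
        self.det_proj = nn.Linear(in_dim, emb_dim)    # branch used for detection
        self.reid_proj = nn.Linear(in_dim, emb_dim)   # branch used for ReID

    def forward(self, x):
        det_emb = self.det_proj(x)
        reid_emb = self.reid_proj(x)
        det_score = det_emb.norm(dim=1)               # norm -> person vs. background
        reid_feat = F.normalize(reid_emb, dim=1)      # angle -> identity embedding
        return det_score, reid_feat
```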
## 4 Experiments
### Implementation Details
We implemented our method in the PyTorch framework, and all experiments are performed on an NVIDIA RTX A6000 GPU. Our backbone is ResNet50 [24] pre-trained on ImageNet-1K [10]. We resize the input to \(1500\times 900\), adopt a random horizontal flip as augmentation, and train our model using the Stochastic Gradient Descent (SGD) method. We train the model for 20 epochs when the target dataset is PRW with a batch size of 6, and for 10 epochs when the target dataset is CUHK-SYSU with a batch size of 4. The weight decay and momentum are set to \(5\times 10^{-4}\) and 0.9, respectively. Following [28], we set the learning rate to 0.0024, which is reduced at epoch 16 by a factor of 0.1, with a warm-up during the first epoch. The annotations for the source domain are available during training, whereas neither the bounding boxes of the pedestrians nor their identity information is accessible for the target domain during training and test time. Following DAPS [28], we adopt an asynchronized training strategy and employ pseudo bounding boxes after \(\alpha\) epochs (\(\alpha\)=12 for target PRW and \(\alpha\)=1 for CUHK-SYSU) on the target branch to supervise the box regression and classification heads. This alleviates the complexity of the unlabeled target domain images for both detection and ReID training. We utilize DDAM to generate domain-invariant representations during training and discard it at inference time.
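The optimization hyper-parameters above map to a PyTorch configuration roughly as follows; the first-epoch warm-up is omitted here as a simplification.

```
import torch

def build_optimizer(model, base_lr=0.0024):
    optimizer = torch.optim.SGD(model.parameters(), lr=base_lr,
                                momentum=0.9, weight_decay=5e-4)
    # Decay the learning rate by a factor of 0.1 at epoch 16.
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                     milestones=[16], gamma=0.1)
    return optimizer, scheduler
```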
### Datasets and Metrics
**Dataset** We evaluate our method over the following two datasets, CUHK-SYSU [39] and PRW [50]. **CUHK-SYSU[39]** is a large dataset for a person search with 8,432 ID individuals across 18,184 images accounting for 96,143 bounding boxes. In training, only 5,532 IDs are accessible through 11,206 images while the remaining 2900 IDs and corresponding 6,978 images are used for evaluation. The CUHK-SYSU contains two distinct data sources; 1) street view images that contain a series of variations focusing on viewpoints, lighting, resolutions, and occlusions. 2) Movies and drama serial videos that contain a variety of unique indoor and outdoor challenges. This allows the dataset to add more diversity to the scenes. For evaluation, the images are split into 2900 query persons and the 6978 images are utilized as the gallery set. **PRW[50]** is another dataset consisting of 932 IDs having 11,816 images with 43,110 bounding boxes. The dataset is sampled from videos that were captured from six CCTV university cameras. For training, only 482 IDs are available in 5702 images while the test set has 2057 query persons with a gallery size of 6112 images.
**Evaluation Protocols:** For the domain adaptation setting, we evaluate our method on the test set of the target domain. In order to quantify the localization/detection task, we use standard object detection protocols such as recall score and average precision. We adopt widely used metrics cumulative matching characteristics (CMC) curves and mean average precision (mAP) to measure the performance of the ReID task. Since ReID reflects the identity of the query person, it is the most challenging metric for the PS task.
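For reference, a minimal sketch of the top-1 (CMC rank-1) computation from a query-gallery similarity matrix is shown below; a full person-search evaluation additionally matches detections to ground-truth boxes, which is omitted here.

```
import numpy as np

def top1_accuracy(sim, query_ids, gallery_ids):
    """sim: (num_query, num_gallery) similarity scores between query persons
    and gallery detections; *_ids hold the corresponding identity labels."""
    best = np.argmax(sim, axis=1)          # index of the best gallery match
    return float(np.mean(gallery_ids[best] == query_ids))
```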
### Comparison with State-of-the-art Methods
We compare our method with fully supervised, weakly supervised, and unsupervised domain adaptation methods. First, we present a comparison of our UDA method with the fully supervised methods, classified as two-stage and one-stage methods, in Table 1. Surprisingly, our method outperforms several two-stage and one-stage fully supervised methods including DPM [19], MGTS [5], OIM [39], NPSM [31], IAN [38], and CTXGraph [43]. Second, we also compare our method with weakly supervised and UDA methods in Table 2. Compared to the top-performing weakly supervised method, our approach obtains an absolute gain of 13.2% on the PRW dataset. Compared to DAPS, our method demonstrates outstanding performance, depicting the merits of our method, and achieves 2.0% and 1.9% gains in terms of mAP over the PRW and CUHK-SYSU datasets, respectively.
We present the qualitative comparison of our method with DAPS in Fig. 3 which depicts that our method is able to correctly identify the query person in complex scenes.
Figure 4: Qualitative analysis on CUHK-SYSU [39] (top 2 rows ) and PRW [50] (bottom 2 rows) datasets. We illustrate the top two matching results for different query persons. Our method can effectively bridge the gap using adaptive domain mixing which correctly detects and identifies.
Figure 5: Failure cases on CUHK-SYSU [39] (first row) and PRW [50] (second row) datasets. We demonstrate that our approach incorrectly identifies the query person due to heavy domain conflicts between the domains.
More examples from the CUHK-SYSU and PRW datasets are shown in Fig. 4. This shows that our DDAM module facilitates correctly localizing and identifying the query person in challenging scenarios. In Fig. 5, we also present failure cases where there exist heavy domain differences.
### Ablation Study
We conduct an ablation study to validate the merits of our method in Table 3. As mentioned earlier, we adopt DAPS [28] as our baseline. For a fair comparison, we reproduce the baseline numbers and report them in Table 3 (row 2). We integrate DDAM into the baseline and train the model using the introduced bridge losses (rows 3, 4, and 5) and disparity loss (row 6). We notice that the combined bridge and disparity losses yield more gain compared to the individual bridge losses in terms of mAP for both datasets. When integrating our proposed DDAM (trained with the three introduced losses) into the baseline (row 7), the mAP score is significantly improved to 35.9% and 78.5% over the PRW and CUHK-SYSU datasets, respectively. This is attributed to the nature of DDAM, since the objective is not to improve the quality of feature extraction but to minimize the disparity between the two domains without labeled ID classes, while also maintaining diversity for both domains. Similarly, decoupling the NAE in the baseline (row 8) improves the mAP scores over both the PRW and CUHK-SYSU datasets. This is due to mitigating the issue of the conflicting objectives of commonness, uniqueness, and adaptation. Separating the NAE for detection and ReID eases the PS process. Finally, combining both contributions (row 9) leads to a significant improvement in performance and obtains mAP scores of 36.7% and 79.5% for the PRW and CUHK-SYSU datasets, respectively.
To further verify the impact of our new module, we studied how much reducing the number of training samples might affect the performance of the model. In Table 4, we see that the model obtains comparable results with the baseline model scores even when one-fourth of the target training set is removed (row 3).
## 5 Conclusion
We present a novel UDA person search framework that leverages a bridging mechanism to generate domain-invariant representations. Our approach introduces the DDAM module, which produces moderate mixed domain representations that effectively adapt the extremes of the two domains through an adaptive mixing mechanism, facilitating improved knowledge transfer from the source domain. To enhance the discriminability of the model on the target domain, we employ bridge and disparity losses. Additionally, we incorporate an NAE-decoupled module to mitigate the cross-task conflict, resulting in enhanced ReID quality and improved domain adaptation for the person search task. Our proposed contributions significantly enhance the model's domain adaptation abilities for person search. Our experimental studies validate the effectiveness of our proposed method.
## Acknowledgement
This work is partially supported by the MBZUAI-WIS Joint Program for AI Research (Project grant number- WIS P008)
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline & \multicolumn{3}{c|}{Target-PRW} & \multicolumn{3}{c|}{Target CUHK-SYSU} \\ \hline Method / Sample Percentage & mAP & Top-1 & Recall & AP & mAP & Top-1 & Recall & AP \\ \hline Baseline 100\% & 34.4 & 78.4 & 92.1 & 87.5 & 77.1 & 78.2 & 72.8 & 67.9 \\ \hline Ours / 50\% & 32.7 & 77.6 & 91.4 & 87.0 & 76.5 & 77.1 & 72.5 & 66.9 \\ \hline Ours / 75\% & 34.8 & 78.9 & 92.4 & 87.3 & 77.2 & 79.0 & 74.5 & 67.8 \\ \hline Ours / 100\% & 36.7 & 81.2 & 93.3 & 88.6 & 79.5 & 81.3 & 76.5 & 68.8 \\ \hline \end{tabular}
\end{table}
Table 4: A study on how efficiently the proposed methods can adapt to a reduced-size target dataset.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{Experiments} & \multicolumn{5}{c|}{Target-PRW} & \multicolumn{3}{c|}{Target-CUHK-SYSU} \\ \hline Exp. No. & \(\mathcal{L}_{bridge}^{f}\) & \(\mathcal{L}_{bridge}^{\phi}\) & \(\mathcal{L}_{disp}\) & DC-NAE & mAP & Top-1 & Recall & AP & mAP & Top-1 & Recall & AP \\ \hline
1 (baseline) & ✗ & ✗ & ✗ & ✗ & ✗ & 34.7 & 80.6 & 97.2 & 90.9 & 77.6 & 79.6 & 77.7 & 69.9 \\ \hline
2 (baseline reproduced) & ✗ & ✗ & ✗ & ✗ & 34.4 & 78.4 & 92.1 & 87.5 & 77.1 & 78.2 & 72.8 & 67.9 \\ \hline
3 & ✓ & ✗ & ✗ & ✗ & 34.9 & 78.9 & 92.5 & 87.6 & 77.9 & 79.2 & 73.4 & 68.1 \\ \hline
4 & ✗ & ✓ & ✗ & ✗ & 34.7 & 78.6 & 92.4 & 87.8 & 77.4 & 79.3 & 73.9 & 68.5 \\ \hline
5 & ✓ & ✓ & ✗ & ✗ & 35.1 & 79.4 & 92.9 & 88.1 & 78.1 & 79.7 & 74.1 & 68.3 \\ \hline
6 & ✗ & ✗ & ✓ & ✗ & 35.7 & 79.0 & 92.6 & 88.0 & 78.3 & 79.8 & 74.9 & 68.1 \\ \hline
7 & ✓ & ✓ & ✓ & ✗ & 35.9 & 79.5 & 92.5 & 88.4 & 78.5 & 80.7 & 75.4 & 68.7 \\ \hline
8 & ✗ & ✗ & ✗ & ✓ & 35.5 & 79.4 & 93.1 & 88.2 & 78.6 & 80.3 & 74.7 & 68.2 \\ \hline
9 (Ours) & ✓ & ✓ & ✓ & ✓ & **36.7** & **81.2** & **93.3** & **88.6** & **79.5** & **81.3** & **76.5** & **68.8** \\ \hline \end{tabular}
\end{table}
Table 3: Ablation study on the PRW and CUHK-SYSU datasets. Here, we show the merits of our contributions introduced to the baseline (DAPS [28]). The \(\mathcal{L}_{bridge}^{f}\) & \(\mathcal{L}_{bridge}^{\phi}\) represent the bridge losses, \(\mathcal{L}_{disp}\) denotes the disparity loss, and DC-NAE indicates the decoupled NAE. We note that the integration of our novel bridge losses (row 5) and disparity loss (row 6) leads to consistent gains in terms of mAP for both datasets. Similarly, the introduced losses (row 7) and decoupled NAE (row 8) obtain better results compared to the baseline. Our final approach (row 9) achieves a significant performance gain compared to the baseline, and its results are in bold. |
2301.02580 | Neuro-DynaStress: Predicting Dynamic Stress Distributions in Structural
Components | Structural components are typically exposed to dynamic loading, such as
earthquakes, wind, and explosions. Structural engineers should be able to
conduct real-time analysis in the aftermath or during extreme disaster events
requiring immediate corrections to avoid fatal failures. As a result, it is
crucial to predict dynamic stress distributions during highly disruptive events
in real-time. Currently available high-fidelity methods, such as Finite Element
Models (FEMs), suffer from their inherent high complexity and are
computationally prohibitive. Therefore, to reduce computational cost while
preserving accuracy, a deep learning model, Neuro-DynaStress, is proposed to
predict the entire sequence of stress distribution based on finite element
simulations using a partial differential equation (PDE) solver. The model was
designed and trained to use the geometry, boundary conditions and sequence of
loads as input and predict the sequences of high-resolution stress contours.
The performance of the proposed framework is compared to finite element
simulations using a PDE solver. | Hamed Bolandi, Gautam Sreekumar, Xuyang Li, Nizar Lajnef, Vishnu Naresh Boddeti | 2022-12-19T03:02:26Z | http://arxiv.org/abs/2301.02580v1 | # Neuro-DynaStress: Predicting **Dynamic Stress** Distributions in Structural Components
###### Abstract
Structural components are typically exposed to dynamic loading, such as earthquakes, wind, and explosions. Structural engineers should be able to conduct real-time analysis in the aftermath or during extreme disaster events requiring immediate corrections to avoid fatal failures. As a result, it is crucial to predict dynamic stress distributions during highly disruptive events in real time. Currently available high-fidelity methods, such as Finite Element Models (FEMs), suffer from their inherent high complexity and are computationally prohibitive. Therefore, to reduce computational cost while preserving accuracy, a deep learning model, Neuro-DynaStress, is proposed to predict the entire sequence of stress distribution based on finite element simulations using a partial differential equation (PDE) solver. The model was designed and trained to use the geometry, boundary conditions and sequence of loads as input and predict the sequences of high-resolution stress contours. The proposed framework's performance is compared to finite element simulations using a PDE solver.
**Keywords:** Deep Learning; Finite Element Analysis, Dynamic Stress Distribution, Structural Engineering
## 1 Introduction
Numerical analysis methods, such as Finite Element Analysis (FEA), are typically used to conduct stress analysis of various structures and systems for which it is impractical or hard to determine an analytical solution. Researchers commonly use FEA methods to evaluate the design, safety and maintenance of different structures in various fields, including aerospace, automotive, architecture and civil structural systems. The current workflow for FEA applications includes: (i) modeling the geometry and its components, (ii) specifying material properties, boundary conditions, meshing, and loading, (iii) dynamic analysis, which may be time-consuming based on the complexity of the model. The time requirement constraint and the complexity of the current FEA workflow make it impractical for real-time or near real-time applications, such as in the aftermath of a disaster or during extreme disruptive events that require immediate corrections to avoid catastrophic failures.
Based on the steps of FEA described above, performing a complete stress analysis with conventional FEA has a high computational cost. In order
Figure 1: **Overview:** Unlike FEM, our proposed Neuro-DynaStress is computationally efficient and facilitates real-time analysis. The existing workflow for FEM applications includes: (i) modeling the geometry and its components, (ii) specifying material properties, boundary conditions, meshing, and loading, (iii) dynamic analysis, which may be time-consuming based on the complexity of the model. Our Neuro-DynaStress takes geometry, boundary condition, and load as input and predicts the dynamic stress distribution at all time steps in one shot.
to overcome this problem, some recent works have proposed deep neural network (DNN)-based methods to predict stress distributions in both intact and damaged structural components [1, 2], bypassing the need for static finite element analysis. But these works are not suitable for dynamic finite element analysis. We propose an architecture that can act as a surrogate for FEA solvers for dynamic FEA while avoiding the computational bottlenecks involved. To demonstrate its utility, we model the stress distribution in gusset plates under dynamic loading. Bridges and buildings rely heavily on gusset plates as one of their most critical components. Gusset plates are designed to withstand lateral loads such as earthquakes and winds, which makes fast dynamic models valuable in avoiding catastrophic failures.
The main idea here is to train a model that can later be used when real-time estimations are needed, such as in the aftermath of extreme disruptive events. For example, focusing on critical structural components, there is a need for immediate assessment following a disaster or during extremely disruptive events to guide corrective actions. Engineers could rely on the proposed computationally efficient algorithms to determine stress distributions over damaged gusset plates and apply the proper rehabilitation actions. They need to be able to analyze gusset plates quickly and accurately, which is what our model can provide. To our knowledge, this work is the first to predict dynamic stress distribution in the specific domain of steel plates.
## 2 Related Work
The most recent works in data-driven applications of scientific machine learning have included design and topology optimization [3, 4], data-driven approaches in fluid dynamics [5, 6], molecular dynamics simulation [7, 8], and material properties prediction [9, 10, 11, 12]. Atalla et al. [13] and Levin et al. [14] have used neural regression for FEA model updating. More recently, DL has shown promise in solving traditional mechanics problems. Some researchers used DL for structural damage detection, a promising alternative to conventional structural health monitoring methods [15, 16].
Javadi et al. [17] used a typical neural network in FEA as a surrogate for the traditional constitutive material model. They simplified the geometry into a feature vector, an approach that is hard to generalize to complicated cases. The numerical quadrature of the element stiffness matrix in the FEA on a per-element basis was optimized by Oishi et al. [18] using deep learning. Their approach helps to accelerate the calculation of the element stiffness matrix. Convolutional Neural Networks (CNN) are commonly used in tasks involving 2D information due to the design of their architecture. Recently, Madani et al. [19] developed a CNN architecture for stress prediction of arterial walls in atherosclerosis. Also, Liang et al. [20] proposed a CNN model for aortic wall stress prediction. Their method is expected to allow real-time stress analysis of human organs for a wide range of clinical applications.
Gulgec et al. [21] proposed a CNN architecture to classify simulated damaged and intact samples and localize the damage in steel gusset plates. Modares et al. [22] conducted a study on composite materials to identify the presence and type of structural damage using CNNs. Also, in order to detect concrete cracks without calculating the defect features, Cha et al. [23] proposed a vision-based method based on convolutional neural networks (CNNs). Do et al. [24] proposed a method for forecasting the crack propagation in risk assessment of engineering structures based on "long short-term memory" and "multi-layer neural network". An approach for predicting stress distribution on all layers of non-uniform 3D parts was presented by Khadilkar et al. [25]. More recently, Nie et al. [26] developed a CNN-based method to predict the low-resolution stress field in a 2D linear cantilever beam. Jiang et al. [27] developed a conditional generative adversarial network for low-resolution von Mises stress distribution prediction in solid structures.
Some studies have been conducted to develop methods of predicting structural response using ML models. Dong et al. [28] proposed a support vector machine approach to predict nonlinear structural responses. Wu et al. [29] utilized deep convolutional neural networks to estimate the structural dynamic responses. Long short-term memory (LSTM) [30] was used by Zhang et al. [31] to predict nonlinear structural response under earthquake loading. Fang et al. [32] proposed a deep-learning-based structural health monitoring (SHM) framework that uses LSTM to predict a dam's structural dynamic responses to explosions. Kohar et al. [33] used a 3D-CNN autoencoder and LSTM to predict the force-displacement response and deformation of the mesh in vehicle crash-worthiness. Schwarzer et al. [34] constructed a neural network architecture that combines a graph convolutional neural network (GCN) with a recurrent neural network (RNN) to predict fracture propagation in brittle materials. Lazzara et al. [35] proposed a dual-phase LSTM autoencoder-based surrogate model to predict aircraft dynamic landing response over time. Jahanbakht et al. [36] presented an FEA-inspired DNN using an attention transformer to predict the sediment distribution in the wide coral reef.
The few models that studied stress predictions suffer from the problem of low-resolution predictions, making them unsuitable for decision-making after a catastrophic failure. To the best of our knowledge, this is the first work to predict dynamic stress distribution in the specific domain of steel plates with high accuracy and low latency. The algorithm takes the geometry, boundary conditions, and time histories as input and renders the dynamic von Mises stress distribution as an output. We modeled the steel plates as gusset plates with dynamic loading applied at different edges, different boundary conditions, and varying complex geometries.
## 3 Methods
### Data Generation
Two-dimensional steel plate structures with five edges, E1 to E5 denoting edges 1 to 5, as shown in Fig. 2, are considered homogeneous and isotropic linear elastic materials. Various geometries are generated by changing the position of each node in horizontal and vertical directions, as shown in Fig. 2, which led to 1024 unique pentagons. The material properties remain unchanged and isotropic for all samples. The 2D steel plates approximate the geometry of gusset plates. Gusset plates connect beams and columns to braces in steel structures. The behavior and analysis of these components are critical since various reports have observed failures of gusset plates subject to lateral loads [37, 38, 39, 40]. The boundary conditions and time-history load cases are considered to simulate similar conditions in common gusset plate structures under external loading. Some of the most common gusset plate configurations in practice are shown in Fig. 3.
A total of 57,344 unique samples were created by combining 14 random time-history load cases and four most common boundary conditions in gusset plates. Boundary conditions are shown in Fig. 4, mimicking the real gusset plates' boundary conditions. All the translation and rotational displacements
Figure 3: Some of the most common gusset plates in practice.
Figure 2: Basic schematic topology for initializing the steel plate geometries.
were fixed at the boundary conditions. The range for width and height of the plates is from 30 cm to 60 cm. Each time history consists of 100 time steps generated with random sine and cosine frequencies. The frequencies range between 1 and 3 Hz, with amplitudes ranging from 2 to 10 kN at intervals of 2 kN. All time histories in horizontal and vertical directions are shown in Fig. 5. Considering 100 time steps, each interval is 0.01 seconds, making the total time equal to 1 second. All the details for the input variables used to initialize the population are shown in Table 1.
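As a rough illustration of how such load histories might be generated, the sketch below draws one random sine or cosine component with a frequency from {1, 1.5, 2, 2.5, 3} Hz and an amplitude from {2, 4, 6, 8, 10} kN over 100 steps of 0.01 s; the exact sampling and mixing of components used by the authors is not specified, so the single-component form here is an assumption.

```python
import numpy as np

def random_load_history(n_steps=100, dt=0.01, seed=None):
    """Draw one random load time history (kN): a single sine or cosine
    component with frequency in {1, 1.5, 2, 2.5, 3} Hz and amplitude in
    {2, 4, 6, 8, 10} kN, sampled over n_steps intervals of dt seconds."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_steps) * dt                       # 0.00 ... 0.99 s
    freq = rng.choice([1.0, 1.5, 2.0, 2.5, 3.0])      # Hz
    amp = rng.choice([2.0, 4.0, 6.0, 8.0, 10.0])      # kN
    wave = np.sin if rng.random() < 0.5 else np.cos
    return amp * wave(2.0 * np.pi * freq * t)

# Horizontal and vertical load components for one sample
load_x = random_load_history(seed=0)
load_y = random_load_history(seed=1)
print(load_x.shape, load_x.min(), load_x.max())
```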
### Input Data
The geometry is encoded as a \(200\times 200\) matrix, effectively a binary image, where 0 (black) and 1 (white) denote outside and inside of the geometry, as
Figure 4: Different types of boundary conditions for initializing population.
Figure 5: Time histories (a) Horizontal direction (b) Vertical direction
shown in Fig. 6(a). The boundary condition is also represented by another \(200\times 200\) pixel binary image, where the constrained edges are defined by 1 (white) as shown in Fig. 6(b). Moreover, each time step of time histories for horizontal and vertical components is encoded in the load position of the corresponding frame. Load positions in each time step have values between 0 and 1, corresponding to each time step of time histories, and all remaining elements are zero. All the load frames of each sample in horizontal and vertical directions are saved as tensors of dimension \(100\times 200\times 200\). Figs. 6(c) and 6(d) show loads in the horizontal and vertical directions. The colored load positions in Figs. 6(c) and 6(d) are used only for visualization. Each row of Fig. 6 represents one of the simulated samples. Details of boundary conditions and their load positions are described in Table 1.
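A minimal sketch of how such inputs could be rasterized is shown below, assuming one pixel corresponds to 3 mm and using min-max scaling of the load history to [0, 1]; the helper names and the way the loaded-edge pixels are selected are illustrative assumptions, not the actual preprocessing code.

```python
import numpy as np
from matplotlib.path import Path

GRID = 200        # pixels per side
PIXEL_MM = 3.0    # one pixel covers ~3 x 3 mm of the 600 x 600 mm frame

def geometry_mask(vertices_mm):
    """Rasterize a pentagon (vertex coordinates in mm) into a 200 x 200
    binary image: 1 inside the plate, 0 outside."""
    ys, xs = np.mgrid[0:GRID, 0:GRID]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1) * PIXEL_MM
    inside = Path(vertices_mm).contains_points(pts)
    return inside.reshape(GRID, GRID).astype(np.float32)

def load_frames(load_history, load_pixels):
    """Encode a 100-step load history as a (100, 200, 200) tensor: each
    frame carries the load value, min-max scaled to [0, 1], at the pixels
    of the loaded edge; all remaining pixels are zero."""
    scaled = (load_history - load_history.min()) / np.ptp(load_history)
    frames = np.zeros((len(load_history), GRID, GRID), dtype=np.float32)
    rows, cols = load_pixels
    frames[:, rows, cols] = scaled[:, None]
    return frames
```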
### Output Data
FEA was performed using the Partial Differential Equation (PDE) solver in the MATLAB toolbox to obtain the stress distributions of each sample. We used the transient-planestress function of the MATLAB PDE solver to generate
\begin{table}
\begin{tabular}{l c c c c c c} \hline Geometry & Boundary & Load & Frequencies & Load & Time & Total \\ & conditions & position & (HZ) & (kN) & steps & time (s) \\ \hline pentagon & E2 & E4E5 & 1,1.5,2,2.5,3 & 2,4,6,8,10 & 100 & 1 \\ pentagon & E2E3 & E5 & 1,1.5,2,2.5,3 & 2,4,6,8,10 & 100 & 1 \\ pentagon & E1E2 & E4 & 1,1.5,2,2.5,3 & 2,4,6,8,10 & 100 & 1 \\ pentagon & E3 & E2E5 & 1,1.5,2,2.5,3 & 2,4,6,8,10 & 100 & 1 \\ \hline \end{tabular}
\end{table}
Table 1: Input variable
Figure 6: Input and output representation for stress distribution prediction: (a) geometry, (b) boundary condition, (c) horizontal load, (d) vertical load, (e) output
dynamic stress contours as the ground truth of our model. We defined the geometry, boundary condition, material properties and time histories as input, and the PDE solver returns the sequence of stress distributions corresponding to the inputs. The MATLAB PDE toolbox mesh generator only generates unstructured triangulated meshes, which are incompatible with CNNs. The minimum and maximum triangulated mesh sizes are 5 and 10 mm, respectively. Since each element should be represented by one pixel in an image, we developed a \(200\times 200\) grid surface equal to the dimensions of the largest possible geometry. Figs. 7(a) and 7(b) show the unstructured mesh and the \(200\times 200\) grid surface on top of a random sample. The stress values are then interpolated from the triangular elements onto the grid to determine a stress distribution compatible with our CNN network. The stress values of all the elements outside the material geometry are assigned to zero, as shown in Fig. 6(e).
The dimension of the largest sample is \(600\times 600\) mm, and the smallest is \(300\times 300\) mm. Using a mesh grid of \(200\times 200\) on top of samples made each element \(3\times 3\) mm, which means that each frame of output has 40000 pixels. This high-resolution dataset led to achieving significant accuracy. The maximum and minimum von Mises stress values for elements among the entire dataset are 279,370 and -980 MPa, respectively. We normalized all the output data between 0 and 1 to ensure faster convergence and encoded it to \(200\times 200\) for each frame.
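A possible way to carry out this interpolation and normalization step, assuming nodal stresses from the triangular mesh and a 3 mm pixel pitch, is sketched below; the choice of linear interpolation and the helper names are assumptions.

```python
import numpy as np
from scipy.interpolate import griddata

def stress_to_grid(node_xy_mm, node_stress, mask, grid=200, pixel_mm=3.0):
    """Interpolate nodal von Mises stresses from the unstructured triangular
    mesh onto the regular pixel grid, then zero out pixels outside the
    geometry using the binary mask."""
    ys, xs = np.mgrid[0:grid, 0:grid]
    targets = np.stack([xs.ravel(), ys.ravel()], axis=1) * pixel_mm
    field = griddata(node_xy_mm, node_stress, targets,
                     method="linear", fill_value=0.0)
    return field.reshape(grid, grid) * mask

def normalize(frames, s_min=-980.0, s_max=279_370.0):
    """Min-max scaling of the stress frames to [0, 1], using the global
    extrema quoted in the text (MPa)."""
    return (frames - s_min) / (s_max - s_min)
```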
### Stress Calculation
The steps for stress calculation in linear finite element analysis, which is part of phase (iii) of the FEA workflow described in the introduction, are as follows:
Figure 7: A sample of mesh generation: (a) unstructured triangular mesh, (b) structured gird surface
\[KQ=F \tag{1}\]
where \(K\) denotes the global stiffness matrix, \(F\) is the load vector applied at each node, and \(Q\) denotes the nodal displacement vector. The stiffness matrix \(K\) consists of elemental stiffness matrices \(K_{e}\):
\[K_{e}=A_{e}B^{T}DB \tag{2}\]
where \(B\) represents the strain-displacement matrix, \(D\) the stress-strain matrix, and \(A_{e}\) the area of the element. Mesh geometry and material properties determine \(B\) and \(D\). Each local stiffness matrix \(K_{e}\) is then added to the global stiffness matrix. The displacement boundary conditions are encoded using the corresponding rows and columns in the global stiffness matrix \(K\). Solving for \(Q\) can be achieved using direct factorization or iterative methods.
After the global displacement is obtained from equation 1, we can extract the nodal displacements \(q\) of each element and calculate its stress tensor as follows:
\[\sigma=DBq \tag{3}\]
where \(\sigma\) is the stress tensor of an element. The 2-D von Mises stress criterion is then used to calculate each element's von Mises stress:
\[\sigma_{vm}=\sqrt{\sigma_{x}^{2}+\sigma_{y}^{2}-\sigma_{x}\sigma_{y}+3\tau_{xy}^{2}} \tag{4}\]
where \(\sigma_{vm}\) denotes the von Mises stress, \(\sigma_{x}\) and \(\sigma_{y}\) are the normal stress components, and \(\tau_{xy}\) is the shear stress component.
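The element-level post-processing of Eqs. 3 and 4 can be written compactly; the numbers in the worked example below are made up and only illustrate the plane-stress von Mises formula.

```python
import numpy as np

def element_stress(D, B, q):
    """Stress components (sigma_x, sigma_y, tau_xy) of one element from its
    nodal displacement vector q, via sigma = D B q (Eq. 3)."""
    return D @ (B @ q)

def von_mises_2d(sigma):
    """Plane-stress von Mises stress (Eq. 4)."""
    sx, sy, txy = sigma
    return np.sqrt(sx**2 + sy**2 - sx * sy + 3.0 * txy**2)

# Worked example with made-up stress components (MPa)
print(von_mises_2d(np.array([120.0, 40.0, 25.0])))   # ~114.3 MPa
```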
## 4 Proposed Methodology
We use convolutional layers to encode the spatial information from the input. Our hypothesis is that these layers will combine the information in geometry, boundary conditions, and load. A key characteristic of dynamic structural systems is the temporal dependence of their states. LSTM is a suitable architecture for modeling temporal information in sequence and hence is a good choice to model structural dynamic systems in our experiments. For high-quality 2D reconstructions, we use transposed convolutional layers in our decoder. For further improving training and performance, we use modules from the recently proposed feature-aligned pyramid networks (FaPN) [41]. FaPN allows the decoder to access information from the encoder directly. Overall, our network architecture consists of four modules: encoder consisting of convolutional layers, temporal module made using LSTM modules, decoder consisting of transposed convolutional layers, and alignment modules acting as connections between encoder and decoder. The number of layers in each module and the number of layers in LSTM modules were chosen based on their performance.
The architecture is illustrated schematically in Fig. 8. The size of layers and hyper-parameters used in the network are summarized in Table 2.
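A highly simplified PyTorch sketch of the encoder-LSTM-decoder backbone is given below. The exact layer configuration, the FaPN feature-alignment modules, and the skip connections of the actual model are not reproduced; the 4-channel per-frame input (geometry, boundary condition, horizontal load, vertical load), channel widths, strides, and kernel sizes are assumptions chosen to keep the example short.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuroDynaSketch(nn.Module):
    """Minimal sketch: per-frame convolutional encoder, LSTM over time,
    transposed-convolutional decoder back to a 200 x 200 stress map."""

    def __init__(self, in_ch=4, hidden=512):
        super().__init__()
        chs = [in_ch, 16, 32, 64, 128, 256, hidden]
        layers = []
        for i in range(6):
            layers += [nn.Conv2d(chs[i], chs[i + 1], 3, stride=2, padding=1),
                       nn.ReLU(inplace=True)]
        self.encoder = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.lstm = nn.LSTM(hidden, hidden, num_layers=4, batch_first=True)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hidden, 256, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, 2, 1),
        )

    def forward(self, x):                      # x: (B, T, C, 200, 200)
        b, t, c, h, w = x.shape
        feats = self.encoder(x.reshape(b * t, c, h, w))
        z = self.pool(feats).reshape(b, t, -1)            # (B, T, hidden)
        z, _ = self.lstm(z)
        z = z.reshape(b * t, -1, 1, 1).repeat(1, 1, 7, 7)  # re-inflate spatially
        out = self.decoder(z)                              # (B*T, 1, 224, 224)
        out = F.interpolate(out, size=(200, 200), mode="bilinear",
                            align_corners=False)
        return out.reshape(b, t, 1, 200, 200)
```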
## 5 Loss Function and Performance Metrics
We use the Mean Absolute Error (MAE), defined in Eq. 5, as the primary training loss and metric. To ensure that we do not overfit to a single metric, we also use the Mean Relative Percentage Error (MRPE) to evaluate the overall quality of the predicted stress distribution.
\[\text{MAE}=\frac{1}{NT}\sum_{n=1}^{N}\sum_{t=1}^{T}\lvert S(n,t)-\hat{S}(n,t)\rvert \tag{5}\]
\[\text{MRPE}=\frac{\text{MAE}}{\max\left(\lvert S(n,t)\rvert,\lvert\hat{S}(n,t)\rvert\right)}\times 100 \tag{6}\]
\begin{table}
\begin{tabular}{l c c c} \hline \hline Type of layers & Number of layers & First layer (H\(\times\)W\(\times\)C) & Last layer (H\(\times\)W\(\times\)C) \\ \hline Conv & 6 & 200\(\times\)200\(\times\)16 & 7\(\times\)7\(\times\)512 \\ LSTM & 4 & 1\(\times\)1\(\times\)512 & 1\(\times\)1\(\times\)512 \\ ConvT & 5 & 13\(\times\)13\(\times\)256 & 200\(\times\)200\(\times\)16 \\ FaPN & 4 & 13\(\times\)13\(\times\)256 & 100\(\times\)100\(\times\)32 \\ \hline Batch size & Learning rate & Weight decay & Loss function \\ \hline
8 & \(10^{-4}\) & \(10^{-5}\) & MAE \\ \hline \hline \end{tabular}
\end{table}
Table 2: Network layers and hyper-parameters
Figure 8: Architecture for the proposed Neuro-DynaStress. The convolutional encoder maps the raw input data to a latent space. LSTM layers process the information across different time frames. The final output is obtained from the resulting latent representation using transposed convolutional layers.
where \(S(n,t)\) is the true stress value at a node \(n\) at time step \(t\), as computed by FEA, \(\hat{S}(n,t)\) is the corresponding stress value predicted by our model, \(N\) is the total number of mesh nodes in each frame of a sample, and \(T\) is the total number of time steps in each sample. As mentioned earlier, we set \(T=100\) in our experiments.
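The two metrics can be computed directly from the reference and predicted stress arrays; the sketch below uses synthetic arrays only to show the expected shapes.

```python
import numpy as np

def mae(S, S_hat):
    """Mean absolute error over all nodes and time steps (Eq. 5)."""
    return np.mean(np.abs(S - S_hat))

def mrpe(S, S_hat):
    """Mean relative percentage error (Eq. 6): MAE divided by the largest
    absolute stress value occurring in either field, in percent."""
    return 100.0 * mae(S, S_hat) / np.max(np.abs(np.stack([S, S_hat])))

# Synthetic arrays of shape (time steps, mesh nodes) just to show the shapes
rng = np.random.default_rng(0)
S = rng.normal(size=(100, 40_000))
S_hat = S + rng.normal(scale=0.05, size=S.shape)
print(f"MAE = {mae(S, S_hat):.4f}, MRPE = {mrpe(S, S_hat):.2f}%")
```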
## 6 Implementation and Computational Performance
We implemented our model using PyTorch [42] and PyTorch Lightning. The AdamW optimizer [43] was used with a learning rate of \(10^{-4}\). We found that a batch size of 8 gave the best results. The computational performance of the model was evaluated on an AMD EPYC 7313 16-core processor and two NVIDIA A6000 48G GPUs. The time required during the training phase for a single sample with 100 frames and a batch size of 8 was 10 seconds. In the training phase, one forward and backward pass was considered. The inference time for one sample was less than 5 ms, which meets the real-time requirement. The most powerful FE solvers take between 10 minutes and an hour to solve the same problem. Therefore, Neuro-DynaStress is about \(72\times 10^{4}\) times faster than conventional FE solvers. We consider the minimum time for all processes of modeling geometry, meshing, and analysis of one sample in an FE solver to be about 10 minutes. The MATLAB PDE solver does not use GPU acceleration. This demonstrates that our proposed approach can achieve the real-time requirement during the validation phase.
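A minimal training-step sketch with the stated hyper-parameters (AdamW, learning rate \(10^{-4}\), weight decay \(10^{-5}\), MAE loss, batch size 8) is shown below; it reuses the simplified backbone sketched earlier and omits PyTorch Lightning, logging, and validation.

```python
import torch

# Assumed tensor shapes: inputs (B, T, 4, 200, 200), targets (B, T, 1, 200, 200)
model = NeuroDynaSketch()                      # simplified backbone from above
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-5)
loss_fn = torch.nn.L1Loss()                    # MAE, the training loss

def train_step(inputs, targets, device="cpu"):
    """One optimization step on a mini-batch (batch size 8 in the paper)."""
    model.train()
    x, y = inputs.to(device), targets.to(device)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```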
## 7 Results and Discussions
### Quantitative Evaluation
Our model is trained on the training dataset for 45 epochs and evaluated on the validation dataset using separate metrics. The training dataset consisted of 48,755 samples, while the validation dataset contained 8,589, together forming an 80%-20% split of the whole dataset. The model predicts five frames of output from a sequence of five previous inputs until all 100 frames are predicted. The best validation performance was obtained when we sequenced five frames during validation. The best checkpoint during validation, at epoch 40, is the basis for all error metrics. The MRPE for the validation dataset is just 2.3%.
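The windowed evaluation described above can be sketched as chunked inference over the 100-step input sequence; how the windows are stitched together in the original protocol is not fully specified, so the non-overlapping chunking below is an assumption.

```python
import torch

@torch.no_grad()
def predict_full_sequence(model, inputs, window=5):
    """Chunked inference: the model maps `window` consecutive input frames to
    `window` predicted stress frames; sliding the window over the 100-step
    input sequence reconstructs the full stress history.
    inputs: tensor of shape (1, 100, C, 200, 200)."""
    model.eval()
    total = inputs.shape[1]
    chunks = [model(inputs[:, t:t + window]) for t in range(0, total, window)]
    return torch.cat(chunks, dim=1)            # (1, 100, 1, 200, 200)
```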
### Qualitative Evaluation
The prediction results for a few randomly selected samples from the validation dataset are visualized in Figs. 9(a) and 9(b). The first row represents 5 frames out of the 100 frames of one reference sample. The second row illustrates the prediction corresponding to the frames in the first row, and the last row represents the error in the corresponding predictions. The columns represent the time steps 1, 25, 50, 75 and 100. We visualized frames at intervals of 25 time steps to evaluate different ranges of dynamic stress prediction.
Figure 9: Successful predicted dynamic stress distribution and their corresponding errors in different time sequences for two samples. The top row corresponds to reference frames and the middle row shows the predictions. The bottom row shows the absolute error between corresponding frames (Unit = MPa)
For visualization purposes, the references and predictions in Figs. 9(a) and 9(b) are scaled to the same range using the maximum and the minimum of each sample. The errors are scaled independently. As can be seen in Fig. 9(a), the predicted frames are quite similar to their corresponding references. Although the geometry contains sharp corners and edges, which are areas that are hard for CNNs to reconstruct, our model is able to predict them. The errors, except for a small part of the first frame, are in an acceptable range, which shows the prediction accuracy of our model. Fig. 9(b) shows another successful reconstruction. Comparing references with their corresponding predicted frames demonstrates that our Neuro-DynaStress model can capture both load variations and maximum stress values at the same time. Furthermore, these results demonstrate that our model is able to predict a dynamic stress distribution with a high variation of distributed stress.
Fig.10 shows a random failure sample. In spite of the model's success in predicting most parts of the frames, it is not able to reconstruct high-stress concentrations at angles of 90 degrees. Since CNNs typically struggle in handling sharp edges, smoothening the sharp corners using Gaussian filters during data preprocessing may help the network to train better. Furthermore, as the loads in frames \(t=25\) and \(t=75\) are lower than in other frames, the prediction in those frames is acceptable.
It is also important that the predictions are temporally consistent. In order to qualitatively demonstrate the temporal consistency of the proposed method, Fig. 11(a) shows a comparison of stress values across 100 frames for
Figure 10: Failed predicted dynamic stress distribution and their corresponding errors in different time sequences. (Unit = MPa)
successful predictions in a randomly selected element. As can be seen, the references and the predicted distributions are almost identical in most time sequences, with errors close to zero, despite the stress varying widely with time. Fig. 11(a) illustrates how the prediction fits the reference more closely when there is more temporal smoothness at peak points. For instance, a good match between prediction and reference can be seen in the rightmost graph in Fig. 11(a), where the stress variation follows a smooth Gaussian distribution in the last peak. However, in the remaining graphs, the prediction correlates well with the reference despite a lack of smoothness in most peak stress values. Moreover, based on the graphs in Fig. 11(a), we can conclude that the model is better at predicting stress in valleys compared to peaks.
We have also illustrated some of the unsuccessful predictions in Fig. 11(b) to identify the limitations of our proposed model. It can be seen that in all graphs with non-Gaussian stress distributions, the model finds it difficult to capture the peak stress values accurately. However, in the first two graphs from the
Figure 11: Comparison of stress values across 100 frames for predictions, references, and errors in a randomly selected element. (a) Successful predictions (b) Unsuccessful predictions (Units = MPa-T).
left in Fig. 11(b), the predictions perfectly fit later peaks of the reference since the stress values in the reference have Gaussian distributions at these points. Figs. 12(a) and 12(b) depict the MRPE of randomly selected samples across 100 frames and the frames corresponding to the minimum and maximum MRPE. As can be seen for both samples, the minimum errors are around zero, with only a few frames exceeding an error of 2%.
### Ablation Study
The efficiency of the architecture can be attributed to several design choices we have made. Our architecture models the temporal dependency between time frames and the relationship between different elements in an input. Even
Figure 12: Relative errors across 100 frames in the randomly selected sample. Graphs in the center represent the MRPE per frame. (a) and (c) in each figure represent the reference; (b) and (d) refer to their corresponding predictions. Arrows refer to the MRPE of the presented frame. (Units = MPa-T).
though self-attention has shown state-of-the-art performance in sequence modeling, it is not suitable for tasks without large amounts of data. Hence, we use LSTMs for sequence modeling. To demonstrate our claim, we compare our architecture against other baseline architectures. We compare against three architectures, as shown in Table 3. The model with multi-head self-attention is very similar to our architecture, except that the LSTM modules in our model are replaced with self-attention modules. The details of the other models are presented in Table 3. We will refer to our architecture as Neuro-DynaStress. The results are shown in Table 3, and the best results are highlighted in bold.
## 8 Conclusion
We propose the Neuro-DynaStress model, which combines convolutional neural networks (CNNs) and long short-term memory (LSTM) to predict the entire sequence of dynamic stress distributions. The model was designed and trained to use the geometry, boundary conditions and the sequence of loads as input and to predict the sequence of high-resolution dynamic stress contours. The convolutional components are used to extract spatial features and the LSTM captures the temporal dependence between the frames. Feature alignment modules are used to improve the training and performance of our model. The model is trained using synthetic data generated using the PDE toolbox in MATLAB. Neuro-DynaStress can predict dynamic stress distribution with a mean relative percentage error of 2.3%, which is considered an acceptable error rate in engineering communities.
## Declarations
* This research was funded in part by the National Science Foundation grant CNS 1645783.
* There is no conflict of interest among the authors of this paper
* The datasets generated during and/or analyzed during the current study are available from the corresponding author upon reasonable request.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multicolumn{5}{c}{Architecture for modeling temporal information} \\ \hline & Multi-headed self-attention & LSTM & LSTM & LSTM \\ \hline FaPN & ✓ & ✓ & ✓ & \(\times\) \\ Skip connection & ✓ & ✓ & \(\times\) & \(\times\) \\ \hline MRPE(\%) & 4.5 & **2.3** & 6.6 & 9.7 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Architecture comparison |
2309.08857 | Exploring orbital-charge conversion mediated by interfaces with copper
through spin-orbital pumping | We investigated how different materials affect the orbital-charge conversion
in heterostructures with the naturally oxidized copper capping layer. When we
added a thin layer of $CuOx(3nm)$ onto yttrium iron garnet $(YIG)/W$ stacks, we
observed a significant reduction in the charge current signal measured by means
of the spin pumping effect technique. This finding contrasts with the results of a
prior study conducted on YIG/Pt/CuOx, which reported the opposite effect. On
the other hand, when we added the same $CuOx(3nm)$ layer to $YIG/Ti(4nm)$
structures, there was not much change in the spin pumping signal. This occurred
because Ti does not generate much orbital current at the $Ti/CuOx$ interface,
unlike Pt, due to its weaker spin-orbit coupling. Interestingly, when we added
the $CuOx(3nm)$ layer to $SiO_{2}/Py(5nm)/Pt(4nm)$ structures, the spin pumping
signal increased. However, in $SiO_{2}/CuOx(3nm)/Pt(4nm)/Py(5nm)$ structures,
the signal decreased. Finally, we delve into a theoretical analysis of the spin
(orbital) Hall effect in YIG/Heavy-metal systems. These findings have the
potential to advance research in the innovative field of orbitronics and
contribute to the development of new technologies based on spin-orbital
conversion. | E. Santos, J. E. Abrão, A. S. Vieira, J. B. S. Mendes, R. L. Rodríguez-Suárez, A. Azevedo | 2023-09-16T03:23:01Z | http://arxiv.org/abs/2309.08857v2 | **Exploring orbital-charge conversion mediated by interfaces with copper through spin-orbital pumping**
###### Abstract
We investigated how different materials affect the orbital-charge conversion in heterostructures with the naturally oxidized copper capping layer. When we added a thin layer of CuO\({}_{x}\)(3nm) onto yttrium iron garnet (YIG)/W stacks, we observed a significant reduction in the charge current signal measured by means of the spin pumping effect technique. This finding contrasts with the results of a prior study conducted on YIG/Pt/CuO\({}_{x}\), which reported the opposite effect. On the other hand, when we added the same CuO\({}_{x}\)(3nm) layer to YIG/Ti(4nm) structures, there was not much change in the spin pumping signal. This occurred because Ti does not generate much orbital current at the Ti/CuO\({}_{x}\) interface, unlike Pt, due to its weaker spin-orbit coupling. Interestingly, when we added the CuO\({}_{x}\)(3nm) layer to SiO\({}_{2}\)/Py(5nm)/Pt(4nm) structures, the spin pumping signal increased. However, in SiO\({}_{2}\)/CuO\({}_{x}\)(3nm)/Pt(4nm)/Py(5nm) structures, the signal decreased. Finally, we delve into a theoretical analysis of the spin (orbital) Hall effect in YIG/heavy-metal systems. These findings have the potential to advance research in the innovative field of orbitronics and contribute to the development of new technologies based on spin-orbital conversion.
## 1. Introduction
Since the discovery of the giant magnetoresistance (GMR) phenomenon [1,2], spintronics has become increasingly important for modern electronics [3-6]. Spintronics is a very active field of study that involves the manipulation of the spin angular momentum of electrons to create highly efficient devices. This cutting-edge field explores a variety of phenomena and applications centered on the interconversion between spin and charge currents. The two most used effects are the spin Hall effect (SHE) [7-9], which mediates the spin-charge interconversion through extrinsic and intrinsic scattering processes in bulk materials, and the spin Rashba-Edelstein effect (SREE), which occurs at surfaces and interfaces due to inversion symmetry breaking (ISB) [10-13]. In the Rashba effect, the in-plane effective magnetic field \(\left(\vec{B}=-\left(\vec{v}\times\vec{E}\right)/c^{2}\right)\) couples to the spin of an electron that is moving near a surface, giving rise to the phenomenon of spin-momentum locking. Here, \(\vec{v}\) represents the in-plane electron velocity and \(\vec{E}\) denotes the perpendicular electric field resulting from the inversion symmetry breaking [14]. Despite the differences between the SHE and the SREE, both effects are a direct consequence of the spin-orbit coupling (SOC) [9, 12-14].
While the spin degree of freedom has been a central focus of spintronics, the orbital angular momentum (OAM) can also play a key role in electron transport in solids. Theoretical predictions and recent experimental results [15-22] have shown that it is possible to have a non-equilibrium flow of orbital angular momentum perpendicular to a charge current, even with the quenching of orbital angular momentum in solids or in materials with a weak SOC. This effect, known as the orbital Hall effect (OHE), has the unique property of being independent of the SOC, thus being considered a more fundamental effect [15], while the SHE assumes a secondary role. Similar to SHE, OHE can be caused by bulk [23, 24] or interface phenomena [25], as shown in recent experiments where an electric current flows through an interface between a heavy-metal and a light-metal oxide. This intriguing effect, where
the orbital torque can exceed the SHE-induced torque, has been attributed to the generation of an orbital current due to the orbital Rashba-Edelstein effect (OREE) [26-29]. However, since the orbital magnetic moment does not directly exert torque on ferromagnets, orbital Hall current needs to be converted into spin current through the spin-orbit torque (SOT) to enable magnetization switching. Developing experimental schemes to effectively couple orbital and spin moments is a challenging task that aims to improve torque transfer to local magnetization, thus improving spin-orbit torque (SOT) efficiency. A promising approach involves inducing a spin current flow in a material with a large SOC, facilitating the transport of an L-S current, which may have a larger magnetic moment. This additional mechanism offers potential for advanced manipulation of magnetization [17].
The physical grounds behind the OHE have been discussed in several papers. See reference [30] for an updated review. Basically, the intrinsic mechanism assumes that unquenched OAM is induced by an external field \(\vec{E}\) in centrosymmetric materials, where additional interband transitions create an orbital texture in the reciprocal space (\(\vec{L}\propto\vec{E}\times\vec{k}\)), thus leading to the appearance of OHE. On the other hand, in non-centrosymmetric materials, due to the ISB, an intrinsic orbital angular momentum is present in the Brillouin zone even with no applied electric field. For example, in two-dimensional materials such as transition metal dichalcogenides (TMDs) [31-35], calculations have shown that these materials are better suited for OHE generation. As ISB occurs at surfaces and interfaces, the orbital counterpart of the spin Rashba-Edelstein effect, the OREE, has been theoretically proposed and experimentally discovered on surfaces with negligible SOC [27, 36, 37]. Rashba-like coupling between the vectors \(\vec{L}\) and \(\vec{k}\) results in both orbital-dependent energy splitting and chiral OAM texture in k-space. Despite their similarities, the SREE and OREE mechanisms differ because the OREE mechanism operates independently of the SOC. However, when considering the SOC, the spin angular momentum (SAM) couples with the OAM generated by OREE, resulting in the coexistence of both effects. In Fig. 1(a), the upward charge current \(\vec{J}_{C}\) generates a perpendicular spin current (represented by the red symbols) induced by SHE, and a perpendicular orbital current (represented by the oriented circles) induced by OHE. Because of the significant strength of the SOC, both the spin and orbital currents intertwine to form a perpendicular spin-orbital current \(\vec{J}_{L,S}\). Fig. 1(b) illustrates the inverse effect, wherein an upward current \(\vec{J}_{L,S}\) induces a current \(\vec{J}_{C}\) through the inverse SHE (ISHE) and the inverse OHE (IOHE). While Figs. 1(a) and 1(b) depict the occurrence of SHE and OHE within the volume, the interfacial counterparts are illustrated in Figs. 1(c) and 1(d). In Fig. 1(c) the presence of spin and orbital Rashba-like states at the interface results in the generation of a perpendicular current \(\vec{J}_{L,S}\), due to the flow of an interfacial charge current. On the other hand, Fig. 1(d) illustrates the inverse of the effect shown in Fig. 1(c): a bulk \(\vec{J}_{L,S}\) current will generate a perpendicular interfacial charge current \(\vec{J}_{C}\).
Recent demonstrations have shown the effectiveness of using light materials to generate enhanced spin-orbital torque transfer in heterostructures coated with a thin layer of naturally oxidized CuO\({}_{\mathrm{x}}\)[28; 29; 38]. This incorporation of light materials into the existing repertoire of spintronic materials has significantly broadened the scope of spin manipulation mechanisms, allowing the use of less expensive materials. The spin-orbital torque enhancement has been demonstrated not only in the bulk of light materials, but mainly at Cu/CuO\({}_{\mathrm{x}}\) interfaces driven by the OREE. The physics of the OAM phenomena is clearly demonstrated through several advances, such as the improved damping-like SOT in Permalloy (Py)/CuO\({}_{\mathrm{x}}\)[39], enhanced SOT efficiency in thulium iron garnet (TmIG)/Pt/CuO\({}_{\mathrm{x}}\)[28], and the observation of magnetoresistance driven by OREE in Py/oxidized Cu [29]. In Ref. [28], it is shown that the Pt/CuO\({}_{\mathrm{x}}\) interface generates an orbital current (\(\vec{J}_{L}\)), which then diffuses into the Pt layer. This leads to the emergence of an intertwined spin-orbital current (\(\vec{J}_{L}+\vec{J}_{S}\)), which subsequently reaches the TmIG layer and exerts torque on the local magnetization.
The electrical detection of spin can be achieved through the conversion of spin current into charge current using the ISHE [40; 41], as well as through the interfacial mechanism known as the inverse spin Rashba-Edelstein effect (ISREE) [12; 42; 43]. These methods provide viable approaches to
Figure 1: Schemes illustrating the interaction between charge, spin, and orbital currents in a heterostructure with strong SOC. In the top panels, (a) and (b), the phenomenon occurs in the volume; in the bottom panels, (c) and (d), it is driven by the interface. (a) The direct SHE-OHE, where a charge current \(\vec{J}_{C}\) is converted into a spin-orbital current \(\vec{J}_{L,S}\). (b) The inverse SHE-OHE, where \(\vec{J}_{L,S}\) is converted into \(\vec{J}_{C}\). (c) and (d) The direct and inverse SREE-OREE conversion mechanisms, with the Rashba states characterized by the spin-orbital textures at the heavy-metal/normal-metal (HM/NM) interface.
detect spin electrically. However, the understanding and experimental results regarding the inverse effects of the OHE (IOHE) and OREE (IOREE) are not well established in the literature. Further research is needed to establish a comprehensive understanding of these phenomena. A remarkable demonstration of the IOREE is presented in Ref. [44], showing the production of orbital current using the spin pumping technique driven by ferromagnetic resonance (FMR) on Y\({}_{3}\)Fe\({}_{5}\)O\({}_{12}\) (YIG)/Pt/CuO\({}_{x}\) heterostructures. The spin current injected at the YIG/Pt interface couples with the angular momentum of Pt, facilitated by the strong SOC, and subsequently diffuses to the Pt/CuO\({}_{x}\) interface, where the IOREE occurs. It was proposed that, due to the strong SOC of Pt, the pure spin current injected into Pt becomes intertwined with the local orbital states, resulting in the generation of an upward pure spin-orbital current (\(\vec{J}_{L,S}\)) with no flow of charge. A portion of this current is then converted within Pt into a transverse charge current through either the ISHE or the IOHE. The remaining spin-orbital current flowing upwards is transformed into a transverse charge current via the inverse OREE in Pt/CuO\({}_{x}\). The YIG/Pt/CuO\({}_{x}\) sample exhibits an ISHE-like voltage measurement that shows more than a fivefold gain compared to the sample without the CuO\({}_{x}\) coating. The same result was obtained by means of the thermal-driven spin pumping technique. In Ref. [45], Yong Xu and collaborators presented additional compelling evidence for the inverse orbital Hall effect (IOHE). They conducted measurements utilizing terahertz emission in free space on ferromagnet FM/NM and FM/Pt/NM samples. Their findings indicated a significant presence of the IOHE in Pt/CuO\({}_{x}\) samples. Furthermore, they also deduced that the intermediate layer, specifically Pt in this case, plays a crucial role in the conversion process of spin current (\(\vec{J}_{S}\)) into orbital current (\(\vec{J}_{L}\)) and vice versa. A very compelling work has also been published, in which it has been shown that the spin-to-charge conversion at the LaAlO\({}_{3}\)/SrTiO\({}_{3}\) interface is dominated by the orbital contribution [46].
In this study, we performed an extensive investigation of the interaction between spin, orbital, and charge degrees of freedom in FM/HM/CuO\({}_{x}\), using YIG or Py as FM and Pt or W as HM. Through a comparison of experimental results between FM/HM and FM/HM/CuO\({}_{x}\) configurations, we found substantial changes in the ISHE-type signal, suggesting a pivotal role played by the HM/CuO\({}_{x}\) interface. The paper is divided into the following sections: In the second section, we present experimental results. Subsection 2.1 provides results from the characterization of Pt/CuO\({}_{x}\) samples using Transmission Electron Microscopy (TEM), revealing the presence of an oxidized Cu layer in all samples. Subsection 2.2 presents spin pumping measurements in FM/NM/CuO\({}_{x}\) samples, with NM = Pt, W, and FM = YIG. Subsection 2.3 discusses the results of spin pumping experiments conducted on metallic heterostructures interfaced with CuO\({}_{x}\). Finally, Section 3 offers both quantitative and qualitative explanations for the results obtained in Section 2.
## 2 Experimental results
The YIG films used in this study were grown via liquid phase epitaxy (LPE) on a 0.5 mm thick Gd\({}_{3}\)Ga\({}_{5}\)O\({}_{12}\) (GGG) substrate with the out-of-plane axis aligned along the (111) crystalline direction. All other films were deposited by DC sputtering at room temperature with a working pressure of 2.8 mTorr and a base pressure of \(2.0\times 10^{-7}\) Torr or less. Each sample had lateral dimensions of 1.5 x 3.0 mm\({}^{2}\), and in all of them the CuO\({}_{\rm x}\) layer was obtained by leaving the samples in the open air at room temperature for two days.
An investigation of the chemical composition of the GGG/Pt/Cu sample was performed using TEM and atomic resolution energy-dispersive x-ray spectroscopy (EDS). The TEM and EDS results confirmed the existence of an oxidation layer on the surface of the Cu films. Fig. 2 (a) shows the cross-section TEM image of the GGG/Pt/Cu sample interface, where it is possible to distinguish the GGG substrate from the Pt and Cu films. The cap layers of Pt and Au on top of the images were grown afterward during the sample preparation for TEM analysis. To quantify the interfacial chemical diffusion, atomic resolution EDS mapping images were performed on the GGG/Pt/Cu interface areas, and the distribution of each atom element can be seen in figures 2(b), 2(c), 2(d), 2(e) and 2(f), corresponding respectively to specific elements: platinum (Pt) is represented by red, gadolinium (Gd) by purple, gallium (Ga) by blue, copper (Cu) by green, oxygen (O) by pink. The atomic percentage of each layer was confirmed by EDS line profile as shown in Fig. 2(g) and 2(h), revealing the presence of the Pt/Cu bilayer spanning a depth range of approximately 10 nm to 63 nm. Note that oxygen is observed in the Cu layer over the range of approximately 53 nm to 63 nm (see Fig. 2(i)). Hence, the TEM and EDS analyses suggest that O atoms diffuse into the Cu layer, implying that the oxidation region in Cu can extend to a depth of up to 10 nm over a two-day exposure to laboratory environmental conditions.
### 2.2 Spin pumping in FM/NM/CuO\({}_{\bf x}\)
The main mechanisms for investigation of the spin-to-charge current interconversion are the SHE (and its inverse effect ISHE), which operates on bulk materials, and the REE (and its inverse effect IREE), which operates on systems without spatial inversion symmetry, such as surfaces and interfaces with large SOC. While SHE and ISHE have been largely investigated in strong SOC materials such as Pt, W, Ta, Pd, etc. [9], REE and IREE have been investigated in 2D materials, topological surfaces, etc. [47]. Pt is known to have positive \(\theta_{SH}\), while Ta and W display a negative \(\theta_{SH}\)[48; 49]. Materials with
Figure 2: (a) Cross-sectional TEM image and (b-f) EDS mapping images of the GGG/Pt/Cu sample, displaying chemical element mapping that distinguishes between the GGG substrate (with Ga, Gd, and O elements) and the Pt and Cu films. The color scheme corresponds to specific elements: platinum (Pt) is represented by red, gadolinium (Gd) by purple, gallium (Ga) by blue, copper (Cu) by green, oxygen (O) by pink. (g-i) EDS line scan of atomic fraction of elements Pt, Gd, Ga, Cu and O. The distribution of each atom element is illustrated by their corresponding atomic percentages and the shaded region in (h) indicates the transition area where the O atom diffuses into the Cu layer, displaying a substantial presence of oxygen, with an approximate width of \(\sim\)10 nm.
positive \(\theta_{SH}\) exhibit a spin polarization \(\hat{\sigma}_{S}\) parallel to the orbital polarization \(\hat{\sigma}_{L}\). On the other hand, materials with negative \(\theta_{SH}\) present an antiparallel alignment between the spin polarization \(\hat{\sigma}_{S}\) and the orbital polarization \(\hat{\sigma}_{L}\). In the presence of strong SOC, both orbital and spin effects can occur simultaneously, leading to the intertwining of both degrees of freedom. Consequently, the resulting charge current comprises a multitude of effects.
The spin pumping (SP) technique was employed to investigate the effect on the ISHE signal in heterostructures consisting of YIG/W/CuO\({}_{\rm x}\) and YIG/Pt/CuO\({}_{\rm x}\). In these structures, the Cu(3nm) layer undergoes natural oxidation. Furthermore, Ti films were included in this study due to their weak SOC, which aids in gaining a more comprehensive understanding of the underlying physics involved. The SP technique [40, 41, 50] is characterized by the pumping of a pure spin current \(\vec{J}_{S}\) through an FM/NM interface by the uniform precession of the magnetization under the ferromagnetic resonance (FMR) condition. The spin current \(\vec{J}_{S}\) is converted into a transverse DC charge current through the ISHE. The resulting SP voltage (\(V_{SP}\)) was measured using a nanovoltmeter, with electrodes fixed to the sample edges using silver paint. The SP current is defined as \(I_{SP}=V_{SP}/R\), where \(R\) represents the electrical resistance along the NM layer. SP measurements were performed at a fixed radio frequency of 9.41 GHz. The charge current resulting from the SP experiment is described by the equation \(\vec{J}_{C}^{SP}=\left(\frac{2e}{h}\right)\theta_{L,S}\left(\vec{J}_{S}^{SP}\times\hat{\sigma}_{L,S}\right)\), where \(\theta_{L,S}\) is the spin-orbital Hall angle, \(\hat{\sigma}_{L,S}\) is the effective spin-orbital polarization, and the angle between the DC charge current and the voltage measurement direction is given by \(\phi\). In Fig. 3 (a), the typical spin pumping signal for the YIG/Pt(4)/CuO\({}_{\rm x}\)(3) sample is depicted, where the numbers are the layer thicknesses in nm, and the YIG layer thickness is 400 nm. At \(\phi=0^{\circ}\) (blue symbols), a positive sign SP curve is observed, indicating a material with a positive spin-orbital Hall angle (\(\theta_{L,S}>0\)). At \(\phi=180^{\circ}\) (red symbols), a sign inversion occurs, while at \(\phi=90^{\circ}\) the measured voltage is null. The inset of Fig. 3 (a) displays similar results for the YIG/Pt(4) sample, which agrees with the aforementioned equation, but with lower intensity than the signal with a CuO\({}_{\rm x}\) cover layer. Fig. 3 (b) presents the results for the YIG/W(4)/CuO\({}_{\rm x}\)(3) sample, which also obeys the same equation but exhibits an opposite sign compared to Pt, as W possesses \(\theta_{L,S}<0\). The same behavior can be observed in the YIG/W(4) sample, as shown in the inset of Fig. 3(b). Additionally, Fig. 3 (c) provides a comparison of the signals obtained with the samples of YIG/W(4)/CuO\({}_{\rm x}\)(3) and YIG/W(4) at \(\phi=0^{\circ}\), revealing a reduction in the signal when the CuO\({}_{\rm x}\) cover layer is added. Fig. 3 (d) shows the behavior of \(I_{SP}\) as a function of the W layer thickness (\(t_{W}\)), which varied from 2 nm to 8 nm. Two sets of samples were prepared: the A series consists of YIG/W(\(t_{w}\)) (black symbols), while the B series consists of YIG/W(\(t_{w}\))/CuO\({}_{\rm x}\)(3) samples (red symbols). The B series exhibits a different behavior, where thinner films yield smaller \(I_{SP}\) signals, while for larger thicknesses, the \(I_{SP}\) tends to approach that of the A series. The structural characteristics of the W films were analyzed by x-ray diffraction (XRD) measurements and can be seen in Appendix A.
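The angular dependence implied by this equation can be checked with a few lines of numpy, assuming the spin-orbital polarization follows the in-plane field and the voltage probes pick up the \(y\) component of the charge current; the Hall angle value is purely illustrative.

```python
import numpy as np

theta_LS = 0.05                    # illustrative spin-orbital Hall angle
J_s = np.array([0.0, 0.0, 1.0])    # spin current pumped along +z (arb. units)

def detected_current(phi_deg):
    """Charge-current component along the fixed probe direction (+y) when the
    in-plane magnetic field, and hence the polarization, is rotated by phi."""
    phi = np.radians(phi_deg)
    sigma = np.array([np.cos(phi), np.sin(phi), 0.0])
    J_c = theta_LS * np.cross(J_s, sigma)
    return J_c[1]

for phi in (0, 90, 180):
    print(phi, detected_current(phi))   # maximum, zero, and sign-reversed
```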
The spin current \(\vec{J}_{S}\) injected upwards, induced by the precessing magnetization under the FMR condition, interacts with the local orbital momentum, resulting in the generation of an ascending current \(\vec{J}_{L,S}\) in W. In this scenario, \(\hat{\sigma}_{S}\) and \(\hat{\sigma}_{L}\) are antiparallel. A portion of this spin-orbital current is subsequently converted into a charge current within the volume of W through the ISHE and IOHE processes. The remaining current reaches the W/CuO\({}_{\text{x}}\) interface, where it generates a 2D charge current parallel to the interface due to the IOREE. This 2D charge current reduces the original current (as it has the opposite polarity to the bulk charge current), resulting in a smaller signal. Thus, changing the heavy metal layer before depositing the oxide layer enables the manipulation of the SP signal.
To gain a better understanding of the role played by SOC in magnetic heterostructures, we fabricated samples of YIG/Ti and YIG/Ti/CuO\({}_{\text{x}}\). As in the previous experiment, the precessing magnetization generates a spin accumulation at the YIG/Ti interface, which diffuses upward as a pure spin current along the Ti layer. The insets of Figs. 4(a) and 4(b) depict the SP signal (for \(\phi=0^{\circ}\)) of
Figure 3: (a) Presents the typical \(I_{SP}\) signals for the samples with and without the CuO\({}_{\text{x}}\) cover layer (inset) at a fixed rf power of 14 mW and rf frequency of 9.41 GHz. In (a), Pt is used as NM, while in (b), W is used as the NM. These materials exhibit opposite Hall angles, resulting in opposite polarities of the measured signals. (c) Compares the SP signals of the samples with (light blue) and without (dark blue) the CuO\({}_{\text{x}}\) capping layer. (d) Demonstrates the dependence of \(I_{SP}\) on \(t_{W}\) for the YIG/W(\(t_{W}\))/CuO\({}_{\text{x}}\)(3) (red) and YIG/W(\(t_{W}\)) (black) samples. The solid lines are fits to the experimental data using equations (7) and (8).
YIG/Ti and YIG/Ti/CuO\({}_{\rm x}\) samples, respectively. The solid lines in Figs. 4(a) and 4(b) depict the respective fits to the experimental data using a Lorentzian function, represented by blue curves (\(\phi=0^{\circ}\)) and red curves (\(\phi=180^{\circ}\)). Three important pieces of information can be obtained from these data: (i) The weak SP signal generated by Ti has inverse polarity when compared to Pt; (ii) The fits to the experimental data, shown in Figs. 4(a) and 4(b), exhibit similar values, meaning that the CuO\({}_{\rm x}\) capping layer practically does not affect the detected signal; (iii) Since Ti exhibits a weak SOC, there was almost no generation of orbital current within the material, and no observable IOREE was detected at the Ti/CuO\({}_{\rm x}\) interface. As a result, there was no significant increase in the SP signal when comparing both samples. These findings support the hypothesis that the reduction in the SP signal in the YIG/W/CuO\({}_{\rm x}\) samples can be attributed to the orbital effect, particularly the IOREE occurring at the W/CuO\({}_{\rm x}\) interface. The key distinction between W/CuO\({}_{\rm x}\) and Ti/CuO\({}_{\rm x}\) lies in the absence of SOC in Ti. Therefore, no spin-orbital current propagates through the Ti layer, since YIG injects only a pure spin current through the YIG/Ti interface.
Delving further into the subject, the chirality of the orbital texture in \(\vec{k}\)-space could be influenced by the potential electric field, which results from the breaking of translational symmetry and is correlated with the work function of the interfaces. However, due to the intertwined \(d\)-states of W and Cu, and \(p\)-states of oxygen, it becomes challenging to simplify the conversion from orbital current to charge current into one or two parameters that solely rely on the electronic structure of individual subsystems. For example, when dealing with surfaces or interfaces of materials with complex orbital characteristics, like W or oxygenated Cu, it becomes difficult to describe the orbital properties in terms of a single Rashba-type constant. The value of this constant varies significantly in terms of both sign and magnitude, depending on the specific electronic band structures under consideration. Therefore, we can only attribute the origin of the interfacial charge current to either the interfacial spin Rashba-Edelstein effect (ISREE) or the interfacial orbital Rashba-Edelstein effect (IOREE). On the other hand, the phenomenon undergoes changes when the FM layer used to inject the spin current also injects orbital current, as occurs with Co [51]. Fig. 4(c) shows the symmetric curve of the measured SP signal, depicted in the inset of Fig. 4(c), obtained by fitting it with both symmetric and anti-symmetric components [52]. This measurement was performed for Si/Ti(20)/Co(10), where the Co(10) island is deposited on top of the Ti, thus allowing the detection electrodes to be fixed away from the Co layer. The spin-orbital current injected by the Co layer through the Co/Ti interface, under the FMR condition, undergoes conversion into a charge current by the inverse OHE. The bottom inset is the symmetrical part of the SP signal measured in Co(12)/Pt(10), where the signal intensity is a few times greater than the Ti(20)/Co(10) signal. Upon comparison between the black signal and the red signal (which is the Co self-induced voltage), the observed gain is more than eightfold.
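A minimal sketch of such a lineshape decomposition, assuming the standard sum of a symmetric and an antisymmetric Lorentzian and synthetic data in place of the measured spectra, is given below.

```python
import numpy as np
from scipy.optimize import curve_fit

def sp_lineshape(H, V_sym, V_asym, H_res, dH, V_off):
    """Spin-pumping voltage vs. field as a sum of a symmetric and an
    antisymmetric Lorentzian centered at H_res with half-width dH."""
    L = dH**2 / ((H - H_res)**2 + dH**2)               # symmetric part
    D = dH * (H - H_res) / ((H - H_res)**2 + dH**2)    # antisymmetric part
    return V_sym * L + V_asym * D + V_off

# Illustrative fit to synthetic data (field in Oe, voltage in arbitrary units)
H = np.linspace(2300, 2700, 400)
rng = np.random.default_rng(1)
V = sp_lineshape(H, 0.8, 0.1, 2500.0, 30.0, 0.0) + rng.normal(0, 0.02, H.size)
popt, _ = curve_fit(sp_lineshape, H, V, p0=[1.0, 0.0, 2500.0, 20.0, 0.0])
print("V_sym =", popt[0], "V_asym =", popt[1])
```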
### 2.3 Spin pumping in all metal heterostructures
Although 3d FM metals such as Fe, Co, Ni, and Py are more versatile and easier to prepare compared to ferrimagnetic insulators, like YIG, these materials exhibit a self-induced SP voltage [53; 54], which can potentially mask the SP signal. This self-induced voltage consists of both symmetric and anti-symmetric components. The anti-symmetric is typically associated with spin rectification effects, while the symmetric component is attributed to spin-Hall like effects [55]. To elucidate the interplay between spin and orbital moments, we investigate the spin pumping phenomena in two series of heterostructures: series A consists of Si/Py(5)/Pt(4)/CuO\({}_{\text{x}}\)(3) (with and without CuO\({}_{\text{x}}\) capping layer), while series B consists of Si/CuO\({}_{\text{x}}\)(3)/Pt(4)/Py(5) (with and without CuO\({}_{\text{x}}\) underlayer). For series B, we initially deposited the copper layer and allowed it to oxidize for two days. Subsequently, we placed the sample back into the sputtering chamber to deposit the Pt and Py films. In series A, the Cu layer, which partially covers the Pt layer, was the final deposition step. Afterward, it was left to oxidize for two days. The only distinction between series A and B is the direction of the spin current injection - upwards for series A and downwards for series B. If the conversion of spin current to charge current is solely given by the inverse SHE, the measured signals should be identical in magnitude but possess opposite polarities. Fig. 5(a) shows the SP signals measured for two samples: Si/Py(5)/Pt(4) and Si/Py(5)/Pt(4)/CuO\({}_{\text{x}}\)(3). In both samples, the spin current is injected upwards through the Py/Pt interface. When comparing the signals obtained from these two samples, a significant increase in the SP signal is observed for the CuO\({}_{\text{x}}\)-coated sample (represented by dark blue symbols) compared to the uncoated sample (represented by light blue symbols) at \(\phi=0^{\circ}\). This enhancement is consistent with previous findings for YIG/Pt/CuO\({}_{\text{x}}\)[44]. In Figure 5(a), it is evident that the injected spin current couples with the orbital momentum of Pt, resulting in the generation of a spin-orbital current that propagates upwards until it reaches the Pt/CuO\({}_{\text{x}}\) interface. At the interface, this spin-orbital current undergoes conversion into a charge current through the process of IOREE. The converted charge current combines with the
Figure 4: (a) and (b) show the SP signals measured in YIG/Ti(4) and YIG/Ti(4)/CuO\({}_{\text{x}}\)(3), respectively. The weak signals were fitted by symmetrical Lorentzian curves, given by the solid lines. Notably, the amplitudes of the signals do not change, indicating that the capping layer of CuO\({}_{\text{x}}\) does not affect the SP signal. Due to the weak SOC of Ti, no L-S current is being generated within the Ti volume. Solid lines in (c) show the symmetrical component obtained by fitting the data of the SP signal of Si/Ti(20)/Co(10) and Si/Co(10). While the weak SP signal from Si/Co is self-induced, the strong SP signal from Si/Ti/Co is due to the orbital current injected into Ti and its bulk conversion by the OHE. The bottom inset is the symmetrical part of the SP signal measured in Co(12)/Pt(10).
bulk charge current, effectively increasing the SP signal. The significant increase is clearly shown in Fig. 5(b), which shows the symmetric component extracted from fitting to the experimental data of Fig. 5(a). By comparing the slopes of the SP signals as a function of the RF power for both samples, as shown in the inset of Fig. 5(b), the CuO\({}_{\rm x}\)-coated sample exhibits a 2.6-fold increase compared to the uncoated sample. An increase of \(\sim\)20 nA for a power of 110 mW is indicated by the vertical black arrow in Fig. 5(b). However, Fig. 5(d) depicts intriguing results. When the stack order of the layers is inverted, causing the injected spin current from the Py to flow downwards, the SP signal of the sample with an underlayer of CuO\({}_{\rm x}\) exhibits a decrease compared to the SP signal of the sample without a CuO\({}_{\rm x}\) underlayer. This observation is opposite to the result shown in Fig. 5(a). From the fits to the experimental data, obtained for the symmetric component as shown in Fig. 5(e), the SP signal exhibits a reduction of 25 nA for the sample with the CuO\({}_{\rm x}\) underlayer in comparison with the sample without it. It is important to note that the SP signals of the samples with inverted stack order exhibit inverse polarities due to the change in the direction of the spin current. According to the equation \(\tilde{J}_{C}^{SP}\propto(\tilde{J}_{S}^{SP}\times\tilde{\sigma}_{S})\), while \(\tilde{J}_{S}^{SP}\) reverses its direction, \(\tilde{\sigma}_{S}\) (and \(\tilde{\sigma}_{L}\)) remains unchanged along the \(+\hat{x}\) direction parallel to the external DC magnetic field. As a result, \(\tilde{J}_{C}^{SP}\) also reverses its polarity.
Explaining the reduction of the SP signal in the sample where the layer stack order is inverted appears to be a complex task, as it heavily relies on the influence of the static orbital texture at the Pt/CuO\({}_{\rm x}\) interface. Our results show that the charge current generated at the Pt/CuO\({}_{\rm x}\) interface does not reverse its direction when the spin current flows from top to bottom. This charge current opposes the charge current generated within the Pt layer, reducing the measured charge current along the \(y\) direction, as illustrated in Figs. 5(c) and 5(f). The Rashba-type chiral orbital texture present at the Pt/CuO\({}_{\rm x}\) interface remains unchanged regardless of whether the CuO\({}_{\rm x}\) layer is deposited above or below the Pt layer. Consequently, the charge current generated by the IOREE flows parallel to \(+\)y (green arrows in Figs. 5(c) and (f)), while the charge current generated by the spin-orbital current within the Pt layer flows parallel to \(+\)y (blue arrow in Fig. 5(c)) when it is injected from the bottom and parallel to \(-\)y (blue arrow in Fig. 5(f)) when injected from the top. It is important to note that the orbital Rashba effect is not affected by the spin current propagation direction and instead depends only on the orbital polarization \(\hat{\sigma}_{L}\). Within the Pt layer, the orbital polarization consistently aligns parallel to the spin current polarization due to the strong SOC of Pt.
## 3 Phenomenological background
The quantitative interpretation of SHE and OHE presented in this section is based on recently published papers [18, 45, 56, 57]. Basically, the generation of spin and orbital angular momentum currents, along with their interconversion mediated by SOC, can be interpreted in terms of the out-of-equilibrium spin imbalance, which manifests as a shift in spin and orbital chemical potentials \(\mu_{S}(z)\) and \(\mu_{L}(z)\), respectively. These chemical potentials represent the spin and orbital accumulation, respectively. The accumulation of \(S\) or \(L\) quantities results in both spin flow and orbital angular momentum flow, and these phenomena can be further analyzed through coupled diffusion equations. A key finding presented in Ref. [18] was the introduction of a coupling parameter, \(\lambda_{LS}\), which accounts for the interaction between \(L\) and \(S\), mediated by the SOC of the material. In Ref. [18], the excitation of orbital current is obtained by applying an electric field, which is different from our approach. Here (and in Ref. [44]), we create a spin accumulation (\(\mu_{S}(z)\)), by means of the spin pumping technique, in a material with large
Figure 5: (a) Typical SP signals for the samples with and without the top layer of CuO\({}_{\rm x}\) at \(\phi=0^{\circ}\). The samples with the top layer of CuO\({}_{\rm x}\) are denoted by dark blue symbols, while those without it are represented by light blue symbols. The SP signals measured at \(\phi=180^{\circ}\) have reversed polarities, represented by red symbols (with the top layer of CuO\({}_{\rm x}\)) and pink symbols (without the top layer of CuO\({}_{\rm x}\)). The SP data measured at \(\phi=90^{\circ}\) show no detectable SP signal, as expected. Panel (b) displays the symmetrical component of the SP signal, obtained from fitting the measured data shown in (a), for samples with and without the CuO\({}_{\rm x}\) layer. The inset shows the linear relationship between \(I_{SP}\) and the RF power. The vertical black arrow represents the increase of the SP signal resulting from the presence of the top layer of CuO\({}_{\rm x}\). (d) Typical SP signals for the samples with and without the bottom layer of CuO\({}_{\rm x}\) at \(\phi=0^{\circ}\) (dark and light blue symbols), and \(\phi=180^{\circ}\) (red and pink symbols). As the spin current is injected from the top, the SP signals exhibit reverse polarity compared to the signals shown in (a). The curves in (e) depict the numerical fittings derived from the data shown in (d). The vertical black arrow represents the reduction of the SP signal resulting from the presence of the bottom layer of CuO\({}_{\rm x}\). Panels (c) and (f) illustrate the underlying mechanism responsible for the increase and decrease of the SP signal. In (c), the IOREE and SHE currents are parallel, whereas in (f), they are antiparallel. The insets of (a) and (d) show the derivative of the FMR absorption signal for the Py layer.
SOC, resulting in the simultaneous creation of an orbital accumulation (\(\mu_{L}(z)\)). Since materials with large SOC can exhibit two different polarizations of the spin-to-charge conversion processes, such as positive for Pt and negative for W, the relation between \(\mu_{S}(z)\) and \(\mu_{L}(z)\) can be expressed as \(\mu_{L}(z)=v_{LS}C\mu_{S}(z)\), where \(v_{LS}\) is a variable with only two possible values, \(v_{LS}=\pm 1\), and \(C\) is a proportionality constant. In our study, we inject a spin current through the YIG/HM interface, leading to distinct boundary conditions necessary for solving the diffusion equations describing \(\mu_{S}(z)\) and \(\mu_{L}(z)\), as outlined in Eqs. (6) and (7) of Ref. [18]. In our case, the boundary conditions are given by
\[\frac{d\mu_{S,L}(z)}{dz}\bigg{|}_{z=0}=\left(\frac{2}{\hbar ND}\right)J_{S,L}(z)\bigg{|}_{z=0}, \tag{1}\]
\[\frac{d\mu_{S,L}(z)}{dz}\bigg{|}_{z=t_{NM}}=0,\]
where \(\mu_{S,L}(z)\) is the spin (orbital) chemical potential, \(D\) is the diffusion coefficient, and \(N\) represents the density of states per unit volume in the NM layer. To capture the process of spin-to-orbital current conversion, one must add to the spin (orbital) diffusion equation a phenomenological term that is proportional to its orbital (spin) counterpart, i.e.
\[\frac{d^{2}\mu_{S}}{dz^{2}}=\frac{\mu_{S}}{\lambda_{S}^{2}}\pm\frac{\mu_{L}}{ \lambda_{LS}^{2}} \tag{2}\]
\[\frac{d^{2}\mu_{L}}{dz^{2}}=\frac{\mu_{L}}{\lambda_{L}^{2}}\pm\frac{\mu_{S}}{ \lambda_{LS}^{2}} \tag{3}\]
where the \(+\) (\(-\)) sign corresponds to negative (positive) spin-orbit coupling. To decouple Eqs. (2) and (3), we solve the former for \(\mu_{L}\) and substitute it into the latter, obtaining
\[\frac{d^{4}\mu_{S}}{dz^{4}}-\left(\frac{1}{\lambda_{S}^{2}}+\frac{1}{\lambda_ {L}^{2}}\right)\frac{d^{2}\mu_{S}}{dz^{2}}+\left(\frac{1}{\lambda_{L}^{2} \lambda_{S}^{2}}-\frac{1}{\lambda_{LS}^{4}}\right)\mu_{S}=0 \tag{4}\]
The solution of Eq.(4) is
\[\mu_{S}(z)=Ae^{z/\lambda_{1}}+Be^{-z/\lambda_{1}}+Ce^{z/\lambda_{2}}+De^{-z/ \lambda_{2}} \tag{5}\]
Similarly, the solution for \(\mu_{L}\) is obtained as \(\mu_{L}(z)=Ee^{z/\lambda_{1}}+Fe^{-z/\lambda_{1}}+Ge^{z/\lambda_{2}}+He^{-z/\lambda_{2}}\). The characteristic polynomial leads to
\[\frac{1}{\lambda_{1,2}^{2}}=\frac{1}{2}\Bigg{[}\Bigg{(}\frac{1}{\lambda_{S}^{2}}+\frac{1}{\lambda_{L}^{2}}\Bigg{)}\pm\sqrt{\left(\frac{1}{\lambda_{S}^{2}}-\frac{1}{\lambda_{L}^{2}}\right)^{2}+\frac{4}{\lambda_{LS}^{4}}}\Bigg{]}. \tag{6}\]
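As a quick sanity check of Eq. (6), the two effective decay lengths can be evaluated numerically. The short sketch below uses illustrative values for \(\lambda_{S}\), \(\lambda_{L}\), and \(\lambda_{LS}\) (assumed numbers, not quantities fitted to the data in this work); note that both roots are real only if \(\lambda_{LS}^{2}>\lambda_{S}\lambda_{L}\).

```python
import numpy as np

# Illustrative (assumed) lengths in nm; real roots require lam_LS**2 > lam_S * lam_L.
lam_S, lam_L, lam_LS = 2.0, 10.0, 6.0

a = 0.5 * (1 / lam_S**2 + 1 / lam_L**2)
b = 0.5 * np.sqrt((1 / lam_S**2 - 1 / lam_L**2) ** 2 + 4 / lam_LS**4)

lam_1 = 1 / np.sqrt(a + b)   # shorter effective decay length, close to lam_S
lam_2 = 1 / np.sqrt(a - b)   # longer effective decay length, close to lam_L
print(f"lambda_1 = {lam_1:.2f} nm, lambda_2 = {lam_2:.2f} nm")
```

For these assumed values the two effective lengths remain close to \(\lambda_{S}\) and \(\lambda_{L}\), with the coupling \(\lambda_{LS}\) mixing them only weakly.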
Solving the system of equations, we get the solutions
\[\begin{split}\mu_{S}(z)=&\left(\frac{2}{\hbar ND}\right)\lambda_{1}\frac{\left(J_{S}(0)\mp\frac{J_{L}(0)}{\gamma_{2}\lambda_{LS}^{2}}\right)}{\left(1-\frac{\gamma_{1}}{\gamma_{2}}\right)}\frac{\cosh\left[\left(t_{NM}-z\right)/\lambda_{1}\right]}{\sinh\left(t_{NM}/\lambda_{1}\right)}\\ &+\left(\frac{2}{\hbar ND}\right)\lambda_{2}\frac{\left(J_{S}(0)\mp\frac{J_{L}(0)}{\gamma_{1}\lambda_{LS}^{2}}\right)}{\left(1-\frac{\gamma_{2}}{\gamma_{1}}\right)}\frac{\cosh\left[\left(t_{NM}-z\right)/\lambda_{2}\right]}{\sinh\left(t_{NM}/\lambda_{2}\right)}\end{split} \tag{7}\]
Since \(\mu_{L}(z)=Cv_{LS}\mu_{S}(z)\), it follows that
\[\begin{split}\mu_{L}(z)=Cv_{LS}\Bigg{\{}&\left(\frac{2}{\hbar ND}\right)\lambda_{1}\frac{\left(J_{S}(0)\mp\frac{J_{L}(0)}{\gamma_{2}\lambda_{LS}^{2}}\right)}{\left(1-\frac{\gamma_{1}}{\gamma_{2}}\right)}\frac{\cosh\left[\left(t_{NM}-z\right)/\lambda_{1}\right]}{\sinh\left(t_{NM}/\lambda_{1}\right)}\\ &+\left(\frac{2}{\hbar ND}\right)\lambda_{2}\frac{\left(J_{S}(0)\mp\frac{J_{L}(0)}{\gamma_{1}\lambda_{LS}^{2}}\right)}{\left(1-\frac{\gamma_{2}}{\gamma_{1}}\right)}\frac{\cosh\left[\left(t_{NM}-z\right)/\lambda_{2}\right]}{\sinh\left(t_{NM}/\lambda_{2}\right)}\Bigg{\}}.\end{split} \tag{8}\]
where,
\[\begin{split} J_{S}(0)=\frac{G_{S}}{e}\mu_{S}(0),\\ J_{L}(0)=\frac{G_{L}}{e}\mu_{L}(0),\end{split} \tag{9}\]
\(G_{S,L}\) is the spin-orbital mixing conductance at the FM/HM interface. The charge current is given by
\[\begin{split}\vec{J}_{C}^{SHE}=\theta_{S}\big{(}\vec{J}_{S}\times\hat{\sigma}_{S}\big{)},\\ \vec{J}_{C}^{OHE}=\theta_{L}\big{(}\vec{J}_{L}\times\hat{\sigma}_{L}\big{)}\end{split} \tag{10}\]
where \(\theta_{S}\) (\(\theta_{L}\)) is the spin (orbital) Hall angle.
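For readers who prefer a numerical cross-check of the closed-form result, Eqs. (1)-(3) can also be solved directly as a boundary-value problem. The sketch below does this with SciPy for an arbitrary set of illustrative parameters (all values are assumptions, not fitted quantities); the `sign` variable selects the \(\pm\) in Eqs. (2)-(3), and `g_S`, `g_L` stand for the right-hand side of Eq. (1).

```python
import numpy as np
from scipy.integrate import solve_bvp

# Illustrative (assumed) parameters: lengths in nm, injected currents in arbitrary units.
lam_S, lam_L, lam_LS, t_NM = 2.0, 10.0, 6.0, 10.0
sign = -1.0            # "-" for positive spin-orbit coupling, "+" for negative (cf. Eqs. (2)-(3))
g_S, g_L = 1.0, 0.5    # (2 / hbar N D) * J_{S,L}(0), i.e. the right-hand side of Eq. (1)

def rhs(z, y):
    # y = [mu_S, dmu_S/dz, mu_L, dmu_L/dz]; first-order form of Eqs. (2) and (3).
    mu_S, dmu_S, mu_L, dmu_L = y
    return np.vstack([dmu_S,
                      mu_S / lam_S**2 + sign * mu_L / lam_LS**2,
                      dmu_L,
                      mu_L / lam_L**2 + sign * mu_S / lam_LS**2])

def bc(ya, yb):
    # Eq. (1): finite slope at the injection interface (z = 0), vanishing slope at z = t_NM.
    return np.array([ya[1] - g_S, ya[3] - g_L, yb[1], yb[3]])

z = np.linspace(0.0, t_NM, 101)
sol = solve_bvp(rhs, bc, z, np.zeros((4, z.size)))
mu_S, mu_L = sol.y[0], sol.y[2]   # spin and orbital accumulation profiles across the NM layer
print(sol.status, mu_S[0], mu_L[0])
```

Plotting `mu_S` and `mu_L` then reproduces the hyperbolic-cosine-type profiles of Eqs. (7) and (8).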
To explain our measured SP signals, namely the increase in YIG/Pt/CuO\({}_{x}\) and the decrease in YIG/W/CuO\({}_{x}\), we consider the contributions of both ISHE and IOHE, using the relationship \(J_{C}=\theta_{(LS)}J_{L,S}\), as presented in Refs. [56, 57]. This allows us to propose a phenomenological equation for the charge current density measured in YIG/HM/CuO\({}_{x}\):
\[\vec{J}_{C}^{L,S}=\left(\frac{2e}{\hbar}\right)\theta_{SH}\left(\vec{J}_{S}\times\hat{\sigma}_{S}\right)+\left(\frac{2e}{\hbar}\right)\theta_{LH}\left(\vec{J}_{L}\times\hat{\sigma}_{L}\right)+\lambda_{IOREE}J_{L}(z=t_{NM})\hat{\sigma}_{L}. \tag{11}\]
The first term represents the conversion of the spin component of the intertwined current \(\vec{J}_{L,S}\) into charge current via the ISHE within the HM. The second term in Eq. (11) represents the conversion of the induced orbital current into charge current via the IOHE within the HM. This term arises from the \(LS\) coupling and is therefore analogous in form to the ISHE term; note that the ISHE does not necessarily have the same polarity as the IOHE because of the sign of \(v_{LS}\). The third term represents the conversion of the residual orbital current, which reaches the HM/CuO\({}_{\rm x}\) interface with Rashba states, into charge current, known as the inverse orbital Rashba-Edelstein effect. As a result, the Pt/CuO\({}_{\rm x}\) interface exhibits a gain in the resulting charge current, while the W/CuO\({}_{\rm x}\) interface shows a reduction. Therefore, the effective polarity of the orbital texture of naturally surface-oxidized copper can be modified by changing the HM, leading to an interfacial charge current in the opposite direction to the total charge current. Furthermore, the results presented in Figs. 5(a) and (d) demonstrate that the IOREE in HM/CuO\({}_{\rm x}\)(3) remains independent of the direction of the current \(\vec{J}_{L}\) entering Eq. (11).
In conclusion, our investigation of the interaction between spin and orbital currents has yielded significant findings. Through the injection of a pure spin current into a HM layer via the YIG/HM interface, we observed the emergence of orbital accumulation, facilitated by the strong SOC of the HM. This interplay between spin and orbital effects leads to the intriguing phenomenon of transporting orbital angular momentum along the HM layer. As the spin-orbital entangled \(J_{LS}\) current moves up toward the HM/CuO\({}_{\rm x}\) interface, an ISHE-like conversion of \(J_{LS}\) into charge current takes place. Moreover, the residual \(J_{LS}\) current that reaches the HM/CuO\({}_{\rm x}\) interface is further converted into a charge current by the interfacial IOREE phenomenon. Interestingly, we observed that while the charge current generated at the Pt/CuO\({}_{\rm x}\) interface exhibits a gain, the charge current at the W/CuO\({}_{\rm x}\) interface exhibits a decrease. This result is further confirmed in CuO\({}_{\rm x}\)/Pt/Py and Py/Pt/CuO\({}_{\rm x}\) heterostructures, where inverting the layer stack shows a similar behavior. Overall, our work underscores the rich complexity of orbital and spin interactions in HM/CuO\({}_{\rm x}\) systems, offering valuable insight into potential applications of spintronics and orbital-based technologies. These compelling findings pave the way for further exploration and innovation in the field of quantum materials and nanoelectronics.
This research is supported by Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq), Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior (CAPES), Financiadora de Estudos e Projetos (FINEP), Fundacao de Amparo a Ciencia e Tecnologia do Estado de Pernambuco
(FACEPE), Fundacao de Amparo a Pesquisa do Estado de Minas Gerais (FAPEMIG) - Rede de Pesquisa em Materiais 2D and Rede de Nanomagnetismo, and INCT of Spintronics and Advanced Magnetic Nanostructures (INCT-SpinNanoMag), CNPq 406836/2022-1. This research used the facilities of the Brazilian Nanotechnology National Laboratory (LNNano), part of the Brazilian Centre for Research in Energy and Materials (CNPEM), a private nonprofit organization under the supervision of the Brazilian Ministry for Science, Technology, and Innovations (MCTI). Therefore, the authors acknowledge LNNano/CNPEM for advanced infrastructure and technical support. The TEM staff is acknowledged for their assistance during the experiments (Proposals No. 20210467 and 20230795, TEM-Titan facility).
## Data Availability Statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
## Appendix A XRD Measurements in SiOx/W(\(t_{W}\))
In order to obtain structural information on the sputtered W thin films, we carried out out-of-plane grazing incidence x-ray diffraction (GIXRD) measurements. Since in this geometry the substrate signal is almost suppressed, the existence of two distinct crystalline phases (\(\alpha\)-W and \(\beta\)-W) as a function of film thickness can be addressed. Figure 6 shows the GIXRD scans for W films with thickness in the range of 5 nm (purple curve) to 20 nm (orange curve). The vertical blue dashed lines denote the expected peak positions for the (A15) \(\beta\)-W crystalline phase and the vertical red dashed lines denote the expected peak positions for the body-centered cubic (bcc) \(\alpha\)-W phase, according to (JCPDS #03-065-6453)* and (JCPDS #00-004-0806)* crystallographic data, respectively. As can be seen in Fig. 6, for the 10 nm W film, the presence of a broad and low-intensity peak at \(2\theta\sim 40^{\circ}\) suggests the coexistence of two crystalline phases. Indeed, this peak can be associated with both the (210) reflection of the \(\beta\)-W phase and the (110) reflection of the \(\alpha\)-W phase, located at \(2\theta\sim 39.88^{\circ}\) and \(2\theta\sim 40.26^{\circ}\), respectively. On the other hand, for film thicknesses above 10 nm it is possible to observe three characteristic diffraction peaks. The first (most intense), located at \(2\theta\sim 40.44^{\circ}\), is closer to the expected position for the (110) reflection of the \(\alpha\)-W phase and does not exhibit an asymmetrical shape. Furthermore, the other two peaks, located at \(2\theta\sim 58.31^{\circ}\) and \(2\theta\sim 73.42^{\circ}\), can only be assigned to the (200) and (211) diffraction planes of the bcc \(\alpha\)-W phase. Taking into account the absence of other \(\beta\)-W reflections and the fact that the integrated intensity (area under the diffraction curve) of the \(\alpha\)-W reflections increases with film thickness, which means that the volume fraction of \(\alpha\)-W increases, we can infer that for thicknesses above 10 nm the films are predominantly in the \(\alpha\)-W phase. Indeed, this is in good agreement with previous results that predict the existence of a single \(\alpha\)-W phase for thicker films [58]. It is also important to note, in films with a thickness of less than or equal to 10 nm, the appearance of peaks at \(2\theta\sim 52^{\circ}\) and \(2\theta\sim 56^{\circ}\), which are related to the Si/SiO\({}_{x}\) substrate, because the W diffraction peaks have very low intensities.
* Phase identification is made with reference to Powder Diffraction File compiled in International Center for Diffraction Data (ICDD) card system issued by JCPDS (Joint Committee on Powder Diffraction Standards). No. 03-065-6453 for \(\beta\) -W and No. 00-004-0806 for \(\alpha\) -W.
Figure 6: GIXRD measurements of SiOx/W for different W thicknesses. For low thicknesses (\(t_{W}<10\) nm) there is a predominance of the \(\beta\) phase, while for \(t_{W}>10\) nm the \(\alpha\) phase is predominant.
2301.06080 | Comprehensive Literature Survey on Deep Learning used in Image
Memorability Prediction and Modification | As humans, we can remember certain visuals in great detail, and sometimes
even after viewing them once. What is even more interesting is that humans tend
to remember and forget the same things, suggesting that there might be some
general internal characteristics of an image to encode and discard similar
types of information. Research suggests that some pictures tend to be memorized
more than others. The ability of an image to be remembered by different viewers
is one of its intrinsic properties. In visualization and photography, creating
memorable images is a difficult task. Hence, to solve the problem, various
techniques predict visual memorability and manipulate images' memorability. We
present a comprehensive literature survey to assess the deep learning
techniques used to predict and modify memorability. In particular, we analyze
the use of Convolutional Neural Networks, Recurrent Neural Networks, and
Generative Adversarial Networks for image memorability prediction and
modification. | Ananya Sadana, Nikita Thakur, Nikita Poria, Astika Anand, Seeja K. R | 2022-12-14T16:53:26Z | http://arxiv.org/abs/2301.06080v2 | Comprehensive literature survey on deep learning used in image memorability prediction and modification
###### Abstract
As humans, we can remember certain visuals in great detail, and sometimes even after viewing them once. What is even more interesting is that humans tend to remember and forget the same things, suggesting that there might be some general internal characteristics of an image that make it easier for the brain to encode and discard certain types of information. Research suggests that some pictures tend to be memorized more than others. The ability of an image to be remembered by different viewers is one of its intrinsic properties. In visualization and photography, creating memorable images is a difficult task. Hence, to solve the problem, various techniques predict visual memorability and manipulate images' memorability. We present a comprehensive literature survey to assess the deep learning techniques used to predict and modify memorability. In particular, we analyze the use of Convolutional Neural Networks, Recurrent Neural Networks, and Generative Adversarial Networks for image memorability prediction and modification.
Keywords:Memorability, Deep Learning, Convolutional Neural Networks, Recurrent Neural Networks, Generative Adversarial Networks
## 1 Introduction
Every day we are exposed to many images, only a few of which are remembered, while most of them we tend to forget. Though the human cognitive system has an enormous storage capacity [1, 2], it may only be able to store some images as detailed as they are. Few images are remembered in great detail, even fewer in minor details, and the remainder is quickly forgotten [3]. Natural scenery photos, for example, are less likely to be remembered than images of animals, vehicles, and people [4]. According to previous research, images are consistently memorable to different viewers [5] and some images have better memorability than others. They also showed that memorability is an intrinsic and measurable property of an image. When we discuss memorability as a measurable property, the question of an artificial system successfully predicting the image memorability score comes along.
Previous works in the domain of image memorability can be grouped into three categories: understanding the features that affect image memorability, predicting images' memorability scores, and modifying images' memorability. Memorability was initially calculated as a probabilistic function through various experiments
conducted among people, and hence, the use of regression models like support vector regression and multi-view adaptive regression [7] was quite prevalent in the initial years of study. Deep learning models came into the picture later on but in recent times they are being widely used for this task ([4],[7],[16],[20],[23]-[26]). Modifying the memorability of images was initially done by classic photo editing software [8]. Later, Generative Adversarial Networks (GANs) [9] gained popularity for modifying image memorability ([28]-[30]). Generative models ([28],[29]) find various applications in creating images from text, modifying images, creating images based on a given category, and so on.
Image memorability has many applications [6], such as in education, where we try to create more memorable academic materials to help students memorize better. In this research paper, we present a comprehensive literature survey on various deep learning methods used for the prediction and modification of the visual memorability of images. In particular, we analyze the use of Convolutional Neural Networks, Recurrent Neural Networks, and Generative Adversarial Networks for image memorability prediction and modification. Our findings will aid others in understanding and researching recent trends in predicting and modifying image visual memorability using deep learning techniques. The paper is organized as follows. Section 2 describes the dataset used to compare various works. Section 3 describes the various deep learning techniques used in the prediction of memorability, while Section 4 discusses the deep learning techniques used in the modification of memorability. Section 5 discusses some general limitations noticed in the works reviewed for this survey, and Section 6 provides a summary of the survey and the future scope.
## 2 Dataset Details
Most works in predicting and modifying image memorability have made use of the LaMem dataset [4]. The authors created this dataset by introducing an optimized protocol of [10] memory games to find out the true memorability scores. In this game, the author shows the images in sequential order. A few of the images shown during the game were repeated. When observers encounter an image they have already seen, they are told to press a button. This experiment helped them to collect real-world data on how memorable images are. It is an enormous dataset containing 60,000 images with their memorability scores. Due to the large size of this dataset, it can be used for training deep neural networks.
## 3 Prediction of Image Memorability with Deep Learning
### Convolutional Neural Networks
Convolutional neural networks (CNNs) are deep learning models that can take images as input and perform linear and non-linear operations on them to map them onto the desired output type based on learnable weights and biases. Because CNN immediately learns the features, they eliminate the requirement for human feature extraction.
CNNs are particularly useful for discovering patterns in images to identify scenes, objects, and faces. As the popularity and success of CNNs have increased in computer vision in recent times, they have also been heavily implemented for memorability prediction. Some works have used these models to extract features, while others have proposed an end-to-end deep learning framework based on CNN regressions.
The very first attempt at using CNNs for the task of image memorability prediction was made by Khosla et al. [4] in 2015, where the authors introduced a deep neural network based on AlexNet [11] to extract features of an image which were passed through three fully connected layers and an additional Euclidean loss layer was added since memorability is a single real-valued output. The final prediction was made using SVR. The AlexNet model used in the proposed architecture, MemNet, was initially trained on two popular datasets commonly used for image classification, ILSVRC 2012 [12] and the Places dataset [13]. The final MemNet model was then trained on the LaMem dataset, the largest fully-annotated image memorability dataset available [4]. The authors reported MemNet having a Spearman's rank correlation of 0.64 on training and testing with the LaMem dataset. Although this work opened doors for the wide use of deep learning in image memorability prediction, this approach had a few drawbacks that did not age well with time. Many researchers found difficulty in reproducing the results obtained from the original MemNet model because the original model was implemented on the Caffe framework, which has now been discarded. Moreover, it was observed by researchers like Needell et al. [14] when generalizing the MemNet model on a new dataset, its performance is reduced.
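To make the described setup concrete, the sketch below shows how such a CNN-based memorability regressor could be assembled in PyTorch. It is only an approximation of the architecture described above (the original MemNet was a Caffe model and ended in an SVR stage); the layer sizes, the randomly initialized backbone, and the plain MSE ("Euclidean") loss are simplifying assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import alexnet

# Rough MemNet-style sketch: AlexNet convolutional features, fully connected layers,
# and a Euclidean (MSE) loss on a single real-valued memorability output.
backbone = alexnet(weights=None)   # ImageNet/Places-pretrained weights would be loaded in practice
model = nn.Sequential(
    backbone.features, backbone.avgpool, nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1),            # single real-valued memorability score
)

images = torch.randn(4, 3, 224, 224)   # dummy batch standing in for LaMem images
targets = torch.rand(4, 1)             # memorability scores in [0, 1]
loss = nn.MSELoss()(model(images), targets)
loss.backward()
```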
The subsequent major work in this domain was done by Baveye et al. in 2016 [7]. The authors used transfer learning and used models trained to predict object and scene semantics by fine-tuning them for predicting memorability. They chose GoogleNet CNN as their baseline model [15]. They have fine-tuned this model by replacing the two auxiliary losses in the intermediate layers and the SoftMax activation in the final layer with one final fully-connected layer. Their model, MemoNet, achieved a Spearman's rank correlation of 0.636 when trained with 30K training iterations. Unlike in MemNet [4], MemoNet was tested on a mixed dataset and ensured a balance in the emotional feature distribution of the dataset's images. The model performed poorly on this testing dataset. The authors stated the reason for this as being that the negatively aroused images tend to have a more predictable memorability as compared to neutral or positively aroused ones.
In 2018, Squalli-Houssaini et al. [16] took a different approach than most of their peers at the time for predicting images' memorability. They considered not only the visual features but also the semantic features obtained from an image and its textual representation. For the extraction of visual features, the authors used a VGG16 CNN model [17] that had been pre-trained on the ImageNet dataset [18]. As for the extraction of semantic features, they used an Image Captioning (IC) system obtained from the model proposed by Kiros et al. [19]. This system consisted of a CNN and an LSTM network to encode simultaneous image-text embeddings. Using these visual and semantic features, they trained an SVR model for regression and a Multi-Layer Perceptron for classification. To classify the image in the order of its memorability,
they converted the memorability scores in the LaMem dataset into class labels. The regression model with SVR resulted in a Spearman's rank correlation at par with MemNet (0.64), while the classification approach resulted in a performance improvement. One major drawback of this work is that this model does not generalize well as it yields a poor performance when tested on some other datasets.
Perera et al. [20], in their work on memorability prediction, also took a similar approach in using transfer learning. However, their work differed as they noted that instead of fine-tuning an entire pre-trained CNN model, only fine-tuning the final layer of a CNN model yielded an improvement in overall model performance. This performance enhancement was suggested to be due to the likely overfitting of the models in previous works. Their model, MemBoost, follows a base setup of MemNet and implements it on different CNN models, regression algorithms, and datasets. The best performance was achieved when a ResNet-152 [21] was pre-trained on a combination of ImageNet [18] and Places datasets [13]. An ensemble regression algorithm, XGBoost [22], was used to predict the final memorability scores. This approach achieved a Spearman's rank correlation of 0.67 on the LaMem dataset.
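A rough sketch of this "frozen CNN features plus boosted regression" recipe is given below. The feature matrix is random stand-in data for precomputed 2048-dimensional ResNet-152 activations, the XGBoost hyperparameters are arbitrary assumptions rather than the values tuned in [20], and the xgboost package is assumed to be installed.

```python
import numpy as np
from xgboost import XGBRegressor

# Stand-in data: pretend these are pooled ResNet-152 features and LaMem memorability scores.
features = np.random.rand(1000, 2048)
scores = np.random.rand(1000)

# Only the regression stage is trained; the CNN backbone stays frozen.
reg = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
reg.fit(features[:800], scores[:800])
predictions = reg.predict(features[800:])
```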
A new approach using multiple instance learning-based CNNs was proposed by Basavaraju et al. [23] in 2019. Their proposed model, EMNet, considers the features corresponding to various emotions induced, as some emotions are correlated with a higher likelihood of being remembered. This model takes an ensemble of two deep CNNs. The first part of the model consists of a VGG-16 [17] to extract object features and predict their memorability scores. The second part of the model extracts the emotional features. This is executed by a novel framework that uses emotion and salience features for memorability prediction. Using the combined memorability scores from these two models, the final output is produced. As a result of this ensemble deep learning framework, a Spearman's rank correlation of 0.671 is achieved.
In 2020, Zhu et al. proposed a multi-task deep learning approach using aesthetic attributes [26]. The authors believed there to be a hidden relationship between the aesthetics of an image and its memorability. Thus, they proposed a framework that they alternatively trained on an aesthetics dataset (AADB dataset [27]) and a memorability dataset (LaMem [4]). In this approach, they also used a pixel-wise visual attention mechanism (PiCANet). They have used this model to generate attention maps at each pixel and embedded it in a CNN architecture. The authors report that their proposed approach obtained a Spearman's rank correlation of 0.67. This is the first attempt associating aesthetics assessment with memorability to predict memorability. The enhanced performance indicates that the two are correlated and that an analysis of aesthetics can be used to assist in the prediction of memorability.
### Recurrent Neural Networks
Recurrent Neural Networks (RNNs) are a particular type of neural network that takes sequential data as input; at each step, the output from the previous step is passed as input to the current step. RNNs are capable of remembering the sequence of the data.
This property gives these networks a wide range of applications in natural language processing, speech recognition, and time series forecasting. Long Short-Term Memory is one of the most common types of RNNs. As a result of their ability to capture long-term dependencies in data, LSTMs are highly popular. In recent times, the task of predicting the visual memorability of images is greatly benefiting from the use of LSTMs.
Fajtl et al. [25], in their work on predicting the memorability of images, propose an end-to-end deep learning framework. Their proposed attention-based model, AMNet, can be divided into four elements - a deep CNN pre-trained on the ImageNet dataset; a soft attention mechanism to aid the network selectively focus on certain areas of an image; an LSTM for estimating the memorability of the image; lastly, a fully connected layer to output the memorability regression scores. This framework first extracts features of an image using ResNet-50 [21], then these features are passed through a soft attention layer that generates attention maps to highlight regions to focus more on. These maps are then passed to LSTM and a fully connected layer to obtain the prediction results. The authors report their proposed approach to have obtained a Spearman's rank correlation of 0.67 on the LaMem dataset.
A more recent work in memorability prediction using RNN is ResMem-Net, a model introduced by Praveen et al. [24]. Unlike most of the previous work, ResMem-Net does not just use the activations of the final layer to make the prediction. The framework is such that there is a pre-trained ResNet-50 [21] at the top being used as the baseline model. Global Average Pooling is used to pass the hidden layers of ResNet-50 to an LSTM recurrent unit. The output from the LSTM is passed through a fully connected layer to obtain the memorability score. LSTM units can retain all the vital information obtained from activating the hidden layers of the ResNet-50 model, which makes the framework robust. The authors report that their proposed architecture achieved a Spearman's rank correlation of 0.679, making ResMem-Net the current state-of-the-art model.
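The following PyTorch sketch illustrates one plausible way to wire hidden-layer activations of a ResNet-50 into an LSTM as described above. The per-stage projection layers, the hidden size, and the global-average-pooling choices are assumptions made to keep the sketch self-contained; this is not the authors' exact implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class ResMemSketch(nn.Module):
    """ResMem-Net-style sketch: pooled activations of the four ResNet-50 stages form a
    short sequence that is fed to an LSTM, whose last state predicts memorability."""
    def __init__(self, hidden=256):
        super().__init__()
        backbone = resnet50(weights=None)   # pretrained weights would be loaded in practice
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool)
        self.stages = nn.ModuleList([backbone.layer1, backbone.layer2, backbone.layer3, backbone.layer4])
        self.proj = nn.ModuleList([nn.Linear(c, hidden) for c in (256, 512, 1024, 2048)])
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        x = self.stem(x)
        seq = []
        for stage, proj in zip(self.stages, self.proj):
            x = stage(x)
            seq.append(proj(x.mean(dim=(2, 3))))          # global average pooling per stage
        out, _ = self.lstm(torch.stack(seq, dim=1))
        return torch.sigmoid(self.head(out[:, -1]))        # memorability score in [0, 1]

model = ResMemSketch()
score = model(torch.randn(2, 3, 224, 224))
```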
### Comparing the performance of different prediction models
Figure 1: Bar chart comparison for different prediction models based on their Spearman’s Rank Correlation between the predicted values and ground truth
Table 1 demonstrates Spearman's rank correlation values of the discussed memorability prediction models that use deep learning. It can be seen that ResMem-Net outperforms the rest of the prediction models. Figure 1 helps visualize the comparison between these models. A general trend can be noted that attention-based neural networks tend to perform better on the task of memorability prediction. This could be attributed to the visual attention mechanism's ability to focus on parts of the picture that provide more relevant information. The memorability of images is often determined by the memorability of the objects present in them, and thus, an attention mechanism can identify these objects and give an enhanced performance. Another trend that can be observed is that LSTMs, in general, give a better performance in this task. Only knowing the parts of the image to focus on is not enough for the task of memorability. The correlations between parts of the image must also be considered. This requires a memory unit to be aware of the entire image at a time. That is why LSTMs are becoming popular in memorability prediction, as they can model long-range dependencies.
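For reference, the evaluation metric behind these comparisons is straightforward to compute. The snippet below uses SciPy on dummy numbers standing in for a model's predictions and the ground-truth scores of a held-out LaMem split.

```python
import numpy as np
from scipy.stats import spearmanr

# Dummy held-out split: y_true would be ground-truth memorability, y_pred a model's output.
y_true = np.random.rand(500)
y_pred = y_true + 0.2 * np.random.randn(500)

rho, p_value = spearmanr(y_pred, y_true)   # the rank correlation reported in Table 1
print(f"Spearman rho = {rho:.3f}")
```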
## 4 Modification of Image Memorability with Deep Learning
### Generative Adversarial Networks
Generative Adversarial Networks (GAN) are a class of deep learning models that can be used to generate a wide variety of natural-looking pictures with subtle variations in their aesthetic characteristics. GANs are made up of two parts: a generator that tries to create actual data and a discriminator that seeks to distinguish between actual and generated data. GANs have been used for picture translation tasks as well as to create photorealistic pictures. Previously efforts have been made to extract the
| Method (on LaMem dataset) | Spearman's Rank Correlation (\(\rho\)) |
| --- | --- |
| MemNet (Khosla et al. [4]) | 0.64 |
| MemoNet (Baveye et al. [7]) | 0.636 |
| IC features with SVR (Squalli-Houssaini et al. [16]) | 0.65 |
| AMNet (Fajtl et al. [25]) | 0.677 |
| MemBoost (Perera et al. [20]) | 0.67 |
| EMNet (Basavaraju et al. [23]) | 0.671 |
| CNN with PiCANet (Zhu et al. [26]) | 0.67 |
| ResMem-Net (Praveen et al. [24]) | 0.679 |

Table 1: Comparison of different prediction models based on their Spearman's rank correlation between the predicted values and ground truth
"memorability" component from already trained GANs, enabling the creation of pictures that are thought to be memorable. Such a method still relies on repetitive detection models and calls for a seed picture whose memorability is later modified.
Sidorov et al. [28] demonstrate how cutting-edge deep learning methods created for controlled image-to-image translation can enhance visual memorability. Actual media creators (photographers, designers, and other artists) cannot employ advanced mathematical or behavioral features that may affect image memorability in their work. Hence, this work examines how commonly used photo editing operations affect visual memorability. The experiment used the dataset to train models such as VAE/GAN, StarGAN, and AttGAN; however, only AttGAN produced a fruitful outcome. This method resulted in changes of up to 33% and operated effectively in both directions (increasing and decreasing memorability). Further, the study demonstrated that basic image processing operations do not change the memorability of an image in a predictable way. The analysis of the data showed that procedures like blurring, darkening, and discoloration, which cause information loss, also result in a reduction in memorability. Sharpening was the only operation that consistently improved the memorability score when applied to an image.
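The editing probe described above can be reproduced in a few lines with torchvision transforms; `predict_memorability` is a placeholder for any trained predictor from Section 3 (a simple image statistic is used here only so the sketch runs without a trained model).

```python
import torch
from torchvision.transforms.functional import gaussian_blur, adjust_sharpness, adjust_brightness

def predict_memorability(img: torch.Tensor) -> float:
    # Placeholder heuristic, not a real memorability model.
    return float(img.std())

img = torch.rand(3, 224, 224)   # stand-in image in [0, 1]
edits = {
    "original":  img,
    "blurred":   gaussian_blur(img, kernel_size=9),
    "darkened":  adjust_brightness(img, 0.5),
    "sharpened": adjust_sharpness(img, 2.0),
}
for name, edited in edits.items():
    print(name, predict_memorability(edited))
```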
In 2019, Goetschalckx et al. [29] proposed a new model using GANs to study the memorability of an image. The model architecture consisted of a transformer, an assessor, and a generator. The input taken for this model was the latent space of images, not the actual images. A set of 4,00,000 latent space vectors was used to implement this model. The GAN model used here was the BigGAN model pre-trained on the ImageNet [18] dataset, while the assessor model implemented was the CNN MemNet. This study provided a methodology for visualizing what a GAN-based model has discovered about its target picture attribute from another model (the assessor in this case). This study can be expanded by substituting the Assessor model in the present framework for any other image properties that are difficult to comprehend and visualize, for example, the emotional valence of an image. Through behavioral human memory testing using edited photos, it was confirmed that the model successfully changed GAN images to make them more (or less) memorable.
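A much simplified version of this generator-assessor loop is sketched below: tiny stand-in networks replace BigGAN and MemNet, and a single latent direction is optimized instead of the learned transformer used in [29]. Only the gradient flow (assessor score differentiated back into the latent space) is meant to match the idea described above; everything else is an illustrative assumption.

```python
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Stand-in for a pretrained GAN generator (e.g., BigGAN)."""
    def __init__(self, z_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 3 * 64 * 64), nn.Tanh())
    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

class ToyAssessor(nn.Module):
    """Stand-in for a pretrained memorability predictor (e.g., MemNet)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1), nn.Sigmoid())
    def forward(self, img):
        return self.net(img)

generator, assessor = ToyGenerator(), ToyAssessor()
for p in [*generator.parameters(), *assessor.parameters()]:
    p.requires_grad_(False)                          # both networks stay frozen

z = torch.randn(8, 128)                              # seed latent vectors
direction = torch.zeros(128, requires_grad=True)     # candidate "memorability" direction
opt = torch.optim.Adam([direction], lr=1e-2)

for step in range(200):
    score = assessor(generator(z + direction)).mean()   # predicted memorability of shifted latents
    loss = -score                                        # maximize predicted memorability
    opt.zero_grad(); loss.backward(); opt.step()
```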
The first example of a GAN trained from the start specifically to create memorable landscape images was presented by Davidson et al. [30]. It made use of information on two-dimensional memorability gathered from human experiments. To produce eye-catching graphics, a powerful generative model, such as a Wasserstein GAN (WGAN), is adjusted. The study evaluated the model's output and looked at how memorability levels affected how "genuine" the visuals were that were produced. Using an independent memorability prediction network, the authors discovered that images designed to be recalled were more memorable than images designed to be less
memorable. Therefore, the formation of semantic features and the spatial relationships between these elements are controlled by the memorability of a picture.
The trends in recent studies on image memorability modification demonstrated the use of GANs for generating natural-looking pictures with subtle variations in their image characteristics. Image memorability is one of the characteristics that can be modified to produce a continuum of images. The studies demonstrated that image characteristics like blurring, darkening, sharpening, etc. affect the memorability of the images in different ways. The visualizations in the various previous studies presented several potential elements that might contribute to the understanding of why some images are more memorable than others. It was also found that memorability may sometimes affect the level of realness a picture holds.
## 5 Limitations
In the earliest works on memorability prediction, the main issue faced by researchers was to find and keep the memorable elements of a picture while enhancing forgettable ones. They aimed at finding the parts of images that are memorable or forgettable so that information consumption can be made easier. Later it was found that an emotional bias also influences the memorability of an image, which showed memorability to be more subjective and dependent on the observer. If one considers only the basic visual information of the pictures, the memorability forecast will be inaccurate. After some works provided sufficient results on the LaMem dataset, working on a larger dataset was the next challenge. Several factors were considered for dataset creation, including the number of observers, the validity of preserving one score per image, and the necessity of reconsidering the sequence of the photos. A larger dataset is expected to yield better predictions, because a large dataset makes it easier for neural networks to generalize. The ResMem-Net model is one of the latest CNN-based models for image memorability prediction. The main challenge this model faced was that it was not deployable on mobile GPUs, constraining its viability and applications. Furthermore, the ResNet model and LSTM units used in this study were not the latest or most efficient available.
Research on image memorability modification mainly faced challenges in correlating the psychophysical features of images with their memorability scores. Researchers have not yet established a general way of modifying images, that is, increasing or decreasing their memorability, given an image as input to the model. The current models lack an image-to-image translation method for memorability modification.
## 6 Conclusion and Future Scope
By performing a comparative study on the previous works in image memorability prediction, it can be established that ResMem-Net outperforms the rest of the prediction models. It is generally accepted that attention-based neural networks outperform other types of neural networks when it comes to predicting memorability. Another pattern that may be seen is that LSTMs do better in this job overall. The challenge of memorability requires more than simply understanding which portions of the image to concentrate on. It's also important to take into account how the image's components correlate. The use of GANs for creating natural-looking images with small variations in their visual attributes was shown by trends in recent studies on image memorability modification. The results showed that numerous picture qualities, such as blurring, darkening, and sharpening, had an impact on how memorable an image is. The visualizations used in the earlier research provide many potential components that might help explain why certain pictures are more remembered than others. Additionally, it was discovered that occasionally a picture's memorability might influence how realistic it seems.
Various works on image memorability prediction and modification using deep learning were reviewed in this survey, and some trends were noticed. To summarize, predicting image memorability has dramatically benefited from deep learning and is achieving near human-level consistency. However, there are some questions on the validity of current memorability scores, with only one significant large-scale image memorability dataset available. The progress of modifying image memorability is relatively slow-paced, even with the introduction of deep learning in this domain. Many challenges need to be addressed to modify the memorability of images successfully. Nevertheless, image-to-image translation GANs are showing promising results.
In the quest of using deep learning techniques to accurately predict and modify the memorability of images, despite its success, there are still areas where further research is required. Deep learning is heavily dependent on the size and diversity of data available. There is a need to create more large-scale datasets because there is currently just one substantial large-scale image memorability dataset available. In memorability prediction, it was noted that other cognitive attributes such as emotional bias also impact the memorability of images. A future direction could be to further explore the relationship between different cognitive attributes with memorability and take them into consideration while training the deep learning models. In memorability modification, GANs have shown impressive results and a further direction in this could be to carefully condition the network. Moreover, methods like stable diffusion can also be experimented with to yield more realistic images. |
2309.03731 | Using representation balancing to learn conditional-average dose
responses from clustered data | Estimating a unit's responses to interventions with an associated dose, the
"conditional average dose response" (CADR), is relevant in a variety of
domains, from healthcare to business, economics, and beyond. Such a response
typically needs to be estimated from observational data, which introduces
several challenges. That is why the machine learning (ML) community has
proposed several tailored CADR estimators. Yet, the proposal of most of these
methods requires strong assumptions on the distribution of data and the
assignment of interventions, which go beyond the standard assumptions in causal
inference. Whereas previous works have so far focused on smooth shifts in
covariate distributions across doses, in this work, we will study estimating
CADR from clustered data and where different doses are assigned to different
segments of a population. On a novel benchmarking dataset, we show the impacts
of clustered data on model performance and propose an estimator, CBRNet, that
learns cluster-agnostic and hence dose-agnostic covariate representations
through representation balancing for unbiased CADR inference. We run extensive
experiments to illustrate the workings of our method and compare it with the
state of the art in ML for CADR estimation. | Christopher Bockel-Rickermann, Toon Vanderschueren, Jeroen Berrevoets, Tim Verdonck, Wouter Verbeke | 2023-09-07T14:17:44Z | http://arxiv.org/abs/2309.03731v2 | # Learning continuous-valued treatment effects through representation balancing
###### Abstract
Estimating the effects of treatments with an associated dose on an instance's outcome, the "dose response", is relevant in a variety of domains, from healthcare to business, economics, and beyond. Such effects, also known as continuous-valued treatment effects, are typically estimated from observational data, which may be subject to dose selection bias. This means that the allocation of doses depends on pre-treatment covariates. Previous studies have shown that conventional machine learning approaches fail to learn accurate individual estimates of dose responses under the presence of dose selection bias. In this work, we propose CBRNet, a causal machine learning approach to estimate an individual dose response from observational data. CBRNet adopts the Neyman-Rubin potential outcome framework and extends the concept of balanced representation learning for overcoming selection bias to continuous-valued treatments. Our work is the first to apply representation balancing in a continuous-valued treatment setting. We evaluate our method on a newly proposed benchmark. Our experiments demonstrate CBRNet's ability to accurately learn treatment effects under selection bias and competitive performance with respect to other state-of-the-art methods.
_Keywords:_ Causal Machine Learning, Potential Outcomes, Neyman-Rubin, Balanced Representation Learning, Dose Response
## 1 Introduction
Across domains, understanding the effects of treatments with an associated dose on the outcome of an instance is of key interest for personalized decision-making [1, 2]. Applications span across, e.g., healthcare, public policy, economics, and manufacturing [3, 4, 5, 6]. However, learning such an effect (also known as the "individual dose response", or "continuous-valued treatment effect") is difficult for two main reasons. First, only factual observations are available for modeling, that is, every observation is observed in combination with just one "factual" dose and no other "counterfactuals". This is commonly known as the "fundamental problem of causal inference" [7]. Second, the training data is typically observational and therefore suffers from various types of bias [8].
In this work, we investigate the problem of learning individual dose responses in the presence of dose selection bias. Dose selection bias and, more generally, treatment selection bias occurs when treatment assignment follows some policy based on the characteristics of an observation [9]. In such an environment, finding an unbiased estimator of a treatment effect is a complicated task where conventional supervised learning methods may fail [10]. Therefore, previous work has proposed specialized methodologies to deal with the effect of observational data in settings with continuous-valued treatments [11, 12, 13, 14, 1, 15, 16, 17, 3, 4, 6]. Yet, many of these methods are suited only for specific use cases (e.g., policy evaluation), rely on strict parametric assumptions, or are increasingly complex in their architecture, potentially hindering real-world implementation.
When training conventional machine learning algorithms, dose selection bias could lead to overfitting training observations for a particular dose interval [3] and consequently low generalization performance for counterfactual inference. To tackle this problem, we propose CBRNet, a method for **C**ontinuously-valued treatment effect estimation through **B**alanced **R**epresentation learning with neural **Net**works. We build on recent advancements in representation learning and argue that, in order to find a well-functioning and unbiased model of individual dose response, we can find a balanced representation of the training data that is independent of the assigned dose. We hypothesize that this approach is also an effective solution to the problem of losing the influence of the dose on the outcome in the high-dimensional latent space when training neural networks with multiple hidden layers [10, 3]. CBRNet clusters observations based on their pre-treatment covariates and balances the data based on distances between clusters in latent space, hereby generalizing the approach presented in [10] to the continuous-valued treatment setting.
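To make the idea tangible, the sketch below shows one minimal way to combine covariate clustering with a representation-balancing penalty during training. The clustering method, the linear-MMD-style penalty on cluster-wise representation means, the network sizes, and all hyperparameters are illustrative assumptions, not the exact CBRNet architecture described later in this paper.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

# Hypothetical observational data: covariates X, doses s, factual outcomes y.
X = torch.randn(512, 10); s = torch.rand(512); y = torch.rand(512)
clusters = torch.as_tensor(KMeans(n_clusters=4, n_init=10).fit_predict(X.numpy()))

phi = nn.Sequential(nn.Linear(10, 32), nn.ELU(), nn.Linear(32, 16), nn.ELU())   # representation
head = nn.Sequential(nn.Linear(16 + 1, 32), nn.ELU(), nn.Linear(32, 1))         # outcome head
opt = torch.optim.Adam(list(phi.parameters()) + list(head.parameters()), lr=1e-3)

alpha = 0.5   # weight of the balancing penalty (assumed)
for epoch in range(100):
    r = phi(X)
    y_hat = head(torch.cat([r, s.unsqueeze(1)], dim=1)).squeeze(1)
    factual_loss = nn.functional.mse_loss(y_hat, y)
    # Balancing: push the cluster-wise representation means toward each other,
    # so the learned representation carries little information about the cluster (and dose).
    means = torch.stack([r[clusters == c].mean(0) for c in range(4)])
    balance = ((means - means.mean(0)) ** 2).sum()
    loss = factual_loss + alpha * balance
    opt.zero_grad(); loss.backward(); opt.step()
```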
To evaluate the capabilities of CBRNet, we develop a novel benchmark for simulating different levels of dose selection bias. We extend the experiments of Bica et al. [4] by shedding light on drivers of selection bias that were not controlled in previous benchmarks. We test our method against established methods for dose response estimation and show that CBRNet performs competitively against the state of the art. Our contributions to the literature are as follows:
* we introduce CBRNet, a novel method for dose response estimation;
* we extend representation balancing to the continuous-valued treatment setting, and show how it may help to overcome the effects of selection bias;
* we propose a new benchmark for dose response learning, enabling a more comprehensive investigation of different aspects of dose selection bias in observational data;
* we conduct a series of experiments to understand the sensitivity of our method and existing methods and study the impacts of its different components on counterfactual inference and performance.
The remainder of this paper is organized as follows: First, in Section 2, we provide an overview of related literature. Then, in Section 3, we introduce the problem formulation. Subsequently, we illustrate and explain the workings of CBRNet in Section 4. Section 5 presents the results of extensive experiments to evaluate the proposed method. The paper is concluded in Section 7, which also provides an outlook on future work.
## 2 Related Works
The majority of literature on estimating treatment effects has been concerned with binary-valued treatments, that is, measuring the effect of applying a treatment, over not applying it [18, 19]. Less attention has been paid to settings with continuous-valued treatments. In these settings, an observation can be assigned an infinite amount of different treatment options by varying the dose of the treatment. Although less studied, such settings are of great importance in a variety of fields. These include personalized medicine [1, 4], public policy [20, 21], business and economics [22], manufacturing [5], or education [23].
The continuous-valued treatment setting further complicates learning treatment effects. Experimental evaluation of these effects through, for example, randomized controlled trials (RTCs) [24, 25, 26] is often infeasible due to both ethical concerns [27, 28] and the large number of possible treatment configurations compared to a limited number of candidates available for random assignment. As a result, the effects of continuous-valued treatments must usually be estimated from observational data, which comes with its own set of challenges.
We find two main fields of study with respect to continuous-valued treatments. First, the field of policy learning aims at deriving the optimal dose of a treatment, not necessarily considering the effects of other suboptimal doses. Recent work on policy evaluation for continuous-valued treatments includes [12, 13, 14, 15, 16, 17]. Second, and the focus of our research, is the estimation of individual treatment effects (ITE). We base our research on the Neyman-Rubin potential outcomes framework [24, 29, 30, 31], and aim to estimate the effect of any possible dose of a treatment for a specific instance. A detailed formulation of our problem statement will be provided in Section 3.
Previous works take different approaches to tackling the estimation of dose responses. [11, 32, 1] use generalized propensity scores (GPS) to aid in the estimation of continuous-valued treatment effects. The GPS is defined as the probability of being assigned a certain dose of a treatment conditional on an observation's pre-treatment covariates and is an extension to the propensity score proposed by [33] for binary-valued treatments. While not an estimate of the treatment effect itself, the GPS can be added to models in order to overcome dose selection bias. However, GPS calculation typically requires parametric assumptions that might not hold in real-life applications [34].
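As an illustration of how a GPS is typically obtained under such parametric assumptions, the snippet below follows the common normal-linear recipe: the dose is regressed on the covariates and the residuals are assumed Gaussian. The data-generating process is synthetic and every modeling choice here is an assumption made for illustration only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy.stats import norm

# Synthetic observational data with dose selection bias: the dose depends on the covariates.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
s = 0.5 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(scale=0.3, size=1000)

# Normal-linear GPS: model the conditional dose distribution, then evaluate its density
# at the dose each unit actually received.
model = LinearRegression().fit(X, s)
sigma = (s - model.predict(X)).std()
gps = norm.pdf(s, loc=model.predict(X), scale=sigma)
```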
In recent years, we have seen the use of machine learning for the estimation of dose responses. [4] apply generative adversarial networks (GANs) to overcome dose selection bias. Their architecture SCIGAN works in three steps. First, a GAN is trained to generate counterfactuals for a given observation. Second, per training observation, the GAN produces a finite number of counterfactuals to augment the training data and to overcome selection bias. Third, a conventional machine learning method is trained to learn the dose response from the augmented data. Their approach generalizes the work of [35] for the case of continuous-valued treatments. However, as noted in [36], the application of generative methods is complicated, as, for example, the learning process requires optimizing intractable criteria. Combined with the general problem of evaluating and validating methods for treatment effect estimation (see again the "fundamental problem of causal inference" [7] and Section 5.2), we consider the applicability of generative methods for causal inference to be limited, as the convergence and correctness of a model are hard to achieve and verify. Schwab et al. [37] extend the work of Shalit et al. [10] by proposing a neural network architecture with a finite number of individual heads for estimating discretely-valued treatments. This idea is taken advantage of by [3] to propose DRNet. DRNet, for a single continuous-valued
treatment, trains a finite number \(E\) of head networks, all sharing a number of hidden layers. Each head network is trained to predict the outcome of one of \(E\) equally sized intervals of the observed range of doses. The strength of DRNet lies in the potential to learn highly flexible (and potentially different) models for different dose intervals [4]. Finally, Nie et al. [6] propose VCNet as a response to DRNet and address both DRNet's inflexible approach to dividing the dose space into equally-sized intervals and its tendency to produce discontinuous estimates of dose response. VCNet builds on a varying-coefficient model [38]. In this architecture, a first neural network with a number of shared layers is trained to estimate a generalized propensity score. Then, a second inference network is trained, which takes as input the generalized propensity score estimate and varies its network coefficients as a function of the assigned dose. Under correct specification, DRNet is expected to be a special case of VCNet.
None of the previously established approaches, to the best of our knowledge, leverages representation balancing as used in the counterfactual regression framework (CFR) of Shalit et al. [10]. Therefore, our method is the first to generalize the idea of representation balancing for counterfactual inference to the continuous-valued treatment setting. Additionally, we present an architecture that is significantly simpler than the previously established causal machine learning methods. Instead of relying on added flexibility along different dose levels (as in DRNet and VCNet), on a GPS estimate (VCNet and HIE), or on generative methods (SCIGAN), the proposed method addresses selection bias during training. Our method provides a trade-off between (possibly overly simplistic) parametric assumptions and architectural complexity.
## 3 Problem formulation
We consider a situation in which subjects may receive a treatment with a continuous-valued dose, and in which one is interested in learning the impact of such treatment on an outcome from observational data.
Suppose historical data is observed in the form of \((\mathbf{x}_{i},s_{i,f},y_{i,f})\) for \(i=1,...,N\). Each of \(N\) observations consists of pre-treatment covariates \(\mathbf{x}_{i}\), the assigned dose of the treatment \(s_{i,f}\), and the outcome of this treatment-dose combination \(y_{i,f}\). We make the following assumptions: \(\mathbf{x}_{i}\) is a realization of random variable \(\mathbf{X}\), defined as the vector of pre-treatment covariates in feature space \(\mathcal{X}\). \(s_{i,f}\) is the observed dose of the treatment associated with observation \(i\), realized from random variable \(S_{f}\in\mathcal{S}\). \(y_{i,f}\) is the observed outcome of the treatment with observed dose \(s_{i,f}\) associated with observation \(i\), realized from random variable \(Y_{f}\in\mathcal{Y}\). We assume dose selection bias, meaning that the pre-treatment covariates of an observation confound the distribution of the factual dose. We follow Richardson et al. [39] and express the causal dependencies in the data as a single world intervention graph (SWIG), visualized in Figure 1.
We will continue to refer to all observed data as "factual" observations (denoted by subscript \(f\)), and to all unobserved combinations of instances and doses as "counterfactuals".
We follow the Neyman-Rubin potential outcome framework [24, 29, 30, 31], with a unique outcome \(Y(s,\mathbf{x})\) for every combination of pre-treatment covariates \(\mathbf{x}\) and dose \(s\) in some domain of interest \(\mathcal{S}\). We are interested in finding an unbiased estimate of the outcome of a treatment-dose pair conditional on an observation's pre-treatment covariates, that is, the individual dose response:
\[\mu(s,\mathbf{x})=\mathbb{E}\left(Y(s)|\mathbf{X}=\mathbf{x}\right). \tag{1}\]
We make three standard assumptions that are necessary for the identification of \(\mu(s,\mathbf{x})\) from observational data [1, 40]:
**Assumption 1**.: _Unconfoundedness: The assigned dose \(s\) and the potential outcome \(Y(s)\) are conditionally independent given the pre-treatment covariates \(\mathbf{X}\), or formally (cf. Figure 1):_
\[\{Y(s)|s\in\mathcal{S}\}\perp\!\!\!\perp S_{f}|\mathbf{X}\]
**Assumption 2**.: _Overlap: Every observable combination of pre-treatment covariates \(\mathbf{x}\) has a non-zero probability of being assigned any of the possible doses \(s\), or formally:_
\[\forall\mathbf{x}\in\mathcal{X}\text{ such that }\mathbb{P}(\mathbf{x})>0\text{, we have }0< \mathbb{P}(s|\mathbf{x})<1\text{ for each }s\in\mathcal{S}\]
**Assumption 3**.: _Consistency: The factually observed outcome of a pair of pre-treatment covariates \(\mathbf{x}\) is unaffected by the assignment of doses to other observations, i.e., the observed factual outcome is the true potential outcome, or formally:_
\[Y_{f}=Y\left(S_{f}\right)\]
## 4 CBRNet
A central challenge in learning dose responses from observational data is dose selection bias (cf. Figure 2): models are trained on observations that are not randomly sampled for specific dose levels. In particular, under the presence of strong dose selection bias, a machine learning method can learn to infer the dose (or the treatment option more generally) from an observation's pre-treatment covariates. This will result in suboptimal generalization for counterfactual inference [3]. Further, the impact of the dose parameter on the outcome could get lost in a high-dimensional latent space [3, 10].
In the binary-valued treatment setting, the seminal counterfactual regression framework (CFR) of [10] addresses this issue by first learning a balanced representation of the pre-treatment covariates of the training data which is invariant of the treatment option. Their approach regularizes in latent space the distance between observations that were treated and those that were untreated. Distance is measured using an integral probability metric (IPM) [47, 48]. After this regularization, the treatment option can no longer be inferred from an observation's pre-treatment covariates, as such preventing overfitting.
The continuous-valued treatment setting complicates this approach, as observations cannot trivially be split into two (or multiple) groups by means of the treatment option. In fact, there is an infinite amount of possible groupings based on the assigned dose. Yet, as we expect similar instances to be assigned similar doses, we expect that clusters in the data are informative about the dose assignment, offering an alternative way of splitting training data for representation balancing.
Hence, we propose to find clusters in the data based on the pre-treatment covariates of an observation (as well as the assigned dose level). If the pre-treatment covariates of an observation are driving dose selection bias (i.e., if an established dose assignment policy was based on clusters, or could be approximated by such), finding clusters will enable us to find observations with similar doses. Reducing the distance between clusters in latent space should aid in dealing with bias for counterfactual inference. Such an approach will allow building models which are simple in their architecture, and which differ from conventional methods only in training. We propose CBRNet to be a standard feed-forward neural network, eliminating the need for multiple prediction head networks as in DRNet, or generative methods as in SCIGAN.
Figure 2: Illustrative visualization of dose selection bias
### Architecture
We visualize the architecture of CBRNet in Figure 3. CBRNet consists of three parts \(\Phi\), \(\Delta\), and \(I\), based on our motivation for such a method above. \(\Phi:\mathcal{X}\rightarrow\mathbb{R}^{n}\) is a representation learning function that is in place to learn the balanced representation of the data. It maps the pre-treatment covariates into \(n\)-dimensional latent space. We use a standard feed-forward neural network for \(\Phi\), where both the number of layers and the number of hidden nodes per layer are hyperparameters. This is motivated by [10], yet other approaches are possible [49]. \(\Delta:\mathcal{X}\times\mathcal{S}\rightarrow\{1,\dots,k\}\) is a clustering function mapping an observation to one of \(k\) clusters by taking as input the concatenated pre-treatment covariates and doses of an observation. \(\Delta\) serves to identify clusters stemming from an assignment policy and could be any clustering function (for an overview see, e.g., [50]). We propose to use a k-means clustering [51], where \(k\) is a hyperparameter. K-means clustering has previously been used in treatment effect estimation in, e.g., [52]. The motivation to use k-means for CBRNet lies in its minimization of Euclidean distances between observations within a cluster, as well as in its wide adoption in business and beyond [53]; we expect that k-means can approximate well the clusters generated by a potential ground-truth dose assignment mechanism. We train \(\Delta\) on available training observations. \(\Delta\) is not altered during the training of the remaining network components (see paragraph below). \(I:\mathbb{R}^{n}\times\mathcal{S}\rightarrow\mathbb{R}\) is an inference function taking as input the transformed pre-treatment covariates (as given by \(\Phi\)) and a dose. \(I\) is put in place to learn the final dose response model. As for \(\Phi\), we propose \(I\) to be a feed-forward neural network with flexible hyperparameters, as adopted in [3, 4, 10], and motivated by their flexibility for inference tasks [54]. However, in principle, any other inference method could be used (for an overview, see, e.g., [55]).
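To make this concrete, the following is a minimal PyTorch sketch of the three components; the layer sizes, activation functions, latent dimension, and the use of scikit-learn's KMeans for \(\Delta\) are illustrative assumptions rather than the exact configuration used in our experiments (cf. Section 5.3).

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class CBRNet(nn.Module):
    """Sketch of CBRNet: representation net Phi, cluster function Delta, inference net I."""

    def __init__(self, num_covariates, latent_dim=32, num_clusters=5):
        super().__init__()
        # Phi: maps pre-treatment covariates into an n-dimensional latent space.
        self.phi = nn.Sequential(
            nn.Linear(num_covariates, 64), nn.ELU(),
            nn.Linear(64, latent_dim), nn.ELU(),
        )
        # I: maps (Phi(x), dose s) to a predicted outcome.
        self.inference = nn.Sequential(
            nn.Linear(latent_dim + 1, 64), nn.ELU(),
            nn.Linear(64, 1),
        )
        # Delta: k-means on the concatenated (x, s); fitted once and kept fixed during training.
        self.delta = KMeans(n_clusters=num_clusters, n_init=10)

    def fit_clusters(self, x_np, s_np):
        """Fit Delta on the training covariates and doses (numpy arrays)."""
        self.delta.fit(np.concatenate([x_np, s_np.reshape(-1, 1)], axis=1))

    def forward(self, x, s):
        z = self.phi(x)                                   # balanced representation of x
        y_hat = self.inference(torch.cat([z, s.unsqueeze(-1)], dim=-1))
        return y_hat.squeeze(-1), z                       # z is returned for the balancing penalty
```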
**Training:** CBRNet is trained by minimizing a loss \(L\) over a data set \(\mathcal{D}\) of training observations. \(L\) is defined as:
\[L(\mathbf{x},s,y)=MSE(y,\hat{y})+\lambda*R(\Phi,\Delta,\mathcal{D}) \tag{2}\]
\(MSE(y,\hat{y})\) is a standard mean squared error loss over all \(N\) training observations:
\[MSE(y,\hat{y})=\frac{1}{N}\sum_{i=1}^{N}(y_{i}-\hat{y}_{i})^{2} \tag{3}\]
Figure 3: Architecture of CBRNet
\(y_{i}\) is the true outcome of observation \(i\), \(\hat{y}_{i}\) is the estimated outcome and is calculated by the inference function \(I\). The representation balancing of CBRNet is enforced by a regularization term \(R\) with a tuning parameter \(\lambda\). We generalize previous work from [10] and calculate \(R\) as a combination of integral probability metrics over clusters identified by \(\Delta(\cdot)\):
\[R(\Phi,\Delta,\mathcal{D})=\frac{1}{k}\sum_{i=1}^{k}IPM\left(\{\Phi(\mathbf{x}_{j})\}_{j:\Delta(\mathbf{x}_{j},s_{j})=i},\ \{\Phi(\mathbf{x}_{j})\}_{(\mathbf{x}_{j},s_{j})\in\mathcal{D}}\right) \tag{4}\]
\(k\) is the number of clusters found by \(\Delta(\cdot)\), and \(IPM\) is any integral probability metric. In our proposal, \(R(\cdot)\) compares the distribution of each cluster with the distribution of the total training data set \(\mathcal{D}\), and regularizes accordingly. This approach is grounded in the assumption that the pre-treatment covariates in the training data set \(\mathcal{D}\) are unbiased, and that regularizing cluster distributions to match the distribution of the total population will have the desired balancing effect. While any IPM could be used, e.g., a Wasserstein metric [56, 57], we propose using the maximum mean discrepancy (MMD) metric [58] due to the low computational complexity of its empirical approximation. We combine the MMD with a radial basis function kernel for better balancing performance [59]. The regularizing effect of \(R(\cdot)\) on \(\Phi\) is visualized in Figure 4.
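For illustration, a minimal sketch of this loss with an RBF-kernel MMD follows; the kernel bandwidth, the biased MMD estimator, and the handling of small clusters within a batch are our own simplifying assumptions.

```python
import torch

def rbf_mmd2(a, b, sigma=1.0):
    """Biased estimate of the squared MMD between samples a (m x d) and b (n x d), RBF kernel."""
    def k(x, y):
        return torch.exp(-torch.cdist(x, y) ** 2 / (2.0 * sigma ** 2))
    return k(a, a).mean() + k(b, b).mean() - 2.0 * k(a, b).mean()

def balancing_penalty(z, cluster_ids, num_clusters, sigma=1.0):
    """R(Phi, Delta, D): mean MMD between each cluster's representations and all representations."""
    terms = []
    for c in range(num_clusters):
        mask = cluster_ids == c
        if mask.sum() < 2:                    # skip clusters that are (almost) empty in this batch
            continue
        terms.append(rbf_mmd2(z[mask], z, sigma))
    return torch.stack(terms).mean() if terms else z.new_zeros(())

def cbrnet_loss(y, y_hat, z, cluster_ids, num_clusters, lam=1.0):
    """Training loss of Equation (2): factual MSE plus the weighted balancing penalty."""
    return torch.mean((y - y_hat) ** 2) + lam * balancing_penalty(z, cluster_ids, num_clusters)
```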
**Hyperparameter tuning:** Conventional machine learning research has established best practices for hyperparameter tuning and model selection, such as cross-validation [60]. However, these techniques cannot be applied immediately for the estimation of treatment effects, as predicting factual outcomes does not ensure performance on counterfactuals. As counterfactuals are unobserved and validation data are typically subject to the same levels of dose selection bias as training data, alternative methods for hyperparameter tuning and model selection must be used.
The previously discussed SCIGAN architecture [4] (cf. Section 2) circumvents this issue by generating counterfactuals to a validation set via a GAN. Alternatively, [3] propose a nearest-neighbor type method to calculate errors in estimating counterfactual outcomes. Their approach, however, may suffer from data sparsity for high dimensional problems, and likewise from increasing levels of selection bias.
Instead, we follow and extend [61]. For the binary-valued treatment setting, they propose to evaluate models based on a weighted mean squared error on a validation set, where observations are weighted using an inverse of a propensity score estimate. This approach was chosen to give a higher weight to observations with a lower propensity, mimicking a counterfactual evaluation.

Figure 4: t-SNE visualization of pre-treatment covariates and their hidden representation
We extend to the continuous-valued treatment setting and propose to evaluate model performance via:
\[MSE_{val}(y,\hat{y},\pi)=\frac{1}{\sum_{j=1}^{N}\frac{1}{\pi_{j}}}\sum_{i=1}^{N}\frac{1}{\pi_{i}}\left(y_{i}-\hat{y}_{i}\right)^{2} \tag{5}\]
\(\pi_{i}\) is the generalized propensity score (GPS) of observation \(i\), used to reweight the importance of validation observations. CBRNet can be used with any GPS estimate. In our study, we choose the approach presented in [11] and calculate the generalized propensity score by modeling \(s\) as a linear function of the pre-treatment covariates, assuming normally-distributed errors.
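A minimal sketch of this model-selection criterion is given below, with the GPS obtained from a linear-Gaussian dose model as in [11]; the function names and the clipping of very small propensities are our own choices.

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LinearRegression

def fit_linear_gaussian_gps(x_train, s_train):
    """GPS as in [11]: model the dose as a linear function of x with Gaussian errors."""
    reg = LinearRegression().fit(x_train, s_train)
    sigma = (s_train - reg.predict(x_train)).std(ddof=1)
    return lambda x, s: norm.pdf(s, loc=reg.predict(x), scale=sigma)

def weighted_validation_mse(y, y_hat, pi, eps=1e-3):
    """Validation criterion of Equation (5): inverse-GPS-weighted mean squared error."""
    w = 1.0 / np.clip(pi, eps, None)          # clip tiny propensities for numerical stability
    return float(np.sum(w * (y - y_hat) ** 2) / np.sum(w))
```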
## 5 Experimental Evaluation
In this section, we present an experimental evaluation of CBRNet. In Section 5.1, we present the experimental framework, including a novel process to generate semi-synthetic test data. Section 5.2 discusses evaluation metrics and the methods that are used for benchmarking. Section 5.3 will briefly discuss the implementation of methods and link to the source code. Our experimental results are presented in Section 6. Additionally, we evaluate the performance of CBRNets on previously established data sets in Section 6.1, and analyze hyperparameter robustness in Section 6.2.
### Data generation
In order to enable thorough testing and evaluation of our method, we make use of semi-synthetic data. Using a real, non-augmented data set would prevent testing the performance of CBRNet for counterfactual inference [3, 4] due to the fundamental problem of causal inference, that is, the unobservability of counterfactuals. Previously, [4] proposed an approach to generating test data for the estimation of continuous-valued treatment effects, which we build on. The approach starts from a set of pre-treatment covariates. Every observation is then assigned a modal dose as a linear combination of its pre-treatment covariates. The assigned dose is subsequently sampled from a Beta-distribution with this mode, whose variance is controlled via the parameter \(\alpha\), the strength of the selection bias. If no bias is simulated, the Beta-distribution is equivalent to a uniform distribution over possible dose levels. Under maximum bias, every observation is assigned exactly the dose mode. For a full discussion see [4].
While arguably elegant, this data-generating process might not be representative of all real-world applications of treatment effect estimation, as the dose assignment is instance-dependent. Therefore, the distribution of observed doses is dependent on the distribution of the pre-treatment covariates in the input space \(\mathcal{X}\). We expect this to have a critical impact on the complexity (or "hardness") of the data for treatment effect estimation. We show in Figure 5 that for an elevated level of bias (\(\alpha=5\)), the dose distribution for the data generated in [4] is unimodal, which might not be realistic for a multitude of settings in reality (e.g., pricing, as discussed previously).
We present an extension of this data-generating process that allows for cluster-centric dose assignment and multimodal dose distributions in the training data, in order to imitate many
relevant real-world settings, and better control interlinked factors driving the effects of dose selection bias on treatment effect estimation. We also make use of the TCGA data set [62] due to its availability, high dimensionality, and wide adoption across research fields.
**Step 1 (Clustering):** The incoming pre-treatment covariates used for the data generation are clustered into a distinct number of clusters using k-means clustering. Specifically, we choose \(k=3\). For an observation \(i\), we denote the corresponding cluster as \(c_{i}\). This clustering mimics, for example, customer segmentation in a business context (see also the "News" data set first introduced in [63]), or patient segmentation in a medical context that might be used for a dose assignment.
**Step 2 (Dose assignment):** We assign doses to clusters, instead of individual observations. This is a key difference to previous approaches, and yet again inspired by real-life data-generating processes. We consider two interlinked phenomena in observational data:
First, we will assign a certain modal dose to each cluster. Instead of modeling the modal dose as a combination of the pre-treatment covariates of each observation, we assign doses to clusters and control the difference in modes between clusters. In previous approaches, this property of the data-generating process could not be controlled, as it is determined by the chosen pre-treatment covariates. We will refer to this difference in doses between clusters as the "_inter_-cluster dose variability" (or for the case of [4] as the "inter-observational dose variability"). Specifically, we randomly assign each of the \(k\) clusters in our data to one of the \(k\) modal doses. Modal doses are evenly distributed along \([(1-\beta)/2,(1+\beta)/2]\). \(\beta\) is the parameter varying the inter-cluster variability, with \(\beta=0\) resulting in no variability and all clusters being assigned the same modal dose, and \(\beta=1\) resulting in maximal variability. The mode of cluster \(i\) is also referred to as \(m_{i}\). Our approach allows us to actively control the level of inter-cluster dose variability, while in [4] the variability is solely driven by the pre-treatment covariates.
Figure 5: Distribution of doses in data of [4] with bias strength \(\alpha=5\)

Second, when assigned a certain modal dose, an observation may still receive a different dose due to randomness or errors. This randomness is controlled in [4] with the parameter \(\alpha\), ranging from a completely random assignment of a dose unrelated to the assigned modal dose, to a fully deterministic assignment with every observation assigned the mode. Such variability is of crucial importance for the satisfaction of the overlap assumption discussed in Section 3. We will refer to this level of randomness as the "_intra_-cluster dose variability" (or for the case in [4] as the "intra-observational dose variability"). The intra-cluster dose variability is driven by the variable \(\alpha\), keeping the previously established notation. We enforce intra-cluster variability by sampling the dose for each observation within a cluster from a Beta-distribution \(\text{Beta}(1+\alpha,\omega_{i})\), with \(\alpha\geq 0\), and \(\omega_{i}=((\alpha-1)/m_{c_{i}})+2-\alpha\). \(\alpha=0\) results in a uniform distribution, random assignment of the dose, and maximal intra-cluster variability. For \(\alpha>0\), the mode of the distribution is \(m_{c_{i}}\). For \(\alpha\rightarrow\infty\), the dose assignment becomes fully deterministic, so every observation within a cluster is assigned the cluster mode1, and the intra-cluster variability is minimal.
Footnote 1: For a proof, see [4].
To the best of our knowledge, our approach is the first to differentiate the factors influencing dose selection bias in a systematic way, and it could be combined with other data sets for domain-specific assessments of methods. Figure 6 visualizes the different components of dose selection bias in our data. Figure A1 in the Appendix visualizes the dose distributions for some selected values of \(\alpha\) and \(\beta\).
**Step 3 (Dose response calculation):** After the final dose is sampled, the effect of treatment or the dose response is calculated using a ground truth model. While any functional form of the dose response is possible, we follow [4] and define for our experiments:
\[\mu(s,\mathbf{x})=10\left((\mathbf{w}_{1})^{\intercal}\mathbf{x}+12(\mathbf{ w}_{2})^{\intercal}\mathbf{x}s-12(\mathbf{w}_{3})^{\intercal}\mathbf{x}s^{2} \right)+\epsilon, \tag{6}\]
\(\mathbf{w}_{i}\sim\mathcal{U}((0,1)^{s\times 1})\) for \(i\in\{1,2,3\}\) is a weight vector, \(\epsilon\sim\mathcal{N}(0,1)\) is an error term.
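Putting the three steps together, the following is a minimal sketch of the generator on a generic covariate matrix; the exact Beta parameterization shown here is our own choice, selected so that the properties described above hold (the mode equals the cluster's modal dose and \(\alpha=0\) yields a uniform dose), and all other details are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def generate_semi_synthetic(x, k=3, alpha=3.0, beta=0.6, seed=None):
    """Steps 1-3: cluster covariates, assign cluster-level modal doses, sample doses, compute outcomes."""
    rng = np.random.default_rng(seed)
    n, d = x.shape

    # Step 1: cluster the pre-treatment covariates.
    clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(x)

    # Step 2: modal doses per cluster, evenly spread over [(1 - beta)/2, (1 + beta)/2],
    # randomly matched to clusters (inter-cluster variability).
    modes = np.linspace((1.0 - beta) / 2.0, (1.0 + beta) / 2.0, k)
    rng.shuffle(modes)
    m = modes[clusters]

    # Intra-cluster variability: Beta(1 + alpha, omega); omega is chosen here so that the mode
    # equals the cluster's modal dose and alpha = 0 yields a uniform dose (our parameterization).
    omega = alpha / m + 1.0 - alpha
    s = rng.beta(1.0 + alpha, omega)

    # Step 3: dose response of Equation (6), plus standard normal noise.
    w1, w2, w3 = (rng.uniform(size=d) for _ in range(3))
    y = 10.0 * (x @ w1 + 12.0 * (x @ w2) * s - 12.0 * (x @ w3) * s ** 2) + rng.normal(size=n)
    return s, y, clusters, (w1, w2, w3)   # weights allow re-evaluating the true response at any dose
```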
### Benchmarking
We compare CBRNet against four benchmarking methods: 1) a standard multilayer perceptron (MLP), i.e., a feed-forward neural network that takes both the pre-treatment covariates and the dose level as input to its first layer; MLPs are flexible and potent machine learning methods that have seen wide application across domains, motivating their selection as a benchmark; 2) the Hirano-Imbens estimator (HIE) [11], as a parametric causal method; 3) DRNet and 4) VCNet, representing the state of the art in continuous-valued treatment effect modeling.
Figure 6: Components of dose selection bias

For our experiments, we generate 10 iterations of the data set for each combination of different levels of \(\alpha\in\{0,3,6,9\}\) and \(\beta\in\{0,0.2,0.4,0.6,0.8\}\). We then split the data set into a training set (70%), a validation set (10%), and a test set (20%). We tune CBRNet and all benchmarking methods on each of the resulting 200 data sets and evaluate them via the mean integrated squared error (MISE) over all test observations and over all dose levels, as proposed by [3]:
\[\text{MISE}=\frac{1}{N}\sum_{i=1}^{N}\int_{\mathcal{S}}\left(\mu(u,\mathbf{x}_{i})-\hat{\mu}(u,\mathbf{x}_{i})\right)^{2}\mathrm{d}u \tag{7}\]
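In practice we approximate the integral numerically; a minimal sketch follows, assuming doses lie in \([0,1]\) and that the true and estimated response surfaces are available as callables.

```python
import numpy as np

def mise(mu_true, mu_hat, x_test, num_points=65):
    """Approximate Equation (7): average the integrated squared error over a uniform dose grid."""
    grid = np.linspace(0.0, 1.0, num_points)
    errors = []
    for x in x_test:
        diff = np.array([mu_true(s, x) - mu_hat(s, x) for s in grid])
        errors.append(np.trapz(diff ** 2, grid))
    return float(np.mean(errors))
```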
### Implementation
All experiments are implemented in Python 3.9. All neural network-based methods (MLP, DRNet, VCNet, and CBRNet) are implemented in _PyTorch_[64]. MLP and CBRNet use our own implementation, VCNet is built on the original code provided by [6]. The HIE model is implemented using the _statsmodels_ library [65] and is inspired by the implementation of the method in the _causaldrf_ package [66] for the statistical programming language _R_[67]. Hyperparameters for all methods but CBRNet are tuned based on their mean squared error (MSE) on a validation set. A list of hyperparameters considered per model can be found in Section A. All code used for this project is available online via:
[https://github.com/christopher-br/CBRNet](https://github.com/christopher-br/CBRNet)
## 6 Experimental results
In line with the data-generating process in Section 5, we present the performance of CBRNet along both a fixed level of intra-cluster dose variability (\(\alpha=3.0\), Table 1) and a fixed level of inter-cluster dose variability (\(\beta=0.4\), Table 2), averaged over 10 randomly initialized data sets per combination of \(\alpha\) and \(\beta\). Values in bold mark the best-performing model for a certain bias combination, and values in italics mark the second best. For performances under different levels of \(\alpha\) and \(\beta\), see Tables A4 and A5 in the appendix.

Table 1: MISE per method for \(\alpha=3.0\) and varying levels of \(\beta\)

| Model | \(\beta=0.0\) | \(\beta=0.2\) | \(\beta=0.4\) | \(\beta=0.6\) | \(\beta=0.8\) |
| --- | --- | --- | --- | --- | --- |
| MLP | 1.42 \(\pm\) 0.31 | 1.25 \(\pm\) 0.30 | 1.19 \(\pm\) 0.23 | 0.91 \(\pm\) 0.18 | 0.97 \(\pm\) 0.26 |
| HIE | 2.06 \(\pm\) 0.04 | 2.05 \(\pm\) 0.03 | 2.12 \(\pm\) 0.10 | 2.34 \(\pm\) 0.13 | 2.43 \(\pm\) 0.11 |
| DRNet | 1.32 \(\pm\) 0.10 | 1.15 \(\pm\) 0.12 | 1.19 \(\pm\) 0.21 | 0.95 \(\pm\) 0.14 | 0.75 \(\pm\) 0.09 |
| VCNet | _1.24_ \(\pm\) 0.07 | _1.12_ \(\pm\) 0.17 | _0.90_ \(\pm\) 0.21 | _0.64_ \(\pm\) 0.23 | **0.37** \(\pm\) 0.16 |
| CBRNet | **0.97** \(\pm\) 0.20 | **0.77** \(\pm\) 0.15 | **0.57** \(\pm\) 0.25 | **0.53** \(\pm\) 0.13 | _0.39_ \(\pm\) 0.14 |

Table 2: MISE per method for \(\beta=0.4\) and varying levels of \(\alpha\)

| Model | \(\alpha=0.0\) | \(\alpha=3.0\) | \(\alpha=6.0\) | \(\alpha=9.0\) |
| --- | --- | --- | --- | --- |
| MLP | 0.54 \(\pm\) 0.06 | 1.19 \(\pm\) 0.23 | 1.65 \(\pm\) 0.49 | 2.12 \(\pm\) 0.39 |
| HIE | 0.71 \(\pm\) 0.04 | 2.12 \(\pm\) 0.10 | 2.65 \(\pm\) 0.31 | 2.97 \(\pm\) 0.36 |
| DRNet | 0.64 \(\pm\) 0.02 | 1.19 \(\pm\) 0.21 | _1.37_ \(\pm\) 0.11 | _1.62_ \(\pm\) 0.16 |
| VCNet | **0.19** \(\pm\) 0.02 | _0.90_ \(\pm\) 0.21 | 1.38 \(\pm\) 0.28 | 1.68 \(\pm\) 0.27 |
| CBRNet | _0.29_ \(\pm\) 0.04 | **0.57** \(\pm\) 0.25 | **0.96** \(\pm\) 0.21 | **1.37** \(\pm\) 0.18 |
For a fixed level of intra-cluster dose variability, CBRNet outperforms the benchmarks and is a closely tied second only in the case of high inter-cluster variability. For a fixed level of inter-cluster dose variability, CBRNet outperforms the benchmarks for all values of \(\alpha>0\). Only in the case of \(\alpha=0\), which mimics a randomized controlled trial, does VCNet significantly outperform CBRNet. In aggregate, the results demonstrate that our approach of representation balancing for continuous-valued effect estimation is indeed effective and competitive with the state of the art.
Additionally, the results allow us to better understand the effects of the data-generating process, the impacts of inter- and intra-cluster variability on model performance, and the drivers of selection bias:
* For a fixed level of _inter_-cluster variability (fixing \(\beta\)), reducing the intra-cluster variability (i.e., increasing \(\alpha\)) will complicate learning a dose response. This is mimicking the data-generating process of [4], hence this observation is expected.
* For a fixed level of _intra_-cluster variability (fixing \(\alpha\)), varying inter-cluster variability (i.e., \(\beta\)) has a more complex effect. With no inter-cluster variability (\(\beta=0\)), but some intra-cluster variability (\(\alpha>0\)), the doses of all clusters are centered around a common mode, and for doses increasingly different from this mode, observations become increasingly sparse. In these situations, DRNet excels, due to its ability to learn potentially very different functions for separate dose intervals [3, 4]. The distribution of doses, however, is not driven by dose selection bias (i.e., confounding). Increasing inter-cluster variability now has two effects: 1) It _decreases_ sparseness of observations across all dose levels, aiding the estimation of treatment effects. 2) It _increases_ the effects of dose selection bias (confounding), deteriorating model performance. Figure A1 in the appendix illustrates this phenomenon. This behavior results in model performance that first increases and later decreases as \(\beta\) grows, for a fixed level of \(\alpha\).
### Results on previously established data sets
To evaluate performance on a different, recently proposed data-generating process, we additionally train and test CBRNet on the previously established process of [4]. In this process, observations are not assigned doses per cluster, but rather individually, by first generating a unique modal dose per observation \(i\):
\[s_{i}(\mathbf{x})=(\mathbf{w}_{4})^{\intercal}\mathbf{x} \tag{8}\]
with \(\mathbf{w}_{4}\sim\mathcal{U}((0,1)^{s\times 1})\) a weight vector. The assigned dose is subsequently sampled from a Beta-distribution with this mode, as discussed for the experiments in Section 5.1. For benchmarking, we set the inter-observational variation (the bias level) to \(\alpha=5\), corresponding to a high level of bias (compared to the experiments in [4]). For the ground-truth dose response, we consider all three options proposed in the original manuscript:
\[\text{GT \#1: }\mu(s,\mathbf{x})=10\left((\mathbf{w}_{1})^{\intercal}\mathbf{x }+12(\mathbf{w}_{2})^{\intercal}\mathbf{x}s-12(\mathbf{w}_{3})^{\intercal} \mathbf{x}s^{2}\right)+\epsilon \tag{9}\]
\[\text{GT \#2: }\mu(s,\mathbf{x})=10\left((\mathbf{w}_{1})^{\intercal} \mathbf{x}+\sin(\pi(\frac{(\mathbf{w}_{2})^{\intercal}\mathbf{x}}{(\mathbf{w}_{ 3})^{\intercal}\mathbf{x}})s)\right)+\epsilon \tag{10}\] \[\text{GT \#3: }\mu(s,\mathbf{x})=10\left((\mathbf{w}_{1})^{ \intercal}\mathbf{x}+12s\left(s-0.75\frac{(\mathbf{w}_{2})^{\intercal} \mathbf{x}}{(\mathbf{w}_{3})^{\intercal}\mathbf{x}}\right)^{2}\right)+\epsilon \tag{11}\]
Note that the original data-generating process generates observations to each of these ground truth models simultaneously. CBRNet is not designed to handle multiple distinct treatment options (though such an extension is feasible). We hence test the performance on each of the ground truth models individually2.
Footnote 2: Note that we have not implemented the HIE, due to its low performance in the original manuscript, see for reference [4].
As before, we generate 10 versions of the data sets, train benchmarking models, and average the MISE over all runs. The results in Table 3 show the competitive performance of CBRNet on all ground truth models, performing second best in two out of three cases, and consistently beating the standard MLP.
### Hyperparameter robustness and sensitivity
For training CBRNet, two key hyperparameters need to be set: the strength of the IPM regularization \(\lambda\), and the number of clusters \(k\) to be identified by the k-means clustering. We analyze the impact of varying these parameters on model performance in two experiments.
**Impact of regularization strength \(\lambda\):** Figure 7 visualizes the impact of varying levels of regularization strength in the case without selection bias (\(\alpha=0\) and \(\beta=0\)), and for elevated levels of selection bias (\(\alpha=5.0\) and \(\beta=0.5\)). For each configuration, 10 data sets have been generated, and results are averaged over these iterations. We test two kinds of regularization. First, we test regularizing with a linear MMD. The linear MMD penalizes deviations of feature means across distributions but has little regularizing effect on a distribution's shape or variance. Second, we test regularizing with a radial basis function kernel, as in our main experiments. The goal is to investigate the effects of different IPMs on the performance of CBRNet.
In the case without bias (cf. Figure 7(a)), model performance is not adversely affected by any level of \(\lambda\); in fact, increasing levels of \(\lambda\) lower the MISE for both linear and kernel MMD regularization.
If bias persists in the data (cf. Figure 7(b)), we find that the kernel MMD strictly improves model performance for \(\lambda>0\), whereas the linear MMD decreases model performance, especially for higher values of \(\lambda\). For extreme values (\(\lambda\in\{100,500\}\)), the linear MMD seemingly over-regularizes, resulting in model performance at par with or below the conventional MLP benchmark.

Table 3: MISE on the data-generating process of [4] with \(\alpha=5\)

| Model | GT \#1 | GT \#2 | GT \#3 |
| --- | --- | --- | --- |
| MLP | 2.045 | 1.240 | 3.147 |
| DRNet | **1.632** | 1.226 | _2.975_ |
| VCNet | 2.120 | **0.717** | **2.135** |
| CBRNet | _1.922_ | _0.919_ | 3.012 |
These results support the need for model selection, as discussed in Section 4.
**Impact of assumed number of clusters \(k\):** Figure 8 visualizes the effect of varying the number of assumed clusters for the k-means clustering in CBRNet. For this experiment, again an elevated level of bias was simulated (\(\alpha=5.0\) and \(\beta=0.5\)) for 10 randomly generated data sets.
We observe performance improvements of CBRNet over the conventional MLP benchmark across all selected numbers of clusters. Note that, for \(k=1\), we do not regularize the latent representation at all. Hence, for any level of \(k>1\), balancing has a positive effect on the learned dose response. This is to be expected: since CBRNet regularizes each cluster distribution for deviations from the population, regularizing with more than the actual 3 clusters (as generated in the data) should have no significant adverse effect. As the MMD loss is calculated empirically, however, \(k\) should be chosen sufficiently small with respect to the batch size for network training to allow for adequate computation. We refer the interested reader to the source code (cf. Section 4). For applications of CBRNet, \(k\) can either be tuned or set based on expert assessment.
Figure 8: Impact of varying number of clusters \(k\)
Figure 7: Impact of varying regularization strength \(\lambda\) on MISE
## 7 Conclusion
We proposed CBRNet, a novel method for dose response learning from observational data. CBRNet builds on representation balancing and generalizes previously established architectures [10] to the continuous-valued treatment setting. CBRNet employs a simple feed-forward network architecture and achieves competitive performance by regularizing high-dimensional latent space with a custom loss over integral probability metrics.
Our approach enables representation balancing for continuous-valued treatments by clustering observations based on their pre-treatment covariates, instead of splitting them by the assigned treatment option. In this, CBRNet is strongly inspired by real-world data-generating processes and by the mechanisms within them that cause dose selection bias and confounding.
In order to thoroughly evaluate CBRNet, we proposed a novel data-generating process that allows investigation of the interlinked factors that create dose selection bias, thereby building on and advancing the state of the art. Comparing CBRNet against both conventional machine learning benchmarks and causal machine learning methods, we show increased robustness against dose selection bias and competitive performance of our model. We also conduct experiments on previously established benchmarking data sets and evaluate the sensitivity of CBRNet to hyperparameter tuning, for which we provide selection criteria. All code is available online.
Compared to some of the existing dose response models (DRNet and SCIGAN), CBRNet is designed for data with only a single treatment with a dose parameter, as opposed to several, potentially very different treatment options, different versions of a treatment [68], or changes to treatment effects in time-dynamic environments [69]. Future research will extend CBRNet to the multi-treatment setting and develop metrics that allow studying and understanding the presence and strength of selection bias in observational data.
## Funding
This work was supported by the Research Foundation - Flanders (FWO research projects G015020N and 11I7322N).
JB is funded by the W.D. Armstrong Trust Fund.
## Conflict of interest
The authors state no conflict of interest.
|
2309.05340 | Commutator nilpotency for somewhere-to-below shuffles | Given a positive integer $n$, we consider the group algebra of the symmetric
group $S_{n}$. In this algebra, we define $n$ elements
$t_{1},t_{2},\ldots,t_{n}$ by the formula \[
t_{\ell}:=\operatorname*{cyc}\nolimits_{\ell}+\operatorname*{cyc}\nolimits_{\ell,\ell+1}+\operatorname*{cyc}\nolimits_{\ell,\ell+1,\ell+2}+\cdots+\operatorname*{cyc}\nolimits_{\ell,\ell+1,\ldots,n},
\] where $\operatorname*{cyc}\nolimits_{\ell,\ell+1,\ldots,k}$ denotes the
cycle that sends $\ell\mapsto\ell+1\mapsto\ell+2\mapsto\cdots\mapsto
k\mapsto\ell$. These $n$ elements are called the *somewhere-to-below shuffles*
due to an interpretation as card-shuffling operators.
In this paper, we show that their commutators $\left[ t_{i},t_{j}\right]
=t_{i}t_{j}-t_{j}t_{i}$ are nilpotent, and specifically that \[ \left[
t_{i},t_{j}\right] ^{\left\lceil \left( n-j\right) /2\right\rceil +1}=0\ \ \ \
\ \ \ \ \ \ \text{for any }i,j\in\left\{ 1,2,\ldots,n\right\} \] and \[ \left[
t_{i},t_{j}\right] ^{j-i+1}=0\ \ \ \ \ \ \ \ \ \ \text{for any }1\leq i\leq
j\leq n. \] We discuss some further identities and open questions. | Darij Grinberg | 2023-09-11T09:37:33Z | http://arxiv.org/abs/2309.05340v2 | # Commutator nilpotency for somewhere-to-below shuffles
# Commutator nilpotency for somewhere-to-below shuffles
Darij Grinberg
version 2.0, September 20, 2023
**Abstract.** Given a positive integer \(n\), we consider the group algebra of the symmetric group \(S_{n}\). In this algebra, we define \(n\) elements \(t_{1},t_{2},\ldots,t_{n}\) by the formula
\[t_{\ell}:=\operatorname{cyc}_{\ell}+\operatorname{cyc}_{\ell,\ell+1}+ \operatorname{cyc}_{\ell,\ell+1,\ell+2}+\cdots+\operatorname{cyc}_{\ell,\ell+ 1,\ldots,n},\]
where \(\operatorname{cyc}_{\ell,\ell+1,\ldots,k}\) denotes the cycle that sends \(\ell\mapsto\ell+1\mapsto\ell+2\mapsto\cdots\mapsto k\mapsto\ell\). These \(n\) elements are called the _somewhere-to-below shuffles_ due to an interpretation as card-shuffling operators.
In this paper, we show that their commutators \(\big{[}t_{i},t_{j}\big{]}=t_{i}t_{j}-t_{j}t_{i}\) are nilpotent, and specifically that
\[\big{[}t_{i},t_{j}\big{]}^{\lceil(n-j)/2\rceil+1}=0\qquad\quad\text{for any $i,j\in\{1,2,\ldots,n\}$}\]
and
\[\big{[}t_{i},t_{j}\big{]}^{j-i+1}=0\qquad\quad\text{for any $1\leq i\leq j \leq n$.}\]
We discuss some further identities and open questions.
**Mathematics Subject Classifications:** 05E99, 20C30, 60J10.
**Keywords:** symmetric group, permutations, card shuffling, top-to-random shuffle, group algebra, filtration, nilpotency, substitutional analysis.
###### Contents

* 1 Introduction
* 2 Notations and notions
  * 2.1 Basic notations
  * 2.2 Some elements of \(\mathbf{k}\left[S_{n}\right]\)
  * 2.3 Commutators
* 3 Elementary computations in \(S_{n}\)
  * 3.1 The cycles \(\left(v\Longrightarrow w\right)\)
  * 3.2 Rewriting rules for products of cycles
* 4 Basic properties of somewhere-to-below shuffles
* 5 The identities \(t_{i+1}t_{i}=\left(t_{i}-1\right)t_{i}=t_{i}\left(t_{i}-1\right)\) and \(\left[t_{i},t_{i+1}\right]^{2}=0\)
  * 5.1 The identity \(t_{i+1}t_{i}=\left(t_{i}-1\right)t_{i}=t_{i}\left(t_{i}-1\right)\)
  * 5.2 The identity \(\left[t_{i},t_{i+1}\right]^{2}=0\)
* 6 The identities \(t_{i+2}\left(t_{i}-1\right)=\left(t_{i}-1\right)\left(t_{i+1}-1\right)\) and \(\left[t_{i},t_{i+2}\right]\left(t_{i}-1\right)=t_{i+1}\left[t_{i},t_{i+1}\right]\)
  * 6.1 The identity \(t_{i+2}\left(t_{i}-1\right)=\left(t_{i}-1\right)\left(t_{i+1}-1\right)\)
  * 6.2 The identity \(\left[t_{i},t_{i+2}\right]\left(t_{i}-1\right)=t_{i+1}\left[t_{i},t_{i+1}\right]\)
* 7 The identity \(\left(1+s_{j}\right)\left[t_{i},t_{j}\right]=0\) for all \(i\leq j\)
  * 7.1 The identity \(\left(1+s_{j}\right)\left[t_{j-1},t_{j}\right]=0\)
  * 7.2 Expressing \(\left[t_{i},t_{j}\right]\) via \(\left[t_{j-1},t_{j}\right]\)
  * 7.3 The identity \(\left(1+s_{j}\right)\left[t_{i},t_{j}\right]=0\) for all \(i\leq j\)
* 8 The identity \(\left[t_{i},t_{j}\right]^{\left\lceil\left(n-j\right)/2\right\rceil+1}=0\) for all \(i,j\in\left[n\right]\)
  * 8.1 The elements \(s_{k}^{+}\) and the left ideals \(H_{k,j}\)
  * 8.2 The fuse
  * 8.3 Products of \(\left[t_{i},t_{j}\right]\)'s for a fixed \(j\)
  * 8.4 The identity \(\left[t_{i},t_{j}\right]^{\left\lceil\left(n-j\right)/2\right\rceil+1}=0\) for any \(i,j\in\left[n\right]\)
  * 8.5 Can we lift the \(i_{1},i_{2},\ldots,i_{m}\in\left[j\right]\) restriction?
* 9 The identity \(\left[t_{i},t_{j}\right]^{j-i+1}=0\) for all \(i\leq j\)
  * 9.1 The elements \(\mu_{ij}\) for \(i\in\left[j-1\right]\)
  * 9.2 Products of \(\left[t_{i},t_{j}\right]\)'s for a fixed \(j\) redux
  * 9.3 The identity \(\left[t_{i},t_{j}\right]^{j-i+1}=0\) for all \(i\leq j\)
* 10 Further directions
  * 10.1 More identities?
  * 10.2 Optimal exponents?
  * 10.3 Generalizing to the Hecke algebra
  * 10.4 One-sided cycle shuffles
## 1 Introduction
The _somewhere-to-below shuffles_\(t_{1},t_{2},\ldots,t_{n}\) (and their linear combinations, called the _one-sided cycle shuffles_) are certain elements in the group algebra of a symmetric group \(S_{n}\). They have been introduced in [10] by Lafreniere and the present author, and are a novel generalization of the top-to-random shuffle (also known as the _Tsetlin library_). They are defined by the formula
\[t_{\ell}:=\operatorname{cyc}_{\ell}+\operatorname{cyc}_{\ell,\ell+1}+ \operatorname{cyc}_{\ell,\ell+1,\ell+2}+\cdots+\operatorname{cyc}_{\ell,\ell+ 1,\ldots,n}\in\mathbf{k}\left[S_{n}\right],\]
where \(\operatorname{cyc}_{\ell,\ell+1,\ldots,k}\) denotes the cycle that sends \(\ell\mapsto\ell+1\mapsto\ell+2\mapsto\cdots\mapsto k\mapsto\ell\) (and leaves all remaining elements of \(\left\{1,2,\ldots,n\right\}\) unchanged).
One of the main results of [10] was the construction of a basis \(\left(a_{w}\right)_{w\in S_{n}}\) of the group algebra in which multiplication by these shuffles acts as an upper-triangular matrix (i.e., for which \(a_{w}t_{\ell}\) equals a linear combination of \(a_{u}\)'s with \(u\leq w\) for a certain total order on \(S_{n}\)). Consequences of this fact (or, more precisely, of a certain filtration that entails this fact) include an explicit description of the eigenvalues of each one-sided cycle shuffle, as well as analogous properties of some related shuffles.
Another consequence of the joint triangularizability of \(t_{1},t_{2},\ldots,t_{n}\) is the fact that the commutators \(\left[t_{i},t_{j}\right]:=t_{i}t_{j}-t_{j}t_{i}\) are nilpotent (since the commutator of two upper-triangular matrices is strictly upper-triangular and thus nilpotent). Explicitly, this means that \(\left[t_{i},t_{j}\right]^{n!}=0\), since the \(t_{1},t_{2},\ldots,t_{n}\) act on a free module of rank \(n!\). However, experiments have suggested that the minimal \(m\in\mathbb{N}\) satisfying \(\left[t_{i},t_{j}\right]^{m}=0\) is far smaller than \(n!\), and in fact is bounded from above by \(n\).
In the present paper, we shall prove this. Concretely, we will prove the following results (the notation \([m]\) means the set \(\left\{1,2,\ldots,m\right\}\)):
* **Corollary 8.18.** We have \(\left[t_{i},t_{j}\right]^{\lceil(n-j)/2\rceil+1}=0\) for any \(i,j\in[n]\).
* **Theorem 8.15.** Let \(j\in[n]\) and \(m\in\mathbb{N}\) be such that \(2m\geq n-j+2\). Let \(i_{1},i_{2},\ldots,i_{m}\) be \(m\) elements of \([j]\) (not necessarily distinct). Then, \[\left[t_{i_{1}},t_{j}\right]\left[t_{i_{2}},t_{j}\right]\cdots\left[t_{i_{m}}, t_{j}\right]=0.\]
* **Corollary 9.11.** We have \(\left[t_{i},t_{j}\right]^{j-i+1}=0\) for any \(1\leq i\leq j\leq n\).
* **Theorem 9.10.** Let \(j\in[n]\), and let \(m\) be a positive integer. Let \(k_{1},k_{2},\ldots,k_{m}\) be \(m\) elements of \([j]\) (not necessarily distinct) satisfying \(m\geq j-k_{m}+1\). Then, \[\left[t_{k_{1}},t_{j}\right]\left[t_{k_{2}},t_{j}\right]\cdots\left[t_{k_{m}}, t_{j}\right]=0.\]
Along the way, we will also prove the following helpful facts:
* **Theorem 7.5.** We have \(\left(1+s_{j}\right)\left[t_{i},t_{j}\right]=0\) for any \(1\leq i\leq j<n\), where \(s_{j}\) denotes the transposition swapping \(j\) with \(j+1\).
* **Theorem 5.1**.: For any \(i\in[n-1]\), we have \(t_{i+1}t_{i}=\left(t_{i}-1\right)t_{i}=t_{i}\left(t_{i}-1\right)\).
* **Theorem 6.1**.: For any \(i\in[n-2]\), we have \(t_{i+2}\left(t_{i}-1\right)=\left(t_{i}-1\right)\left(t_{i+1}-1\right)\).
* **Corollary 5.2**.: For any \(i\in[n-1]\), we have \(\left[t_{i},t_{i+1}\right]=t_{i}\left(t_{i+1}-\left(t_{i}-1\right)\right)\) and \(\left[t_{i},t_{i+1}\right]t_{i}=\left[t_{i},t_{i+1}\right]^{2}=0\).
These results can be regarded as first steps towards understanding the \(\mathbf{k}\)-subalgebra \(\mathbf{k}\left[t_{1},t_{2},\ldots,t_{n}\right]\) of \(\mathbf{k}\left[S_{n}\right]\) that is generated by the somewhere-to-below shuffles. So far, very little is known about this \(\mathbf{k}\)-subalgebra, except for its simultaneous triangularizability (a consequence of [1, Theorem 4.1]). One might ask for its dimension as a \(\mathbf{k}\)-module (when \(\mathbf{k}\) is a field). Here is some numerical data for \(\mathbf{k}=\mathbf{Q}\) and \(n\leq 8\):
\[\begin{array}{|c||c|c|c|c|c|c|c|c|}\hline n&1&2&3&4&5&6&7&8\\ \hline\dim\left(\mathbf{Q}\left[t_{1},t_{2},\ldots,t_{n}\right]\right)&1&2&4& 9&23&66&212&761\\ \hline\end{array} \tag{1}\]
As of September 14th, 2023, this sequence of dimensions is not in the OEIS. We note that each single somewhere-to-below shuffle by itself is easily understood using the well-known theory of the top-to-random shuffle1, but this approach says nothing about the interactions between two or more of the \(n\) somewhere-to-below shuffles.
Footnote 1: Indeed, the first somewhere-to-below shuffle \(t_{1}\) is known as the _top-to-random shuffle_ and has been studied extensively in the literature.

## 2 Notations and notions

### Basic notations

We let \(\mathbb{N}\) denote the set \(\{0,1,2,\ldots\}\). We fix a commutative ring \(\mathbf{k}\). For any two integers \(a\) and \(b\), we let \([a,b]\) denote the interval \(\{k\in\mathbb{Z}\ \mid\ a\leq k\leq b\}\) (an empty set if \(a>b\)).
For each \(n\in\mathbb{Z}\), let \([n]:=[1,n]=\{1,2,\ldots,n\}\).
Fix an integer \(n\in\mathbb{N}\). Let \(S_{n}\) be the \(n\)-th symmetric group, i.e., the group of all permutations of \([n]\). We multiply permutations in the "continental" way: that is, \(\left(\pi\sigma\right)(i)=\pi\left(\sigma\left(i\right)\right)\) for all \(\pi,\sigma\in S_{n}\) and \(i\in[n]\).
For any \(k\) distinct elements \(i_{1},i_{2},\ldots,i_{k}\) of \([n]\), we let \(\operatorname{cyc}_{i_{1},i_{2},\ldots,i_{k}}\) be the permutation in \(S_{n}\) that sends \(i_{1},i_{2},\ldots,i_{k-1},i_{k}\) to \(i_{2},i_{3},\ldots,i_{k},i_{1}\), respectively while leaving all remaining elements of \([n]\) unchanged. This permutation is known as a _cycle_. Note that \(\operatorname{cyc}_{i}=\operatorname{id}\) for any single \(i\in[n]\).
For any \(i\in[n-1]\), we let \(s_{i}:=\operatorname{cyc}_{i,i+1}\in S_{n}\). This permutation \(s_{i}\) is called a _simple transposition_, as it swaps \(i\) with \(i+1\) while leaving all other elements of \([n]\) unchanged. It clearly satisfies
\[s_{i}^{2}=\operatorname{id}. \tag{2}\]
Furthermore, two simple transpositions \(s_{i}\) and \(s_{j}\) commute whenever \(|i-j|>1\). This latter fact is known as _reflection locality_.
### Some elements of \(\mathbf{k}\left[S_{n}\right]\)
Consider the group algebra \(\mathbf{k}\left[S_{n}\right]\). In this algebra, define \(n\) elements \(t_{1},t_{2},\ldots,t_{n}\) by setting2
Footnote 2: We view \(S_{n}\) as a subset of \(\mathbf{k}\left[S_{n}\right]\) in the obvious way.
\[t_{\ell}:=\operatorname{cyc}_{\ell}+\operatorname{cyc}_{\ell,\ell+1}+ \operatorname{cyc}_{\ell,\ell+1,\ell+2}+\cdots+\operatorname{cyc}_{\ell,\ell +1,\ldots,n}\in\mathbf{k}\left[S_{n}\right] \tag{3}\]
for each \(\ell\in[n]\). Thus, in particular, \(t_{n}=\operatorname{cyc}_{n}=\operatorname{id}=1\) (where \(1\) means the unity of \(\mathbf{k}\left[S_{n}\right]\)). We shall refer to the \(n\) elements \(t_{1},t_{2},\ldots,t_{n}\) as the _somewhere-to-below shuffles_. These shuffles were studied in [10] (where, in particular, their probabilistic meaning was discussed, which explains the origin of their name).
### Commutators
If \(a\) and \(b\) are two elements of some ring, then \([a,b]\) shall denote their commutator \(ab-ba\). This notation clashes with our above-defined notation \([a,b]\) for the interval \(\{k\in\mathbb{Z}\ \mid\ a\leq k\leq b\}\) (when \(a\) and \(b\) are two integers), but we don't expect any confusion to arise in practice, since we will only use the notation \([a,b]\) for \(ab-ba\) when \(a\) and \(b\) are visibly elements of the ring \(\mathbf{k}\left[S_{n}\right]\) (as opposed to integers).
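As a quick computational sanity check (our own, and not part of any proof below), the following Python sketch constructs the somewhere-to-below shuffles in \(\mathbf{Q}\left[S_{n}\right]\) for a small \(n\), using the conventions above, and verifies the two nilpotency statements quoted in the Introduction.

```python
from math import ceil

n = 5  # small test case

def compose(p, q):
    """Product pq with (pq)(i) = p(q(i)); permutations are tuples of images of 0, ..., n-1."""
    return tuple(p[q[i]] for i in range(n))

identity = tuple(range(n))

def cyc(*elts):
    """The cycle cyc_{i_1,...,i_k} (entries given 1-based), fixing all other elements."""
    p = list(range(n))
    e = [i - 1 for i in elts]
    for a, b in zip(e, e[1:] + e[:1]):
        p[a] = b
    return tuple(p)

def mul(a, b):
    """Product in k[S_n]; elements are dicts {permutation: coefficient} with zero terms dropped."""
    out = {}
    for p, cp in a.items():
        for q, cq in b.items():
            r = compose(p, q)
            out[r] = out.get(r, 0) + cp * cq
    return {p: c for p, c in out.items() if c != 0}

def sub(a, b):
    out = dict(a)
    for p, c in b.items():
        out[p] = out.get(p, 0) - c
    return {p: c for p, c in out.items() if c != 0}

def t(ell):
    """Somewhere-to-below shuffle t_ell = cyc_ell + cyc_{ell,ell+1} + ... + cyc_{ell,...,n}."""
    return {cyc(*range(ell, w + 1)): 1 for w in range(ell, n + 1)}

def commutator(a, b):
    return sub(mul(a, b), mul(b, a))

def power(a, m):
    out = {identity: 1}
    for _ in range(m):
        out = mul(out, a)
    return out

for i in range(1, n + 1):
    for j in range(1, n + 1):
        c = commutator(t(i), t(j))
        assert power(c, ceil((n - j) / 2) + 1) == {}      # Corollary 8.18
        if i <= j:
            assert power(c, j - i + 1) == {}              # Corollary 9.11
print("all nilpotency checks passed for n =", n)
```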
## 3 Elementary computations in \(S_{n}\)
In this section, we will perform some simple computations in the symmetric group \(S_{n}\). The results of these computations will later become ingredients in some of our proofs.
### The cycles \((v\Longrightarrow w)\)
**Definition 3.1**.: Let \(v,w\in[n]\) satisfy \(v\leq w\). Then, \((v\Longrightarrow w)\) shall denote the permutation \(\operatorname{cyc}_{v,v+1,\ldots,w}\).
The symbol "\(\Longrightarrow\)" in this notation \((v\Longrightarrow w)\) has nothing to do with logical implication; instead, it is meant to summon an image of a "current" flowing from \(v\) to \(w\). The symbol "\(\Longrightarrow\)" is understood to bind less strongly than addition or subtraction; thus, for example, the expression "\((v+1\Longrightarrow w)\)" means \(((v+1)\Longrightarrow w)\).
Every \(v\in[n]\) satisfies
\[(v\Longrightarrow v)=\operatorname{cyc}_{v}=\operatorname{id}=1. \tag{4}\]
The following is just a little bit less obvious:
**Proposition 3.2**.: Let \(v,w\in[n]\) satisfy \(v\leq w\). Then, \((v\Longrightarrow w)=s_{v}s_{v+1}\cdots s_{w-1}\).
Proof.: Easy verification.
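As a quick illustration of Proposition 3.2, take \(n\geq 4\), \(v=2\) and \(w=4\). The proposition claims that
\[\left(2\Longrightarrow 4\right)=\operatorname{cyc}_{2,3,4}=s_{2}s_{3}.\]
And indeed, \(\left(s_{2}s_{3}\right)\left(2\right)=s_{2}\left(2\right)=3\) and \(\left(s_{2}s_{3}\right)\left(3\right)=s_{2}\left(4\right)=4\) and \(\left(s_{2}s_{3}\right)\left(4\right)=s_{2}\left(3\right)=2\), whereas \(s_{2}s_{3}\) fixes all other elements of \([n]\); this is exactly how \(\operatorname{cyc}_{2,3,4}\) acts.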
**Proposition 3.3**.: Let \(v,w\in[n]\) satisfy \(v<w\). Then:
* We have \((v\Longrightarrow w)=s_{v}\,(v+1\Longrightarrow w)\).
* We have \((v\Longrightarrow w)=(v\Longrightarrow w-1)\,s_{w-1}\).
Proof.: Easy verification (easiest using Proposition 3.2).
### Rewriting rules for products of cycles
Next we recall how conjugation in \(S_{n}\) acts on cycles:
**Proposition 3.4**.: Let \(\sigma\in S_{n}\). Let \(i_{1},i_{2},\ldots,i_{k}\) be \(k\) distinct elements of \([n]\). Then,
\[\sigma\operatorname{cyc}_{i_{1},i_{2},\ldots,i_{k}}\sigma^{-1}=\operatorname{ cyc}_{\sigma(i_{1}),\sigma(i_{2}),\ldots,\sigma(i_{k})}. \tag{5}\]
Proof.: Well-known.
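For instance, if \(n\geq 3\) and \(\sigma=s_{1}\), then (5) yields
\[s_{1}\operatorname{cyc}_{1,2,3}s_{1}^{-1}=\operatorname{cyc}_{s_{1}\left(1\right),s_{1}\left(2\right),s_{1}\left(3\right)}=\operatorname{cyc}_{2,1,3},\]
which is easily verified directly (both sides send \(1\mapsto 3\), \(2\mapsto 1\) and \(3\mapsto 2\)).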
Proposition 3.4 allows us to prove several relations between the cycles \((v\Longrightarrow w)\). We shall collect a catalogue of such relations now in order to have them at arm's reach in later proofs.
**Lemma 3.5**.: Let \(i,j,v,w\in[n]\) be such that \(w\geq v>j\geq i\). Then,
\[(j+1\Longrightarrow v)\,(i\Longrightarrow w)=(i\Longrightarrow w)\,(j \Longrightarrow v-1)\,.\]
Proof.: Let \(\sigma:=\left(i\Longrightarrow w\right)\). We have \(i\leq j\) (since \(j\geq i\)) and \(v-1\leq w-1\) (since \(w\geq v\)). Thus, the numbers \(j,j+1,\ldots,v-1\) all belong to the interval \(\left[i,w-1\right]\). Hence, the permutation \(\sigma=\left(i\Longrightarrow w\right)=\operatorname{cyc}_{i,i+1,\ldots,w}\) sends these numbers to \(j+1,j+2,\ldots,v\), respectively. In other words,
\[\left(\sigma\left(j\right),\sigma\left(j+1\right),\ldots,\sigma\left(v-1 \right)\right)=\left(j+1,j+2,\ldots,v\right).\]
However, from \(\left(i\Longrightarrow w\right)=\sigma\) and \(\left(j\Longrightarrow v-1\right)=\operatorname{cyc}_{j,j+1,\ldots,v-1}\), we obtain
\[\left(i\Longrightarrow w\right)\left(j\Longrightarrow v-1\right)\left(i\Longrightarrow w\right)^{-1}=\sigma\operatorname{cyc}_{j,j+1,\ldots,v-1}\sigma^{-1}=\operatorname{cyc}_{\sigma\left(j\right),\sigma\left(j+1\right),\ldots,\sigma\left(v-1\right)}\qquad\qquad\left(\text{by (5)}\right)\]
\[=\operatorname{cyc}_{j+1,j+2,\ldots,v}\qquad\qquad\left(\text{since }\left(\sigma\left(j\right),\sigma\left(j+1\right),\ldots,\sigma\left(v-1\right)\right)=\left(j+1,j+2,\ldots,v\right)\right)\]
\[=\left(j+1\Longrightarrow v\right).\]
In other words, \(\left(i\Longrightarrow w\right)\left(j\Longrightarrow v-1\right)=\left(j+1 \Longrightarrow v\right)\left(i\Longrightarrow w\right)\). Thus, Lemma 3.5 is proved.
**Lemma 3.6**.: Let \(i,v,w\in\left[n\right]\) be such that \(v>w\geq i\). Then,
\[\left(i+1\Longrightarrow v\right)\left(i\Longrightarrow w\right)=\left(i \Longrightarrow w+1\right)\left(i\Longrightarrow v\right).\]
Proof.: We have \(i<v\) (since \(v>i\)). Thus, Proposition 3.3**(a)** yields
\[\left(i\Longrightarrow v\right)=s_{i}\left(i+1\Longrightarrow v\right). \tag{6}\]
On the other hand, from \(v>w\), we obtain \(v\geq w+1\), so that \(w+1\leq v\leq n\) and therefore \(w+1\in\left[n\right]\). Furthermore, \(v\geq w+1>w\geq i\geq i\). Thus, Lemma 3.5 (applied to \(i\), \(w+1\) and \(v\) instead of \(j\), \(v\) and \(w\)) yields
\[\left(i+1\Longrightarrow w+1\right)\left(i\Longrightarrow v\right) =\left(i\Longrightarrow v\right)\left(i\Longrightarrow\underbrace{ \left(w+1\right)-1}_{=w}\right)\] \[=\left(i\Longrightarrow v\right)\left(i\Longrightarrow w\right). \tag{7}\]
However, Proposition 3.3**(a)** yields \(\left(i\Longrightarrow w+1\right)=s_{i}\left(i+1\Longrightarrow w+1\right)\) (since \(i\leq w<w+1\)). Hence,
\[\underbrace{\left(i\Longrightarrow w+1\right)}_{=s_{i}\left(i+1\Longrightarrow w+1\right)}\left(i\Longrightarrow v\right)=s_{i}\underbrace{\left(i+1\Longrightarrow w+1\right)\left(i\Longrightarrow v\right)}_{=\left(i\Longrightarrow v\right)\left(i\Longrightarrow w\right)\text{ (by (7))}}=s_{i}\underbrace{\left(i\Longrightarrow v\right)}_{=s_{i}\left(i+1\Longrightarrow v\right)}\left(i\Longrightarrow w\right)\]
\[=\underbrace{s_{i}s_{i}}_{=s_{i}^{2}=\operatorname{id}}\left(i+1\Longrightarrow v\right)\left(i\Longrightarrow w\right)=\left(i+1\Longrightarrow v\right)\left(i\Longrightarrow w\right).\]
This proves Lemma 3.6.
**Lemma 3.7**.: Let \(i,u,v\in[n]\) be such that \(i<u<v\). Then,
\[s_{u}\left(i\Longrightarrow v\right)=\left(i\Longrightarrow v\right)s_{u-1}.\]
Proof.: Let \(\sigma:=\left(i\Longrightarrow v\right)\). Then, \(i\leq u-1\) (since \(i<u\)) and \(u\leq v-1\) (since \(u<v\)). Therefore, the numbers \(u-1\) and \(u\) both belong to the interval \([i,v-1]\). Hence, the permutation \(\sigma=\left(i\Longrightarrow v\right)=\operatorname{cyc}_{i,i+1,\ldots,w}\) sends these numbers to \(u\) and \(u+1\), respectively. In other words,
\[\sigma\left(u-1\right)=u\qquad\quad\text{and}\qquad\quad\sigma\left(u\right)= u+1.\]
However, from \(\left(i\Longrightarrow v\right)=\sigma\) and \(s_{u-1}=\operatorname{cyc}_{u-1,u}\), we obtain
\[\left(i\Longrightarrow v\right)s_{u-1}\left(i\Longrightarrow v\right)^{-1}=\sigma\operatorname{cyc}_{u-1,u}\sigma^{-1}=\operatorname{cyc}_{\sigma\left(u-1\right),\sigma\left(u\right)}\qquad\quad\left(\text{by (5)}\right)\]
\[=\operatorname{cyc}_{u,u+1}\qquad\quad\left(\text{since }\sigma\left(u-1\right)=u\text{ and }\sigma\left(u\right)=u+1\right)\]
\[=s_{u}.\]

In other words, \(s_{u}\left(i\Longrightarrow v\right)=\left(i\Longrightarrow v\right)s_{u-1}\). This proves Lemma 3.7.

## 4 Basic properties of somewhere-to-below shuffles
**Proposition 4.1**.: Let \(\ell\in[n]\). Then,
\[t_{\ell}=\sum\limits_{w=\ell}^{n}\left(\ell\Longrightarrow w\right).\]
Proof.: From (3), we have
\[t_{\ell} =\text{cyc}_{\ell}+\text{cyc}_{\ell,\ell+1}+\text{cyc}_{\ell,\ell +1,\ell+2}+\cdots+\text{cyc}_{\ell,\ell+1,\ldots,n}\] \[=\sum\limits_{w=\ell}^{n}\underbrace{\text{cyc}_{\ell,\ell+1, \ldots,w}}_{=\left(\ell\Longrightarrow w\right)}=\sum\limits_{w=\ell}^{n} \left(\ell\Longrightarrow w\right).\]
This proves Proposition 4.1.
**Corollary 4.2**.: Let \(\ell\in[n-1]\). Then, \(t_{\ell}=1+s_{\ell}t_{\ell+1}\).
Proof.: Proposition 4.1 yields
\[t_{\ell} =\sum\limits_{w=\ell}^{n}\left(\ell\Longrightarrow w\right)= \underbrace{\left(\ell\Longrightarrow\ell\right)}_{=1}+\sum\limits_{w=\ell+1}^ {n}\underbrace{\left(\ell\Longrightarrow w\right)}_{=s_{\ell}\left(\ell+1 \Longrightarrow w\right)}\] \[=1+\sum\limits_{w=\ell+1}^{n}s_{\ell}\left(\ell+1\Longrightarrow w \right)=1+s_{\ell}\sum\limits_{w=\ell+1}^{n}\left(\ell+1\Longrightarrow w \right).\]
Comparing this with
\[1+s_{\ell}\underbrace{t_{\ell+1}}_{=\sum\limits_{w=\ell+1}^{n}\left(\ell+1 \Longrightarrow w\right)}=1+s_{\ell}\sum\limits_{w=\ell+1}^{n}\left(\ell+1 \Longrightarrow w\right),\]
we obtain \(t_{\ell}=1+s_{\ell}t_{\ell+1}\), qed.
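Both the formula in Proposition 4.1 and the recursion in Corollary 4.2 are finite statements for any fixed \(n\), so they can be checked by brute force in \(\mathbf{k}\left[S_{n}\right]\). The following Python sketch is one way to do this for \(n\leq 6\); the helper names (`compose`, `cyc`, `mul`, `add`, `one`, `t_elem`) are ours rather than any library's, permutations are stored as tuples \(\left(p\left(1\right),\ldots,p\left(n\right)\right)\), and the product \(pq\) is understood as "apply \(q\) first, then \(p\)" (the convention under which \(\operatorname{cyc}_{i,i+1,\ldots,w}=s_{i}s_{i+1}\cdots s_{w-1}\)).

```python
def compose(p, q):
    # group multiplication pq: apply q first, then p (tuples with p[x-1] = p(x))
    return tuple(p[q[x] - 1] for x in range(len(p)))

def cyc(n, i, w):
    # the cycle cyc_{i,i+1,...,w}, sending i -> i+1 -> ... -> w -> i
    p = list(range(1, n + 1))
    for x in range(i, w):
        p[x - 1] = x + 1
    if w > i:
        p[w - 1] = i
    return tuple(p)

def mul(a, b):
    # product in k[S_n]; algebra elements are dicts {permutation tuple: coefficient}
    c = {}
    for p, x in a.items():
        for q, y in b.items():
            r = compose(p, q)
            c[r] = c.get(r, 0) + x * y
    return {r: v for r, v in c.items() if v != 0}

def add(a, b, s=1):
    # a + s*b in k[S_n]
    c = dict(a)
    for p, v in b.items():
        c[p] = c.get(p, 0) + s * v
    return {r: v for r, v in c.items() if v != 0}

def one(n):
    return {tuple(range(1, n + 1)): 1}

def t_elem(n, l):
    # t_l = sum_{w=l}^{n} cyc_{l,l+1,...,w}   (Proposition 4.1)
    out = {}
    for w in range(l, n + 1):
        p = cyc(n, l, w)
        out[p] = out.get(p, 0) + 1
    return out

# Corollary 4.2:  t_l = 1 + s_l * t_{l+1}  for all l in [n-1]
for n in range(2, 7):
    for l in range(1, n):
        s_l = {cyc(n, l, l + 1): 1}          # the transposition s_l
        assert t_elem(n, l) == add(one(n), mul(s_l, t_elem(n, l + 1)))
print("Corollary 4.2 verified for n = 2, ..., 6")
```

If the recursion failed for some \(n\) and \(\ell\), the corresponding assertion would raise an error; running the script as is should simply print the confirmation line.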
We state another simple property of the \(t_{\ell}\)'s:
**Lemma 4.3**.: Let \(\ell\in[n]\). Let \(\sigma\in S_{n}\). Assume that \(\sigma\) leaves all the elements \(\ell,\ell+1,\ldots,n\) unchanged. Then, \(\sigma\) commutes with \(t_{\ell}\) in \(\mathbf{k}\left[S_{n}\right]\).
Proof.: The permutation \(\sigma\) leaves all the elements \(\ell,\ell+1,\ldots,n\) unchanged, and thus commutes with each cycle \(\text{cyc}_{\ell,\ell+1,\ldots,w}\) with \(w\geq\ell\) (because the latter cycle permutes only elements of \(\{\ell,\ell+1,\ldots,n\}\)). Hence, the permutation \(\sigma\) also commutes with the sum \(\sum\limits_{w=\ell}^{n}\text{cyc}_{\ell,\ell+1,\ldots,w}\) of these cycles. Since the definition of \(t_{\ell}\) yields
\[t_{\ell}=\text{cyc}_{\ell}+\text{cyc}_{\ell,\ell+1}+\text{cyc}_{\ell,\ell+1, \ell+2}+\cdots+\text{cyc}_{\ell,\ell+1,\ldots,n}=\sum\limits_{w=\ell}^{n} \text{cyc}_{\ell,\ell+1,\ldots,w}.\]
we can rewrite this as follows: The permutation \(\sigma\) commutes with \(t_{\ell}\). This proves Lemma 4.3.
Specifically, we will need only the following particular case of Lemma 4.3:
**Lemma 4.4**.: Let \(i,k,j\in[n]\) be such that \(i\leq k<j\). Then,
\[(i\Longrightarrow k)\;t_{j}=t_{j}\,(i\Longrightarrow k) \tag{8}\]
and
\[\left[\left(i\Longrightarrow k\right),\;t_{j}\right]=0. \tag{9}\]
Proof.: The permutation \((i\Longrightarrow k)=\operatorname{cyc}_{i,i+1,\ldots,k}\) leaves all the elements \(k+1,k+2,\ldots,n\) unchanged, and thus leaves all the elements \(j,j+1,\ldots,n\) unchanged (since the latter elements are a subset of the former elements (because \(k<j\))). Hence, Lemma 4.3 (applied to \(\ell=j\) and \(\sigma=(i\Longrightarrow k)\)) shows that \((i\Longrightarrow k)\) commutes with \(t_{j}\) in \(\mathbf{k}\left[S_{n}\right]\). In other words, \((i\Longrightarrow k)\;t_{j}=t_{j}\,(i\Longrightarrow k)\). This proves (8).
Now, the definition of a commutator yields
\[\left[\left(i\Longrightarrow k\right),\;t_{j}\right]=(i\Longrightarrow k)\;t _{j}-t_{j}\,(i\Longrightarrow k)=0\]
(since \((i\Longrightarrow k)\;t_{j}=t_{j}\,(i\Longrightarrow k)\)). This proves (9). Thus, Lemma 4.4 is completely proved.
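For instance, if \(n=4\), \(i=1\), \(k=2\) and \(j=3\), then \(\left(i\Longrightarrow k\right)=\left(1\Longrightarrow 2\right)=s_{1}\) and \(t_{j}=t_{3}=\operatorname{cyc}_{3}+\operatorname{cyc}_{3,4}=1+s_{3}\); since \(s_{1}\) and \(s_{3}\) move disjoint sets of letters, we indeed have \(s_{1}t_{3}=t_{3}s_{1}\), as (8) claims.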
## 5 The identities \(t_{i+1}t_{i}=\left(t_{i}-1\right)t_{i}=t_{i}\,(t_{i}-1)\) and \(\left[t_{i},t_{i+1}\right]^{2}=0\)
### The identity \(t_{i+1}t_{i}=\left(t_{i}-1\right)t_{i}=t_{i}\,(t_{i}-1)\)
We are now ready to prove the first really surprising result:
**Theorem 5.1**.: Let \(i\in[n-1]\). Then,
\[t_{i+1}t_{i} =\left(t_{i}-1\right)t_{i} \tag{10}\] \[=t_{i}\,(t_{i}-1)\,. \tag{11}\]
Proof.: From Proposition 4.1, we obtain
\[t_{i} =\sum_{w=i}^{n}\,(i\Longrightarrow w) \tag{12}\] \[=\underbrace{(i\Longrightarrow i)}_{=\text{id}=1}+\sum_{w=i+1}^{n }\,(i\Longrightarrow w)\qquad\quad\left(\begin{array}{c}\text{here, we have split off the}\\ \text{addend for $w=i$ from the sum}\end{array}\right)\] \[=1+\sum_{w=i+1}^{n}\,(i\Longrightarrow w)\,.\]
In other words,
\[t_{i}-1=\sum\limits_{w=i+1}^{n}\left(i\Longrightarrow w\right). \tag{13}\]
Moreover, (12) becomes
\[t_{i}=\sum\limits_{w=i}^{n}\left(i\Longrightarrow w\right)=\sum\limits_{v=i}^{ n}\left(i\Longrightarrow v\right). \tag{14}\]
Also, Proposition 4.1 (applied to \(\ell=i+1\)) yields
\[t_{i+1}=\sum\limits_{w=i+1}^{n}\left(i+1\Longrightarrow w\right)=\sum\limits _{v=i+1}^{n}\left(i+1\Longrightarrow v\right).\]
Multiplying this equality by (12), we obtain
\[t_{i+1}t_{i} =\sum_{v=i+1}^{n}\left(i+1\Longrightarrow v\right)\cdot\sum_{w=i}^{n}\left(i\Longrightarrow w\right)=\sum_{v=i+1}^{n}\sum_{w=i}^{n}\left(i+1\Longrightarrow v\right)\left(i\Longrightarrow w\right)\]
\[=\sum_{v=i+1}^{n}\left(\sum_{w=i}^{v-1}\left(i+1\Longrightarrow v\right)\left(i\Longrightarrow w\right)+\sum_{w=v}^{n}\left(i+1\Longrightarrow v\right)\left(i\Longrightarrow w\right)\right)\]
\[=\sum_{v=i+1}^{n}\sum_{w=i}^{v-1}\underbrace{\left(i+1\Longrightarrow v\right)\left(i\Longrightarrow w\right)}_{=\left(i\Longrightarrow w+1\right)\left(i\Longrightarrow v\right)}\;+\;\sum_{v=i+1}^{n}\sum_{w=v}^{n}\underbrace{\left(i+1\Longrightarrow v\right)\left(i\Longrightarrow w\right)}_{=\left(i\Longrightarrow w\right)\left(i\Longrightarrow v-1\right)}\]
\[=\underbrace{\sum_{v=i+1}^{n}\sum_{w=i}^{v-1}\left(i\Longrightarrow w+1\right)\left(i\Longrightarrow v\right)}_{=\sum_{v=i+1}^{n}\sum_{w=i+1}^{v}\left(i\Longrightarrow w\right)\left(i\Longrightarrow v\right)}\;+\;\underbrace{\sum_{v=i+1}^{n}\sum_{w=v}^{n}\left(i\Longrightarrow w\right)\left(i\Longrightarrow v-1\right)}_{=\sum_{v=i}^{n-1}\sum_{w=v+1}^{n}\left(i\Longrightarrow w\right)\left(i\Longrightarrow v\right)}\]
\[=\sum_{v=i+1}^{n}\sum_{w=i+1}^{v}\left(i\Longrightarrow w\right)\left(i\Longrightarrow v\right)+\sum_{v=i}^{n-1}\sum_{w=v+1}^{n}\left(i\Longrightarrow w\right)\left(i\Longrightarrow v\right).\]
Comparing this with
\[\left(t_{i}-1\right)t_{i} =\sum_{w=i+1}^{n}\left(i\Longrightarrow w\right)\cdot\sum_{v=i}^{n}\left(i\Longrightarrow v\right)\qquad\quad\left(\text{by (13) and (14)}\right)\]
\[=\sum_{v=i}^{n}\sum_{w=i+1}^{n}\left(i\Longrightarrow w\right)\left(i\Longrightarrow v\right)=\sum_{v=i}^{n}\left(\sum_{w=i+1}^{v}\left(i\Longrightarrow w\right)\left(i\Longrightarrow v\right)+\sum_{w=v+1}^{n}\left(i\Longrightarrow w\right)\left(i\Longrightarrow v\right)\right)\]
\[=\sum_{v=i+1}^{n}\sum_{w=i+1}^{v}\left(i\Longrightarrow w\right)\left(i\Longrightarrow v\right)+\sum_{v=i}^{n-1}\sum_{w=v+1}^{n}\left(i\Longrightarrow w\right)\left(i\Longrightarrow v\right)\]
(here, we have dropped the inner sums for \(v=i\) and for \(v=n\), which are empty), we conclude that \(t_{i+1}t_{i}=\left(t_{i}-1\right)t_{i}\). This proves (10). Finally, \(\left(t_{i}-1\right)t_{i}=t_{i}t_{i}-t_{i}=t_{i}\left(t_{i}-1\right)\), so that (11) follows as well. Thus, Theorem 5.1 is proved.
**Corollary 5.2**.: Let \(i\in[n-1]\). Then,
\[\left[t_{i},t_{i+1}\right]=t_{i}\left(t_{i+1}-\left(t_{i}-1\right)\right) \tag{15}\]
and
\[\left[t_{i},t_{i+1}\right]t_{i}=0 \tag{16}\]
and
\[\left[t_{i},t_{i+1}\right]^{2}=0. \tag{17}\]
Proof.: The definition of a commutator yields
\[\left[t_{i},t_{i+1}\right]=t_{i}t_{i+1}-\underbrace{t_{i+1}t_{i}}_{\substack{=t_{i}\left(t_{i}-1\right)\\ \text{(by (11))}}}=t_{i}t_{i+1}-t_{i}\left(t_{i}-1\right)=t_{i}\left(t_{i+1}-\left(t_{i}-1\right)\right).\]
This proves the equality (15). Multiplying both sides of this equality by \(t_{i}\) on the right, we obtain
\[\left[t_{i},t_{i+1}\right]t_{i}=t_{i}\underbrace{\left(t_{i+1}-\left(t_{i}-1\right)\right)t_{i}}_{=t_{i+1}t_{i}-\left(t_{i}-1\right)t_{i}}=t_{i}\underbrace{\left(t_{i+1}t_{i}-\left(t_{i}-1\right)t_{i}\right)}_{\substack{=0\\ \text{(by (10))}}}=0.\]
This proves (16). Now,
\[\left[t_{i},t_{i+1}\right]^{2}=\left[t_{i},t_{i+1}\right]\underbrace{\left[t_{i},t_{i+1}\right]}_{\substack{=t_{i}\left(t_{i+1}-\left(t_{i}-1\right)\right)\\ \text{(by (15))}}}=\underbrace{\left[t_{i},t_{i+1}\right]t_{i}}_{\substack{=0\\ \text{(by (16))}}}\left(t_{i+1}-\left(t_{i}-1\right)\right)=0.\]
This proves (17). Thus, Corollary 5.2 is proved.
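Let us illustrate Theorem 5.1 and Corollary 5.2 in the smallest nontrivial case \(n=3\), \(i=1\). Here \(t_{1}=1+\operatorname{cyc}_{1,2}+\operatorname{cyc}_{1,2,3}\) and \(t_{2}=1+\operatorname{cyc}_{2,3}\), and multiplying out the six products involved gives
\[t_{2}t_{1}=\left(t_{1}-1\right)t_{1}=t_{1}\left(t_{1}-1\right)=\sum_{\sigma\in S_{3}}\sigma,\]
the sum of all six permutations of \(S_{3}\). Moreover, \(\left[t_{1},t_{2}\right]=t_{1}t_{2}-t_{2}t_{1}=\operatorname{cyc}_{1,2}+\operatorname{cyc}_{1,2,3}-\operatorname{cyc}_{1,3,2}-\operatorname{cyc}_{1,3}\), and squaring this element indeed gives \(0\), in agreement with (17).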
## 6 The identities \(t_{i+2}\left(t_{i}-1\right)=\left(t_{i}-1\right)\left(t_{i+1}-1\right)\) and \(\left[t_{i},t_{i+2}\right]\left(t_{i}-1\right)=t_{i+1}\left[t_{i},t_{i+1}\right]\)
### The identity \(t_{i+2}\left(t_{i}-1\right)=\left(t_{i}-1\right)\left(t_{i+1}-1\right)\)
The next theorem is a "next-level" analogue of Theorem 5.1:
**Theorem 6.1**.: Let \(i\in[n-2]\). Then,
\[t_{i+2}\left(t_{i}-1\right)=\left(t_{i}-1\right)\left(t_{i+1}-1\right).\]
Proof of Theorem 6.1.: From \(i\in[n-2]\), we obtain \(i+1\in[2,n-1]\subseteq[n-1]\). Hence, (11) (applied to \(i+1\) instead of \(i\)) yields \(t_{\left(i+1\right)+1}t_{i+1}=t_{i+1}\left(t_{i+1}-1\right)\). In view of \(\left(i+1\right)+1=i+2\), we can rewrite this as
\[t_{i+2}t_{i+1}=t_{i+1}\left(t_{i+1}-1\right). \tag{18}\]
Furthermore, Corollary 4.2 (applied to \(\ell=i\)) yields \(t_{i}=1+s_{i}t_{i+1}\) (since \(i\in[n-2]\subseteq[n-1]\)). Hence,
\[t_{i}-1=s_{i}t_{i+1}. \tag{19}\]
The definition of \(\left(i\Longrightarrow i+1\right)\) yields \(\left(i\Longrightarrow i+1\right)=\text{cyc}_{i,i+1}=s_{i}\). However, (8) (applied to \(k=i+1\) and \(j=i+2\)) yields \(\left(i\Longrightarrow i+1\right)t_{i+2}=t_{i+2}\left(i\Longrightarrow i+1\right)\). In view of \(\left(i\Longrightarrow i+1\right)=s_{i}\), we can rewrite this as \(s_{i}t_{i+2}=t_{i+2}s_{i}\). In other words, \(t_{i+2}s_{i}=s_{i}t_{i+2}\).
Now,
\[t_{i+2}\underbrace{\left(t_{i}-1\right)}_{\substack{=s_{i}t_{i+1}\\ \text{(by (19))}}}=\underbrace{t_{i+2}s_{i}}_{=s_{i}t_{i+2}}t_{i+1}=s_{i}\underbrace{t_{i+2}t_{i+1}}_{\substack{=t_{i+1}\left(t_{i+1}-1\right)\\ \text{(by (18))}}}=\underbrace{s_{i}t_{i+1}}_{\substack{=t_{i}-1\\ \text{(by (19))}}}\left(t_{i+1}-1\right)=\left(t_{i}-1\right)\left(t_{i+1}-1\right).\]
This proves Theorem 6.1.

### The identity \(\left[t_{i},t_{i+2}\right]\left(t_{i}-1\right)=t_{i+1}\left[t_{i},t_{i+1}\right]\)

**Corollary 6.2**.: Let \(i\in[n-2]\). Then,
\[\left[t_{i},t_{i+2}\right]\left(t_{i}-1\right)=t_{i+1}\left[t_{i},t_{i+1}\right]. \tag{20}\]

Proof.:
We have
\[t_{i}\underbrace{t_{i+2}\left(t_{i}-1\right)}_{\substack{=\left(t_{i}-1\right)\left(t_{i+1}-1\right)\\ \text{(by Theorem 6.1)}}}=\underbrace{t_{i}\left(t_{i}-1\right)}_{\substack{=t_{i+1}t_{i}\\ \text{(by (11))}}}\left(t_{i+1}-1\right)=t_{i+1}t_{i}\left(t_{i+1}-1\right)=t_{i+1}t_{i}t_{i+1}-t_{i+1}t_{i} \tag{21}\]
and
\[t_{i+2}\underbrace{t_{i}\left(t_{i}-1\right)}_{\substack{=t_{i+1}t_{i}\\ \text{(by (11))}}}=\underbrace{t_{i+2}t_{i+1}}_{\substack{=t_{i+1}\left(t_{i+1}-1\right)\\ \text{(by (11), applied to }i+1\text{ instead of }i\text{)}}}t_{i}=t_{i+1}\left(t_{i+1}-1\right)t_{i}=t_{i+1}t_{i+1}t_{i}-t_{i+1}t_{i}. \tag{22}\]
Now, the definition of a commutator yields \(\left[t_{i},t_{i+1}\right]=t_{i}t_{i+1}-t_{i+1}t_{i}\) and \(\left[t_{i},t_{i+2}\right]=t_{i}t_{i+2}-t_{i+2}t_{i}\). Hence,
\[\underbrace{\left[t_{i},t_{i+2}\right]}_{=t_{i}t_{i+2}-t_{i+2}t_{i}}\left(t_{i}-1\right)=\left(t_{i}t_{i+2}-t_{i+2}t_{i}\right)\left(t_{i}-1\right)=t_{i}t_{i+2}\left(t_{i}-1\right)-t_{i+2}t_{i}\left(t_{i}-1\right)\]
\[=\left(t_{i+1}t_{i}t_{i+1}-t_{i+1}t_{i}\right)-\left(t_{i+1}t_{i+1}t_{i}-t_{i+1}t_{i}\right)\qquad\quad\left(\text{by (21) and (22)}\right)\]
\[=t_{i+1}t_{i}t_{i+1}-t_{i+1}t_{i+1}t_{i}=t_{i+1}\left(t_{i}t_{i+1}-t_{i+1}t_{i}\right)=t_{i+1}\left[t_{i},t_{i+1}\right].\]
This proves Corollary 6.2.

## 7 The identity \(\left(1+s_{j}\right)\left[t_{i},t_{j}\right]=0\)

### The identity \(\left(1+s_{i+1}\right)\left[t_{i},t_{i+1}\right]=0\)

We begin with the case \(j=i+1\):

**Lemma 7.1**.: Let \(i\in[n-2]\). Then,
\[\left(1+s_{i+1}\right)\left[t_{i},t_{i+1}\right]=0.\]

Proof.: Set \(a:=t_{i}\) and \(b:=t_{i+1}\). Note first that
\[\left[a,b\right]=\left[a-1,\ b-1\right] \tag{23}\]
(indeed, both sides equal \(ab-ba\), since the remaining terms cancel).
From \(i\in[n-2]\), we obtain \(i+1\in[2,n-1]\subseteq[n-1]\). Hence, Corollary 4.2 (applied to \(\ell=i+1\)) yields \(t_{i+1}=1+s_{i+1}t_{(i+1)+1}=1+s_{i+1}t_{i+2}\) (since \((i+1)+1=i+2\)). Hence, \(b=t_{i+1}=1+s_{i+1}t_{i+2}\). Therefore, \(b-1=s_{i+1}t_{i+2}\). Thus,
\[s_{i+1}\underbrace{(b-1)}_{=s_{i+1}t_{i+2}}=\underbrace{s_{i+1}s_{i+1}}_{=s_{i +1}^{2}=1}t_{i+2}=t_{i+2}. \tag{24}\]
However, Theorem 6.1 yields
\[t_{i+2}\left(t_{i}-1\right)=\left(t_{i}-1\right)\left(t_{i+1}-1\right).\]
In view of \(a=t_{i}\) and \(b=t_{i+1}\), we can rewrite this as
\[t_{i+2}\left(a-1\right)=\left(a-1\right)\left(b-1\right).\]
Hence,
\[\left(a-1\right)\left(b-1\right)=\underbrace{t_{i+2}}_{\begin{subarray}{c}=s _{i+1}\left(b-1\right)\\ \text{(by \eqref{eq:24})}\end{subarray}}\left(a-1\right)=s_{i+1}\left(b-1 \right)\left(a-1\right). \tag{25}\]
Now, (23) becomes
\[\left[a,b\right] =\left[a-1,\ b-1\right]=\underbrace{\left(a-1\right)\left(b-1 \right)}_{=s_{i+1}\left(b-1\right)\left(a-1\right)}-\left(b-1\right)\left(a-1\right)\] \[=s_{i+1}\left(b-1\right)\left(a-1\right)-\left(b-1\right)\left(a -1\right)\] \[=\left(s_{i+1}-1\right)\left(b-1\right)\left(a-1\right).\]
Multiplying both sides of this equality by \(1+s_{i+1}\) from the left, we obtain
\[\left(1+s_{i+1}\right)\left[a,b\right]=\underbrace{\left(1+s_{i+1}\right) \left(s_{i+1}-1\right)}_{=\left(s_{i+1}+1\right)\left(s_{i+1}-1\right)}\left( b-1\right)\left(a-1\right)=0.\]
In view of \(a=t_{i}\) and \(b=t_{i+1}\), we can rewrite this as \(\left(1+s_{i+1}\right)\left[t_{i},t_{i+1}\right]=0\). This proves Lemma 7.1.
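In the case \(n=3\) and \(i=1\), Lemma 7.1 can be seen directly: here \(\left[t_{1},t_{2}\right]=\operatorname{cyc}_{1,2}+\operatorname{cyc}_{1,2,3}-\operatorname{cyc}_{1,3,2}-\operatorname{cyc}_{1,3}\), and left multiplication by \(s_{2}=\operatorname{cyc}_{2,3}\) sends \(\operatorname{cyc}_{1,2}\mapsto\operatorname{cyc}_{1,3,2}\), \(\operatorname{cyc}_{1,2,3}\mapsto\operatorname{cyc}_{1,3}\), \(\operatorname{cyc}_{1,3,2}\mapsto\operatorname{cyc}_{1,2}\) and \(\operatorname{cyc}_{1,3}\mapsto\operatorname{cyc}_{1,2,3}\), so that \(s_{2}\left[t_{1},t_{2}\right]=-\left[t_{1},t_{2}\right]\) and hence \(\left(1+s_{2}\right)\left[t_{1},t_{2}\right]=0\).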
The following is just a restatement of Lemma 7.1:
**Lemma 7.2**.: Let \(j\in[2,n-1]\). Then,
\[\left(1+s_{j}\right)\left[t_{j-1},t_{j}\right]=0.\]
Proof.: We have \(j-1\in[n-2]\) (since \(j\in[2,n-1]\)). Hence, Lemma 7.1 (applied to \(i=j-1\)) yields
\[\left(1+s_{(j-1)+1}\right)\left[t_{j-1},t_{(j-1)+1}\right]=0.\]
In view of \((j-1)+1=j\), we can rewrite this as \(\left(1+s_{j}\right)\left[t_{j-1},t_{j}\right]=0\). This proves Lemma 7.2.
### Expressing \(\left[t_{i},t_{j}\right]\) via \(\left[t_{j-1},t_{j}\right]\)
The following lemma is useful for reducing questions about \(\left[t_{i},t_{j}\right]\) to questions about \(\left[t_{j-1},t_{j}\right]\):
**Lemma 7.3**.: Let \(i,j\in\left[n\right]\) satisfy \(i<j\). Then:
* **(a)** We have \[\left[t_{i},t_{j}\right]=\left[s_{i}s_{i+1}\cdots s_{j-1},t_{j}\right]\,t_{j}.\]
* **(b)** We have \[\left[t_{i},t_{j}\right]=\left(s_{i}s_{i+1}\cdots s_{j-2}\right)\,\left[t_{j-1},t_{j}\right].\]
Proof.: A well-known identity for commutators says that if \(R\) is a ring, then any three elements \(a,b,c\in R\) satisfy
\[\left[ab,c\right]=\left[a,c\right]b+a\left[b,c\right]. \tag{26}\]
Hence, if \(R\) is a ring, then any two elements \(a,b\in R\) satisfy
\[\left[ab,b\right] = \left[a,b\right]b+a\underbrace{\left[b,b\right]}_{=0}\qquad \qquad\text{(by (\ref{eq:R}), applied to $c=b$)} \tag{27}\] \[= \left[a,b\right]b+a0=\left[a,b\right]b.\]
**(a)** Proposition 4.1 yields
\[t_{i} =\sum_{w=i}^{n}\left(i\Longrightarrow w\right)=\sum_{k=i}^{n}\left(i\Longrightarrow k\right)=\sum_{k=i}^{j-1}\left(i\Longrightarrow k\right)+\sum_{k=j}^{n}\left(i\Longrightarrow k\right)\qquad\quad\left(\text{since }i<j\leq n\right)\]
\[=\sum_{k=i}^{j-1}\left(i\Longrightarrow k\right)+\sum_{k=j}^{n}\left(i\Longrightarrow j\right)\left(j\Longrightarrow k\right)\qquad\quad\left(\text{since }\left(i\Longrightarrow k\right)=\left(i\Longrightarrow j\right)\left(j\Longrightarrow k\right)\text{ for }i\leq j\leq k\right)\]
\[=\sum_{k=i}^{j-1}\left(i\Longrightarrow k\right)+\left(i\Longrightarrow j\right)\underbrace{\sum_{k=j}^{n}\left(j\Longrightarrow k\right)}_{\substack{=t_{j}\\ \text{(by Proposition 4.1)}}}=\sum_{k=i}^{j-1}\left(i\Longrightarrow k\right)+\left(i\Longrightarrow j\right)t_{j}.\]
Thus,
\[\left[t_{i},t_{j}\right] =\left[\sum_{k=i}^{j-1}\left(i\Longrightarrow k\right)+\left(i\Longrightarrow j\right)t_{j},\ t_{j}\right]=\sum_{k=i}^{j-1}\underbrace{\left[\left(i\Longrightarrow k\right),\ t_{j}\right]}_{\substack{=0\\ \text{(by (9), since }i\leq k<j\text{)}}}+\left[\left(i\Longrightarrow j\right)t_{j},\ t_{j}\right]\]
\[=\left[\left(i\Longrightarrow j\right)t_{j},\ t_{j}\right]=\left[\left(i\Longrightarrow j\right),\ t_{j}\right]t_{j}\qquad\quad\left(\text{by (27), applied to }a=\left(i\Longrightarrow j\right)\text{ and }b=t_{j}\right).\]
Since Proposition 3.2 (applied to \(v=i\) and \(w=j\)) yields \(\left(i\Longrightarrow j\right)=s_{i}s_{i+1}\cdots s_{j-1}\), we can rewrite this as
\[\left[t_{i},t_{j}\right]=\left[s_{i}s_{i+1}\cdots s_{j-1},\ t_{j}\right]t_{j}.\]
This proves Lemma 7.3 **(a)**.

**(b)** Set \(a:=s_{i}s_{i+1}\cdots s_{j-2}\) and \(b:=s_{j-1}\) and \(c:=t_{j}\). Then,
\[ab=\left(s_{i}s_{i+1}\cdots s_{j-2}\right)s_{j-1}=s_{i}s_{i+1}\cdots s_{j-1}. \tag{28}\]
However, \(i\leq j-1\) (since \(i<j\)). Hence, Proposition 3.2 (applied to \(v=i\) and \(w=j-1\)) yields \((i\Longrightarrow j-1)=s_{i}s_{i+1}\cdots s_{(j-1)-1}=s_{i}s_{i+1}\cdots s_{j-2}=a\) (since \(a=s_{i}s_{i+1}\cdots s_{j-2}\)).
Now, \(i\leq j-1<j\). Hence, (9) (applied to \(k=j-1\)) yields \(\left[\left(i\Longrightarrow j-1\right),\ t_{j}\right]=0\). In view of \((i\Longrightarrow j-1)=a\) and \(t_{j}=c\), we can rewrite this as \([a,c]=0\). Hence, (26) becomes
\[[ab,c]=\underbrace{[a,c]}_{=0}b+a\left[b,c\right]=a\left[b,c\right]. \tag{29}\]
On the other hand, applying Lemma 7.3**(a)** to \(j-1\) instead of \(j\), we obtain
\[\left[t_{j-1},t_{j}\right]=\left[s_{j-1},t_{j}\right]t_{j}. \tag{30}\]
However, Lemma 7.3**(a)** yields
However, Lemma 7.3 **(a)** yields
\[\left[t_{i},t_{j}\right]=\left[\underbrace{s_{i}s_{i+1}\cdots s_{j-1}}_{\substack{=ab\\ \text{(by (28))}}},\ \underbrace{t_{j}}_{=c}\right]t_{j}=\underbrace{\left[ab,c\right]}_{\substack{=a\left[b,c\right]\\ \text{(by (29))}}}t_{j}=a\left[b,c\right]t_{j}=a\underbrace{\left[s_{j-1},t_{j}\right]t_{j}}_{\substack{=\left[t_{j-1},t_{j}\right]\\ \text{(by (30))}}}\qquad\quad\left(\text{since }b=s_{j-1}\text{ and }c=t_{j}\right)\]
\[=\underbrace{a}_{=s_{i}s_{i+1}\cdots s_{j-2}}\left[t_{j-1},t_{j}\right]=\left(s_{i}s_{i+1}\cdots s_{j-2}\right)\left[t_{j-1},t_{j}\right].\]
This proves Lemma 7.3 **(b)**.
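As a small example of part **(a)**, take \(n=3\), \(i=1\) and \(j=2\). Then \(\left[s_{1},t_{2}\right]=s_{1}\left(1+s_{2}\right)-\left(1+s_{2}\right)s_{1}=s_{1}s_{2}-s_{2}s_{1}=\operatorname{cyc}_{1,2,3}-\operatorname{cyc}_{1,3,2}\), and therefore
\[\left[s_{1},t_{2}\right]t_{2}=\left(\operatorname{cyc}_{1,2,3}-\operatorname{cyc}_{1,3,2}\right)\left(1+s_{2}\right)=\operatorname{cyc}_{1,2,3}+\operatorname{cyc}_{1,2}-\operatorname{cyc}_{1,3,2}-\operatorname{cyc}_{1,3}=\left[t_{1},t_{2}\right],\]
in agreement with Lemma 7.3 **(a)**. (For \(j=2\), part **(b)** is the trivial statement \(\left[t_{1},t_{2}\right]=\left[t_{1},t_{2}\right]\), since the product \(s_{i}s_{i+1}\cdots s_{j-2}\) is empty.)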
### The identity \(\left(1+s_{j}\right)\left[t_{i},t_{j}\right]=0\) for all \(i\leq j\)
We are now ready to prove the following surprising result:
**Theorem 7.5**.: Let \(i,j\in\left[n-1\right]\) satisfy \(i\leq j\). Then,
\[\left(1+s_{j}\right)\left[t_{i},t_{j}\right]=0.\]
Proof.: If \(i=j\), then this is obvious (since \(i=j\) entails \(\left[t_{i},t_{j}\right]=\left[t_{j},t_{j}\right]=0\)). Hence, we WLOG assume that \(i\neq j\). Thus, \(i<j\) (since \(i\leq j\)).
The transpositions \(s_{i},s_{i+1},\ldots,s_{j-2}\) all commute with \(s_{j}\) (by reflection locality, since the numbers \(i,i+1,\ldots,j-2\) differ by more than \(1\) from \(j\)). Thus, their product \(s_{i}s_{i+1}\cdots s_{j-2}\) commutes with \(s_{j}\) as well. In other words,
\[s_{j}\left(s_{i}s_{i+1}\cdots s_{j-2}\right)=\left(s_{i}s_{i+1}\cdots s_{j-2} \right)s_{j}.\]
Thus, in \(\mathbf{k}\left[S_{n}\right]\), we have
\[\left(1+s_{j}\right)\left(s_{i}s_{i+1}\cdots s_{j-2}\right) =s_{i}s_{i+1}\cdots s_{j-2}+\underbrace{s_{j}\left(s_{i}s_{i+1} \cdots s_{j-2}\right)}_{=\left(s_{i}s_{i+1}\cdots s_{j-2}\right)s_{j}}\] \[=s_{i}s_{i+1}\cdots s_{j-2}+\left(s_{i}s_{i+1}\cdots s_{j-2} \right)s_{j}\] \[=\left(s_{i}s_{i+1}\cdots s_{j-2}\right)\left(1+s_{j}\right). \tag{31}\]
However, Lemma 7.3**(b)** yields \(\left[t_{i},t_{j}\right]=\left(s_{i}s_{i+1}\cdots s_{j-2}\right)\left[t_{j-1},t_{j}\right]\) (since \(i<j\)). Hence,
\[\left(1+s_{j}\right)\underbrace{\left[t_{i},t_{j}\right]}_{= \left(s_{i}s_{i+1}\cdots s_{j-2}\right)\left[t_{j-1},t_{j}\right]} =\underbrace{\left(1+s_{j}\right)\left(s_{i}s_{i+1}\cdots s_{j-2} \right)\left(1+s_{j}\right)}_{=\left(s_{i}s_{i+1}\cdots s_{j-2}\right)\left(1 +s_{j}\right)}\left[t_{j-1},t_{j}\right]\] \[=\left(s_{i}s_{i+1}\cdots s_{j-2}\right)\underbrace{\left(1+s_{j }\right)\left[t_{j-1},t_{j}\right]}_{=0}=0.\]
This proves Theorem 7.5.
**Corollary 7.6**.: Let \(n\geq 2\) and \(i\in\left[n\right]\). Then, \(t_{n-1}\left[t_{i},t_{n-1}\right]=0\).
Proof.: This is true for \(i=n\) (because \(t_{n}=1\) and thus \(\left[t_{n},t_{n-1}\right]=\left[1,t_{n-1}\right]=1t_{n-1}-t_{n-1}=0\) and therefore \(t_{n-1}\underbrace{\left[t_{n},t_{n-1}\right]}_{=0}=0\)). Hence, we WLOG assume that \(i\neq n\). Therefore, \(i\in\left[n\right]\setminus\left\{n\right\}=\left[n-1\right]\). Also, \(n-1\in\left[n-1\right]\) (since \(n\geq 2\)).
The definition of \(t_{n-1}\) yields \(t_{n-1}=\underbrace{\operatorname{cyc}_{n-1}}_{=1}+\underbrace{\operatorname {cyc}_{n-1,n}}_{=s_{n-1}}=1+s_{n-1}\).
However, Theorem 7.5 (applied to \(j=n-1\)) yields \(\left(1+s_{n-1}\right)\left[t_{i},t_{n-1}\right]=0\). In view of \(t_{n-1}=1+s_{n-1}\), we can rewrite this as \(t_{n-1}\left[t_{i},t_{n-1}\right]=0\). This proves Corollary 7.6.
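All of the identities proved so far are again finite statements for each fixed \(n\), so they can be confirmed mechanically. The following Python sketch repeats the small group-algebra helpers introduced after Corollary 4.2 (so that it runs on its own; the helper names are ours, not a library's) and tests Theorem 5.1, Theorem 6.1, Theorem 7.5 and Corollary 7.6 for all admissible indices with \(n\leq 5\):

```python
def compose(p, q): return tuple(p[q[x] - 1] for x in range(len(p)))
def cyc(n, i, w):
    # the cycle cyc_{i,i+1,...,w}
    p = list(range(1, n + 1))
    for x in range(i, w):
        p[x - 1] = x + 1
    if w > i:
        p[w - 1] = i
    return tuple(p)
def mul(a, b):
    # product in k[S_n]; elements are dicts {permutation: coefficient}
    c = {}
    for p, x in a.items():
        for q, y in b.items():
            r = compose(p, q)
            c[r] = c.get(r, 0) + x * y
    return {r: v for r, v in c.items() if v != 0}
def add(a, b, s=1):
    c = dict(a)
    for p, v in b.items():
        c[p] = c.get(p, 0) + s * v
    return {r: v for r, v in c.items() if v != 0}
def one(n): return {tuple(range(1, n + 1)): 1}
def t_elem(n, l):
    out = {}
    for w in range(l, n + 1):
        p = cyc(n, l, w)
        out[p] = out.get(p, 0) + 1
    return out
def comm(a, b): return add(mul(a, b), mul(b, a), -1)

for n in range(2, 6):
    t = {l: t_elem(n, l) for l in range(1, n + 1)}
    s = {l: {cyc(n, l, l + 1): 1} for l in range(1, n)}
    for i in range(1, n):                      # Theorem 5.1
        assert mul(t[i + 1], t[i]) == mul(t[i], add(t[i], one(n), -1))
    for i in range(1, n - 1):                  # Theorem 6.1
        assert mul(t[i + 2], add(t[i], one(n), -1)) == \
               mul(add(t[i], one(n), -1), add(t[i + 1], one(n), -1))
    for j in range(1, n):                      # Theorem 7.5
        for i in range(1, j + 1):
            assert mul(add(one(n), s[j]), comm(t[i], t[j])) == {}
    for i in range(1, n + 1):                  # Corollary 7.6
        assert mul(t[n - 1], comm(t[i], t[n - 1])) == {}
print("Theorems 5.1, 6.1, 7.5 and Corollary 7.6 hold for n = 2, ..., 5")
```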
## 8 The identity \(\left[t_{i},t_{j}\right]^{\lceil(n-j)/2\rceil+1}=0\) for all \(i,j\in[n]\)
### The elements \(s_{k}^{+}\) and the left ideals \(H_{k,j}\)
We now introduce two crucial notions for the proof of our first main theorem:
**Definition 8.1**.: We set \(\mathbf{A}:=\mathbf{k}\left[S_{n}\right]\). Furthermore, for any \(i\in[n-1]\), we set
\[s_{i}^{+}:=s_{i}+1\in\mathbf{A}.\]
We also set \(s_{i}^{+}:=1\in\mathbf{A}\) for all integers \(i\notin[n-1]\). Thus, \(s_{i}^{+}\) is defined for all integers \(i\).
**Definition 8.2**.: Let \(k\) and \(j\) be two integers. Then, we define
\[H_{k,j}:=\sum\limits_{\begin{subarray}{c}u\in[j,k];\\ u\equiv k\bmod 2\end{subarray}}\mathbf{A}s_{u}^{+}.\]
This is a left ideal of \(\mathbf{A}\). Note that
\[H_{k,j}=0\qquad\text{ whenever }k<j. \tag{32}\]
**Example 8.3**.: We have
\[H_{7,3} =\sum\limits_{\begin{subarray}{c}u\in[3,7];\\ u\equiv 7\bmod 2\end{subarray}}\mathbf{A}s_{u}^{+}=\mathbf{A}s_{3}^{+}+ \mathbf{A}s_{5}^{+}+\mathbf{A}s_{7}^{+}\qquad\quad\text{ and }\] \[H_{7,2} =\sum\limits_{\begin{subarray}{c}u\in[2,7];\\ u\equiv 7\bmod 2\end{subarray}}\mathbf{A}s_{u}^{+}=\mathbf{A}s_{3}^{+}+ \mathbf{A}s_{5}^{+}+\mathbf{A}s_{7}^{+},\]
so that \(H_{7,2}=H_{7,3}\). Similarly, \(H_{7,4}=H_{7,5}=\mathbf{A}s_{5}^{+}+\mathbf{A}s_{7}^{+}\) and \(H_{7,6}=H_{7,7}=\mathbf{A}s_{7}^{+}\).
Let us prove some basic properties of the left ideals \(H_{k,j}\):
**Remark 8.4**.: Let \(k\) be an integer such that \(k\notin[n-1]\). Let \(j\in[n]\) satisfy \(j\leq k\). Then, \(H_{k,j}=\mathbf{A}\).
Proof.: Since \(k\notin[n-1]\), we have \(s_{k}^{+}=1\) (by the definition of \(s_{k}^{+}\)). Also, \(k\in[j,k]\) (since \(j\leq k\)).
Recall that \(H_{k,j}\) is defined as the sum \(\sum\limits_{\begin{subarray}{c}u\in[j,k];\\ u\equiv k\bmod 2\end{subarray}}\mathbf{A}s_{u}^{+}\). But this sum contains the addend \(\mathbf{A}s_{k}^{+}\) (since \(k\in[j,k]\) and \(k\equiv k\bmod 2\)). Hence, it contains the addend \(\mathbf{A}s_{k}^{+}=\mathbf{A}1=\mathbf{A}\) (since \(s_{k}^{+}=1\)). Now,
\[H_{k,j}=\sum\limits_{\begin{subarray}{c}u\in[j,k];\\ u\equiv k\bmod 2\end{subarray}}\mathbf{A}s_{u}^{+}\supseteq\mathbf{A},\]
so that \(H_{k,j}=\mathbf{A}\). This proves Remark 8.4.
**Lemma 8.5**.: Let \(k\) and \(j\) be two integers. Then, \(H_{k,j}\subseteq H_{k,j-1}\).
Proof.: Definition 8.2 yields \(H_{k,j}=\sum\limits_{\begin{subarray}{c}u\in[j,k];\\ u\equiv k\bmod 2\end{subarray}}\mathbf{A}s_{u}^{+}\) and \(H_{k,j-1}=\sum\limits_{\begin{subarray}{c}u\in[j-1,k];\\ u\equiv k\bmod 2\end{subarray}}\mathbf{A}s_{u}^{+}\). But clearly, any addend of the former sum is an addend of the latter sum as well (since each \(u\in[j,k]\) satisfies \(u\in[j,k]\subseteq[j-1,k]\)). Thus, the former sum is a subset of the latter. In other words, \(H_{k,j}\subseteq H_{k,j-1}\). This proves Lemma 8.5.
**Lemma 8.6**.: Let \(v\), \(w\) and \(j\) be three integers such that \(v\leq w\) and \(v\equiv w\bmod 2\). Then, \(H_{v,j}\subseteq H_{w,j}\).
Proof.: Definition 8.2 yields
\[H_{v,j} =\sum\limits_{\begin{subarray}{c}u\in[j,v];\\ u\equiv v\bmod 2\end{subarray}}\mathbf{A}s_{u}^{+}\qquad\quad\text{and} \tag{33}\] \[H_{w,j} =\sum\limits_{\begin{subarray}{c}u\in[j,w];\\ u\equiv w\bmod 2\end{subarray}}\mathbf{A}s_{u}^{+}. \tag{34}\]
However, each \(u\in[j,v]\) satisfying \(u\equiv v\bmod 2\) is also an element of \([j,w]\) (since \(u\leq v\leq w\)) and satisfies \(u\equiv w\bmod 2\) (since \(u\equiv v\equiv w\bmod 2\)). Thus, any addend of the sum \(\sum\limits_{\begin{subarray}{c}u\in[j,v];\\ u\equiv v\bmod 2\end{subarray}}\mathbf{A}s_{u}^{+}\) is also an addend of the sum \(\sum\limits_{\begin{subarray}{c}u\in[j,w];\\ u\equiv w\bmod 2\end{subarray}}\mathbf{A}s_{u}^{+}\). Therefore, the former sum is a subset of the latter sum. In other words, \(\sum\limits_{\begin{subarray}{c}u\in[j,v];\\ u\equiv v\bmod 2\end{subarray}}\mathbf{A}s_{u}^{+}\subseteq\sum\limits_{\begin{subarray}{c}u\in[j,w];\\ u\equiv w\bmod 2\end{subarray}}\mathbf{A}s_{u}^{+}\). In view of (33) and (34), we can rewrite this as \(H_{v,j}\subseteq H_{w,j}\). This proves Lemma 8.6.
**Lemma 8.7**.: Let \(k\) and \(j\) be two integers such that \(k\equiv j\bmod 2\). Then, \(H_{k,j-1}=H_{k,j}\).
Proof.: Definition 8.2 yields
\[H_{k,j} =\sum\limits_{\begin{subarray}{c}u\in[j,k];\\ u\equiv k\bmod 2\end{subarray}}\mathbf{A}s_{u}^{+}\qquad\quad\text{ and} \tag{35}\] \[H_{k,j-1} =\sum\limits_{\begin{subarray}{c}u\in[j-1,k];\\ u\equiv k\bmod 2\end{subarray}}\mathbf{A}s_{u}^{+}. \tag{36}\]
However, each element \(u\in[j,k]\) satisfying \(u\equiv k\operatorname{mod}2\) is also an element of \([j-1,k]\) (since \(u\in[j,k]\subseteq[j-1,k]\)). Conversely, each element \(u\in[j-1,k]\) satisfying \(u\equiv k\operatorname{mod}2\) is also an element of \([j,k]\) (since otherwise, it would equal \(j-1\), so that we would have \(j-1=u\equiv k\equiv j\operatorname{mod}2\), but this would contradict \(j-1\not\equiv j\operatorname{mod}2\)). Therefore, the elements \(u\in[j-1,k]\) satisfying \(u\equiv k\operatorname{mod}2\) are precisely the elements \(u\in[j,k]\) satisfying \(u\equiv k\operatorname{mod}2\). In other words, the sum on the right hand side of (36) ranges over the same set as the sum on the right hand side of (35). Therefore, the right hand sides of the equalities (36) and (35) are equal. Hence, their left hand sides must also be equal. In other words, \(H_{k,j-1}=H_{k,j}\). This proves Lemma 8.7.
The following easy property follows from Lemma 3.7:
**Lemma 8.8**.: Let \(i,u,v\in[n]\) be such that \(i<u<v\). Then,
\[s_{u}^{+}\left(i\Longrightarrow v\right)=\left(i\Longrightarrow v\right)s_{u -1}^{+}.\]
Proof.: We have \(u\in[n-1]\) (since \(u<v\leq n\) and \(u>i\geq 1\)) and thus \(s_{u}^{+}=s_{u}+1\).
Also, \(u>i\geq 1\), so that \(u-1>0\). Hence, \(u-1\in[n-1]\) (since \(u-1<u<v\leq n\) and \(u-1>0\)) and thus \(s_{u-1}^{+}=s_{u-1}+1\). Hence,
\[\underbrace{s_{u}^{+}}_{=s_{u}+1}\left(i\Longrightarrow v\right)-\left(i\Longrightarrow v\right)\underbrace{s_{u-1}^{+}}_{=s_{u-1}+1}=\left(s_{u}+1\right)\left(i\Longrightarrow v\right)-\left(i\Longrightarrow v\right)\left(s_{u-1}+1\right)\]
\[=s_{u}\left(i\Longrightarrow v\right)+\left(i\Longrightarrow v\right)-\left(i\Longrightarrow v\right)s_{u-1}-\left(i\Longrightarrow v\right)=\underbrace{s_{u}\left(i\Longrightarrow v\right)}_{\substack{=\left(i\Longrightarrow v\right)s_{u-1}\\ \text{(by Lemma 3.7)}}}-\left(i\Longrightarrow v\right)s_{u-1}=0.\]
In other words, \(s_{u}^{+}\left(i\Longrightarrow v\right)=\left(i\Longrightarrow v\right)s_{u-1}^{+}\). This proves Lemma 8.8.

**Lemma 8.9**.: Let \(j\in[2,n]\). Then,
\[s_{j-1}^{+}\left(t_{j}-\left(t_{j-1}-1\right)\right)=0.\]

Proof.: From \(j\in[2,n]\), we obtain \(j-1\in[n-1]\). Hence, \(s_{j-1}^{+}=1+s_{j-1}\) (by the definition of \(s_{j-1}^{+}\)). Moreover, Corollary 4.2 (applied to \(\ell=j-1\)) yields \(t_{j-1}=1+s_{j-1}t_{j}\), so that \(t_{j-1}-1=s_{j-1}t_{j}\) and therefore
\[t_{j}-\left(t_{j-1}-1\right)=t_{j}-s_{j-1}t_{j}=\left(1-s_{j-1}\right)t_{j}.\]
Thus,
\[\underbrace{s_{j-1}^{+}}_{=1+s_{j-1}}\underbrace{\left(t_{j}-\left(t_{j-1}-1 \right)\right)}_{=\left(1-s_{j-1}\right)t_{j}}=\underbrace{\left(1+s_{j-1} \right)\left(1-s_{j-1}\right)}_{=1-s_{j-1}^{2}=0}t_{j}=0.\]
This proves Lemma 8.9.
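For example, for \(n=3\) and \(j=2\) we have \(t_{2}-\left(t_{1}-1\right)=\left(1+s_{2}\right)-\left(s_{1}+\operatorname{cyc}_{1,2,3}\right)\), and multiplying this by \(s_{1}^{+}=1+s_{1}\) on the left gives
\[\left(1+s_{1}\right)\left(1+s_{2}-s_{1}-\operatorname{cyc}_{1,2,3}\right)=1+s_{2}-s_{1}-\operatorname{cyc}_{1,2,3}+s_{1}+\operatorname{cyc}_{1,2,3}-1-s_{2}=0\]
(since \(s_{1}s_{2}=\operatorname{cyc}_{1,2,3}\), \(s_{1}^{2}=1\) and \(s_{1}\operatorname{cyc}_{1,2,3}=s_{2}\)), as Lemma 8.9 predicts.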
### The fuse
The next lemma will help us analyze the behavior of the ideals \(H_{k,j}\) under repeated multiplication by \(t_{j}\)'s:
**Lemma 8.10**.: Let \(j\in[n]\) and \(k\in[n+1]\) be such that \(j<k\). Then:
* **(a)** If \(k\not\equiv j\bmod 2\), then \(s_{k}^{+}t_{j}\in H_{k-1,j}\).
* **(b)** If \(k\equiv j\bmod 2\), then \(s_{k}^{+}\left(t_{j}-1\right)\in H_{k-1,j}\).
Proof.: If \(k=n+1\), then both parts of Lemma 8.10 hold for fairly obvious reasons3. Hence, for the rest of this proof, we WLOG assume that \(k\neq n+1\). Therefore, \(k\in[n+1]\setminus\{n+1\}=[n]\), so that \(k\leq n\).
Footnote 3: Proof.: Assume that \(k=n+1\). Then, \(k-1=n\notin[n-1]\) and \(j\leq k-1\) (since \(j<k\)), so that \(H_{k-1,j}=\mathbf{A}\) (by Remark 8.4, applied to \(k-1\) instead of \(k\)). Thus, both elements \(s_{k}^{+}t_{j}\) and \(s_{k}^{+}\left(t_{j}-1\right)\) belong to \(H_{k-1,j}\) (since they both belong to \(\mathbf{A}\)). Therefore, both parts of Lemma 8.10 hold. Qed.
Recall that \(H_{k-1,j}\) is a left ideal of \(\mathbf{A}\), therefore an additive subgroup of \(\mathbf{A}\).
We have \(j<k\), so that \(j\leq k-1\). Hence, \(k-1\in[j,k-1]\).
The definition of \(H_{k-1,j}\) yields
\[H_{k-1,j}=\sum_{\begin{subarray}{c}u\in[j,k-1];\\ u\equiv k-1\bmod 2\end{subarray}}\mathbf{A}s_{u}^{+}. \tag{37}\]
Since \(\mathbf{A}s_{k-1}^{+}\) is an addend of the sum on the right hand side here (because \(k-1\in[j,k-1]\) and \(k-1\equiv k-1\bmod 2\)), we thus conclude that \(\mathbf{A}s_{k-1}^{+}\subseteq H_{k-1,j}\).
Proposition 4.1 yields
\[t_{j}=\sum_{w=j}^{n}\left(j\Longrightarrow w\right).\]
Hence,
\[s_{k}^{+}t_{j} =s_{k}^{+}\sum_{w=j}^{n}\left(j\Longrightarrow w\right)=\sum_{w=j} ^{n}s_{k}^{+}\left(j\Longrightarrow w\right)\] \[=s_{k}^{+}\sum_{w=j}^{k}\left(j\Longrightarrow w\right)+\sum_{w=k +1}^{n}\left(j\Longrightarrow w\right)s_{k-1}^{+}. \tag{38}\]
We now need a better understanding of the sums on the right hand side. For this purpose, we observe that every \(w\in[j,n-1]\) satisfies
\[\left(j\Longrightarrow w\right)+\underbrace{\left(j\Longrightarrow w+1\right)}_{\substack{=\left(j\Longrightarrow w\right)s_{w}\\ \text{(by Proposition 3.2)}}}=\left(j\Longrightarrow w\right)\underbrace{\left(1+s_{w}\right)}_{=s_{w}^{+}}=\left(j\Longrightarrow w\right)s_{w}^{+}\in\mathbf{A}s_{w}^{+}. \tag{39}\]
Let us first prove part **(a)**. So assume that \(k\not\equiv j\bmod 2\). Then, \(k-j\) is odd, so that \(k-j+1\) is even. Now,
\[\sum_{w=j}^{k}\left(j\Longrightarrow w\right) =\left(\left(j\Longrightarrow j\right)+\left(j\Longrightarrow j+1\right)\right)+\left(\left(j\Longrightarrow j+2\right)+\left(j\Longrightarrow j+3\right)\right)+\cdots+\left(\left(j\Longrightarrow k-1\right)+\left(j\Longrightarrow k\right)\right)\]
(here, we have split our sum into pairs of consecutive addends, since \(k-j+1\) is even)
\[\in\mathbf{A}s_{j}^{+}+\mathbf{A}s_{j+2}^{+}+\mathbf{A}s_{j+4}^{+}+\cdots+\mathbf{A}s_{k-1}^{+}\qquad\quad\left(\text{by (39)}\right)\]
\[=\sum_{\begin{subarray}{c}u\in[j,k-1];\\ u\equiv k-1\bmod 2\end{subarray}}\mathbf{A}s_{u}^{+}=H_{k-1,j}\qquad\quad\left(\text{by (37)}\right).\]
Hence, (38) becomes
\[s_{k}^{+}t_{j}=s_{k}^{+}\underbrace{\sum_{w=j}^{k}\left(j\Longrightarrow w\right)}_{\in H_{k-1,j}}+\sum_{w=k+1}^{n}\underbrace{\left(j\Longrightarrow w\right)s_{k-1}^{+}}_{\in\mathbf{A}s_{k-1}^{+}\subseteq H_{k-1,j}}\in s_{k}^{+}H_{k-1,j}+\sum_{w=k+1}^{n}H_{k-1,j}\subseteq H_{k-1,j}\]
(since \(H_{k-1,j}\) is a left ideal of \(\mathbf{A}\)). This proves Lemma 8.10 **(a)**.

**(b)** Assume now that \(k\equiv j\bmod 2\). Then, \(k-j\) is even.
Now,
\[\sum_{w=j+1}^{k}\left(j\Longrightarrow w\right) =\left(\left(j\Longrightarrow j+1\right)+\left(j\Longrightarrow j+2\right)\right)+\left(\left(j\Longrightarrow j+3\right)+\left(j\Longrightarrow j+4\right)\right)+\cdots+\left(\left(j\Longrightarrow k-1\right)+\left(j\Longrightarrow k\right)\right)\]
(here, we have split our sum into pairs of consecutive addends, since \(k-j\) is even)
\[\in\mathbf{A}s_{j+1}^{+}+\mathbf{A}s_{j+3}^{+}+\mathbf{A}s_{j+5}^{+}+\cdots+\mathbf{A}s_{k-1}^{+}\qquad\quad\left(\text{by (39)}\right)\]
\[=\sum_{\begin{subarray}{c}u\in[j,k-1];\\ u\equiv k-1\bmod 2\end{subarray}}\mathbf{A}s_{u}^{+}=H_{k-1,j}\qquad\quad\left(\text{by (37)}\right). \tag{40}\]
But
\[\sum_{w=j}^{k}\left(j\Longrightarrow w\right)=\underbrace{\left(j \Longrightarrow j\right)}_{=\text{id}=1}+\sum_{w=j+1}^{k}\left(j\Longrightarrow w\right)\] \[\qquad\qquad\qquad\qquad\qquad\left(\begin{array}{c}\text{here, we have split off the }\\ \text{addend for }w=j\text{ from the sum }\right)\] \[=1+\sum_{w=j+1}^{k}\left(j\Longrightarrow w\right).\]
Now, (38) becomes
\[s_{k}^{+}t_{j} =s_{k}^{+}\underbrace{\sum\limits_{w=j}^{k}\left(j\Longrightarrow w \right)}_{=1+\overset{k}{\underset{w=j+1}{\sum}}\left(j\Longrightarrow w \right)}+\sum\limits_{w=k+1}^{n}\left(j\Longrightarrow w\right)s_{k-1}^{+}\] \[=s_{k}^{+}\left(1+\sum\limits_{w=j+1}^{k}\left(j\Longrightarrow w \right)\right)+\sum\limits_{w=k+1}^{n}\left(j\Longrightarrow w\right)s_{k-1}^ {+}\] \[=s_{k}^{+}+s_{k}^{+}\sum\limits_{w=j+1}^{k}\left(j\Longrightarrow w \right)+\sum\limits_{w=k+1}^{n}\left(j\Longrightarrow w\right)s_{k-1}^{+}.\]
Subtracting \(s_{k}^{+}\) from both sides of this equality, we find
\[s_{k}^{+}t_{j}-s_{k}^{+} =s_{k}^{+}\underbrace{\sum\limits_{w=j+1}^{k}\left(j \Longrightarrow w\right)}_{\overset{\in H_{k-1,j}}{\text{(by \eqref{eq:s_k})}}}+\sum \limits_{w=k+1}^{n}\underbrace{\left(j\Longrightarrow w\right)s_{k-1}^{+}}_{ \in\mathbf{As}_{k-1}^{+}\subseteq H_{k-1,j}}\] \[\in s_{k}^{+}H_{k-1,j}+\sum\limits_{w=k+1}^{n}H_{k-1,j}\] \[\subseteq H_{k-1,j}\qquad\quad\left(\text{since $H_{k-1,j}$ is a left ideal of $\mathbf{A}$}\right).\]
Hence, \(s_{k}^{+}\left(t_{j}-1\right)=s_{k}^{+}t_{j}-s_{k}^{+}\in H_{k-1,j}\). This proves Lemma 8.10 **(b)**.
From Lemma 8.10 and Lemma 8.9, we can easily obtain the following:
**Lemma 8.11**.: Let \(j\in[2,n]\) and \(u\in[n]\) be such that \(u\geq j-1\) and \(u\equiv j-1\bmod 2\). Then,
\[s_{u}^{+}\left(t_{j}-\left(t_{j-1}-1\right)\right)\in H_{u-1,j}.\]
Proof.: If \(u=j-1\), then this follows from
\[s_{j-1}^{+}\left(t_{j}-\left(t_{j-1}-1\right)\right) =0\qquad\quad\left(\text{by Lemma \ref{lem:s_k})}\right.\] \[\in H_{(j-1)-1,j}\qquad\quad\left(\text{since $H_{(j-1)-1,j}$ is a left ideal}\right).\]
Thus, for the rest of this proof, we WLOG assume that \(u\neq j-1\). Combining this with \(u\geq j-1\), we obtain \(u>j-1\). Therefore, \(u\geq(j-1)+1=j\). Moreover, \(u\neq j\) (since \(u\equiv j-1\not\equiv j\bmod 2\)). Combining this with \(u\geq j\), we obtain \(u>j\). Thus, \(u\geq j+1\).
Also, from \(j\in[2,n]\), we obtain \(j-1\in[n-1]\subseteq[n]\). From \(u>j\), we obtain \(j<u\). Moreover, \(u\equiv j-1\not\equiv j\bmod 2\). Hence, Lemma 8.10 **(a)** (applied to \(k=u\)
yields \(s_{u}^{+}t_{j}\in H_{u-1,j}\) (since \(j<u\)). Furthermore, Lemma 8.10**(b)** (applied to \(u\) and \(j-1\) instead of \(k\) and \(j\)) yields \(s_{u}^{+}\left(t_{j-1}-1\right)\in H_{u-1,j-1}\) (since \(u\equiv j-1\operatorname{mod}2\) and \(j-1<j<u\)). Moreover, \(H_{u-1,j}\subseteq H_{u-1,j-1}\) (by Lemma 8.5, applied to \(k=u-1\)).
Finally, from \(u\equiv j-1\operatorname{mod}2\), we obtain \(u-1\equiv(j-1)-1=j-2\equiv j\operatorname{mod}2\). Hence, Lemma 8.7 (applied to \(k=u-1\)) yields \(H_{u-1,j-1}=H_{u-1,j}\).
Altogether, we now have
\[s_{u}^{+}\left(t_{j}-\left(t_{j-1}-1\right)\right)=\underset{\in H_{u-1,j}}{ \underbrace{s_{u}^{+}t_{j}}}-\underset{\in H_{u-1,j-1}=H_{u-1,j}}{\underbrace{s _{u}^{+}\left(t_{j-1}-1\right)}}\in H_{u-1,j}-H_{u-1,j}\subseteq H_{u-1,j}\]
(since \(H_{u-1,j}\) is a left ideal of \(\mathbf{A}\)). This proves Lemma 8.11.
Using Lemma 8.11 with Lemma 8.10**(a)**, we can obtain the following:
**Lemma 8.12**.: Let \(j\in[n]\) and \(k\in[n+1]\) be such that \(1<j\leq k\) and \(k\equiv j\operatorname{mod}2\). Then,
\[s_{k}^{+}\left[t_{j-1},t_{j}\right]\in H_{k-2,j}.\]
Proof.: From \(j\in[n]\) and \(1<j\), we obtain \(j\in[2,n]\). Thus, \(j-1\in[n-1]\).
Hence, (15) (applied to \(i=j-1\)) yields
\[\left[t_{j-1},t_{j}\right]=t_{j-1}\left(t_{j}-\left(t_{j-1}-1\right)\right). \tag{41}\]
Multiplying this equality by \(s_{k}^{+}\) from the left, we obtain
\[s_{k}^{+}\left[t_{j-1},t_{j}\right]=s_{k}^{+}t_{j-1}\left(t_{j}-\left(t_{j-1}- 1\right)\right). \tag{42}\]
However, \(k-1\equiv j-1\operatorname{mod}2\) (since \(k\equiv j\operatorname{mod}2\)). Furthermore, we have \(j-1\in[n-1]\subseteq[n]\) and \(j-1<j\leq k\) and \(k\equiv j\not\equiv j-1\operatorname{mod}2\). Thus, Lemma 8.10**(a)** (applied to \(j-1\) instead of \(j\)) yields
\[s_{k}^{+}t_{j-1}\in H_{k-1,j-1}=\sum_{\begin{subarray}{c}u\in[j-1,k-1];\\ u\equiv k-1\bmod 2\end{subarray}}\mathbf{A}s_{u}^{+}\qquad\quad\left(\text{by the definition of }H_{k-1,j-1}\right).\]
In other words, we can write \(s_{k}^{+}t_{j-1}\) in the form
\[s_{k}^{+}t_{j-1}=\sum_{\begin{subarray}{c}u\in[j-1,k-1];\\ u\equiv k-1\bmod 2\end{subarray}}a_{u}s_{u}^{+}, \tag{43}\]
where \(a_{u}\in\mathbf{A}\) is an element for each \(u\in[j-1,k-1]\) satisfying \(u\equiv k-1\operatorname{mod}2\).
Consider these elements \(a_{u}\). Now, (42) becomes
\[s_{k}^{+}\left[t_{j-1},t_{j}\right] =s_{k}^{+}t_{j-1}\left(t_{j}-\left(t_{j-1}-1\right)\right)\]
\[=\left(\sum_{\begin{subarray}{c}u\in[j-1,k-1];\\ u\equiv k-1\bmod 2\end{subarray}}a_{u}s_{u}^{+}\right)\left(t_{j}-\left(t_{j-1}-1\right)\right)\qquad\quad\left(\text{by (43)}\right)\]
\[=\sum_{\begin{subarray}{c}u\in[j-1,k-1];\\ u\equiv k-1\bmod 2\end{subarray}}a_{u}s_{u}^{+}\left(t_{j}-\left(t_{j-1}-1\right)\right). \tag{44}\]
However, every \(u\in[j-1,k-1]\) satisfying \(u\equiv k-1\,\mathrm{mod}\,2\) satisfies
\[s_{u}^{+}\left(t_{j}-\left(t_{j-1}-1\right)\right)\in H_{k-2,j}. \tag{45}\]
[_Proof of (45)_: Let \(u\in[j-1,k-1]\) be such that \(u\equiv k-1\,\mathrm{mod}\,2\).
We have \(j\in[2,n]\). Moreover, \(u\in[j-1,k-1]\) shows that \(u\geq j-1\) and \(u\leq k-1\leq n\) (since \(k\leq n+1\)). Thus, \(u\in[n]\) (since \(u\leq n\)). Furthermore, \(u\equiv k-1\equiv j-1\,\mathrm{mod}\,2\). Thus, Lemma 8.11 yields \(s_{u}^{+}\left(t_{j}-\left(t_{j-1}-1\right)\right)\in H_{u-1,j}\).
However, from \(u\leq k-1\), we obtain \(u-1\leq(k-1)-1=k-2\). Moreover, from \(u\equiv k-1\,\mathrm{mod}\,2\), we obtain \(u-1\equiv(k-1)-1=k-2\,\mathrm{mod}\,2\). These two facts entail \(H_{u-1,j}\subseteq H_{k-2,j}\) (by Lemma 8.6, applied to \(u-1\) and \(k-2\) instead of \(v\) and \(w\)).
Hence, \(s_{u}^{+}\left(t_{j}-\left(t_{j-1}-1\right)\right)\in H_{u-1,j}\subseteq H_{k -2,j}\). This proves (45).]
Now, (44) becomes
\[s_{k}^{+}\left[t_{j-1},t_{j}\right]=\sum_{\begin{subarray}{c}u\in[j-1,k-1];\\ u\equiv k-1\bmod 2\end{subarray}}a_{u}\underbrace{s_{u}^{+}\left(t_{j}-\left(t_{j-1}-1\right)\right)}_{\substack{\in H_{k-2,j}\\ \text{(by (45))}}}\in\sum_{\begin{subarray}{c}u\in[j-1,k-1];\\ u\equiv k-1\bmod 2\end{subarray}}a_{u}H_{k-2,j}\subseteq H_{k-2,j}\]
(since \(H_{k-2,j}\) is a left ideal of \(\mathbf{A}\)). This proves Lemma 8.12.
**Lemma 8.13**.: Let \(i,j\in[n]\) and \(k\in[n+1]\) be such that \(i\leq j\) and \(k\equiv j\,\mathrm{mod}\,2\). Then,
\[H_{k,j}\left[t_{i},t_{j}\right]\subseteq H_{k-2,j}.\]
Proof.: This is obvious for \(i=j\) (since we have \(\left[t_{i},t_{j}\right]=\left[t_{j},t_{j}\right]=0\) in this case). Thus, we WLOG assume that \(i\neq j\). Hence, \(i<j\) (since \(i\leq j\)). Therefore, \(1\leq i<j\).
Set \(a:=s_{i}s_{i+1}\cdots s_{j-2}\). We have \(i<j\). Thus, Lemma 7.3**(b)** yields
\[\left[t_{i},t_{j}\right]=\underbrace{\left(s_{i}s_{i+1}\cdots s_{j-2}\right)} _{=a}\left[t_{j-1},t_{j}\right]=a\left[t_{j-1},t_{j}\right].\]
However, it is easy to see that
\[s_{u}^{+}a=as_{u}^{+}\qquad\quad\text{for each }u\in[j,k]\,. \tag{46}\]
[_Proof of (46)_: Let \(u\in[j,k]\). We must prove that \(s_{u}^{+}a=as_{u}^{+}\).
If \(u\notin[n-1]\), then \(s_{u}^{+}=1\) (by the definition of \(s_{u}^{+}\)), and thus this claim boils down to \(1a=a1\), which is obvious. Thus, we WLOG assume that \(u\in[n-1]\). Hence, \(s_{u}^{+}=s_{u}+1\).
However, from \(u\in[j,k]\), we obtain \(u\geq j\). Hence, \(j\leq u\), so that \(j-2\leq u-2\). Thus, each of the integers \(i,i+1,\ldots,j-2\) has a distance larger than \(1\) from \(u\). Hence, each of the transpositions \(s_{i},s_{i+1},\ldots,s_{j-2}\) commutes with \(s_{u}\) (by reflection
locality). Therefore, the product \(s_{i}s_{i+1}\cdots s_{j-2}\) of these transpositions also commutes with \(s_{u}\). In other words, \(a\) commutes with \(s_{u}\) (since \(a=s_{i}s_{i+1}\cdots s_{j-2}\)). In other words, \(s_{u}a=as_{u}\). Now,
\[\underbrace{s_{u}^{+}}_{=s_{u}+1}a=\left(s_{u}+1\right)a=\underbrace{s_{u}a}_{ =as_{u}}+a=as_{u}+a=a\underbrace{\left(s_{u}+1\right)}_{=s_{u}^{+}}=as_{u}^{+}.\]
This proves (46).]
Using (46), we can easily see the following: For each \(u\in[j,k]\) satisfying \(u\equiv k\operatorname{mod}2\), we have
\[s_{u}^{+}\left[t_{i},t_{j}\right]\in H_{k-2,j}. \tag{47}\]
[_Proof of (47):_ Let \(u\in[j,k]\) be such that \(u\equiv k\operatorname{mod}2\). From (46), we obtain \(s_{u}^{+}a=as_{u}^{+}\). Hence,
\[s_{u}^{+}\underbrace{\left[t_{i},t_{j}\right]}_{=a\left[t_{j-1},t_{j}\right]} =\underbrace{s_{u}^{+}a}_{=as_{u}^{+}}\left[t_{j-1},t_{j}\right]=as_{u}^{+} \left[t_{j-1},t_{j}\right]. \tag{48}\]
However, \(u\in[j,k]\subseteq[k]\subseteq[n+1]\) and \(1<j\leq u\) and \(u\equiv k\equiv j\operatorname{mod}2\). Thus, Lemma 8.12 (applied to \(u\) instead of \(k\)) yields \(s_{u}^{+}\left[t_{j-1},t_{j}\right]\in H_{u-2,j}\).
Furthermore, \(u-2\leq k-2\) (since \(u\leq k\)) and \(u-2\equiv k-2\operatorname{mod}2\) (since \(u\equiv k\operatorname{mod}2\)). Hence, Lemma 8.6 (applied to \(v=u-2\) and \(w=k-2\)) yields \(H_{u-2,j}\subseteq H_{k-2,j}\).
Now, (48) becomes
\[s_{u}^{+}\left[t_{i},t_{j}\right] =a\underbrace{s_{u}^{+}\left[t_{j-1},t_{j}\right]}_{\in H_{u-2,j }}\in aH_{u-2,j}\qquad\quad\text{(since $H_{u-2,j}$ is a left ideal)}\] \[\subseteq H_{k-2,j}.\]
This proves (47).]
Now,
\[H_{k,j}\left[t_{i},t_{j}\right]=\left(\sum_{\begin{subarray}{c}u\in[j,k];\\ u\equiv k\,\mathrm{mod}\,2\end{subarray}}\mathbf{A}s_{u}^{+}\right)\left[t_{i},t_{j}\right]=\sum_{\begin{subarray}{c}u\in[j,k];\\ u\equiv k\,\mathrm{mod}\,2\end{subarray}}\mathbf{A}\underbrace{s_{u}^{+}\left[t_{i},t_{j}\right]}_{\begin{subarray}{c}\in H_{k-2,j}\\ \text{(by (47))}\end{subarray}}\qquad\left(\text{by the definition of }H_{k,j}\right)\] \[\subseteq\sum_{\begin{subarray}{c}u\in[j,k];\\ u\equiv k\,\mathrm{mod}\,2\end{subarray}}\mathbf{A}H_{k-2,j}\subseteq H_{k-2,j}\]
(since \(H_{k-2,j}\) is a left ideal). This proves Lemma 8.13.
If \(i,j\in[n]\) and \(k\in[n+1]\) are such that \(i\leq j\) and \(k\equiv j\bmod 2\), then we can apply Lemma 8.13 recursively, yielding
\[H_{k,j}\left[t_{i},t_{j}\right] \subseteq H_{k-2,j},\] \[H_{k,j}\left[t_{i},t_{j}\right]^{2} \subseteq H_{k-4,j},\] \[H_{k,j}\left[t_{i},t_{j}\right]^{3} \subseteq H_{k-6,j},\] \[\ldots.\]
Eventually, the right hand side will be \(0\), and thus we obtain \(H_{k,j}\left[t_{i},t_{j}\right]^{s}=0\) for some \(s\in\mathbb{N}\). By picking \(k\) appropriately (specifically, setting \(k=n\) or \(k=n+1\) depending on the parity of \(n-j\)), we can ensure that \(H_{k,j}=\mathbf{A}\), and thus this equality \(H_{k,j}\left[t_{i},t_{j}\right]^{s}=0\) yields \(\left[t_{i},t_{j}\right]^{s}=0\). Thus, Lemma 8.13 "lays a fuse" for proving the nilpotency of \(\left[t_{i},t_{j}\right]\). We shall now elaborate on this.
### Products of \(\left[t_{i},t_{j}\right]\)'s for a fixed \(j\)
**Lemma 8.14**.: Let \(j\in[n]\) and \(m\in\mathbb{N}\). Let \(r\) be the unique element of \(\left\{n,n+1\right\}\) that is congruent to \(j\) modulo \(2\). (That is, \(r=\begin{cases}n,&\text{if }n\equiv j\bmod 2;\\ n+1,&\text{otherwise.}\end{cases}\))
Let \(i_{1},i_{2},\ldots,i_{m}\) be \(m\) elements of \([j]\) (not necessarily distinct). Then,
\[\left[t_{i_{1}},t_{j}\right]\left[t_{i_{2}},t_{j}\right]\cdots\left[t_{i_{m}},t_{j}\right]\in H_{r-2m,j}.\]
Proof.: We induct on \(m\):
_Base case:_ We have \(r\geq n\) (by the definition of \(r\)), so that \(r\notin[n-1]\) and \(j\leq r\) (since \(j\leq n\leq r\)). Hence, Remark 8.4 (applied to \(k=r\)) yields \(H_{r,j}=\mathbf{A}\). Now
\[\left[t_{i_{1}},t_{j}\right]\left[t_{i_{2}},t_{j}\right]\cdots\left[t_{i_{0}},t_{j}\right]=(\text{empty product})=1\in\mathbf{A}=H_{r,j}=H_{r-2\cdot 0,j}\]
(since \(r=r-2\cdot 0\)). In other words, Lemma 8.14 is proved for \(m=0\).
_Induction step:_ Let \(m\in\mathbb{N}\). Assume (as the induction hypothesis) that
\[\left[t_{i_{1}},t_{j}\right]\left[t_{i_{2}},t_{j}\right]\cdots\left[t_{i_{m}},t_{j}\right]\in H_{r-2m,j} \tag{49}\]
whenever \(i_{1},i_{2},\ldots,i_{m}\) are \(m\) elements of \([j]\). We must prove that
\[\left[t_{i_{1}},t_{j}\right]\left[t_{i_{2}},t_{j}\right]\cdots\left[t_{i_{m+1 }},t_{j}\right]\in H_{r-2(m+1),j} \tag{50}\]
whenever \(i_{1},i_{2},\ldots,i_{m+1}\) are \(m+1\) elements of \([j]\).
So let \(i_{1},i_{2},\ldots,i_{m+1}\) be \(m+1\) elements of \([j]\). We have \(r-2m\equiv r\equiv j\bmod 2\) (by the definition of \(r\)) and \(i_{m+1}\in[j]\subseteq[n]\) and \(i_{m+1}\leq j\) (since \(i_{m+1}\in[j]\)). Hence,
Lemma 8.13 (applied to \(k=r-2m\) and \(i=i_{m+1}\)) yields4
Footnote 4: Strictly speaking, this argument works only if \(r-2m\in[n+1]\) (since Lemma 8.13 requires \(k\in[n+1]\)). However, in all remaining cases, we can get to the same result in an even simpler way: Namely, assume that \(r-2m\notin[n+1]\). Thus, \(r-2m\) is either \(\leq 0\) or \(>n+1\). Since \(r-2m\) cannot be \(>n+1\) (because \(r-2\underbrace{m}_{\geq 0}\leq r\leq n+1\)), we thus conclude that \(r-2m\leq 0\). Hence, \(r-2m\leq 0<j\) and therefore \(H_{r-2m,j}=0\) (by (32)). Hence,
\[\underbrace{H_{r-2m,j}}_{=0}\left[t_{i_{m+1}},t_{j}\right]=0\subseteq H_{r-2m- 2,j}.\]
\[H_{r-2m,j}\left[t_{i_{m+1}},t_{j}\right]\subseteq H_{r-2m-2,j}=H_{r-2(m+1),j}.\]

Hence,

\[\left[t_{i_{1}},t_{j}\right]\left[t_{i_{2}},t_{j}\right]\cdots\left[t_{i_{m+1}},t_{j}\right]=\underbrace{\left(\left[t_{i_{1}},t_{j}\right]\left[t_{i_{2}},t_{j}\right]\cdots\left[t_{i_{m}},t_{j}\right]\right)}_{\begin{subarray}{c}\in H_{r-2m,j}\\ \text{(by (49))}\end{subarray}}\left[t_{i_{m+1}},t_{j}\right]\in H_{r-2m,j}\left[t_{i_{m+1}},t_{j}\right]\subseteq H_{r-2(m+1),j}.\]

This proves (50). Thus, the induction step is complete, and Lemma 8.14 is proved.

**Theorem 8.15**.: Let \(j\in[n]\) and \(m\in\mathbb{N}\) be such that \(2m\geq n-j+2\). Let \(i_{1},i_{2},\ldots,i_{m}\) be \(m\) elements of \([j]\) (not necessarily distinct). Then,

\[\left[t_{i_{1}},t_{j}\right]\left[t_{i_{2}},t_{j}\right]\cdots\left[t_{i_{m}},t_{j}\right]=0.\]

Proof.: Let \(r\) be the element of \(\{n,n+1\}\) defined in Lemma 8.14. Then, \(r\leq n+1\), so that
\[\underbrace{r}_{\leq n+1}-\underbrace{2m}_{\geq n-j+2}\leq(n+1)-(n-j+2)=j-1<j.\]
Thus, \(H_{r-2m,j}=0\) (by (32)). But Lemma 8.14 yields
\[\left[t_{i_{1}},t_{j}\right]\left[t_{i_{2}},t_{j}\right]\cdots\left[t_{i_{m}},t_{j}\right]\in H_{r-2m,j}=0.\]
In other words, \(\left[t_{i_{1}},t_{j}\right]\left[t_{i_{2}},t_{j}\right]\cdots\left[t_{i_{m}},t_{j}\right]=0\). This proves Theorem 8.15.
**Lemma 8.16**.: Let \(i,j\in[n]\) and \(m\in\mathbb{N}\) be such that \(2m\geq n-j+2\) and \(i\leq j\). Then, \(\left[t_{i},t_{j}\right]^{m}=0\).
Proof.: We have \(i\in[j]\) (since \(i\leq j\)). Hence, Theorem 8.15 (applied to \(i_{k}=i\)) yields \(\underbrace{\left[t_{i},t_{j}\right]\left[t_{i},t_{j}\right]\cdots\left[t_{i},t_{j}\right]}_{m\text{ times}}=0\). In other words, \(\left[t_{i},t_{j}\right]^{m}=0\). This proves Lemma 8.16.
**Corollary 8.17**.: Let \(i,j\in[n]\) and \(m\in\mathbb{N}\) be such that \(2m\geq n-j+2\). Then, \(\left[t_{i},t_{j}\right]^{m}=0\).
Proof.: If \(i\leq j\), then Corollary 8.17 follows directly from Lemma 8.16. Thus, we WLOG assume that we don't have \(i\leq j\). Hence, \(i>j\).
Therefore, \(j<i\), so that \(j\leq i\). Moreover, \(2m\geq n-\underbrace{j}_{<i}+2>n-i+2\). Hence, we can apply Lemma 8.16 to \(j\) and \(i\) instead of \(i\) and \(j\). We thus obtain \(\left[t_{j},t_{i}\right]^{m}=0\). However, \(\left[t_{i},t_{j}\right]=-\left[t_{j},t_{i}\right]\) (since any two elements \(a\) and \(b\) of a ring satisfy \(\left[a,b\right]=-\left[b,a\right]\)). Hence, \(\left[t_{i},t_{j}\right]^{m}=\left(-\left[t_{j},t_{i}\right]\right)^{m}=\left( -1\right)^{m}\underbrace{\left[t_{j},t_{i}\right]^{m}}_{=0}=0\). This proves Corollary 8.17.
**Corollary 8.18**.: For any \(x\in\mathbb{R}\), let \(\lceil x\rceil\) denote the smallest integer that is \(\geq x\). Let \(i,j\in[n]\). Then, \(\left[t_{i},t_{j}\right]^{\lceil(n-j)/2\rceil+1}=0\).
Proof.: We have \(2\left(\underbrace{\left[\left(n-j\right)/2\right]}_{\geq(n-j)/2}+1\right) \geq 2\left(\left(n-j\right)/2+1\right)=n-j+2\). Thus, Corollary 8.17 (applied to \(m=\lceil\left(n-j\right)/2\rceil+1\)) yields \(\left[t_{i},t_{j}\right]^{\lceil(n-j)/2\rceil+1}=0\). This proves Corollary 8.18.
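For instance, for \(n=7\) and \(j=6\), Corollary 8.18 yields \(\left[t_{i},t_{6}\right]^{2}=0\) for every \(i\in[7]\), since \(\left\lceil\left(7-6\right)/2\right\rceil+1=2\); note that this exponent does not depend on \(i\).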
### Can we lift the \(i_{1},i_{2},\ldots,i_{m}\in[j]\) restriction?
**Remark 8.19**.: Theorem 8.15 does not hold if we drop the \(i_{1},i_{2},\ldots,i_{m}\in[j]\) restriction. For instance, for \(n=6\) and \(j=3\), we have
\[\left[t_{1},t_{3}\right]\left[t_{5},t_{3}\right]\left[t_{4},t_{3}\right]\left[ t_{1},t_{3}\right]\neq 0\hskip 28.452756pt\text{despite }2\cdot 4\geq n-j+2.\]
Another counterexample is obtained for \(n=4\) and \(j=2\), since \(\left[t_{3},t_{2}\right]\left[t_{1},t_{2}\right]\neq 0\).
Despite these counterexamples, the restriction can be lifted in some particular cases. Here is a particularly simple instance:
**Corollary 8.20**.: Assume that \(n\geq 2\). Let \(u,v\in[n]\). Then, \(\left[t_{u},t_{n-1}\right]\left[t_{v},t_{n-1}\right]=0\).
Proof.: We are in one of the following three cases:
_Case 1:_ We have \(u=n\).
_Case 2:_ We have \(v=n\).
_Case 3:_ Neither \(u\) nor \(v\) equals \(n\).
Let us first consider Case 1. In this case, we have \(u=n\). Hence, \(t_{u}=t_{n}=1\) and thus \(\left[t_{u},t_{n-1}\right]=\left[1,t_{n-1}\right]=0\) (since \(\left[1,x\right]=0\) for each \(x\)). Hence, \(\underbrace{\left[t_{u},t_{n-1}\right]}_{=0}\left[t_{v},t_{n-1}\right]=0\). Thus, Corollary 8.20 is proved in Case 1.
A similar argument proves Corollary 8.20 in Case 2.
Let us now consider Case 3. In this case, neither \(u\) nor \(v\) equals \(n\). In other words, \(u\) and \(v\) are both \(\neq n\). Thus, \(u\) and \(v\) are elements of \(\left[n\right]\setminus\left\{n\right\}=\left[n-1\right]\). Hence, Theorem 8.15 (applied to \(j=n-1\) and \(m=2\) and \(\left(i_{1},i_{2},\ldots,i_{m}\right)=\left(u,v\right)\)) yields \(\left[t_{u},t_{n-1}\right]\left[t_{v},t_{n-1}\right]=0\) (since \(2\cdot 2=4\geq 3=n-\left(n-1\right)+2\)). Thus, Corollary 8.20 is proved in Case 3.
We have now proved Corollary 8.20 in all three Cases 1, 2 and 3.
**Proposition 8.21**.: Assume that \(n\geq 3\). Then:
* We have \(\left[t_{i},t_{n-2}\right]\left[s_{n-1},s_{n-2}\right]=0\) for all \(i\in[n-2]\).
* We have \(\left[t_{i},t_{n-2}\right]\left[t_{n-1},t_{n-2}\right]=0\) for all \(i\in[n]\).
* We have \(\left[t_{u},t_{n-2}\right]\left[t_{v},t_{n-2}\right]\left[t_{w},t_{n-2}\right]=0\) for all \(u,v,w\in[n]\).
Proof sketch.: **(a)** This is easily checked for \(i=n-3\) and for \(i=n-2\). 5 In all other cases, Lemma 7.3 **(b)** lets us rewrite \(\left[t_{i},t_{n-2}\right]\) as \(\left(s_{i}s_{i+1}\cdots s_{n-4}\right)\left[t_{n-3},t_{n-2}\right]\), and thus it remains to prove that \(\left[t_{n-3},t_{n-2}\right]\left[s_{n-1},s_{n-2}\right]=0\), which is exactly the \(i=n-3\) case. Thus, Proposition 8.21 **(a)** is proved.
Footnote 5: Indeed, the case of \(i=n-2\) is obvious (since \(\left[t_{n-2},t_{n-2}\right]=0\)). The case of \(i=n-3\) requires some calculations, which can be made simpler by checking that \(\left[t_{n-3},t_{n-2}\right]\) is an element \(a\in\mathbf{k}\left[S_{n}\right]\) satisfying \(a=as_{n-2}=as_{n-1}\). (Explicitly, \(\left[t_{n-3},t_{n-2}\right]=\left(1-s_{n-2}\right)s_{n-3}b\), where \(b\) is the sum of all six permutations in \(S_{n}\) that fix each of \(1,2,\ldots,n-3\).)
**(c)** Let \(u,v,w\in[n]\). We must prove that \(\left[t_{u},t_{n-2}\right]\left[t_{v},t_{n-2}\right]\left[t_{w},t_{n-2}\right]=0\). If any of \(u,v,w\) equals \(n\), then this is clear (since \(t_{n}=1\) and thus \(\left[t_{n},t_{n-2}\right]=\left[1,t_{n-2}\right]=0\)). Thus, WLOG assume that \(u,v,w\in[n-1]\).
If \(v=n-1\), then \(\left[t_{u},t_{n-2}\right]\left[t_{v},t_{n-2}\right]=\left[t_{u},t_{n-2}\right] \left[t_{n-1},t_{n-2}\right]=0\) (by Proposition 8.21 **(b)**), so that our claim holds. Likewise, our claim can be shown if \(w=n-1\). Thus, WLOG assume that neither \(v\) nor \(w\) equals \(n-1\). Hence, \(v,w\in[n-2]\)
Therefore, Theorem 8.15 shows that \([t_{u},t_{n-2}]\,[t_{v},t_{n-2}]=0\), which yields our claim again. This proves Proposition 8.21**(c)**.
## 9 The identity \(\left[t_{i},t_{j}\right]^{j-i+1}=0\) for all \(i\leq j\)
We now approach the proof of another remarkable theorem: the identity \(\left[t_{i},t_{j}\right]^{j-i+1}=0\), which holds for all \(i,j\in[n]\) satisfying \(i\leq j\). Some more work must be done before we can prove this.
### The elements \(\mu_{i,j}\) for \(i\in[j-1]\)
We first introduce a family of elements of the group algebra \(\mathbf{k}\left[S_{n}\right]\).
**Definition 9.1**.: Set \(\mathbf{A}=\mathbf{k}\left[S_{n}\right]\).
**Definition 9.2**.: Let \(j\in[n]\), and let \(i\in[j-1]\). Then, \(j-1\geq 1\) (since \(i\in[j-1]\) entails \(1\leq i\leq j-1\)), so that \(j-1\in[n]\). Hence, the elements \((i\Longrightarrow j-1)\in S_{n}\) and \(t_{j-1}\in\mathbf{k}\left[S_{n}\right]\) are well-defined.
Now, we define an element
\[\mu_{i,j}:=\left(i\Longrightarrow j-1\right)t_{j-1}\in\mathbf{A}.\]
**Lemma 9.3**.: Let \(j\in[n]\), and let \(i\in[j-1]\). Then,
\[\left[t_{i},t_{j}\right] =\left(i\Longrightarrow j-1\right)\left[t_{j-1},t_{j}\right] \tag{51}\] \[=\mu_{i,j}\left(t_{j}-t_{j-1}+1\right). \tag{52}\]
Proof.: From \(i\in[j-1]\), we obtain \(1\leq i\leq j-1\), so that \(j-1\geq 1\). Thus, \(j-1\in[n-1]\) (since \(j-1<j\leq n\)). Hence, (15) (applied to \(j-1\) instead of \(i\)) yields
\[\left[t_{j-1},t_{j-1+1}\right]=t_{j-1}\left(t_{j-1+1}-\left(t_{j-1}-1\right) \right).\]
Since \(j-1+1=j\), we can rewrite this as
\[\left[t_{j-1},t_{j}\right]=t_{j-1}\left(t_{j}-\left(t_{j-1}-1\right)\right). \tag{53}\]
We have \(i\leq j-1\). Hence, Proposition 3.2 (applied to \(v=i\) and \(w=j-1\)) yields
\[\left(i\Longrightarrow j-1\right)=s_{i}s_{i+1}\cdots s_{(j-1)-1}=s_{i}s_{i+1} \cdots s_{j-2}. \tag{54}\]
However, \(i\leq j-1<j\). Thus, Lemma 7.3**(b)** yields
\[\left[t_{i},t_{j}\right]=\underbrace{\left(s_{i}s_{i+1}\cdots s_{j-2}\right)}_{\begin{subarray}{c}=\left(i\Longrightarrow j-1\right)\\ \text{(by (54))}\end{subarray}}\left[t_{j-1},t_{j}\right]=\left(i\Longrightarrow j-1\right)\left[t_{j-1},t_{j}\right].\]
This proves (51). Furthermore,
\[\left[t_{i},t_{j}\right] =\left(i\Longrightarrow j-1\right)\underbrace{\left[t_{j-1},t_{j}\right]}_{=t_{j-1}\left(t_{j}-\left(t_{j-1}-1\right)\right)}=\underbrace{\left(i\Longrightarrow j-1\right)t_{j-1}}_{=\mu_{i,j}}\underbrace{\left(t_{j}-\left(t_{j-1}-1\right)\right)}_{=t_{j}-t_{j-1}+1}\] \[=\mu_{i,j}\left(t_{j}-t_{j-1}+1\right).\]
This proves (52). Thus, Lemma 9.3 is proved.
**Lemma 9.4**.: Let \(R\) be a ring. Let \(a,b,c\in R\) be three elements satisfying \(ca=ac\) and \(cb=bc\). Then,
\[c\left[a,b\right]=\left[a,b\right]c.\]
Proof.: The definition of a commutator yields \(\left[a,b\right]=ab-ba\). Thus,
\[c\underbrace{\left[a,b\right]}_{=ab-ba} =c\left(ab-ba\right)=\underbrace{ca}_{=ac}b-\underbrace{cb}_{=bc }a=a\underbrace{cb}_{=bc}-b\underbrace{ca}_{=ac}\] \[=abc-bac=\underbrace{\left(ab-ba\right)}_{=\left[a,b\right]}c= \left[a,b\right]c.\]
This proves Lemma 9.4.
**Lemma 9.5**.: Let \(i,j,k\in\left[n\right]\) be such that \(i\leq k<j-1\). Then,
\[\left[t_{i},t_{j}\right]\mu_{k,j}=\mu_{k+1,j}\left[t_{i},t_{j-1}\right].\]
Proof.: We have \(j-1\geq j-1>k\geq i\). Thus, Lemma 3.5 (applied to \(k\), \(j-1\) and \(j-1\) instead of \(j\), \(v\) and \(w\)) yields
\[\left(k+1\Longrightarrow j-1\right)\left(i\Longrightarrow j-1\right) =\left(i\Longrightarrow j-1\right)\left(k\Longrightarrow\underbrace{ \left(j-1\right)-1}_{=j-2}\right)\] \[=\left(i\Longrightarrow j-1\right)\left(k\Longrightarrow j-2 \right). \tag{55}\]
From \(i<j-1\), we obtain \(i\leq j-2\) and thus \(i\in\left[j-2\right]\subseteq\left[j-1\right]\). Likewise, \(k\in\left[j-1\right]\) (since \(k<j-1\)).
Furthermore, Proposition 3.3**(b)** (applied to \(v=k\) and \(w=j-1\)) yields
\[\left(k\Longrightarrow j-1\right) =\left(k\Longrightarrow(j-1)-1\right)s_{\left(j-1\right)-1} \qquad\qquad\left(\text{since }k<j-1\right)\] \[=\left(k\Longrightarrow j-2\right)s_{j-2} \tag{56}\]
(since \((j-1)-1=j-2\)). The same argument (applied to \(i\) instead of \(k\)) yields
\[\left(i\Longrightarrow j-1\right)=\left(i\Longrightarrow j-2\right)s_{j-2} \tag{57}\]
(since \(i<j-1\)).
We have \(k\leq j-2\) (since \(k<j-1\)) and \(j-2<j\). Thus, (8) (applied to \(k\) and \(j-2\) instead of \(i\) and \(k\)) yields
\[(k\Longrightarrow j-2)\,t_{j}=t_{j}\,(k\Longrightarrow j-2)\,. \tag{58}\]
Furthermore, we have \(k\leq j-2\) and \(j-2<j-1\). Thus, (8) (applied to \(k\), \(j-2\) and \(j-1\) instead of \(i\), \(k\) and \(j\)) yields
\[(k\Longrightarrow j-2)\,t_{j-1}=t_{j-1}\,(k\Longrightarrow j-2)\,. \tag{59}\]
The same argument (but using \(i\) instead of \(k\)) shows that
\[(i\Longrightarrow j-2)\,t_{j-1}=t_{j-1}\,(i\Longrightarrow j-2) \tag{60}\]
(since \(i\leq j-2\)).
From (58) and (59), we obtain
\[(k\Longrightarrow j-2)\,\big{[}t_{j-1},t_{j}\big{]}=\big{[}t_{j-1},t_{j}\big{]} \,(k\Longrightarrow j-2) \tag{61}\]
(by Lemma 9.4, applied to \(R=\mathbf{A}\), \(a=t_{j-1}\), \(b=t_{j}\) and \(c=(k\Longrightarrow j-2)\)).
Now, the definition of \(\mu_{k,j}\) yields \(\mu_{k,j}=(k\Longrightarrow j-1)\,t_{j-1}\). Hence,
\[\underbrace{\left[t_{i},t_{j}\right]}_{\begin{subarray}{c}=\left(i\Longrightarrow j-1\right)\left[t_{j-1},t_{j}\right]\\ \text{(by (51))}\end{subarray}}\underbrace{\mu_{k,j}}_{=\left(k\Longrightarrow j-1\right)t_{j-1}}=\left(i\Longrightarrow j-1\right)\left[t_{j-1},t_{j}\right]\underbrace{\left(k\Longrightarrow j-1\right)}_{\begin{subarray}{c}=\left(k\Longrightarrow j-2\right)s_{j-2}\\ \text{(by (56))}\end{subarray}}t_{j-1}=\left(i\Longrightarrow j-1\right)\underbrace{\left[t_{j-1},t_{j}\right]\left(k\Longrightarrow j-2\right)}_{\begin{subarray}{c}=\left(k\Longrightarrow j-2\right)\left[t_{j-1},t_{j}\right]\\ \text{(by (61))}\end{subarray}}s_{j-2}t_{j-1}\]

\[=\underbrace{\left(i\Longrightarrow j-1\right)\left(k\Longrightarrow j-2\right)}_{\begin{subarray}{c}=\left(k+1\Longrightarrow j-1\right)\left(i\Longrightarrow j-1\right)\\ \text{(by (55))}\end{subarray}}\left[t_{j-1},t_{j}\right]s_{j-2}t_{j-1}=\left(k+1\Longrightarrow j-1\right)\underbrace{\left(i\Longrightarrow j-1\right)}_{\begin{subarray}{c}=\left(i\Longrightarrow j-2\right)s_{j-2}\\ \text{(by (57))}\end{subarray}}\left[t_{j-1},t_{j}\right]s_{j-2}t_{j-1},\]

so that

\[\left[t_{i},t_{j}\right]\mu_{k,j}=\left(k+1\Longrightarrow j-1\right)\left(i\Longrightarrow j-2\right)s_{j-2}\left[t_{j-1},t_{j}\right]s_{j-2}t_{j-1}. \tag{62}\]

Moreover, from \(1\leq k<j-1\), we obtain \(j\geq 3\), so that \(j-2\in[j-1]\). Hence, (51) (applied to \(j-2\) instead of \(i\)) yields

\[\left[t_{j-2},t_{j}\right]=\left(j-2\Longrightarrow j-1\right)\left[t_{j-1},t_{j}\right]=s_{j-2}\left[t_{j-1},t_{j}\right] \tag{63}\]

(since \(\left(j-2\Longrightarrow j-1\right)=s_{j-2}\) by Proposition 3.2).
Furthermore, \(i\leq j-2\), so that \(j-2\geq i\geq 1\). Combining this with \(j-2\leq n-2\) (since \(j\leq n\)), we obtain \(j-2\in[n-2]\subseteq[n-1]\). Hence, Corollary 4.2 (applied to \(\ell=j-2\)) yields \(t_{j-2}=1+s_{j-2}\underbrace{t_{(j-2)+1}}_{=t_{j-1}}=1+s_{j-2}t_{j-1}\). Hence,
\[t_{j-2}-1=s_{j-2}t_{j-1}. \tag{64}\]
Multiplying the equalities (63) and (64) together, we obtain
\[\left[t_{j-2},t_{j}\right]\left(t_{j-2}-1\right)=s_{j-2}\left[t_{j-1},t_{j} \right]s_{j-2}t_{j-1}. \tag{65}\]
On the other hand, Corollary 6.3 (applied to \(i=j-2\)) yields
\[\left[t_{j-2},t_{j-2+2}\right]\left(t_{j-2}-1\right)=t_{j-2+1}\left[t_{j-2},t_ {j-2+1}\right]\qquad\quad\left(\text{since }j-2\in[n-2]\right).\]
In view of \(j-2+2=j\) and \(j-2+1=j-1\), we can rewrite this as
\[\left[t_{j-2},t_{j}\right]\left(t_{j-2}-1\right)=t_{j-1}\left[t_{j-2},t_{j-1 }\right].\]
Comparing this with (65), we obtain
\[s_{j-2}\left[t_{j-1},t_{j}\right]s_{j-2}t_{j-1}=t_{j-1}\left[t_{j-2},t_{j-1} \right].\]
Hence, (62) becomes
\[\left[t_{i},t_{j}\right]\mu_{k,j} =\left(k+1\Longrightarrow j-1\right)\left(i\Longrightarrow j-2\right)\underbrace{s_{j-2}\left[t_{j-1},t_{j}\right]s_{j-2}t_{j-1}}_{=t_{j-1}\left[t_{j-2},t_{j-1}\right]}\] \[=\left(k+1\Longrightarrow j-1\right)\underbrace{\left(i\Longrightarrow j-2\right)t_{j-1}}_{\begin{subarray}{c}=t_{j-1}\left(i\Longrightarrow j-2\right)\\ \text{(by (60))}\end{subarray}}\left[t_{j-2},t_{j-1}\right]\] \[=\left(k+1\Longrightarrow j-1\right)t_{j-1}\left(i\Longrightarrow j-2\right)\left[t_{j-2},t_{j-1}\right]. \tag{66}\]
But \(k\leq j-2\), so that \(k+1\leq j-1\). Thus, \(k+1\in[j-1]\). Hence, the definition of \(\mu_{k+1,j}\) yields
\[\mu_{k+1,j}=\left(k+1\Longrightarrow j-1\right)t_{j-1}. \tag{67}\]
Furthermore, \(i\leq j-2=(j-1)-1\), so that \(i\in[(j-1)-1]\). Hence, (51) (applied to \(j-1\) instead of \(j\)) yields
\[\left[t_{i},t_{j-1}\right] =\left(i\Longrightarrow(j-1)-1\right)\left[t_{(j-1)-1},t_{j-1}\right]\] \[=\left(i\Longrightarrow j-2\right)\left[t_{j-2},t_{j-1}\right] \tag{68}\]
(since \((j-1)-1=j-2\)). Multiplying the equalities (67) and (68), we obtain
\[\mu_{k+1,j}\left[t_{i},t_{j-1}\right]=\left(k+1\Longrightarrow j-1\right)t_{j -1}\left(i\Longrightarrow j-2\right)\left[t_{j-2},t_{j-1}\right].\]
Comparing this with (66), we obtain \(\left[t_{i},t_{j}\right]\mu_{k,j}=\mu_{k+1,j}\left[t_{i},t_{j-1}\right]\). This proves Lemma 9.5.
We can combine Lemma 9.3 and Lemma 9.5 into a single result:
**Lemma 9.6**.: Let \(j\in[n]\), and let \(i\in[j]\) and \(k\in[j-1]\). Then, we have
\[\left(\left[t_{i},t_{j}\right]\mu_{k,j}=0\right)\text{ or }\left(\left[t_{i},t_{j} \right]\mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\text{ for some }\ell\in[k+1,j-1]\right).\]
Proof.: If \(i=j\), then this holds for obvious reasons6. Hence, for the rest of this proof, we WLOG assume that \(i\neq j\).
Footnote 6: _Proof._ Assume that \(i=j\). Then, \(\left[t_{i},t_{j}\right]=\left[t_{j},t_{j}\right]=0\) (since \([a,a]=0\) for any element \(a\) of any ring). Hence, \(\underbrace{\left[t_{i},t_{j}\right]}_{=0}\mu_{k,j}=0\). Therefore, we clearly have \(\left(\left[t_{i},t_{j}\right]\mu_{k,j}=0\right)\) or \(\left(\left[t_{i},t_{j}\right]\mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\text{ for some }\ell\in[k+1,j-1]\right)\). Thus, Lemma 9.6 is proved under the assumption that \(i=j\).
We have \(i\leq j\) (since \(i\in[j]\)). Combining this with \(i\neq j\), we obtain \(i<j\), so that \(i\leq j-1\). In other words, \(i\in[j-1]\).
From \(k\in[j-1]\), we obtain \(1\leq k\leq j-1\), so that \(j-1\geq 1\) and therefore \(j\geq 2\). Hence, \(j\in[2,n]\).
We are in one of the following three cases:
_Case 1:_ We have \(k\geq j-1\).
_Case 2:_ We have \(i>k\).
_Case 3:_ We have neither \(k\geq j-1\) nor \(i>k\).
Let us first consider Case 1. In this case, we have \(k\geq j-1\). Combining this with \(k\leq j-1\), we obtain \(k=j-1\).
The definition of \(\mu_{k,j}\) yields
\[\mu_{k,j}=\left(\underbrace{k}_{=j-1}\Longrightarrow j-1\right)t_{j-1}=\underbrace{\left(j-1\Longrightarrow j-1\right)}_{\begin{subarray}{c}=1\\ \text{(by Proposition 3.2)}\end{subarray}}t_{j-1}=t_{j-1}.\]
Hence,
\[\left[t_{i},t_{j}\right]\underbrace{\mu_{k,j}}_{=t_{j-1}}=\left[t_{i},t_{j} \right]t_{j-1}=0\qquad\quad(\text{by Corollary \ref{eq:1}})\,.\]
Thus, we have \(\left(\left[t_{i},t_{j}\right]\mu_{k,j}=0\right)\) or \(\left(\left[t_{i},t_{j}\right]\mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\text{ for some }\ell\in[k+1,j-1]\right)\). This proves Lemma 9.6 in Case 1.
Let us next consider Case 2. In this case, we have \(i>k\). Hence, \(i\geq k+1\). Combined with \(i\leq j-1\), this entails \(i\in[k+1,j-1]\). Furthermore, (52) shows that
\[\left[t_{i},t_{j}\right]=\mu_{i,j}\underbrace{\left(t_{j}-t_{j-1}+1\right)}_{ \in\mathbf{A}}\in\mu_{i,j}\mathbf{A}.\]
We now know that \(i\in[k+1,j-1]\) and \(\left[t_{i},t_{j}\right]\in\mu_{i,j}\mathbf{A}\). Therefore, \(\left[t_{i},t_{j}\right]\mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\) for some \(\ell\in[k+1,j-1]\) (namely, for \(\ell=i\)). Thus, we have \(\left(\left[t_{i},t_{j}\right]\mu_{k,j}=0\right)\) or \(\left(\left[t_{i},t_{j}\right]\mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\text{ for some }\ell\in[k+1,j-1]\right)\). This proves Lemma 9.6 in Case 2.
Finally, let us consider Case 3. In this case, we have neither \(k\geq j-1\) nor \(i>k\). In other words, we have \(k<j-1\) and \(i\leq k\). Thus, \(i\leq k<j-1\). Hence, Lemma 9.5 yields
\[\left[t_{i},t_{j}\right]\mu_{k,j}=\mu_{k+1,j}\underbrace{\left[t_{i},t_{j-1} \right]}_{\in\mathbf{A}}\in\mu_{k+1,j}\mathbf{A}.\]
Furthermore, \(k<j-1\), so that \(k\leq(j-1)-1\). In other words, \(k+1\leq j-1\). Hence, \(k+1\in[k+1,j-1]\).
We now know that \(k+1\in[k+1,j-1]\) and \(\left[t_{i},t_{j}\right]\mu_{k,j}\in\mu_{k+1,j}\mathbf{A}\). Hence, \(\left[t_{i},t_{j}\right]\mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\) for some \(\ell\in[k+1,j-1]\) (namely, for \(\ell=k+1\)). Thus, we have \(\left(\left[t_{i},t_{j}\right]\mu_{k,j}=0\right)\) or \(\left(\left[t_{i},t_{j}\right]\mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\text{ for some }\ell\in[k+1,j-1]\right)\). This proves Lemma 9.6 in Case 3.
We have now proved Lemma 9.6 in each of the three Cases 1, 2 and 3. Hence, this lemma is proved in all situations.
### Products of \(\left[t_{i},t_{j}\right]\)'s for a fixed \(j\) redux
For the sake of convenience, we shall restate Lemma 9.6 in a simpler form. To this purpose, we extend Definition 9.2 somewhat:
**Definition 9.7**.: Let \(j\in[n]\), and let \(i\) be a positive integer. In Definition 9.2, we have defined \(\mu_{i,j}\) whenever \(i\in[j-1]\). We now set
\[\mu_{i,j}:=0\in\mathbf{A}\qquad\text{ whenever }i\notin[j-1]\,.\]
Thus, \(\mu_{i,j}\) is defined for all positive integers \(i\) (not just for \(i\in[j-1]\)). For example, \(\mu_{j,j}=0\) (since \(j\notin[j-1]\)).
Using this extended meaning of \(\mu_{i,j}\), we can rewrite Lemma 9.6 as follows:
**Lemma 9.8**.: Let \(j\in[n]\), and let \(i\in[j]\). Let \(k\) be a positive integer. Then,
\[\left[t_{i},t_{j}\right]\mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\text{ for some integer }\ell\geq k+1.\]
Proof.: If \(k\geq j\), then this holds for obvious reasons7
Footnote 7: Proof.: Assume that \(k\geq j\). Thus, \(k\geq j>j-1\), so that \(k\notin[j-1]\) and therefore \(\mu_{k,j}=0\) (by Definition 9.7). Hence,
\[\left[t_{i},t_{j}\right]\underbrace{\mu_{k,j}}_{=0}=0=\mu_{k+1,j}\cdot \underbrace{0}_{\in\mathbf{A}}\in\mu_{k+1,j}\mathbf{A}.\]
Hence, \(\left[t_{i},t_{j}\right]\mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\) for some integer \(\ell\geq k+1\) (namely, for \(\ell=k+1\)). Thus, Lemma 9.8 is proved under the assumption that \(k\geq j\).
Hence, for the rest of this proof, we WLOG assume that \(k<j\). Thus, \(k\leq j-1\), so that \(k\in[j-1]\). Hence, Lemma 9.6 yields that we have \(\left(\left[t_{i},t_{j}\right]\mu_{k,j}=0\right)\) or \(\left(\left[t_{i},t_{j}\right]\mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\text{ for some }\ell\in[k+1,j-1]\right)\). In other words, we are in one of the following cases:
_Case 1:_ We have \(\left[t_{i},t_{j}\right]\,\mu_{k,j}=0\).
_Case 2:_ We have \(\left[t_{i},t_{j}\right]\,\mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\) for some \(\ell\in[k+1,j-1]\).
Let us first consider Case 1. In this case, we have \(\left[t_{i},t_{j}\right]\,\mu_{k,j}=0\). Hence, \(\left[t_{i},t_{j}\right]\,\mu_{k,j}=0=\mu_{k+1,j}\cdot\underbrace{0}_{\in\mathbf{A}}\in\mu_{k+1,j}\mathbf{A}\). Hence, \(\left[t_{i},t_{j}\right]\,\mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\) for some integer \(\ell\geq k+1\) (namely, for \(\ell=k+1\)). Thus, Lemma 9.8 is proved in Case 1.
Let us now consider Case 2. In this case, we have \(\left[t_{i},t_{j}\right]\,\mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\) for some \(\ell\in[k+1,j-1]\). Hence, we have \(\left[t_{i},t_{j}\right]\,\mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\) for some integer \(\ell\geq k+1\) (because any \(\ell\in[k+1,j-1]\) is an integer \(\geq k+1\)). Thus, Lemma 9.8 is proved in Case 2.
We have now proved Lemma 9.8 in both Cases 1 and 2. Hence, Lemma 9.8 is proved in all situations.
The next lemma is similar to Lemma 8.14, and will play a similar role:
**Lemma 9.9**.: Let \(j\in[n]\). Let \(k\) be a positive integer, and let \(m\in\mathbb{N}\). Let \(i_{1},i_{2},\ldots,i_{m}\) be \(m\) elements of \([j]\) (not necessarily distinct). Then,
\[\left[t_{i_{m}},t_{j}\right]\,\left[t_{i_{m-1}},t_{j}\right]\,\cdots\,\left[t _{i_{1}},t_{j}\right]\,\mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\text{ for some integer }\ell\geq k+m.\]
Proof.: We shall show that for each \(v\in\{0,1,\ldots,m\}\), we have
\[\left[t_{i_{v}},t_{j}\right]\,\left[t_{i_{v-1}},t_{j}\right]\,\cdots\,\left[t _{i_{1}},t_{j}\right]\,\mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\text{ for some integer }\ell\geq k+v. \tag{69}\]
In fact, we shall prove (69) by induction on \(v\):
_Base case:_ Let us check that (69) holds for \(v=0\). Indeed,
\[\underbrace{\left[t_{i_{0}},t_{j}\right]\,\left[t_{i_{0-1}},t_{j}\right]\, \cdots\,\left[t_{i_{1}},t_{j}\right]}_{=(\text{empty product})=1}\,\mu_{k,j}=\mu_{k,j}=\mu_{k,j} \underbrace{1}_{\in\mathbf{A}}\in\mu_{k,j}\mathbf{A}.\]
Thus, \(\left[t_{i_{0}},t_{j}\right]\,\left[t_{i_{0-1}},t_{j}\right]\,\cdots\,\left[t _{i_{1}},t_{j}\right]\,\mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\) for some integer \(\ell\geq k+0\) (namely, for \(\ell=k\)). In other words, (69) holds for \(v=0\). This completes the base case.
_Induction step:_ Let \(v\in\{0,1,\ldots,m-1\}\). Assume (as the induction hypothesis) that (69) holds for \(v\). We must prove that (69) holds for \(v+1\) instead of \(v\). In other words, we must prove that
\[\left[t_{i_{v+1}},t_{j}\right]\,\left[t_{i_{v}},t_{j}\right]\,\cdots\,\left[t _{i_{1}},t_{j}\right]\,\mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\text{ for some integer }\ell\geq k+\left(v+1\right).\]
Our induction hypothesis says that (69) holds for \(v\). In other words, it says that
\[\left[t_{i_{v}},t_{j}\right]\,\left[t_{i_{v-1}},t_{j}\right]\,\cdots\,\left[t _{i_{1}},t_{j}\right]\,\mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\text{ for some integer }\ell\geq k+v.\]
Let us denote this integer \(\ell\) by \(w\). Thus, \(w\geq k+v\) is an integer and satisfies
\[\left[t_{i_{v}},t_{j}\right]\,\left[t_{i_{v-1}},t_{j}\right]\,\cdots\,\left[t _{i_{1}},t_{j}\right]\,\mu_{k,j}\in\mu_{w,j}\mathbf{A}. \tag{70}\]
However, \(w\geq k+v\geq k\), so that \(w\) is a positive integer. Also, \(i_{v+1}\in[j]\). Thus, Lemma 9.8 (applied to \(i_{v+1}\) and \(w\) instead of \(i\) and \(k\)) yields that
\[\left[t_{i_{v+1}},t_{j}\right]\mu_{w,j}\in\mu_{\ell,j}\mathbf{A}\text{ for some integer }\ell\geq w+1. \tag{71}\]
Consider this \(\ell\). Thus, \(\ell\geq\underbrace{w}_{\geq k+v}+1\geq k+v+1=k+(v+1)\). Furthermore,
\[\begin{split}&\left[t_{i_{v+1}},t_{j}\right]\left[t_{i_{v}},t_{j}\right]\cdots\left[t_{i_{1}},t_{j}\right]\mu_{k,j}\\ &=\left[t_{i_{v+1}},t_{j}\right]\cdot\underbrace{\left(\left[t_{i_{v}},t_{j}\right]\left[t_{i_{v-1}},t_{j}\right]\cdots\left[t_{i_{1}},t_{j}\right]\mu_{k,j}\right)}_{\begin{subarray}{c}\in\mu_{w,j}\mathbf{A}\\ \text{(by (70))}\end{subarray}}\\ &\in\underbrace{\left[t_{i_{v+1}},t_{j}\right]\mu_{w,j}}_{\begin{subarray}{c}\in\mu_{\ell,j}\mathbf{A}\\ \text{(by (71))}\end{subarray}}\mathbf{A}\subseteq\mu_{\ell,j}\underbrace{\mathbf{A}\mathbf{A}}_{\subseteq\mathbf{A}}\subseteq\mu_{\ell,j}\mathbf{A}.\end{split}\]
Thus, we have found an integer \(\ell\geq k+(v+1)\) that satisfies
\(\left[t_{i_{v+1}},t_{j}\right]\left[t_{i_{v}},t_{j}\right]\cdots\left[t_{i_{1 }},t_{j}\right]\mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\). Hence, we have shown that
\[\left[t_{i_{v+1}},t_{j}\right]\left[t_{i_{v}},t_{j}\right]\cdots\left[t_{i_{1 }},t_{j}\right]\mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\text{ for some integer }\ell\geq k+(v+1)\,.\]
In other words, (69) holds for \(v+1\) instead of \(v\). This completes the induction step. Thus, (69) is proved by induction on \(v\).
Therefore, we can apply (69) to \(v=m\). We obtain
\[\left[t_{i_{m}},t_{j}\right]\left[t_{i_{m-1}},t_{j}\right]\cdots\left[t_{i_{1 }},t_{j}\right]\mu_{k,j}\in\mu_{\ell,j}\mathbf{A}\text{ for some integer }\ell\geq k+m.\]
This proves Lemma 9.9.
Now, we can show our second main result:
**Theorem 9.10**.: Let \(j\in[n]\), and let \(m\) be a positive integer. Let \(k_{1},k_{2},\ldots,k_{m}\) be any \(m\) elements of \([j]\) (not necessarily distinct) satisfying \(m\geq j-k_{m}+1\). Then,
\[\left[t_{k_{1}},t_{j}\right]\left[t_{k_{2}},t_{j}\right]\cdots\left[t_{k_{m}}, t_{j}\right]=0.\]
Proof.: If \(k_{m}=j\), then this claim is obvious8. Hence, for the rest of this proof, we WLOG assume that \(k_{m}\neq j\). Combining this with \(k_{m}\leq j\) (since \(k_{m}\in[j]\)), we obtain \(k_{m}<j\). Hence, \(k_{m}\in[j-1]\). Therefore, (52) (applied to \(i=k_{m}\)) yields
Footnote 8: Proof.: Assume that \(k_{m}=j\). Thus, \(\left[t_{k_{m}},t_{j}\right]=\left[t_{j},t_{j}\right]=0\) (since \([a,a]=0\) for any element \(a\) of any ring). In other words, the last factor of the product \(\left[t_{k_{1}},t_{j}\right]\left[t_{k_{2}},t_{j}\right]\cdots\left[t_{k_{m}}, t_{j}\right]\) is \(0\). Thus, this whole product must equal \(0\). In other words, \(\left[t_{k_{1}},t_{j}\right]\left[t_{k_{2}},t_{j}\right]\cdots\left[t_{k_{m}}, t_{j}\right]=0\). This proves Theorem 9.10 under the assumption that \(k_{m}=j\).
\[\left[t_{k_{m}},t_{j}\right]=\mu_{k_{m},j}\left(t_{j}-t_{j-1}+1\right). \tag{72}\]
Now, we have \(m-1\in\mathbb{N}\) (since \(m\) is a positive integer). Let us define an \((m-1)\)-tuple \((i_{1},i_{2},\ldots,i_{m-1})\) of elements of \([j]\) by
\[(i_{1},i_{2},\ldots,i_{m-1}):=(k_{m-1},k_{m-2},\ldots,k_{1})\]
(that is, \(i_{v}:=k_{m-v}\) for each \(v\in[m-1]\)). Then, \(i_{1},i_{2},\ldots,i_{m-1}\) are \(m-1\) elements of \([j]\). Hence, Lemma 9.9 (applied to \(m-1\) and \(k_{m}\) instead of \(m\) and \(k\)) yields
\[\left[t_{i_{m-1}},t_{j}\right]\left[t_{i_{(m-1)-1}},t_{j}\right]\cdots\left[t_ {i_{1}},t_{j}\right]\mu_{k_{m},j}\in\mu_{\ell,j}\mathbf{A}\text{ for some integer }\ell\geq k_{m}+(m-1)\,.\]
Consider this \(\ell\). We have
\[\ell\geq k_{m}+(m-1)=k_{m}+\underbrace{m}_{\geq j-k_{m}+1}-1\geq k_{m}+j-k_{m }+1-1=j>j-1,\]
so that \(\ell\not\in[j-1]\). Therefore, \(\mu_{\ell,j}=0\) (by Definition 9.7). Hence,
\[\left[t_{i_{m-1}},t_{j}\right]\left[t_{i_{(m-1)-1}},t_{j}\right]\cdots\left[t _{i_{1}},t_{j}\right]\mu_{k_{m},j}\in\underbrace{\mu_{\ell,j}}_{=0}\mathbf{A} =0\mathbf{A}=0.\]
In other words,
\[\left[t_{i_{m-1}},t_{j}\right]\left[t_{i_{(m-1)-1}},t_{j}\right]\cdots\left[t _{i_{1}},t_{j}\right]\mu_{k_{m},j}=0. \tag{73}\]
However, from the equality \((i_{1},i_{2},\ldots,i_{m-1})=(k_{m-1},k_{m-2},\ldots,k_{1})\), we immediately obtain \(\left(i_{m-1},i_{(m-1)-1},\ldots,i_{1}\right)=(k_{1},k_{2},\ldots,k_{m-1})\). Therefore,
\[\left[t_{i_{m-1}},t_{j}\right]\left[t_{i_{(m-1)-1}},t_{j}\right]\cdots\left[t _{i_{1}},t_{j}\right]=\left[t_{k_{1}},t_{j}\right]\left[t_{k_{2}},t_{j}\right] \cdots\left[t_{k_{m-1}},t_{j}\right].\]
Thus, we can rewrite (73) as
\[\left[t_{k_{1}},t_{j}\right]\left[t_{k_{2}},t_{j}\right]\cdots\left[t_{k_{m-1 }},t_{j}\right]\mu_{k_{m},j}=0. \tag{74}\]
Now,
\[\left[t_{k_{1}},t_{j}\right]\left[t_{k_{2}},t_{j}\right]\cdots\left[t_{k_{m}},t_{j}\right] =\left[t_{k_{1}},t_{j}\right]\left[t_{k_{2}},t_{j}\right]\cdots\left[t_{k_{m-1}},t_{j}\right]\underbrace{\left[t_{k_{m}},t_{j}\right]}_{\begin{subarray}{c}=\mu_{k_{m},j}\left(t_{j}-t_{j-1}+1\right)\\ \text{(by (72))}\end{subarray}}\] \[=\underbrace{\left[t_{k_{1}},t_{j}\right]\left[t_{k_{2}},t_{j}\right]\cdots\left[t_{k_{m-1}},t_{j}\right]\mu_{k_{m},j}}_{\begin{subarray}{c}=0\\ \text{(by (74))}\end{subarray}}\left(t_{j}-t_{j-1}+1\right)=0.\]
This proves Theorem 9.10.
### The identity \(\left[t_{i},t_{j}\right]^{j-i+1}=0\) for all \(i\leq j\)
As a particular case of Theorem 9.10, we obtain the following:
**Corollary 9.11**.: Let \(i,j\in[n]\) be such that \(i\leq j\). Then, \(\left[t_{i},t_{j}\right]^{j-i+1}=0\).
Proof.: We have \(j-i\geq 0\) (since \(i\leq j\)) and thus \(j-i+1\geq 1\). Hence, \(j-i+1\) is a positive integer. Moreover, \(i\) is an element of \([j]\) (since \(i\leq j\)) and we have \(j-i+1\geq j-i+1\). Hence, Theorem 9.10 (applied to \(m=j-i+1\) and \(k_{r}=i\)) yields \(\underbrace{\left[t_{i},t_{j}\right]\,\left[t_{i},t_{j}\right]\cdots\left[t_{i},t_{j}\right]}_{j-i+1\text{ times}}=0\). Thus, \(\left[t_{i},t_{j}\right]^{j-i+1}=\underbrace{\left[t_{i},t_{j}\right]\,\left[t_{i},t_{j}\right]\cdots\left[t_{i},t_{j}\right]}_{j-i+1\text{ times}}=0\). This proves Corollary 9.11.
## 10 Further directions
### More identities?
A few other properties of somewhere-to-below shuffles can be shown. For example, the proofs of the following two propositions are left to the reader:
**Proposition 10.1**.: We have \(t_{i}=\sum\limits_{k=i}^{j-1}s_{i}s_{i+1}\cdots s_{k-1}+s_{i}s_{i+1}\cdots s_ {j-1}t_{j}\) for any \(1\leq i<j\leq n\).
**Proposition 10.2**.: Let \(i,j\in[n-1]\) be such that \(i\leq j\). Then, \(\left[t_{i},t_{j}\right]=\left[s_{i}s_{i+1}\cdots s_{j-1},\ s_{j}\right]t_{j+1 }t_{j}\).
**Proposition 10.3**.: Set \(B_{i}:=\prod\limits_{k=0}^{i-1}\left(t_{1}-k\right)\) for each \(i\in[0,n]\). Then, \(B_{i}=t_{i}B_{i-1}\) for each \(i\in[n]\).
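For instance (assuming \(n\geq 2\)), \(B_{1}=t_{1}-0=t_{1}=t_{1}B_{0}\), and \(B_{2}=t_{1}\left(t_{1}-1\right)=t_{2}t_{1}=t_{2}B_{1}\), where the middle equality is the identity \(t_{i+1}t_{i}=t_{i}\left(t_{i}-1\right)\) (Theorem 5.1) for \(i=1\).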
We wonder to what extent the identities that hold for \(t_{1},t_{2},\ldots,t_{n}\) can be described. For instance, we can ask:
**Question 10.4**.:
1. What are generators and relations for the \(\mathbb{Q}\)-algebra \(\mathbb{Q}\left[t_{1},t_{2},\ldots,t_{n}\right]\) for a given \(n\in\mathbb{N}\)?
2. Fix \(k\in\mathbb{N}\). What identities hold for \(t_{1},t_{2},\ldots,t_{k}\) for **all**\(n\)? Is there a single algebra that "governs" the relations between \(t_{1},t_{2},t_{3},\ldots\) that hold independently of \(n\)?
3. If a relation between \(t_{1},t_{2},\ldots,t_{k}\) holds for all sufficiently high \(n\geq k\), must it then hold for all \(n\geq k\)?
We suspect that these questions are hard to answer, as we saw in Remark 6.2 that even the quadratic relations between \(t_{1},t_{2},\ldots,t_{n}\) exhibit some rather finicky behavior. The dimension of \(\mathbb{Q}\left[t_{1},t_{2},\ldots,t_{n}\right]\) as a \(\mathbb{Q}\)-vector space does not seem to follow a simple rule either (see (1) for the first few values), although there appear to be some patterns in how this algebra is generated9.
Footnote 9: Namely, for all \(n\leq 8\), we have verified that the algebra \(\mathbb{Q}\left[t_{1},t_{2},\ldots,t_{n}\right]\) is generated by products of \(m\) somewhere-to-below shuffles with \(m\in\{0,1,\ldots,n-1\}\), and moreover, only one such product for \(m=n-1\) is needed.
Another question, which we have already touched upon in Subsection 8.5, is the following:
**Question 10.5**.: Fix \(j\in[n]\). What is the smallest \(h\in\mathbb{N}\) such that we have \(\left[t_{i_{1}},t_{j}\right]\left[t_{i_{2}},t_{j}\right]\cdots\left[t_{i_{h}},t_{j}\right]=0\) for all \(i_{1},i_{2},\ldots,i_{h}\in[n]\) (as opposed to holding only for \(i_{1},i_{2},\ldots,i_{h}\in[j]\) )?
### Optimal exponents?
Corollary 8.18 and Corollary 9.11 give two different answers to the question "what powers of \(\left[t_{i},t_{j}\right]\) are \(0\)?". One might dare to ask for the **smallest** such power (more precisely, the smallest such exponent). In other words:
**Question 10.6**.: Given \(i,j\in[n]\), what is the smallest \(m\in\mathbb{N}\) such that \(\left[t_{i},t_{j}\right]^{m}=0\)? (We assume \(\mathbf{k}=\mathbb{Z}\) here to avoid small-characteristic cancellations.)
We conjecture that this smallest \(m\) is \(\min\left\{j-i+1,\ \left\lceil\left(n-j\right)/2\right\rceil+1\right\}\) whenever \(i<j\) (so that whichever of Corollary 8.18 and Corollary 9.11 gives the better bound actually gives the optimal bound). Using SageMath, this conjecture has been verified for all \(n\leq 12\).
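For small \(n\), the conjecture can also be re-checked by a short brute-force computation over \(\mathbb{Z}\). The following plain-Python sketch is one way to do it; it is not the SageMath code referred to above, and the composition convention used when multiplying permutations is an assumption of the sketch (whether a power of \(\left[t_{i},t_{j}\right]\) vanishes does not depend on that convention).

```python
# Brute-force check of the conjectured smallest m with [t_i, t_j]^m = 0,
# namely min{ j-i+1, ceil((n-j)/2)+1 }, over Z for a small n.
# Permutations are tuples of 0-based images; the product p*q is taken to mean
# "apply q first, then p" (an assumed convention, see the remark above).
from collections import defaultdict
from math import ceil

def compose(p, q):
    """(p*q)(x) = p(q(x))."""
    return tuple(p[q[x]] for x in range(len(p)))

def mul(a, b):
    """Product in Z[S_n]; algebra elements are {permutation: coefficient} dicts."""
    out = defaultdict(int)
    for p, cp in a.items():
        for q, cq in b.items():
            out[compose(p, q)] += cp * cq
    return {p: c for p, c in out.items() if c}

def sub(a, b):
    out = defaultdict(int, a)
    for p, c in b.items():
        out[p] -= c
    return {p: c for p, c in out.items() if c}

def t(ell, n):
    """Somewhere-to-below shuffle t_ell = cyc_ell + cyc_{ell,ell+1} + ... + cyc_{ell,...,n}."""
    elem = defaultdict(int)
    for k in range(ell, n + 1):
        perm = list(range(n))
        for x in range(ell - 1, k - 1):   # the cycle ell -> ell+1 -> ... -> k -> ell
            perm[x] = x + 1
        perm[k - 1] = ell - 1
        elem[tuple(perm)] += 1
    return dict(elem)

def nilpotency_exponent(a, n, max_m=24):
    """Smallest m >= 1 with a^m = 0, or None if not found within max_m tries."""
    cur = {tuple(range(n)): 1}            # the identity of Z[S_n]
    for m in range(1, max_m + 1):
        cur = mul(cur, a)
        if not cur:
            return m
    return None

n = 5                                     # S_5 is small enough for an instant check
ts = {ell: t(ell, n) for ell in range(1, n + 1)}
for i in range(1, n + 1):
    for j in range(i + 1, n + 1):
        bracket = sub(mul(ts[i], ts[j]), mul(ts[j], ts[i]))
        found = nilpotency_exponent(bracket, n)
        conjectured = min(j - i + 1, ceil((n - j) / 2) + 1)
        print(f"(i, j) = ({i}, {j}):  smallest m = {found},  conjectured = {conjectured}")
```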
### Generalizing to the Hecke algebra
The _type-A Hecke algebra_ (also known as the _type-A Iwahori-Hecke algebra_) is a deformation of the group algebra \(\mathbf{k}\left[S_{n}\right]\) involving a new parameter \(q\in\mathbf{k}\). It is commonly denoted by \(\mathcal{H}=\mathcal{H}_{q}\left(S_{n}\right)\); it has a basis \(\left(T_{w}\right)_{w\in S_{n}}\) indexed by the permutations \(w\in S_{n}\), but its multiplication is more complicated than composing the indexing permutations. We refer to [10] for the definition and a deep study of this algebra. We can define the \(q\)-deformed somewhere-to-below shuffles \(t_{1}^{\mathcal{H}},t_{2}^{\mathcal{H}},\ldots,t_{n}^{\mathcal{H}}\) by
\[t_{\ell}^{\mathcal{H}}:=T_{\mathrm{cyc}_{\ell}}+T_{\mathrm{cyc}_{\ell,\ell+1} }+T_{\mathrm{cyc}_{\ell,\ell+1,\ell+2}}+\cdots+T_{\mathrm{cyc}_{\ell,\ell+1, \ldots,n}}\in\mathcal{H}.\]
Surprisingly, it seems that many of the properties of the original somewhere-to-below \(t_{1},t_{2},\ldots,t_{n}\) still hold for these deformations. In particular:
**Conjecture 10.7**.: Corollary 9.11 and Corollary 8.18 both seem to hold in \(\mathcal{H}\) when the \(t_{\ell}\) are replaced by the \(t_{\ell}^{\mathcal{H}}\).
This generalization is not automatic. Our above proofs do not directly apply to \(\mathcal{H}\), as (for example) Lemma 3.6 does not generalize to \(\mathcal{H}\). The \(\mathcal{H}\)-generalization of Theorem 5.1 appears to be
\[qt_{i+1}^{\mathcal{H}}t_{i}^{\mathcal{H}}=\left(t_{i}^{\mathcal{H}}-1\right)t _{i}^{\mathcal{H}}=t_{i}^{\mathcal{H}}\left(t_{i}^{\mathcal{H}}-1\right) \tag{75}\]
(verified using SageMath for all \(n\leq 11\)). (The \(q\) on the left hand side is necessary; the product \(t_{i+1}^{\mathcal{H}}t_{i}^{\mathcal{H}}\) is not a \(\mathbb{Z}\)-linear combination of \(1\), \(t_{i}^{\mathcal{H}}\) and \(\left(t_{i}^{\mathcal{H}}\right)^{2}\) when \(q=0\).) Our proof of Theorem 5.1 does not seem to adapt to (75), and while we suspect that proving (75) won't be too difficult, it is merely the first step.
### One-sided cycle shuffles
We return to \(\mathbf{k}\left[S_{n}\right]\).
The \(\mathbf{k}\)-linear combinations \(\lambda_{1}t_{1}+\lambda_{2}t_{2}+\cdots+\lambda_{n}t_{n}\) (with \(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\in\mathbf{k}\)) of the somewhere-to-below shuffles are called the _one-sided cycle shuffles_. They have been studied in [10]. Again, the main result of [10] entails that their commutators are nilpotent, but we can ask "how nilpotent?".
This question remains wide open, not least due to its computational complexity (even the \(n=6\) case brings SageMath to its limits). All that I can say with surety is that the commutators of one-sided cycle shuffles don't vanish as quickly (under taking powers) as the \(\left[t_{i},t_{j}\right]\)'s.
**Example 10.8**.: For instance, let us set \(n=6\) and choose arbitrary \(a,b,c,d,e,a^{\prime},b^{\prime},c^{\prime},d^{\prime},e^{\prime}\in\mathbf{k}\), and then introduce the elements
\[u :=at_{1}+bt_{2}+ct_{3}+dt_{4}+et_{5}\qquad\text{ and }\] \[u^{\prime} :=a^{\prime}t_{1}+b^{\prime}t_{2}+c^{\prime}t_{3}+d^{\prime}t_{4} +e^{\prime}t_{5}\]
(two completely generic one-sided cycle shuffles, except that we omit the \(t_{6}\) terms, since \(t_{6}=1\) does not influence the commutator). Then, \(10\) minutes of torturing SageMath reveals that \(\left[u,u^{\prime}\right]^{6}=0\), but \(\left[u,u^{\prime}\right]^{5}\) is generally nonzero.
Even this example is misleadingly well-behaved. For \(n=7\), it is not hard to find two one-sided cycle shuffles \(u,u^{\prime}\) such that \(\left[u,u^{\prime}\right]^{n}\neq 0\).
**Question 10.9**.: For each given \(n\), what is the smallest (or at least a reasonably small) \(m\in\mathbb{N}\) such that every two one-sided cycle shuffles \(u,u^{\prime}\) satisfy \(\left[u,u^{\prime}\right]^{m}=0\)? |
2309.11335 | 2D-3D Pose Tracking with Multi-View Constraints | Camera localization in 3D LiDAR maps has gained increasing attention due to
its promising ability to handle complex scenarios, surpassing the limitations
of visual-only localization methods. However, existing methods mostly focus on
addressing the cross-modal gaps, estimating camera poses frame by frame without
considering the relationship between adjacent frames, which makes the pose
tracking unstable. To alleviate this, we propose to couple the 2D-3D
correspondences between adjacent frames using the 2D-2D feature matching,
establishing the multi-view geometrical constraints for simultaneously
estimating multiple camera poses. Specifically, we propose a new 2D-3D pose
tracking framework, which consists of a front-end hybrid flow estimation network
for consecutive frames and a back-end pose optimization module. We further
design a cross-modal consistency-based loss to incorporate the multi-view
constraints during the training and inference process. We evaluate our proposed
framework on the KITTI and Argoverse datasets. Experimental results demonstrate
its superior performance compared to existing frame-by-frame 2D-3D pose
tracking methods and state-of-the-art vision-only pose tracking algorithms.
More online pose tracking videos are available at
\url{https://youtu.be/yfBRdg7gw5M} | Huai Yu, Kuangyi Chen, Wen Yang, Sebastian Scherer, Gui-Song Xia | 2023-09-20T14:12:10Z | http://arxiv.org/abs/2309.11335v1 | # 2D-3D Pose Tracking with Multi-View Constraints
###### Abstract
Camera localization in 3D LiDAR maps has gained increasing attention due to its promising ability to handle complex scenarios, surpassing the limitations of visual-only localization methods. However, existing methods mostly focus on addressing the cross-modal gaps, estimating camera poses frame by frame without considering the relationship between adjacent frames, which makes the pose tracking unstable. To alleviate this, we propose to couple the 2D-3D correspondences between adjacent frames using the 2D-2D feature matching, establishing the multi-view geometrical constraints for simultaneously estimating multiple camera poses. Specifically, we propose a new 2D-3D pose tracking framework, which consists of a front-end hybrid flow estimation network for consecutive frames and a back-end pose optimization module. We further design a cross-modal consistency-based loss to incorporate the multi-view constraints during the training and inference process. We evaluate our proposed framework on the KITTI and Argoverse datasets. Experimental results demonstrate its superior performance compared to existing frame-by-frame 2D-3D pose tracking methods and state-of-the-art vision-only pose tracking algorithms. More online pose tracking videos are available at [https://youtu.be/yfBRdg7gw5M](https://youtu.be/yfBRdg7gw5M).
Camera localization, LiDAR maps, multi-view geometry, 2D-3D matching
## I Introduction
Camera localization in 3D LiDAR maps has attracted increasing attention due to its convenient system setup and broad application prospects for mobile robots and autonomous driving. On the one hand, it only leverages lightweight and low-cost cameras, similar to visual SLAM, while offering significant potential in mitigating pose drift issues. On the other hand, LiDAR maps can be effortlessly constructed at a large scale and remain unaffected by changes in illumination, thanks to LiDAR SLAM or registration techniques. However, the existence of cross-modal gaps between camera images and LiDAR point clouds hampers the establishment of robust 2D-3D correspondences, consequently challenging the robustness and accuracy of camera localization.
To alleviate the issue of cross-modal gaps, traditional methods mainly use geometric consistency in 3D or 2D space [1, 2, 3, 4, 5]. The localization performance of these methods highly depends on the accuracy of intermediary "products" such as reconstructed sparse points or 3D line segments, which lack robustness and generalization capability. With the help of deep learning, the correspondences between LiDAR points and images are established by a cross-modal flow network [6, 7], and then the camera poses can be inferred using the PnP solver in a RANSAC loop. Besides, some methods also utilize a neural network to regress the camera poses directly [8, 9, 10]. Despite their remarkable performance, these methods solely estimate the camera pose frame by frame, disregarding the multi-view constraints between adjacent frames, thereby resulting in unstable pose tracking.
We notice that the bundle adjustment of multi-view image matching can ensure the smoothness of camera pose estimation in classical visual odometry algorithms, and 2D-2D image matching is already a mature technique with traditional handcrafted features and learning-based optical flow. Therefore, our intuition is to fuse the LiDAR-image correspondences and image-image flow to establish multi-view visual constraints for further smoothing and improving localization performance.
In this paper, we propose a novel 2D-3D pose tracking framework that leverages multi-view constraints to achieve accurate pose estimation for consecutive frames. The proposed framework formulates pose estimation as a local optimization problem with an initial pose. Specifically, the initial pose of each frame is from the previous frame during pose tracking. We impose a random disturbance on the ground truth poses to obtain the initial poses during network training. The 3D point clouds are projected onto the 2D plane with rough initial poses to obtain the synthetic depth maps. Then we utilize
Fig. 1: Illustrative examples of the proposed 2D-3D pose tracking and visual odometry on the KITTI 00 sequence. _Top:_ The visualization of the LiDAR projection with initial pose and predicted pose. _Bottom:_ The top view of the final trajectories of the proposed method and the visual odometry algorithm.
a hybrid flow estimation network to estimate the image-to-LiDAR depth flows between the adjacent camera frames and a synthetic depth map, as well as the optical flow between the adjacent camera frames. In particular, a cross-modal consistency loss function is devised to incorporate multi-view constraints between the estimated image-to-LiDAR depth flows and the optical flow into the learning process. During pose tracking, we define an objective function consisting of the reprojection error and the cross-modal consistency error, and then optimize the estimated camera poses under the paradigm of a non-linear least-squares problem. An illustrative example of the proposed framework and visual odometry is shown in Fig. 1.
Our main contributions can be summarized as follows:
* We propose an effective hybrid flow estimation network that simultaneously estimates image-to-LiDAR depth flow and image-to-image flow between consecutive video frames. We further devise a cross-modal consistency loss function to incorporate multi-view constraints into the learning process.
* We introduce a back-end optimization algorithm to smooth and improve the localization performance of camera pose tracking in LiDAR maps. The estimated poses corresponding to the consecutive frames are refined together under the paradigm of a non-linear least-squares problem based on cross-modal consistency.
* We conduct extensive experiments on two public datasets (i.e., KITTI and Argoverse) to evaluate the performance. The experimental results demonstrate that the proposed 2D-3D pose tracking framework can achieve more accurate and robust localization than other frame-by-frame pose tracking methods.
## II Related Work
### _Visual-only Pose Tracking_
Cameras provide a wealth of spectral information, along with degenerate geometrical information, at a significantly lower cost compared to other competing sensor options. As a result, visual localization plays an important role in modern localization systems for autonomous vehicles. The paradigm of visual localization is to fetch 2D-2D correspondences between two or more 2D images, typically employing techniques such as optical flow or feature-based matching, and then leverage the epipolar geometry to compute the relative motion. To overcome problems such as scale ambiguity and error accumulation, Triggs [11] introduces a joint optimization procedure that minimizes the re-projection error between the observed feature points and the estimated points, which is called bundle adjustment optimization. It adjusts the bundle of rays passing through the camera center and the feature points in the 3D world to minimize this error over multiple frames. In addition, some methods [12, 13, 14, 15] propose to incorporate global information, such as GPS or inertial measurement units (IMUs), to help resolve scale ambiguity and reduce error accumulation during pose tracking. In recent years, with the advent of machine learning and particularly deep learning, learned visual pose tracking methods [16, 17, 18, 19] have also emerged. These methods leverage Convolutional Neural Networks (CNNs) to learn features and their associations between frames, thereby estimating camera poses. However, visual-only pose tracking has inevitable pose drift and lacks robustness under challenging conditions such as illumination, weather, and season changes over time.
### _2D-3D Pose Tracking_
Unlike cameras, LiDAR is less susceptible to those visual factors. Therefore, pose tracking based on 2D-3D correspondences between 2D images and offline 3D LiDAR maps holds greater potential in achieving robust localization. Existing approaches focus on bridging the cross-modal gap between 2D images and 3D LiDAR points. Some researchers propose to extract repeatable points or line features from images and LiDAR points for 2D-3D matching [20, 21, 4, 5], which achieve remarkable performance in scenes with rich geometric information. Other approaches first transform the data into the same modality. For example, in certain approaches [1, 2, 3], sparse 3D points are reconstructed from consecutive camera frames or stereo disparity images captured by a stereo camera. Subsequently, these reconstructed points are matched with the global LiDAR map. The localization performance of these approaches relies on the accuracy of the reconstructed 3D points. In addition, the 3D reconstruction process is time-consuming
Fig. 2: The proposed 2D-3D pose tracking framework. It consists of two main components: the front-end hybrid flow estimation network, and the back-end pose optimization module.
and the reconstructed 3D points may not correspond to any points in the LiDAR map. Other approaches [6, 7, 8, 9, 10, 22, 23] first project the LiDAR map onto the 2D plane to obtain synthetic depth maps and then match them with camera images. However, these methods predict the camera pose frame by frame and ignore the constraints between adjacent frames. As a result, they are prone to jitter and encounter error accumulation problems during pose tracking. [24] implicitly utilizes the temporal features between consecutive frames, but still predicts the camera poses frame by frame. In this paper, we propose a novel 2D-3D pose tracking framework which integrates the multi-view constraints between consecutive frames during the network training and inference. Unlike the aforementioned methods, we regard the poses of multiple frames as a whole and optimize them together under the proposed cross-modal consistency, and can thus handle both the cross-modal differences and the multi-view constraints.
## III Proposed Method
The general frame-by-frame 2D-3D pose tracking framework adheres to the paradigm of local optimization based on an initial pose, which can be formulated as follows: In the beginning, the camera pose of the first frame is initialized using GPS. For each subsequent camera frame, the pose of the previous frame serves as the initial estimation. Based on this initial pose, a fixed-size point cloud is segmented from the global LiDAR map. Then, the 2D-3D correspondences between the camera frame and the segmented point cloud are estimated. Finally, the corresponding camera pose is calculated using the PnP solver. Previous work [7, 8, 9] has confirmed the feasibility of this paradigm for pose tracking. However, it has also highlighted the limited robustness in some scenarios, such as environments with extreme degeneracy. The unstable camera localization motivates us to incorporate multi-view constraints between adjacent camera frames into the 2D-3D pose tracking framework.
Our proposed 2D-3D pose tracking framework is shown in Fig. 2. Our key insight is to simultaneously estimate the 6-DoF poses for two adjacent frames based on a hybrid flow estimation network. For each group of two adjacent frames, we initialize both poses to the same value, obtained from the estimation of the previous frames. Then the global LiDAR map is cropped with a fixed size centered at the initial pose. By using the hybrid 2D-3D and 2D-2D flow network, we can obtain stable 2D-3D correspondences. A PnP solver is then utilized to obtain the estimated poses. Finally, these poses are further refined using a back-end optimization algorithm that incorporates multi-view constraints. Subsequently, the initial pose of the current time step is updated with the estimated pose of the current frame, while the estimated pose of the next frame serves as the initial pose for the next time step.
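The following Python sketch summarizes this per-step control flow. It is only an illustration of the loop described above: the five component functions are injected as callables, and their names and signatures (crop_map, render_depth, estimate_flows, solve_pnp, refine_poses) are placeholders for the modules of Fig. 2, not an actual API of this work.

```python
# High-level sketch of the proposed tracking loop (one iteration per pair of
# adjacent frames). Components are passed in as callables so the skeleton stays
# independent of any concrete network or solver implementation.
def track(frames, lidar_map, K, T0,
          crop_map, render_depth, estimate_flows, solve_pnp, refine_poses):
    """frames: list of consecutive camera images; T0: GPS-initialized pose of frame 0."""
    poses = []
    T_init = T0
    last = T0
    for cur, nxt in zip(frames, frames[1:]):
        cloud = crop_map(lidar_map, T_init)               # fixed-size crop centered at the initial pose
        depth = render_depth(cloud, T_init, K)            # synthetic depth map via Eq. (1)
        flow_cur, flow_nxt, flow_2d = estimate_flows(cur, nxt, depth)  # two 2D-3D flows + one 2D-2D flow
        T_cur = solve_pnp(flow_cur, cloud, depth, K)      # frame-wise pose from 2D-3D correspondences
        T_nxt = solve_pnp(flow_nxt, cloud, depth, K)
        T_cur, T_nxt = refine_poses(T_cur, T_nxt, flow_cur, flow_nxt, flow_2d, cloud, K)
        poses.append(T_cur)                               # the current frame's pose is finalized here
        T_init = last = T_nxt                             # the next frame's estimate seeds the next step
    poses.append(last)                                    # pose of the final frame
    return poses
```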
In the following, we first introduce the front-end flow estimation network, and then describe the details of the proposed cross-modal consistency loss function. Finally, we present the back-end optimization algorithm.
### _Hybrid Flow Estimation Network_
#### III-A1 Depth Map Generation
To obtain the image-to-LiDAR depth flow using the flow estimation network, we first need to project the 3D LiDAR point clouds to the 2D plane to generate the synthetic depth maps. The projection process follows the pinhole camera model which can be described as:
\[\left(\begin{array}{c}u\\ v\\ 1\end{array}\right)=\frac{1}{Z}\left(\begin{array}{ccc}f_{x}&0&c_{x}\\ 0&f_{y}&c_{y}\\ 0&0&1\end{array}\right)\left(\begin{array}{c}X\\ Y\\ Z\end{array}\right)\triangleq\frac{1}{Z}\mathbf{K}P \tag{1}\]
where \((X\quad Y\quad Z)^{T}\) and \(P\) represent the coordinates of the 3D points in the camera coordinate system. \((u\quad v\quad 1)^{T}\) represents the homogeneous coordinates of the projection in the pixel coordinate system. The intrinsic camera parameters, \(f_{x},f_{y},c_{x},c_{y}\), form the camera matrix \(\mathbf{K}\). Additionally, the occlusion removal scheme proposed by [25] is applied to eliminate occluded 3D points on the generated depth map.
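For concreteness, the projection step can be sketched as follows in Python/NumPy. The sketch assumes that the cropped LiDAR points have already been transformed into the camera coordinate frame, and the simple z-buffer used to keep the nearest point per pixel stands in for the occlusion-removal scheme of [25]; function and variable names are illustrative only.

```python
import numpy as np

def project_depth_map(points_cam, K, height, width):
    """Project 3D points given in the camera frame onto a sparse synthetic depth map.

    points_cam: (N, 3) array of (X, Y, Z) camera coordinates.
    K:          (3, 3) intrinsic matrix built from fx, fy, cx, cy.
    Returns an (H, W) depth image; pixels without a projected point stay 0.
    """
    depth = np.zeros((height, width))
    pts = points_cam[points_cam[:, 2] > 0]          # keep points in front of the camera

    uvw = pts @ K.T                                  # pinhole model, Eq. (1)
    u = uvw[:, 0] / uvw[:, 2]
    v = uvw[:, 1] / uvw[:, 2]
    z = pts[:, 2]

    ui, vi = np.round(u).astype(int), np.round(v).astype(int)
    valid = (ui >= 0) & (ui < width) & (vi >= 0) & (vi < height)
    for x, y, d in zip(ui[valid], vi[valid], z[valid]):
        if depth[y, x] == 0 or d < depth[y, x]:      # simple z-buffer
            depth[y, x] = d
    return depth
```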
#### III-A2 Flow Estimation
After obtaining the synthetic depth map, we utilize a hybrid flow estimation network to predict the image-to-LiDAR depth flow and the optical flow between the consecutive camera frames simultaneously. The hybrid network mainly consists of three parts: the image-to-LiDAR depth flow estimation networks \(F_{c}\) and \(F_{n}\), and the optical flow estimation network \(F_{i}\). We use I2D-Loc [7] as the backbone to predict the image-to-LiDAR depth flow and RAFT [26] as the backbone to predict the optical flow because they both excel at their respective tasks.
The ground truth image-to-LiDAR depth flow is generated by computing the distance between the LiDAR depth maps projected based on the initial pose and the ground truth pose. Given the initial pose \(\mathbf{T}_{\text{init}}\), the ground truth image-to-LiDAR depth flow for the current frame can be computed using the following formulation:
\[[\Delta u,\Delta v]_{\text{cur2depth}}=h\left(P_{w},\hat{\mathbf{T}}_{\text{cur}} \right)-h\left(P_{w},\mathbf{T}_{\text{init}}\right) \tag{2}\]
\[h(P,\mathbf{T})\triangleq\mathbf{K}\mathbf{T}P \tag{3}\]
where \([\Delta u,\Delta v]_{\text{cur2depth}}\) is the calculated ground truth image-to-LiDAR depth flow between the current camera frame and the synthetic depth map. \(\hat{\mathbf{T}}_{\text{cur}}\) and \(\mathbf{T}_{\text{init}}\) are the ground truth pose and the initial pose, respectively. \(P_{w}\) is the coordinate of the corresponding 3D point cloud in the world coordinate system. \(h\) represents the camera projection function that first transforms 3D points from the world coordinate system to the camera coordinate system and then projects them to the 2D plane. Besides, the ground truth image-to-LiDAR depth flow for the next frame can be calculated based on a similar formulation:
\[[\Delta u,\Delta v]_{\text{next2depth}}=h\left(P_{w},\hat{\mathbf{T}}_{\text{ next}}\right)-h\left(P_{w},\mathbf{T}_{\text{init}}\right) \tag{4}\]
where \([\Delta u,\Delta v]_{\text{next2depth}}\) represents the calculated ground truth image-to-LiDAR depth flow between the next camera frame and the synthetic depth map. \(\hat{\mathbf{T}}_{\text{next}}\) is the ground truth pose of the next camera frame.
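A minimal NumPy sketch of Eqs. (2)-(4) is given below. It assumes 4x4 homogeneous pose matrices that map world coordinates to the camera frame and makes the perspective division explicit, which is left implicit in the definition of \(h\) in Eq. (3); all names are illustrative.

```python
import numpy as np

def h(P_w, T, K):
    """Project world points (N, 3) to pixel coordinates under a 4x4 pose T (world -> camera)."""
    P_hom = np.hstack([P_w, np.ones((len(P_w), 1))])
    P_cam = (T @ P_hom.T).T[:, :3]                   # world -> camera frame
    uvw = P_cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]                  # perspective division

def gt_depth_flow(P_w, T_init, T_gt, K):
    """Ground-truth image-to-LiDAR depth flow, Eq. (2) / Eq. (4): displacement of each
    projected LiDAR point between the initial-pose and ground-truth-pose projections."""
    return h(P_w, T_gt, K) - h(P_w, T_init, K)

# The same routine yields the flow for both frames, e.g.
#   flow_cur2depth  = gt_depth_flow(P_w, T_init, T_cur_gt,  K)
#   flow_next2depth = gt_depth_flow(P_w, T_init, T_next_gt, K)
```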
### _Loss Function_
Regarding the network supervision, we first consider the masked average endpoint error (EPE) between the predicted image-to-LiDAR depth flow and the ground truth, defined as follows:
\[L_{\text{epe}}=\frac{\sum g(u,v)\left\|f_{\text{pre}}(u,v)-f_{\text{gt}}(u,v) \right\|_{2}}{\sum g(u,v)} \tag{5}\]
\[g(u,v)=\left\{\begin{array}{l}1,f_{\text{gt}}\neq 0\\ 0,\text{otherwise}\end{array}\right. \tag{6}\]
where \(f_{\text{pre}}\) and \(f_{\text{gt}}\) represent the predicted and ground truth image-to-LiDAR depth flow, respectively. \(g(u,v)\) identifies the valid pixels in the ground truth image-to-LiDAR depth flow. Because the same-modality image optical flow network achieves higher accuracy than the image-to-LiDAR depth flow network, we load a pre-trained optical flow model and keep its parameters fixed during network training.
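The masked average endpoint error of Eqs. (5)-(6) can be written compactly as in the following sketch. The (2, H, W) array layout and the NumPy implementation are our own assumptions for illustration; the actual training code would operate on framework tensors.

```python
import numpy as np

def masked_epe(flow_pred, flow_gt):
    """Masked average endpoint error, Eqs. (5)-(6).

    flow_pred, flow_gt: arrays of shape (2, H, W) holding (du, dv) per pixel.
    Pixels whose ground-truth flow is zero (no projected LiDAR point) are excluded.
    """
    epe = np.linalg.norm(flow_pred - flow_gt, axis=0)    # per-pixel endpoint error
    mask = np.any(flow_gt != 0, axis=0)                  # g(u, v)
    return (epe * mask).sum() / max(mask.sum(), 1)
```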
Additionally, to incorporate the multi-view constraints between two adjacent camera frames during the network training, we propose a cross-modal consistency-based loss function, which is formulated as follows:
\[L_{\text{consist}}=\left\|w\left(f_{\text{pre}}^{\text{n2d}}-f_{\text{pre}}^{ \text{c2d}},f_{\text{pre}}^{\text{c2d}}\right)-f_{\text{pre}}^{\text{c2n}} \right\|_{2} \tag{7}\]
where \(f_{\text{pre}}^{\text{n2d}}\) represents the predicted image-to-LiDAR depth flow between the next camera frame and the synthetic depth map. Similarly, \(f_{\text{pre}}^{\text{c2d}}\) represents the predicted image-to-LiDAR depth flow for the current camera frame. \(f_{\text{pre}}^{\text{c2n}}\) is the predicted optical flow between the adjacent camera frames. \(w\) represents the warping operation.
The cross-modal consistency, as shown in Fig. 3, explicitly describes the relationship between the adjacent camera frames and the synthetic depth map. Specifically, the image-to-depth flow represents the 2D-3D correspondences between the camera frame and depth map. Fig. 4c and 4d visualize the predicted flows corresponding to the current and next frame, respectively. The image-to-image flow (i.e., optical flow) represents the 2D-2D correspondences between two adjacent camera frames, which is shown in Fig. 4f. By calculating the difference between the predicted image-to-depth flows, we obtain the equivalent image-to-image flow between the camera frames. As a result, the error between the equivalent and predicted image-to-image flow establishes the multi-view constraints between these predictions. In addition, it is important to note that the predicted image-to-LiDAR depth flow represents the displacement field from the synthetic depth map to the image. Hence, the difference between the image-to-LiDAR depth flows of two adjacent frames is not equivalent to the image-to-image flow between them. We need to warp the calculated difference matrix based on the image-to-LiDAR depth flow of the first frame to obtain the final equivalent optical flow.
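The following sketch illustrates Eq. (7) and the warping step just described. The warp \(w\) is realised here as a simple nearest-neighbour resampling of the difference field at the positions given by the current image-to-LiDAR depth flow; the exact warping convention and interpolation used in the actual implementation may differ.

```python
import numpy as np

def warp_nearest(field, flow):
    """Resample 'field' (2, H, W) at the positions displaced by 'flow' (2, H, W),
    using nearest-neighbour lookup (illustrative only)."""
    _, H, W = field.shape
    v, u = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    us = np.clip(np.round(u + flow[0]).astype(int), 0, W - 1)
    vs = np.clip(np.round(v + flow[1]).astype(int), 0, H - 1)
    return field[:, vs, us]

def consistency_loss(f_c2d, f_n2d, f_c2n):
    """Cross-modal consistency, Eq. (7): the difference of the two image-to-LiDAR depth
    flows, warped according to the current-frame depth flow, should match the predicted
    optical flow between the two camera frames."""
    equiv_flow = warp_nearest(f_n2d - f_c2d, f_c2d)
    return np.linalg.norm(equiv_flow - f_c2n, axis=0).mean()
```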
The final loss function is the sum of the above loss functions:
\[L=L_{\text{epe}}^{\text{cur}}+L_{\text{epe}}^{\text{next}}+L_{\text{consist}} \tag{8}\]
where \(L_{\text{epe}}^{\text{cur}}\) is the average endpoint error of the predicted current image-to-depth flow. \(L_{\text{epe}}^{\text{next}}\) is the average endpoint error of the predicted next image-to-depth flow.
In summary, we employ two loss functions to guide the network training: the average endpoint error and the cross-modal consistency loss function. The average endpoint error serves as a conventional constraint commonly used in optical flow estimation tasks. The cross-modal consistency error introduces the multi-view constraints between adjacent camera frames, capturing the relationship between different views.
### _Multi-View Constraints-based Back-End Optimization_
During pose tracking, as previously mentioned, we utilize the trained hybrid flow estimation network to simultaneously predict the image-to-LiDAR depth flows between the adjacent camera frames and the synthetic depth map, as well as the optical flow between the camera frames. Then the camera poses corresponding to the two camera frames are solved using
Fig. 4: Visualization of the flow estimation. _a:_ LiDAR projection overlay on the current camera image. _b:_ LiDAR projection overlay on the next camera image. _c:_ Predicted image-to-LiDAR depth flow between the current image and the LiDAR projection. _d:_ Predicted image-to-LiDAR depth flow between the next image and the LiDAR projection. _e:_ Warped difference matrix between the predicted image-to-depth flows. _f:_ Predicted optical flow between the two camera images.
Fig. 3: Diagram of the cross-modal consistency.
the PnP solver according to the prediction. Besides, we define an energy function to further optimize the solved poses, as:
\[\mathbf{T}_{\text{cur}}^{*},\mathbf{T}_{\text{next}}^{*}=\operatorname*{arg\,min}_{\mathbf{T }_{\text{cur}},\mathbf{T}_{\text{next}}}(E_{\text{consist}}+E_{\text{reproj}}^{ \text{cur}}+E_{\text{reproj}}^{\text{next}}), \tag{9}\]
where \(\mathbf{T}_{\text{cur}}\) and \(\mathbf{T}_{\text{next}}\) represent the predicted camera poses corresponding to the current and next frames, respectively. \(E_{\text{consist}}\) is the cross-modal consistency error and formulated as follows:
\[E_{\text{consist}}=\left\|h(P_{w},\mathbf{T}_{\text{next}})-h(P_{w},\mathbf{T}_{\text{ cur}})-f_{\text{pre}}^{\text{c2n}}\right\|_{2} \tag{10}\]
where \(h\) is the camera projection function and defined as Eq. (3). \(f_{\text{pre}}^{\text{c2n}}\) is the predicted optical flow. Based on this formula, we begin by projecting the 3D points onto the 2D plane using the predicted poses. Subsequently, we compute the difference matrix between the projections to derive the equivalent optical flow. Therefore, \(E_{\text{consist}}\) represents the distance between the equivalent and predicted optical flows. Additionally, \(E_{\text{reproj}}^{\text{cur}}\) and \(E_{\text{reproj}}^{\text{next}}\) both represent the reprojection error of the solved camera poses:
\[E_{\text{reproj}}^{\text{cur}}=\left\|h(P,\mathbf{T}_{\text{cur}})-X_{\text{cur}} \right\|_{2} \tag{11}\]
\[E_{\text{reproj}}^{\text{next}}=\left\|h(P,\mathbf{T}_{\text{next}})-X_{\text{ next}}\right\|_{2} \tag{12}\]
where \(X_{\text{cur}}\) and \(X_{\text{next}}\) represent the 2D coordinates of pixels with valid depth values in the depth maps that have been warped using the predicted image-to-LiDAR depth flows.
Based on the formulated energy function, we employ the least square method to determine the optimal camera poses \(\mathbf{T}_{\text{cur}}^{*}\) and \(\mathbf{T}_{\text{next}}^{*}\) that minimize the function value. The obtained \(\mathbf{T}_{\text{cur}}^{*}\) replaces the initial pose \(\mathbf{T}_{\text{init}}\) as the final pose for the current frame, while \(\mathbf{T}_{\text{next}}^{*}\) is utilized to initialize the subsequent localization process.
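A condensed sketch of this joint refinement using SciPy is shown below. The 6-DoF poses are parameterized as an axis-angle rotation plus translation, and the residuals stack the two reprojection terms (Eqs. (11)-(12)) and the consistency term (Eq. (10)); the pose parameterization, solver settings, and variable names are illustrative assumptions rather than the exact implementation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(P_w, pose6, K):
    """Pinhole projection of world points under a 6-DoF pose (rotation vector + translation)."""
    R = Rotation.from_rotvec(pose6[:3]).as_matrix()
    P_cam = P_w @ R.T + pose6[3:]
    uvw = P_cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

def refine_poses(pose_cur0, pose_next0, P_w, x_cur, x_next, flow_c2n, K):
    """Jointly refine both poses by minimizing the energy in Eq. (9).

    P_w:          (N, 3) LiDAR points with valid correspondences.
    x_cur/x_next: (N, 2) measured pixel locations in the two frames (Eqs. (11)-(12)).
    flow_c2n:     (N, 2) predicted optical-flow displacement for these points (Eq. (10)).
    """
    def residuals(params):
        p_cur, p_next = params[:6], params[6:]
        proj_cur, proj_next = project(P_w, p_cur, K), project(P_w, p_next, K)
        r_cur = (proj_cur - x_cur).ravel()                       # E_reproj^cur
        r_next = (proj_next - x_next).ravel()                    # E_reproj^next
        r_consist = (proj_next - proj_cur - flow_c2n).ravel()    # E_consist
        return np.concatenate([r_cur, r_next, r_consist])

    sol = least_squares(residuals, np.concatenate([pose_cur0, pose_next0]))
    return sol.x[:6], sol.x[6:]
```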
## IV Experiments
In this section, we conduct extensive experiments on two public datasets to evaluate the performance of our proposed method.
### _Dataset and Evaluation Metrics_
#### IV-A1 Dataset
We conduct experiments on KITTI [27] and Argoverse [28] datasets. KITTI is commonly used in the field of autonomous driving. It consists of 22 sequences. We use sequences 03, 05, 06, 07, 08, and 09 as the training set and sequences 00 and 10 as the validation set. Argoverse offers more complex driving scenarios, albeit with shorter odometry sequences. We use the sequences train1, train2, and train3 as the training set, and sample several sequences with less noise as the validation set.
To acquire the global LiDAR map, unavailable in the datasets, we begin by aggregating all scans based on the ground truth poses. Subsequently, to conserve storage space, we down-sample the aggregated map at a resolution of 0.1m. During training, we segment the point cloud for each frame by centering it around the ground truth pose and extending it 100m forward, 10m backward, and 25m to the left and right. During pose tracking, the coverage of the segmented point cloud remains fixed, but it is centered on the initial pose of the current frame being tracked.
#### IV-A2 Evaluation Metrics
We evaluate the performance of the proposed method in two ways. The first is to evaluate the localization accuracy frame by frame using the mean and median errors of the predicted poses. Following [6, 7], we regard position errors larger than four meters as a _failure_. The second is to evaluate the pose tracking performance on the global LiDAR maps using the absolute trajectory error (ATE) and the relative pose error (RPE), which are defined in [29] and commonly used for quantitative trajectory evaluation.
### _Experimental Setup_
Accounting for the varying sizes of RGB images across different sequences, we uniformly crop all RGB images to a resolution of \(960\times 320\) during network training. As mentioned above, the proposed hybrid network comprises three components: the current image-to-LiDAR depth flow estimation network \(F_{c}\), the next image-to-LiDAR depth flow estimation network \(F_{n}\), and the optical flow estimation network \(F_{i}\). We employ I2D-Loc [7] as the backbone for \(F_{c}\) and \(F_{n}\), while for \(F_{i}\), we utilize RAFT-S [26] as the backbone due to memory limitations. In particular, we initialize \(F_{c}\) and \(F_{i}\) by loading the pre-trained models provided by [7, 26]. As for \(F_{n}\), we first train it separately for 100 epochs using the same experimental setup as in [7]. Subsequently, we load all the pre-trained model weights and conduct joint training for an additional 50 epochs on \(F_{c}\), \(F_{n}\), and \(F_{i}\) using the proposed cross-modal consistency loss function. The learning rate, weight decay, and training batch size are set to \(4\times 10^{-6}\), \(1\times 10^{-4}\), and 2, respectively. We employ the _MultiStepLR_ learning rate scheduler, which reduces the learning rate to one-tenth of the previous value at
\begin{table}
\begin{tabular}{c|c c c c|c c|c c|c} \hline \hline \multirow{2}{*}{Case} & \multicolumn{4}{c|}{Module} & \multicolumn{2}{c|}{Mean Error} & \multicolumn{2}{c|}{Median Error} & \multirow{2}{*}{Fail[\%] \(\downarrow\)} \\ & C & N & T & L & Rot.[\({}^{\circ}\)] \(\downarrow\) & Transl.[cm] \(\downarrow\) & Rot.[\({}^{\circ}\)] \(\downarrow\) & Transl.[cm] \(\downarrow\) & \\ \hline Initial pose & & & & & \(\approx 9.6726\) & \(\approx 182.8381\) & \(\approx 9.9033\) & \(\approx 187.3079\) & - \\ (a) & ✓ & & & & 0.8619 & 24.6955 & 0.6900 & 17.6687 & 1.81 \\ (b) & & ✓ & & & 2.5284 & 75.4198 & 1.8364 & 63.5510 & 16.65 \\ (c) & ✓ & & ✓ & & **0.8375** & **22.4241** & **0.6777** & **15.3426** & **1.61** \\ (d) & & ✓ & ✓ & & **2.4225** & 74.0713 & **1.8184** & 62.6881 & 16.67 \\ (e) & ✓ & ✓ & ✓ & & 0.8640/2.4801 & 23.7909/75.5822 & 0.6978/1.8511 & 16.0583/63.5162 & 1.61/16.93 \\ (f) & ✓ & ✓ & ✓ & ✓ & 0.9051/2.4842 & 26.512/**71.8964** & 0.7168/**1.7970** & 17.8525/**61.08808** & 1.78/**15.90** \\ \hline \hline \end{tabular}
\end{table} TABLE I: Performance comparison of the flow estimation network under different setups on the KITTI dataset. “C”, “N”, “T”, and “L” represent the current image-to-LiDAR depth flow estimation network \(F_{c}\), the next image-to-LiDAR depth flow estimation network \(F_{n}\), additional training, and the cross-modal consistency loss function, respectively. In (e) and (f), the first value represents the performance of the network \(F_{c}\), while the second value represents the performance of the network \(F_{n}\).
the 10th and 30th epochs. The joint training is performed using two NVIDIA RTX 3090 Ti GPUs, while the other experiments are conducted on a single NVIDIA RTX 3090 Ti GPU.
### _Ablation Study_
In this section, we evaluate the effectiveness of the multi-view constraints during the training process. The experimental results are shown in Table I, where we report the mean and median errors of the calculated poses based on the predicted flows as evaluation metrics.
The performance of the single image-to-LiDAR depth flow estimation network for the current and next frames using the official I2D-Loc model is displayed in Table I(a) and (b). The error range of the initial poses for the network \(F_{n}\) is greater than that for the network \(F_{c}\), resulting in a relatively higher localization accuracy for the network \(F_{c}\). After conducting an additional 50 epochs of training for each network individually in Table I(c) and (d), the pose errors of the two networks both decrease by a small margin. When the networks \(F_{c}\) and \(F_{n}\) are then trained jointly for another 50 epochs, Table I(e) shows that the localization accuracy of neither network improves. The above experiments imply an inherent conflict between the networks \(F_{c}\) and \(F_{n}\) due to their different error ranges of the initial poses. In Table I(f), we utilize the proposed cross-modal consistency loss function to incorporate the multi-view constraints during the training process. The experimental result shows that the localization performance of the network \(F_{c}\) degrades slightly, but the pose error of the network \(F_{n}\) decreases by a large margin. This outcome demonstrates that the proposed cross-modal consistency-based loss bridges the gaps between the predicted 2D-3D correspondences of adjacent frames.
### _Results Analysis_
Table II gives the quantitative results of our method evaluated on KITTI sequence 00. In addition to the frame-by-frame 2D-3D pose tracking methods CMRNet [8] and I2D-Loc [7], we also compare our method with the traditional visual odometry algorithm and a devised 2D-3D pose tracking framework that loosely couples the visual odometry algorithm with I2D-Loc, referred to as I2D-VO. As illustrated in Fig. 5, this framework utilizes the visual odometry algorithm and I2D-Loc to generate pose candidates based on 2D-2D and 2D-3D correspondences, respectively. Subsequently, the final pose is selected from these candidates based on predefined thresholds.
According to Table II(a) and (b), CMRNet efficiently tracks the entire LiDAR map without any interruption, albeit with certain limitations in its localization accuracy. Conversely, I2D-Loc boasts superior localization precision, yet it falls short of completing the entire map. This difference arises due to the heavy dependence of I2D-Loc on accurate 2D-3D correspondences, while CMRNet directly regresses camera poses. Additionally, as shown in Table II(c), the traditional visual odometry (VO) algorithm can track the entire map seamlessly. However, as shown in Fig. 1, the drift of VO is serious.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{Case} & \multirow{2}{*}{Method} & \multicolumn{2}{c|}{Translation Error} & \multicolumn{2}{c|}{Rotation Error} & \multirow{2}{*}{Complete} & \multirow{2}{*}{Param[M]} & \multirow{2}{*}{Time[ms]} \\ & & Mean.[cm] \(\downarrow\) & Std.[cm] \(\downarrow\) & Mean.[\({}^{\circ}\)] \(\downarrow\) & Std.[\({}^{\circ}\)] \(\downarrow\) & & & \\ \hline (a) & CMRNet [8] & 205 & 107 & 126 & 39 & ✓ & 37.1 & 25 \\ (b) & I2D-Loc [7] & 21 & 10 & 0.46 & 0.30 & & 6.3 & 176 \\ (c) & VO & 162 & 63 & 4.95 & 2.34 & ✓ & - & 219 \\ (d) & I2D-VO & 15 & 21 & 0.48 & 0.64 & ✓ & 6.3 & 180 \\ (e) & Ours w/o Optim & 14 & 17 & 0.50 & 0.53 & & 13.6 & 374 \\ (f) & Ours w/ Optim & 14 & 17 & 0.49 & 0.54 & ✓ & 13.6 & 588 \\ \hline \end{tabular}
\end{table} TABLE II: Pose tracking performance comparison on KITTI sequence 00. “Complete” indicates that the pose tracking process ends without any interruptions caused by localization failure. The localization errors are calculated until being interrupted.
Fig. 5: 2D-3D pose tracking framework that loosely couples 2D-3D correspondences and 2D-2D correspondences.
Fig. 6: LiDAR projection overlaid on the next frame before and after optimization.
By integrating I2D-Loc with the visual odometry algorithm in a loosely coupled manner, the devised framework I2D-VO improves the localization accuracy and overcomes the previous shortcomings. In Table II(e) and (f), our proposed 2D-3D pose tracking framework, which incorporates multi-view constraints, achieves minimal translation and rotation errors upon map completion. Furthermore, as demonstrated in Fig. 6, the accuracy of the 2D-3D correspondences noticeably improves after the optimization in various challenging scenarios. In these cases, a single image-to-depth flow estimation network fails to predict reliable 2D-3D correspondences due to homogeneous depth projections or extremely dark RGB images.
To highlight the outstanding performance of our proposed 2D-3D pose tracking methodology, we conducted a comprehensive comparison with the state-of-the-art visual-only pose tracking system, VINS-Fusion [30]. The evaluation was conducted on multiple sequences, including sequences 00 and 10 from the KITTI dataset, as well as sequences 2c07- and 2595- from the Argoverse dataset. The estimated trajectories of these sampled sequences are depicted in Fig. 7, demonstrating the excellent alignment of our proposed method with the ground truth trajectories.
Quantitative analysis, presented in Table III, provides clear evidence of the superiority of our proposed 2D-3D pose tracking method, which incorporates multi-view constraints. Across all four LiDAR maps, our method consistently outperforms the other tested methodologies, delivering remarkable
\begin{table}
\begin{tabular}{c|c c c c|c c c c|c c c c|c c c c} \hline \hline \multirow{3}{*}{Method} & \multicolumn{4}{c|}{KITTI\_00 (1861m)} & \multicolumn{4}{c|}{KITTI\_10 (459m)} & \multicolumn{4}{c|}{Arg\_2c07 (44m)} & \multicolumn{4}{c}{Arg\_2595 (54m)} \\ & \multicolumn{2}{c}{Trans.[m] \(\downarrow\)} & \multicolumn{2}{c|}{Rot.[\({}^{\circ}\)] \(\downarrow\)} & \multicolumn{2}{c}{Trans.[m] \(\downarrow\)} & \multicolumn{2}{c|}{Rot.[\({}^{\circ}\)] \(\downarrow\)} & \multicolumn{2}{c}{Trans.[m] \(\downarrow\)} & \multicolumn{2}{c|}{Rot.[\({}^{\circ}\)] \(\downarrow\)} & \multicolumn{2}{c}{Trans.[m] \(\downarrow\)} & \multicolumn{2}{c}{Rot.[\({}^{\circ}\)] \(\downarrow\)} \\ & Mean & Std & Mean & Std & Mean & Std & Mean & Std & Mean & Std & Mean & Std & Mean & Std & Mean & Std \\ \hline VINS-Fusion [30] & 16.70 & 9.40 & 4.99 & 2.51 & 2.34 & 1.13 & 1.75 & **0.74** & 3.09 & 2.24 & 22.87 & 6.77 & 19.02 & 11.04 & 176.67 & 1.11 \\ I2D-VO & 0.24 & 0.26 & 0.54 & 0.94 & 0.61 & 0.91 & 2.74 & 15.61 & 0.43 & 0.23 & **0.45** & 0.42 & 0.34 & 0.22 & 4.86 & **0.18** \\ Ours & **0.13** & **0.17** & **0.49** & **0.55** & **0.45** & **0.63** & **1.04** & 1.24 & **0.17** & **0.15** & 0.58 & **0.34** & **0.14** & **0.11** & **2.96** & 0.20 \\ \hline \hline \end{tabular}
\end{table} TABLE III: Performance comparison on the KITTI and Argoverse datasets.
Fig. 7: Estimated trajectories of three tested methods on four sampled LiDAR maps.
performance.
Additionally, we have also conducted a comparison between our proposed method and VINS-Fusion, utilizing Absolute Trajectory Error (ATE) and Relative Pose Error (RPE) as the key performance metrics. The quantitative outcomes are presented in Fig. 8 and Fig. 9. These figures emphatically demonstrate that our proposed method consistently achieves superior localization accuracy. Moreover, it exhibits reduced drift during pose tracking compared to both the carefully engineered I2D-VO and the current visual-only VINS-Fusion.
### _Discussion_
The iterative update module in I2D-Loc [7] improves the matching accuracy by solving the large displacement problem. However, when the initial pose is already accurate enough, the motion tends to be small. This hypothesis holds in our pose tracking pipeline. Consequently, we conduct additional experiments to validate the impact of reducing the number of iterative updates. The results of these experiments are presented in Table IV.
The experimental results demonstrate that reducing the number of iterative updates can speed up the localization process without causing a significant decrease in accuracy. However, despite the improved efficiency achieved by reducing the number of iterations, our pose tracking method currently operates at a top speed of 3-4 frames per second, which is mainly due to inefficient point cloud cutting and projection.
## V Conclusions and Future Work
In this study, we propose a novel 2D-3D pose tracking framework that tightly integrates 2D-3D correspondences through 2D-2D matching. Our approach incorporates a cross-modal consistency-based loss function to enable effective network supervision under multi-view constraints. Additionally, we introduce a non-linear least square problem for joint optimization of adjacent camera frame poses during the pose tracking process. Furthermore, we present a comparative analysis by incorporating a pose tracking framework that loosely couples 2D-3D and 2D-2D correspondences. Extensive experiments demonstrate that our proposed method significantly improves the smoothness and accuracy of the pose tracking. In the future, we will further extend our method to various scenarios and improve the efficiency.
|
2308.16773 | Anticipating critical transitions in multidimensional systems driven by
time- and state-dependent noise | Anticipating bifurcation-induced transitions in dynamical systems has gained
relevance in various fields of the natural, social, and economic sciences.
Before the annihilation of a system's equilibrium point by means of a
bifurcation, the system's internal feedbacks that stabilize the initial state
weaken and eventually vanish, a process referred to as critical slowing down
(CSD). In one-dimensional systems, this motivates the use of variance and lag-1
autocorrelation as indicators of CSD. However, the applicability of variance is
limited to time- and state-independent driving noise, strongly constraining the
generality of this CSD indicator. In multidimensional systems, the use of these
indicators is often preceded by a dimension reduction in order to obtain a
one-dimensional time series. Many common techniques for such an extraction of a
one-dimensional time series generally incur the risk of missing CSD in
practice. Here, we propose a data-driven approach based on estimating a
multidimensional Langevin equation to detect local stability changes and
anticipate bifurcation-induced transitions in systems with generally time- and
state-dependent noise. Our approach substantially generalizes the conditions
under which CSD can reliably be detected, as demonstrated in a suite of
examples. In contrast to existing approaches, changes in deterministic dynamics
can be clearly discriminated from changes in the driving noise using our
method. This substantially reduces the risk of false or missed alarms of
conventional CSD indicators in settings with time-dependent or multiplicative
noise. In multidimensional systems, our method can greatly advance the
understanding of the coupling between system components and can avoid risks of
missing CSD due to dimension reduction, which existing approaches suffer from. | Andreas Morr, Keno Riechers, Leonardo Rydin Gorjão, Niklas Boers | 2023-08-31T14:50:24Z | http://arxiv.org/abs/2308.16773v2 | Anticipating critical transitions in multi-dimensional systems driven by time- and state-dependent noise
###### Abstract
The anticipation of bifurcation-induced transitions in dynamical systems has gained relevance in various fields of the natural, social, and economic sciences. When approaching a co-dimension 1 bifurcation, the feedbacks that stabilise the initial state weaken and eventually vanish; a process referred to as critical slowing down (CSD). This motivates the use of variance and lag-1 autocorrelation as indicators of CSD. Both indicators rely on linearising the system's restoring rate. Additionally, the use of variance is limited to time- and state-independent driving noise, strongly constraining the generality of CSD. Here, we propose a data-driven approach based on deriving a Langevin equation to detect local stability changes and anticipate bifurcation-induced transitions in systems with generally time- and state-dependent noise. Our approach substantially generalizes the conditions underlying existing early warning indicators, which we showcase in different examples. Changes in deterministic dynamics can be clearly discriminated from changes in the driving noise. This reduces the risk of false and missed alarms of conventional CSD indicators significantly in settings with time-dependent or multiplicative noise. In multi-dimensional systems, our method can greatly advance the understanding of the coupling between system components and can avoid risks of missing CSD due to dimension reduction, which existing approaches suffer from.
## I Introduction
A mechanistic understanding of complex high-dimensional physical systems is essential for assessing the risk of abrupt regime shifts, for example in ecological, climatic, social, or financial systems. Such shifts may occur when critical forcing thresholds, which correspond to underlying bifurcation points, are crossed [1; 2; 3; 4]. Reducing complex systems to a low-dimensional summary observable \(\mathbf{X}_{t}\) has enabled impressive modelling capabilities [5; 6; 7; 8]. This is particularly important because observations are typically available in the form of multivariate time series of just a few dimensions.
Commonly, the dynamics of the summary observable \(\mathbf{X}_{t}\in\mathbb{R}^{n}\) is approximately separated into a deterministic component \(A(\mathbf{X}_{t},t)\mathrm{d}t\) and a stochastic component \(B(\mathbf{X}_{t},t)\mathrm{d}\mathbf{W}_{t}\) that represents the action of the omitted dimensions. This results in the Langevin equation
\[\mathrm{d}\mathbf{X}_{t}=A(\mathbf{X}_{t},t)\mathrm{d}t+B(\mathbf{X}_{t},t) \mathrm{d}\mathbf{W}_{t}. \tag{1}\]
Even though, in principle, the stochastic component can take more complicated forms, we restrict ourselves to the case where \(\mathbf{W}\) is an uncorrelated Wiener process supported on the filtered probability space \((\Omega,\mathcal{F},(\mathcal{F}_{t})_{t\in\mathbb{R}_{+}},\mathbb{P})\) and refer to existing extensions to the case of correlated noise [9; 10].
This framework facilitates a mathematical description of abrupt regime shifts in terms of dynamic bifurcations in low-dimensional dynamical systems [11; 2]. Prior to the transition, the deterministic drift \(A(\mathbf{X}_{t},t)\) embodies negative feedback mechanisms keeping the system in a stable equilibrium [12; 13; 14]. The explicit time dependence of \(A(\mathbf{X}_{t},t)\) reflects the changing forcing levels that act on the system from the outside and alter the deterministic, coarse-grained dynamics. At the bifurcation point, i.e. at the critical level of forcing, the currently occupied equilibrium state is annihilated and the system abruptly transitions to another stable state.
For co-dimension 1 bifurcations, it is well known that a weakening of the negative feedback precedes an eventual abrupt transition [2; 15]. This will, heuristically speaking, result in a weaker and slower response to the pseudo-random perturbations stemming from the unresolved dynamics. This phenomenon is referred to as critical slowing down (CSD), and it manifests in an increase of the statistical quantities of variance and lag-1 autocorrelation (AC(1)) of the observable in the components exhibiting stability loss [2; 16; 17; 18]. These two quantities are therefore often employed to anticipate bifurcation-induced abrupt transitions, and their simultaneous increase has been suggested as an early warning signal (EWS) [2; 3; 19]. Mathematically, CSD can be described by approximating the negative feedback around a stable equilibrium as a linear restoring rate. Denoting the time-dependent equilibrium state of a one-dimensional observable \(X_{t}\) by \(x^{*}(t)\), we arrive at the Ornstein-Uhlenbeck model [20]
\[\mathrm{d}X_{t}=-\lambda(t)(X_{t}-x^{*}(t))\mathrm{d}t+\sigma\mathrm{d}W_{t}.\]
The linearised negative feedback \(\lambda(t)\) weakens during CSD while the noise-coupling strength \(\sigma\) is assumed to remain constant. This results in the following expressions
for variance and AC(1),
\[\begin{split}\operatorname{Var}\left[X\right]&=\frac{ \sigma^{2}}{2\lambda}\xrightarrow[]{\lambda\to 0}\infty\\ \operatorname{AC}_{X}(1)&=\exp(-\lambda\Delta t) \xrightarrow[]{\lambda\to 0}1,\end{split} \tag{2}\]
where \(\Delta t>0\) is the sampling time step of the data. Detection of CSD is usually preceded by a reduction of the system to a one-dimensional observable, either by leveraging physical understanding or employing principal component analysis [16; 21] in order to identify a linear combination of components which may be experiencing stability loss. In such a reduction, crucial information about system stability may be lost. The method presented herein is applicable directly to data from higher dimensional systems (or to multivariate data) and thus avoids this preprocessing step.
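As a simple numerical illustration of Eq. (2), the following Python snippet integrates the linearised Ornstein-Uhlenbeck dynamics with an Euler-Maruyama scheme and compares the sample variance and lag-1 autocorrelation with their analytical values; the parameter values are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, sigma, dt, n = 0.5, 0.3, 0.01, 200_000

# Euler–Maruyama integration of dX = -lam * X dt + sigma dW around x* = 0.
x = np.zeros(n)
xi = rng.normal(0.0, np.sqrt(dt), n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - lam * x[i] * dt + sigma * xi[i]

var_emp = x.var()
ac1_emp = np.corrcoef(x[:-1], x[1:])[0, 1]
print(f"variance: {var_emp:.4f} (theory sigma^2/(2 lam) = {sigma**2 / (2 * lam):.4f})")
print(f"AC(1):    {ac1_emp:.4f} (theory exp(-lam dt)     = {np.exp(-lam * dt):.4f})")
```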
We will nevertheless, for illustrative purposes, first treat the problem of estimating local system stability in the one-dimensional dynamical system denoted by
\[\mathrm{d}X_{t}=a(X_{t},t)\mathrm{d}t+b(X_{t},t)\mathrm{d}W_{t}, \tag{3}\]
such that the linearised negative feedback takes the form
\[\lambda(t):=-\partial_{x}a(x^{*}(t),t). \tag{4}\]
The extension of the discussed estimation methods to the general, higher-dimensional setting of Eq. (1) is discussed in the Methods section.
Time- or state-dependent driving noise can lead to both false negative and false positive EWS [22; 9; 23]. Therefore, understanding the evolution of the diffusion term \(b(X_{t},t)\mathrm{d}W_{t}\) is crucial for reliable statements on stability changes derived from data. Given that in real-world situations, the assumption of time- and state-independent noise is hardly justifiable, a more general theoretical framework advancing CSD to the case of time- and state-dependent noise is called for.
In particular, a methodology is needed to extract from the observable a more holistic picture of both the deterministic dynamics of the system and the driving noise. The derivation of the variance and AC(1) in (2) hinges on the a priori assumption of linear feedback. In applications, the system might explore parts of the state space where non-linearities in the feedback are not negligible anymore, putting the validity of Eq. (2) into question. In contrast, the linear restoring rate \(\lambda\) directly captures the desired information of local stability and should therefore be considered the key quantity to measure system stability and detect CSD. To obtain an estimation \(\widehat{\lambda}\), we perform a spatially local linear fit to the estimated function \(\widehat{a}(x)\)[24; 25] in some neighbourhood around \(\widehat{x}^{*}=\widehat{\mathbb{E}}[X]\) (see Methods and Supplementary Material (SM) 1). A similar approach has recently been proposed in [26]. We carry the concept to multiple dimensions and include an estimation of the diffusion matrix \(BB^{\top}(x,t)\) to supplement the standard CSD indicators and to avoid false positives and false negatives caused by changes in the driving noise. In particular, we discuss situations where the conventional CSD indicators give an ambiguous or misleading picture. We show how the method proposed herein conclusively resolves these ambiguities.
## II Methods
It can be shown that the drift and diffusion coefficients \(A\) and \(B\) have the following representation in terms of the increments \(\Delta\mathbf{X}_{t}:=\mathbf{X}_{t+\Delta t}-\mathbf{X}_{t}\) of the process \(\mathbf{X}\)[27; 28; 29; 30; 31; 32; 24]:
\[A(\mathbf{x},t) =\lim_{\Delta t\to 0}\frac{1}{\Delta t}\mathbb{E}\left[\Delta \mathbf{X}_{t}|\mathbf{X}_{t}=\mathbf{x}\right],\] \[BB^{\top}(\mathbf{x},t) =\lim_{\Delta t\to 0}\frac{1}{\Delta t}\mathbb{E}\left[ \Delta\mathbf{X}_{t}\Delta\mathbf{X}_{t}^{\top}|\mathbf{X}_{t}=\mathbf{x} \right].\]
If the stochastic differential equation (SDE) (3) exhibits effective time independence, i.e. \(A(\mathbf{x},t)\equiv A(\mathbf{x})\) and \(B(\mathbf{x},t)\equiv B(\mathbf{x})\) in some observation time span and if the sample path of \(\mathbf{X}\) is available at sufficiently small time steps \(\Delta t>0\), one may estimate \(A(\mathbf{x})\) and \(BB^{\top}(\mathbf{x})\) by replacing the above ensemble average by the mean of the observed increments. The law of large numbers yields consistent estimators that converge to the true \(A\) and \(BB^{\top}\), omitting here a small bias stemming from the non-zero \(\Delta t\).
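In one dimension, this conditional-moment estimation amounts to averaging the rescaled increments within bins of the state space, as in the following minimal sketch (the test SDE, bin number, and sample size are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.01, 400_000

# Test data: dX = -X dt + 0.5 dW, integrated with an Euler–Maruyama scheme.
x = np.zeros(n)
for i in range(n - 1):
    x[i + 1] = x[i] - x[i] * dt + 0.5 * rng.normal(0.0, np.sqrt(dt))

dx = np.diff(x)
edges = np.linspace(x.min(), x.max(), 31)            # 30 state-space bins
idx = np.digitize(x[:-1], edges) - 1

centers, a_hat, bb_hat = [], [], []
for k in range(30):
    sel = idx == k
    if sel.sum() < 100:                               # skip sparsely populated bins
        continue
    centers.append(0.5 * (edges[k] + edges[k + 1]))
    a_hat.append(dx[sel].mean() / dt)                 # drift:     E[dX | X=x] / dt
    bb_hat.append((dx[sel] ** 2).mean() / dt)         # diffusion: E[dX^2 | X=x] / dt

# For this test SDE the estimates scatter around a(x) = -x and b^2(x) = 0.25.
```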
We generalise the definition of local system stability \(\lambda\) in Eq. (4) from the one-dimensional setting to the multi-dimensional setting of Eq. (1). Consider the Jacobian matrix of \(A\) at an equilibrium point \(\mathbf{x}^{*}\in\mathbb{R}^{n}\) with \(A(\mathbf{x}^{*},t)=\mathbf{0}\). Such an equilibrium point is stable if and only if all eigenvalues \((-\lambda_{k}+i\omega_{k})_{k=1,\ldots,n}\) of the Jacobian matrix \(\mathrm{D}A(\mathbf{x}^{*},t)\) exhibit negative feedback \(-\lambda_{k}<0\). Accordingly, we regard the set of \((\lambda_{k})_{k=1,\ldots,n}\) as a measure of system stability.
We obtain an estimation of \(A\) and \(BB^{\top}\) from time series data using the respective estimators given in [34; 25]. A multivariate ordinary least squares regression is performed to extract an estimate of the matrix \(\mathrm{D}A\) (see SM 1 for details). The real parts of the corresponding eigenvalues are then assessed in their time evolution. In a windowed time series analysis, negative trends in any of the \(\lambda_{k}\) would indicate a destabilisation of the equilibrium state, which may point to an upcoming abrupt transition. The windows must be short enough to justify the required time independence of the dynamics within each individual window and yet comprise a sufficient amount of data. The windowed estimation of \(A(\mathbf{x},t)\) and \(BB^{\top}(\mathbf{x},t)\) then reveals potential temporal changes in the system's stability.
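The multi-dimensional step can be sketched as follows: bin-wise drift estimates are regressed linearly on the bin centres (with an intercept), and the eigenvalues of the resulting matrix provide the estimated feedbacks. A linear two-dimensional test system with known Jacobian is used here for illustration; it is a simplified stand-in for the kernel-based estimators of [34; 25] used in the actual analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n = 0.01, 400_000
J_true = np.array([[-1.0, 0.3],
                   [0.2, -0.5]])                     # Jacobian of the linear test drift

# Two-dimensional test data: dX = J_true X dt + 0.2 dW.
X = np.zeros((n, 2))
for i in range(n - 1):
    X[i + 1] = X[i] + (J_true @ X[i]) * dt + 0.2 * rng.normal(0.0, np.sqrt(dt), 2)
dX = np.diff(X, axis=0)

# Bin-wise drift estimates on an M x M grid of the explored state space.
M = 20
ex = np.linspace(X[:, 0].min(), X[:, 0].max(), M + 1)
ey = np.linspace(X[:, 1].min(), X[:, 1].max(), M + 1)
ix, iy = np.digitize(X[:-1, 0], ex) - 1, np.digitize(X[:-1, 1], ey) - 1

centers, drifts = [], []
for kx in range(M):
    for ky in range(M):
        sel = (ix == kx) & (iy == ky)
        if sel.sum() < 200:
            continue
        centers.append([0.5 * (ex[kx] + ex[kx + 1]), 0.5 * (ey[ky] + ey[ky + 1])])
        drifts.append(dX[sel].mean(axis=0) / dt)
C, D = np.asarray(centers), np.asarray(drifts)

# Multivariate OLS with intercept: D ≈ C @ DA^T + c, then eigenvalues of DA.
design = np.hstack([C, np.ones((len(C), 1))])
coef, *_ = np.linalg.lstsq(design, D, rcond=None)
DA_hat = coef[:2].T
print("estimated feedbacks lambda_k:", -np.real(np.linalg.eigvals(DA_hat)))
print("true feedbacks lambda_k:     ", -np.real(np.linalg.eigvals(J_true)))
```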
We will show that the estimation of the diffusion matrix \(BB^{\top}\) may in certain situations explain peculiar behaviour in the conventional CSD indicators, such as simultaneously decreasing variance and increasing AC(1). If instead of local stability, here measured by \(\lambda\), one wishes to examine mean exit times from stable equilibria [35],
the estimated diffusion matrix \(BB^{\top}\) has additional important implications [36].
## III Results
First, we will examine the merits of the proposed CSD indicator \(\widehat{\lambda}\) on the estimated drift coefficient in a conceptual, one-dimensional system subjected to time-dependent noise. Second, we will adopt a two-dimensional predator-prey model with state-dependent noise from the literature and perform a multi-dimensional analysis.
### Fold bifurcation with time-dependent noise
The prototypical structure employed for conceptualising abrupt transitions in many natural systems is the fold bifurcation [2; 37]. Consider therefore the SDE defined by
\[\mathrm{d}X_{t}=\big{(}-X_{t}^{2}+\alpha(t)\big{)}\mathrm{d}t+\sigma(t) \mathrm{d}W_{t}, \tag{5}\]
where \(\alpha\) is the bifurcation parameter and \(\sigma\) the noise strength. For positive \(\alpha>0\), there exists a stable equilibrium at \(x^{\star}(t)=\sqrt{\alpha}\) which vanishes at the critical threshold \(\alpha_{\mathrm{crit}}=0\) (see Fig. 1a). We simultaneously ramp down the noise strength \(\sigma(t)\). Such an evolution should be understood as a change in the nature of the omitted fast dynamics, which cannot be ruled out in many applications [38; 39].
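A minimal Euler-Maruyama integration of Eq. (5) with the noise ramp used in Fig. 1 reads as follows; the ramp of \(\alpha\) and all numerical parameters are illustrative assumptions, since only the ramp of \(\sigma\) is specified here.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, n = 1e-3, 500_000
t = np.arange(n) * dt

alpha = 1.0 * (1.0 - t / t[-1])                # ramp towards the bifurcation at alpha = 0
sigma = 0.2 + (0.06 - 0.2) * t / t[-1]         # noise ramped down as in Fig. 1

x = np.empty(n)
x[0] = np.sqrt(alpha[0])                       # start on the stable branch x* = sqrt(alpha)
for i in range(n - 1):
    x[i + 1] = x[i] + (-x[i] ** 2 + alpha[i]) * dt + sigma[i] * rng.normal(0.0, np.sqrt(dt))
    if x[i + 1] < -1.0:                        # transition occurred; stop before divergence
        x, t = x[: i + 2], t[: i + 2]
        break

# Windowed estimates of variance, AC(1), and the drift slope can now be computed on x.
```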
The temporal evolution of the variance and AC(1) can be approximated by Eq. (2) after linearising the system around the time-dependent equilibrium \(x^{\star}(t)\) (red lines in Fig. 1b and c):
\[\mathrm{Var}\left[X_{t}\right]\approx\frac{\sigma(t)^{2}}{4\sqrt{\alpha(t)}},\quad\mathrm{AC}_{X}(1)\approx\exp\left(-2\sqrt{\alpha(t)}\Delta t\right).\]
The time-dependent noise thus induces a deceiving downward trend in the estimated variance of the system (Fig. 1b) alongside increasing AC(1) (Fig. 1c). The conflicting indications given by variance and AC(1) would mislead the observer to conclude that no significant EWS is present. In contrast, the estimation of the linearised feedback \(\lambda\) (Fig. 1d) clearly indicates a weakening of local system stability and thus the presence of CSD.
The apparently inconsistent results of the conventional CSD indicators can be understood and reconciled by examining the structure of the drift and diffusion coefficients \(a(x,t)=x^{2}-\alpha(t)\) and \(b(x,t)=\sigma(t)\) and their evolution in time. The true quantities for the functions \(a(x,t)\) and \(b(x,t)\) at different times \(t\) are plotted in Fig. S1 in SM 1 along with the estimations obtained during the procedure outlined in the methods section. After disclosing the time dependency of the diffusion coefficient in Fig. S1c, the diverging trends in variance and AC(1) can be correctly interpreted in a CSD assessment. The decrease in variance can be attributed to the decreasing noise strength, and the approach of a bifurcation can be confirmed. In an analogous setting featuring no bifurcation but an increasing noise strength, the incurred increase in variance can be attributed correctly and the false alarm that a conventional CSD analysis would raise can be avoided.
The statistical quality of the estimator \(\widehat{\lambda}\) with respect to its distribution width is similar to that of the conventional indicators and sufficiently good to ensure a high likelihood of a statistically significant trend. This can be argued by checking that the confidence intervals at the beginning and the end of the estimator time series do not overlap (see also SM 2).
### Predator-prey model with state-dependent noise
Following the work of Bengfort et al. [40], we examine a predator-prey model for oceanic plankton populations (see SM 3 for details). Since Bengfort et al. consider this model under the assumption of no external disturbances in the form of noise, we adopt a noise model from a related study [41]. Because environmental variability usually does not influence the population sizes directly, but rather their growth rates, a multiplicative noise term is
Figure 1: Application of CSD indicators for synthetic data generated by the model (5) in the main text. (a) Sample paths for zero noise (**red**) and with noise (**black**). The noise strength \(\sigma\) is ramped linearly from 0.2 to 0.06 over the integration time span. (b), (c) Conventional CSD indicators of variance and lag-1 autocorrelations (**black**) are calculated on detrended windows and plotted along with the theoretical values (**red**) obtained from the time-local Ornstein–Uhlenbeck linearisation. The shaded bands represent the 68% confidence intervals on \(N=1000\) samples. (d) Estimator \(\widehat{\lambda}\) as proposed in this work, along with the true value \(\lambda=-\partial_{x}a(x^{\star}(t),t)\) (**red**). All estimations were performed on running windows of length \(10^{2}\) consisting of \(10^{3}\) data points, considering the sample time-step is \(\Delta t=10^{-1}\). A traditional analysis using the variance and AC(1) would lead to a missed alarm, given their opposing trends; in contrast, the CSD methodology proposed here clearly detects the forthcoming bifurcation and correctly attributes the negative variance trend to the decreasing amplitude of the driving noise \(\sigma(t)\).
often employed [42; 43; 44; 45]. This leads us to investigate the following system of SDEs
\[\mathrm{d}P_{t} =\xi^{-1}\left(rP_{t}\left(1-\frac{P_{t}}{K(\mathrm{turb})}\right)- \frac{aP_{t}^{2}}{h(\mathrm{turb})^{2}+P_{t}^{2}}Z_{t}\right)\mathrm{d}t\] \[\quad+\xi^{-1/2}\sigma_{P}P_{t}\mathrm{d}W_{t}^{P},\] \[\mathrm{d}Z_{t} =\left(\frac{aP_{t}^{2}}{h(\mathrm{turb})^{2}+P_{t}^{2}}Z_{t}-mZ _{t}^{2}\right)\mathrm{d}t+\sigma_{Z}Z_{t}\mathrm{d}W_{t}^{Z},\]
under the external forcing of ocean turbulence. Due to the quadratic mortality term \(mZ^{2}\) of the predator population \(Z\), this system can exhibit multiple stable equilibria and bifurcations as indicated in Fig. 2a. The two white noise terms are assumed to be independent and their strengths \(\sigma_{P}\) and \(\sigma_{Z}\) are chosen such that noise-induced tipping only occurs in close proximity to the bifurcation point.
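For illustration, a stochastic trajectory of this system can be generated with a straightforward Euler-Maruyama scheme as sketched below. All parameter values in the sketch are placeholders (the values actually used are given in SM 3), and the linear ramp of turb is an assumed example; the sketch only illustrates how the multiplicative noise terms and the time-scale separation \(\xi\) enter the integration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Placeholder parameters for illustration only (not the values listed in SM 3).
r, a, m, xi = 1.0, 1.0, 0.5, 0.01
K0, cK, h0, ch = 1.0, 0.5, 0.3, 0.5
sigP, sigZ = 0.05, 0.05

def K(turb):
    return K0 + cK * turb

def h(turb):
    return h0 / (1.0 + ch * turb)

dt, n = 2e-4, 200_000
turb = np.linspace(1.0, 0.0, n)                 # assumed slow ramp of the control parameter

P, Z = np.empty(n), np.empty(n)
P[0], Z[0] = 0.5, 0.5
for i in range(n - 1):
    grazing = a * P[i] ** 2 / (h(turb[i]) ** 2 + P[i] ** 2) * Z[i]
    dP = (r * P[i] * (1.0 - P[i] / K(turb[i])) - grazing) / xi
    dZ = grazing - m * Z[i] ** 2
    # multiplicative noise; the prey noise carries the extra factor xi^(-1/2)
    P[i + 1] = P[i] + dP * dt + sigP * P[i] * rng.normal(0.0, np.sqrt(dt / xi))
    Z[i + 1] = Z[i] + dZ * dt + sigZ * Z[i] * rng.normal(0.0, np.sqrt(dt))
```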
Here, we examine the performance of the conventional and the newly proposed CSD indicators as the system approaches this bifurcation. Fig. 2a shows sample paths for the predator and prey populations along with the stable and unstable equilibria of the prey population \(P\) as implied by the parameter value turb at time \(t\).
The most common approach to the assessment of CSD in a multi-dimensional system such as this one is to first reduce the system to one dimension [46; 8; 47]. The centre manifold theorem states that in close proximity to a critical bifurcation, the direction of lowest stability will be the one to experience further destabilisation. For this reason, a principal component analysis is often performed to determine a linear combination of system components that exhibits the largest variance or AC(1) and can thus be suspected to be of the lowest stability [16; 17; 21]. However, as can be seen in the example at hand, away from the immediate proximity of the bifurcation point, the destabilising direction need not be the direction of lowest stability. Here, the identified direction of stability loss would be closely aligned with the predator population \(Z\), as it operates on a slower time scale. This is problematic, as this dimension is relatively impervious to changes in the control parameter turb and will not exhibit CSD (grey curves in Fig. 2b and c).
To circumvent this issue, one should therefore perform a comprehensive stability analysis on the multi-dimensional time series.
This is achieved by examining both eigenvalues of the local equilibrium dynamics as shown in Fig. 2d. The real part of the larger eigenvalue can be seen to substantially decrease. This provides evidence for a destabilisation along the more stable direction in the eigenspace. Note that the conventional approach of focusing on the direction of the largest eigenvalue would miss this destabili
Figure 2: Application of the CSD indicators on time-series data obtained from the predator-prey model of [40]. (a) Sample paths of the prey population \(P\) (**black**) and the predator population \(Z\) (**grey**). The stable and unstable equilibria of \(P\) in their dependence on turb(\(t\)) are plotted in **red**. (b) and (c) show the means of the conventional CSD indicators variance and AC(1) over \(N=1000\) samples. (d) Real parts \(-\lambda_{1,2}\) corresponding to the estimated eigenvalues of the local Jacobian matrix. The eigenvalues have been assigned (by colour) to the two populations, as the corresponding eigenspace basis aligns very well. All estimations were performed on running windows of length 100, meaning 5000 data points at sampling rate \(\Delta t=2\cdot 10^{-2}\).
Figure 3: Prey population taken as a one-dimensional system (a) and corresponding drift and diffusion coefficients \(a(P,t)\) (b) and \(b(P,t)\) (c) for different time slices. In (b) and (c) the functions are plotted in their \(P\)-dependence, while the time \(t\) is represented by the respective colour of the plot, as indicated in (a). The dashed lines in (b) represent the best linear fits performed on the estimated \(a(P,t)\). Annotated is the value of the estimator \(\hat{\lambda}\) on each window of data, i.e., the negative of the slope of the respective linear fit. Similarly, linear fits in (c) are shown to illustrate the apparent state-dependence, in addition to a linear fit of the entire data (grey dashed).
sation.
Investigating again the conventional variance and AC(1) CSD indicators in Fig. 2b and c, AC(1) seems to indicate a destabilisation along the dimension of the prey population \(P\), but the trend of the observed variance seems to indicate the opposite. This contradiction can be resolved by performing an analogous analysis of the drift and diffusion coefficients for the one-dimensional time series of \(P\) as in the first example above (see Fig. 3). This reduction in dimension can now be motivated by the fact that the vector in eigenspace corresponding to the weakening eigenvalue in Fig. 2d lies predominantly in the direction of \(P\). The slopes of the estimated drift coefficient shown in Fig. 3b decrease as the system moves towards the bifurcation, agreeing with the estimations in Fig. 2d. A clear state-dependence can be identified in the estimated diffusion coefficient (Fig. 3c). Together with the observation of a diminishing mean state in the prey population, it can be concluded that the decrease in variance in Fig. 2b was due to a reduced noise amplitude and can thus be reconciled with the increase in AC(1).
## IV Discussion
Variance and AC(1) are often used in combination to assess whether or not a system is approaching a critical transition. In general, a positive result is considered robust when both indicators show a significant positive trend. We have shown here that in the presence of time- or state-dependent noise amplitudes, the variance of the system may actually decrease ahead of a bifurcation. If a monitored system shows a decreasing trend in variance alongside an increasing trend in AC(1), this would typically not be considered a robust EWS, leading to a missed alarm. An increase in noise strength over time could, on the other hand, lead to a false alarm in the form of an increasing variance in systems with no underlying bifurcation. We have also shown that common methods in dimension reduction can lead to missed alarms, as the destabilising system component may not be the least stable to begin with.
To overcome these problems, we have proposed a method based on deriving a Langevin equation from the observed dynamics. Our approach allows us to separate the effects of possible CSD dynamics contained in the drift coefficient from changes in the noise represented by the diffusion coefficient. It also allows for a more holistic investigation of multi-dimensional systems, without further mechanistic simplifications (see SM 4 for a second example to this point, which shows in particular that the proposed method works for periodic multi-dimensional systems that are problematic for the conventional CSD indicators). We showed that our approach avoids the pitfalls that a conventional CSD analysis suffers from in these examples.
We have shown that in the presented one-dimensional application, the statistical quality of the estimator \(\widehat{\lambda}\) is of the same order as that of the estimators for variance and AC(1) (see also SM 2 for further discussion). However, one important caveat bears mentioning: While for the estimators of variance and AC(1), the length of the time series is the only determining factor of convergence, the estimators for the drift and diffusion coefficients also require small sampling time steps \(1\gg\lambda\Delta t>0\) in order for their bias to be small. In general, the estimator \(\widehat{\lambda}\) proposed here will still contain information about CSD even in settings of large sample time steps \(\Delta t\), but the signal-to-noise ratio may prohibit its employment as a CSD indicator. Areas of application where systems are potentially susceptible to tipping and where high-frequency data may be available for analysis could be electricity grids [48; 49; 50], financial markets [51; 52; 53], atmospheric circulation systems such as monsoons [54; 55], ecosystems and vegetation systems such as the Amazon rainforest [56; 57; 58; 59], ocean circulation systems [3], or ice sheets [60; 61].
Our method should be understood as a more general, reliable, and circumspect indicator of CSD compared to the widely used variance and AC(1). Our approach is appropriate in settings of generally time- and state-dependent driving noise, where the combined conventional indicators fail. Moreover, the ability to examine time series in their multi-dimensional complexity constitutes a considerable improvement in the comprehension of the system compared to one-dimensional summary statistics.
## Data availability
Supplementary Material is available for download at the online version of this manuscript. The implementation of the estimators introduced in this work is available in the GitHub repository KramersMoyalEWS. Also included is the code employed to generate all figures in the main text and the Supplementary Material.
###### Acknowledgements.
This work has received funding from the Volkswagen Stiftung, the European Union's Horizon 2020 research and innovation programme under grant agreement No. 820970 and under the Marie Sklodowska-Curie grant agreement No. 956170, as well as from the Federal Ministry of Education and Research under grant No. 01LS2001A. This is TiPES contribution #X.
## Supplementary Material
### 1. Details on the estimator \(\widehat{\lambda}\)
Here, we give a detailed description of the local stability measure \(\widehat{\lambda}\) in \(n\)-dimensional systems. For \(n=1\)
it is the negative of the slope of the estimated drift coefficient \(a\) around the equilibrium, and therefore always a real number. For \(n>1\), the estimation procedure returns \(n\) eigenvalues of the local Jacobian matrix of the estimated drift coefficient in their algebraic multiplicity. The eigenvalues may be complex, and thus the value of interest investigated in the main text is the negative of the real part of the respective eigenvalues.
Prior to any analysis, the windowed time series data of each of the \(n\) dimensions is linearly detrended. For the assessment of the conventional CSD indicators, the mean of the data is removed, as they rely purely on the fluctuations around the equilibrium state. In contrast, drift and diffusion are assessed without subtraction of the mean to retain information about the corresponding state dependence. In order to obtain numerical stability, the \(n\) time series are normalised to a standard deviation of \(1\), with no implication on the subsequent estimations. For each window, the estimation of the function \(A(\mathbf{x})\) is returned as an array of values
\[(\widehat{A}(\mathbf{x}_{i}))_{i=1,\ldots,M^{n}},\]
where \(M\) is the number of evenly spaced target bins in each dimension. The \(M^{n}\) target bins in \(\mathbb{R}^{n}\), therefore, form a grid on the hypercube spanned by the state space explored by the time series. For each of these bins, an estimation \(\widehat{A}(\mathbf{x}_{i})\) is calculated using an Epanechnikov kernel
\[K(\mathbf{x})=\frac{3}{4h}\left(1-\frac{||\mathbf{x}||^{2}}{h^{2}}\right),\text { with support }||\mathbf{x}||<h,\]
with a kernel bandwidth \(h\) of \(14n/M\). Since the estimator \(\widehat{A}(\mathbf{x}_{i})\) will converge to some (biased) value as the number of samples \(\mathbf{X}_{k\Delta t}\) in the bin \(\mathbf{x}_{i}\) tends to infinity, it is clear that those bins with many samples converge fastest. In our setting with equilibrium dynamics around one stable equilibrium \(\mathbf{x}^{*}\), this means that the estimations for bins closest to \(\mathbf{x}^{*}\) converge fastest, and the quality deteriorates for outer bins. For this reason and in order to curtail the effects of a non-linear drift term, we opt to only carry \(50\%\) of bins centred around the bin containing \(\hat{\mathbf{x}}^{*}=\widehat{\mathbb{E}}[\mathbf{X}]\) to the subsequent analysis. This is to say, we select a hypercube with side lengths \(50\%\) as large as the original hypercube. Thus, we are confronted with fixing three free parameters a priori: The number of total bins \(M\) in each dimension, the percentage \(m\) of bins to carry on either side of the estimated equilibrium \(\widehat{\mathbf{x}}^{*}\), and the kernel bandwidth. In this study, we chose \(M=50\) and \(m=50\%\), meaning that for \(n=1\), we have \(25\) relevant bins for further analysis. The bandwidth is chosen as a function of \(M\) and \(n\), as described above. However, the performance of the estimator \(\widehat{\lambda}\) is not very sensitive to small changes in these parameters.
To obtain an estimation of the local Jacobian matrix around the equilibrium point \(\widehat{\mathbf{x}}^{*}\), we perform a multivariate ordinary linear regression between \((\widehat{A}(\mathbf{x}_{i}))\) and \((\mathbf{x}_{i})\) over \(i\), including an intercept in the design matrix. The algebraic eigenvalues of the resulting matrix are computed numerically. For \(n=1\), this procedure is equivalent to finding a best linear fit \((c-\widehat{\lambda}x_{i})\) to \((\widehat{a}(x_{i}))\) over \(i\).
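A one-dimensional version of this procedure is sketched below: kernel-weighted drift estimates at \(M\) bin centres, restriction to the central 50% of bins around the estimated equilibrium, and an ordinary linear fit whose negative slope is \(\widehat{\lambda}\). The bin number and bandwidth follow the values quoted above; the test process and sample size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, n, lam_true = 0.01, 200_000, 0.8

# Test data: OU process with restoring rate lam_true.
x = np.zeros(n)
for i in range(n - 1):
    x[i + 1] = x[i] - lam_true * x[i] * dt + rng.normal(0.0, np.sqrt(dt))

# Linear detrend and normalisation to unit standard deviation.
tt = np.arange(n)
x = x - np.polyval(np.polyfit(tt, x, 1), tt)
x = x / x.std()

dx = np.diff(x)
M = 50
bandwidth = 14 * 1 / M                              # 14 n / M with n = 1 dimension
centers = np.linspace(x.min(), x.max(), M)

a_hat = np.full(M, np.nan)
for k, c in enumerate(centers):
    u = (x[:-1] - c) / bandwidth
    w = np.where(np.abs(u) < 1.0, 0.75 * (1.0 - u ** 2), 0.0)   # Epanechnikov weights
    if w.sum() > 0:
        a_hat[k] = np.sum(w * dx) / (w.sum() * dt)

# Keep the central 50% of bins around the bin closest to the estimated equilibrium.
k_eq = np.argmin(np.abs(centers - x.mean()))
keep = slice(max(k_eq - M // 4, 0), min(k_eq + M // 4 + 1, M))
slope, _ = np.polyfit(centers[keep], a_hat[keep], 1)
print("lambda_hat =", -slope, "(true value:", lam_true, ")")
```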
### 2. Assessing the statistical quality of \(\hat{\lambda}\)
The two applications presented in the main text demonstrate that CSD manifests itself in a substantial negative trend of the estimator \(\hat{\lambda}\) when enough data are available. In this section, we aim to make this statement concrete and to compare the indicator's performance to that of variance and AC(1). The assessment is based on the width of the three indicators' numerical distribution after application to synthetic data in one dimension. The top row of Fig. S2 shows the distributions of the estimators that arise from the application of the indicators to \(1000\) synthetic time series generated by numerically integrating a time-homogeneous OU process:
\[\mathrm{d}X_{t}=-\lambda X_{t}\mathrm{d}t+\mathrm{d}W_{t}. \tag{6}\]
To analyse the behaviour of the estimators in a generic CSD scenario, i.e., a temporal reduction of the restoring
rate, we plot their distributions for \(\lambda=1\) and \(\lambda=0.1\). If the distributions are sufficiently distinct, the indicators may correctly detect a given reduction of the restoring rate with a high likelihood. For different choices of window lengths and time steps \(\Delta t\), we check numerically whether this condition is satisfied for each estimator (bottom row of Fig. S2). Being more sensitive at low data availability, the estimators for variance and AC(1) perform better than that for \(\hat{\lambda}\). Above a window length of \(T=100\), this difference is negligible, judging by the proposed metric. The difference may also be less pronounced when performing the same test on time series generated by models with non-linear drift or jump-noise, where the state-locality of our method can alleviate non-linear effects on the far ends of the state space. Therefore, in a large range of applications, the estimator \(\hat{\lambda}\) offers a statistically equally performant method of assessing CSD with the additional advantage of robustness with respect to time- and state-dependent noise, substantial advantages in higher-dimensional settings, as well as settings featuring non-linear drifts and jumps in the noise.
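A minimal sketch of this numerical experiment is given below, assuming Euler–Maruyama integration of the OU process (6) and reusing the `lambda_hat_1d` estimator sketched above; the number of realisations is reduced here for brevity.

```python
import numpy as np

def simulate_ou(lam, dt=0.1, n_steps=1000, x0=0.0, seed=None):
    # Euler-Maruyama integration of dX = -lam * X dt + dW
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = x0
    for k in range(n_steps - 1):
        x[k + 1] = x[k] - lam * x[k] * dt + np.sqrt(dt) * rng.standard_normal()
    return x

def indicators(x, dt=0.1):
    # variance, lag-1 autocorrelation and the drift-based restoring rate
    return np.var(x), np.corrcoef(x[:-1], x[1:])[0, 1], lambda_hat_1d(x, dt)

for lam in (1.0, 0.1):
    stats = np.array([indicators(simulate_ou(lam, seed=s)) for s in range(200)])
    print(f"lambda = {lam}: mean = {stats.mean(axis=0)}, std = {stats.std(axis=0)}")
```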
### 3. Details on the predator-prey model
The specific model introduced in the main text is a modification of the Truscott-Brindley model for ocean plankton populations originally introduced in [62]. Bengfort et al. [40] generalised the model by introducing the environmental parameter of fluid turbulence to the system and allowing higher powers in the mortality term of the predator population. The full system equations are given by
\[\xi\dot{p}(t) =rp(t)\left(1-\frac{p(t)}{K(\text{turb})}\right)-\frac{ap(t)^{2} }{h(\text{turb})^{2}+p(t)^{2}}z(t)\] \[\dot{z}(t) =\frac{ap(t)^{2}}{h(\text{turb})^{2}+p(t)^{2}}z(t)-mz(t)^{2}\] \[K(\text{turb}) =K_{0}+c_{K}\cdot\text{turb}\] \[h(\text{turb}) =\frac{h_{0}}{1+c_{h}\cdot\text{turb}}\]
This system has been non-dimensionalised in order to reduce the number of parameters. However, to retrieve realistic values of population sizes in units of density, \(p\) and \(z\) merely need to be multiplied with constants \(p_{0}\) and \(z_{0}\). The first term on the right-hand side of the prey population's evolution \(\dot{p}\) is the population growth rate as determined by the relationship between the current population size and the carrying capacity \(K\). Below that capacity, the population grows and vice versa. The second term is the mortality rate of the prey population, which is simultaneously the growth rate of the predator population since it is assumed that all death in \(p\) and growth in \(z\) occurs through consumption of the former by the latter. The second term in the evolution of \(z\) in the second equation is the quadratic mortality term alluded to in the main text. This ultimately facilitates multiple stable states as opposed to the same model with a linear mortality term. The turbulence \(\text{turb}\in[0,1]\) describes the normalised strength of spatial mixing in the ocean modelled by circular eddies. All parameter values but those for \(\xi\), \(c_{K}\) and \(c_{h}\) are adopted directly from [40] and can be found in Table SI along with a short description of their interpretation. The parameters \(c_{K}\) and \(c_{h}\) were increased by a factor of 2.2 each for the purposes of this study to facilitate a bigger range of stable prey populations in the large population regime. The fundamental nature of the model remains unaltered by this change. Lastly, as described in the main text, we introduced multiplicative noise terms commonly used in the relevant literature [42; 43; 44; 45] to model environmental impacts on the growth and mortality rates of the two populations. This leads us to
the complete set of model equations:
\[\mathrm{d}P_{t} =\xi^{-1}\bigg{(}rP_{t}\bigg{(}1\!-\!\frac{P_{t}}{K(\mathrm{turb})} \bigg{)}\!-\!\frac{aP_{t}^{2}}{h(\mathrm{turb})^{2}\!+\!P_{t}^{2}}Z_{t}\bigg{)} \mathrm{d}t\] \[\quad+\xi^{-1/2}\sigma_{P}P_{t}\mathrm{d}W_{t}^{P}.\] \[\mathrm{d}Z_{t} =\bigg{(}\frac{aP_{t}^{2}}{h(\mathrm{turb})^{2}\!+\!P_{t}^{2}}Z_ {t}-mZ_{t}^{2}\bigg{)}\mathrm{d}t+\sigma_{Z}Z_{t}\mathrm{d}W_{t}^{Z}.\] \[K(\mathrm{turb}) =K_{0}+c_{K}\cdot\mathrm{turb}, \tag{7}\] \[h(\mathrm{turb}) =\frac{h_{0}}{1+c_{h}\cdot\mathrm{turb}}\] \[\mathrm{turb}(t) =1-\frac{7}{10}\frac{t}{T}.\]
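For illustration, the following is a minimal Euler–Maruyama integration (ours) of the system (7) using the parameter values of Table SI; the initial populations, the integration step and the total integration time are illustrative assumptions rather than the values used to produce the figures.

```python
import numpy as np

# parameter values of Table SI
r, a, m, xi = 1.0, 1.0 / 9.0, 0.0525, 0.7
h0, c_h, K0, c_K = 1.0 / 16.0, 0.88, 0.7, 0.66
sigma_P, sigma_Z = 0.037, 0.01

def simulate_plankton(T=1000.0, dt=0.01, P0=0.5, Z0=0.5, seed=0):
    """Euler-Maruyama integration of the stochastic model (7) with the
    turbulence parameter ramped down linearly, turb(t) = 1 - 0.7 t / T."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    P, Z = np.empty(n), np.empty(n)
    P[0], Z[0] = P0, Z0
    for k in range(n - 1):
        turb = 1.0 - 0.7 * (k * dt) / T
        K = K0 + c_K * turb
        h = h0 / (1.0 + c_h * turb)
        grazing = a * P[k] ** 2 / (h ** 2 + P[k] ** 2) * Z[k]
        dP = (r * P[k] * (1.0 - P[k] / K) - grazing) / xi
        dZ = grazing - m * Z[k] ** 2
        P[k + 1] = P[k] + dP * dt + sigma_P * P[k] * np.sqrt(dt / xi) * rng.standard_normal()
        Z[k + 1] = Z[k] + dZ * dt + sigma_Z * Z[k] * np.sqrt(dt) * rng.standard_normal()
    return P, Z
```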
### 4. Additional example of the multi-dimensional stability analysis
In the example of the two-dimensional predator-prey model in the main text, it was revealed by the analysis of the two-dimensional drift coefficient that the dynamics could also be well-represented by two uncoupled one-dimensional SDEs for the predator and prey population, respectively. As a result, the CSD analysis was also comprehensive after a reduction to the prey dimension. However, the dynamics of many systems cannot be reduced in such a way. This is especially relevant for systems exhibiting pronounced oscillations. Using the method outlined above, we therefore additionally assess the local stability of a system undergoing a subcritical Hopf bifurcation in normal form.
\[\mathrm{d}\mathbf{X}_{t} =\begin{pmatrix}-\left(\mu(t)-\left(\mathbf{X}^{(1)}\right)^{2}- \left(\mathbf{X}^{(2)}\right)^{2}\right)\mathbf{X}^{(1)}-\omega\mathbf{X}^{(2 )}\\ -\left(\mu(t)-\left(\mathbf{X}^{(1)}\right)^{2}-\left(\mathbf{X}^{(2)}\right)^{ 2}\right)\mathbf{X}^{(2)}+\omega\mathbf{X}^{(1)}\end{pmatrix}\mathrm{d}t\] \[\quad+\varepsilon\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\mathrm{d}\mathbf{W}_{t},\]
where \(\omega=1\), \(\varepsilon=0.01\), and \(\mu(t)\) decreases linearly from \(2\) to \(0.1\) over the integration time of \(T=1000\). For \(\mu>0\), the origin is a stable fixed point with eigenvalues \(-\mu\pm i\omega\). Furthermore, there is an unstable limit cycle with radius \(\sqrt{\mu}\) and perturbations from the origin decay in the form of spirals. At \(\mu=0\), the radius of the unstable limit cycle reaches zero, and the origin turns into an unstable fixed point.
\begin{table}
\begin{tabular}{c|c|c} parameter & value & description \\ \hline \(r\) & \(1\) & growth rate factor of prey \(P\) \\ \(a\) & \(1/9\) & rate of the predator consuming the prey \\ \(m\) & \(0.0525\) & mortality rate of the predator \\ \(\xi\) & \(0.7\) & time scale separation between prey and predator evolutions \\ \(h_{0}\) & \(1/16\) & factor influencing maximal consumption at zero turbulence \\ \(c_{h}\) & \(0.88\) & linear relationship between turbulence and \(h\) \\ \(K_{0}\) & \(0.7\) & carrying capacity at zero turbulence \\ \(c_{K}\) & \(0.66\) & linear relationship between turbulence and \(K\) \\ \(\sigma_{P}\) & \(0.037\) & strength of noise coupling to \(P\) \\ \(\sigma_{Z}\) & \(0.01\) & strength of noise coupling to \(Z\) \\ \end{tabular}
\end{table}
Table SI: Parameter values used in the simulation of plankton populations following the model in equations (7).
The data was sampled at time steps \(\Delta t=0.1\) and analysed in windows of length \(T=100\). The results are presented in Fig. S3. The real parts of both eigenvalues are known to be \(-\mu(t)\), and the estimations track this value relatively closely. A destabilisation of the equilibrium can clearly be made out. An additional insight gained via the CSD assessment through the Langevin equation approach proposed here is that the local system exhibits oscillatory dynamics, as identified by the complex eigenvalues of the Jacobian matrix.
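A compact sketch of this experiment is given below, assuming Euler–Maruyama integration at the sampling step \(\Delta t=0.1\); the windowed Jacobian estimate shown here is a simplified, bin-free variant of the kernel-based estimator described above.

```python
import numpy as np

def simulate_hopf(T=1000.0, dt=0.1, omega=1.0, eps=0.01, seed=0):
    # Euler-Maruyama integration of the noisy subcritical Hopf normal form,
    # with mu(t) decreasing linearly from 2 to 0.1 over the integration time
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    X = np.zeros((n, 2))
    for k in range(n - 1):
        mu = 2.0 - 1.9 * (k * dt) / T
        x, y = X[k]
        r2 = x * x + y * y
        drift = np.array([-(mu - r2) * x - omega * y,
                          -(mu - r2) * y + omega * x])
        X[k + 1] = X[k] + drift * dt + eps * np.sqrt(dt) * rng.standard_normal(2)
    return X

def jacobian_eigenvalues(window, dt=0.1):
    # least-squares fit of dX/dt ~ J x + c within one window (a bin-free
    # simplification of the kernel-based estimator), then eigenvalues of J
    increments = np.diff(window, axis=0) / dt
    design = np.hstack([window[:-1], np.ones((window.shape[0] - 1, 1))])
    coef, *_ = np.linalg.lstsq(design, increments, rcond=None)
    return np.linalg.eigvals(coef[:2].T)

X = simulate_hopf()
for start in range(0, X.shape[0] - 1000 + 1, 1000):   # windows of length T = 100
    print(jacobian_eigenvalues(X[start:start + 1000]))
```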
|
2309.06891 | Keep It SimPool: Who Said Supervised Transformers Suffer from Attention
Deficit? | Convolutional networks and vision transformers have different forms of
pairwise interactions, pooling across layers and pooling at the end of the
network. Does the latter really need to be different? As a by-product of
pooling, vision transformers provide spatial attention for free, but this is
most often of low quality unless self-supervised, which is not well studied. Is
supervision really the problem?
In this work, we develop a generic pooling framework and then we formulate a
number of existing methods as instantiations. By discussing the properties of
each group of methods, we derive SimPool, a simple attention-based pooling
mechanism as a replacement of the default one for both convolutional and
transformer encoders. We find that, whether supervised or self-supervised, this
improves performance on pre-training and downstream tasks and provides
attention maps delineating object boundaries in all cases. One could thus call
SimPool universal. To our knowledge, we are the first to obtain attention maps
in supervised transformers of at least as good quality as self-supervised,
without explicit losses or modifying the architecture. Code at:
https://github.com/billpsomas/simpool. | Bill Psomas, Ioannis Kakogeorgiou, Konstantinos Karantzalos, Yannis Avrithis | 2023-09-13T11:28:27Z | http://arxiv.org/abs/2309.06891v1 | # Keep It SimPool:
###### Abstract
Convolutional networks and vision transformers have different forms of pairwise interactions, pooling across layers and pooling at the end of the network. Does the latter really need to be different? As a by-product of pooling, vision transformers provide spatial attention for free, but this is most often of low quality unless self-supervised, which is not well studied. Is supervision really the problem?
In this work, we develop a generic pooling framework and then we formulate a number of existing methods as instantiations. By discussing the properties of each group of methods, we derive SimPool, a simple attention-based pooling mechanism as a replacement of the default one for both convolutional and transformer encoders. We find that, whether supervised or self-supervised, this improves performance on pre-training and downstream tasks and provides attention maps delineating object boundaries in all cases. One could thus call SimPool universal. To our knowledge, we are the first to obtain attention maps in supervised transformers of at least as good quality as self-supervised, without explicit losses or modifying the architecture. Code at: [https://github.com/billpsomas/simpool](https://github.com/billpsomas/simpool).
## 1 Introduction
Extracting visual representations and spatial pooling have been two interconnected processes since the study of 2D Gabor filters [17] and early convolutional networks [27]. Modern _convolutional networks_[31, 53] gradually perform local pooling and downsampling throughout the architecture to extract a low-resolution feature tensor, followed by global spatial pooling. _Vision transformers_[22] only downsample at input tokenization and then preserve resolution, but pooling takes place again throughout the architecture via the interaction of patch tokens with a cls token, inherited from language models [21].
The pooling operation has been studied extensively in instance-level tasks on convolutional networks [3, 75], but less so in category-level tasks or transformers. Pooling in transformers is based on weighted averaging, using as weights the 2D _attention map_ of the cls token at the last layer. However, this attention map is typically of low quality, unless under self-supervision [9].
In this work, we argue that vision transformers can be reformulated in two streams, where one is extracting a visual representation on patch tokens and the other is performing spatial pooling on the cls token; whereas, convolutional networks undergo global spatial pooling at the very last
step, before the classifier. In this sense, one can isolate the pooling process from both kinds of networks and replace it by a new one. This raises the following questions:
1. _Can we derive a simple pooling process at the very last step of either convolutional or transformer encoders that improves over their default?_
2. _Can this process provide high-quality attention maps that delineate object boundaries, for both networks?_
3. _Do these properties hold under both supervised and self-supervised settings?_
To answer these questions, we develop a _generic pooling framework_, parametrized by: (a) the number of vectors in the pooled representation; (b) whether pooling is iterative or not; (c) mappings at every stage of the process; (d) pairwise similarities, attention function and normalization; and (e) a function determining the pooling operation.
We then formulate a number of existing pooling methods as instantiations of this framework, including (a) simple pooling mechanisms in convolutional networks [31, 85, 75, 72, 84], (b) iterative methods on more than one vectors like \(k\)-means [59, 55], (c) feature re-weighting mechanisms originally designed as network components rather than pooling [34, 98], and (d) vision transformers [22, 86]. Finally, by discussing the properties of each group of methods, we derive a new, simple, attention-based pooling mechanism as a replacement of the default one for both convolutional and transformer encoders. SimPool provides high-quality attention maps that delineate object boundaries, under both supervised and self-supervised settings, as shown for ViT-S [22] in Figure 1.
In summary, we make the following contributions:
1. We formulate a generic pooling framework that allows easy inspection and qualitative comparison of a wide range of methods.
2. We introduce a simple, attention-based, non-iterative, universal pooling mechanism that provides a single vector representation and answers all the above questions in the affirmative.
3. We conduct an extensive empirical study that validates the superior qualitative properties and quantitative performance of the proposed mechanism on standard benchmarks and downstream tasks.
## 2 Related Work
We discuss the most related work to pooling in convolutional networks and vision transformers. An extended version with more background is given in the appendix.
**Convolutional networks** Early convolutional networks [27, 47] are based on learnable _convolutional layers_ interleaved with fixed _spatial pooling layers_ that downsample. The same design remains until today [46, 80, 31, 53]. Apart from mapping to a new space, convolutional layers involve a form of local pooling and pooling layers commonly take average [47] or maximum [78, 46].
Early networks end in a fully-connected layer over a feature tensor of low resolution [47, 46, 80]. This evolved into spatial pooling, _e.g._ global / regional average followed by a classifier for category-level tasks like classification [49, 31] / detection [28], or global maximum followed by a pairwise loss [85] for instance-level tasks.
The spatial pooling operation at the end of the network is widely studied in instance level-tasks [3, 85, 75], giving rise to forms of _spatial attention_[42, 65, 8, 84, 63]. In category-level tasks, it is more common to study _feature re-weighting_ as components of the architecture [34, 98, 33]. The two are closely related because _e.g._ the weighted average is element-wise weighting followed by sum.
Pooling can be _spatial_[33, 65, 8, 84, 63], _over channels_[34], or both [42, 98]. CBAM [98] is particularly related to our work in the sense that it includes global average pooling followed by a form of spatial attention, although the latter is not evident in its original formulation and although CBAM is not a pooling mechanism.
**Vision transformers** _Pairwise interactions_ between features are forms of pooling or _self-attention_ over the spatial [96, 4, 107, 73] or channel dimensions [10, 93]. Originating in language models [89], _vision transformers_[22] streamlined these approaches and dominated the architecture landscape. Several variants often bring back ideas from convolutional networks [51, 100, 29, 99, 23, 32, 104].
Transformers downsample only at the input, forming spatial _patch tokens_. Pooling is based on a learnable cls token, which, beginning at the input space, undergoes the same self-attention operation with patch tokens and provides a global image representation. That is, the network ends in global weighted average pooling, using as weights the attention of cls over the patch tokens.
Few works that have studied beyond cls for pooling are mostly limited to global average pooling (GAP) [51, 106, 88, 76]. cls offers attention maps for free, however of low quality unless in a self-supervised setting [9], which is not well studied. Few works that attempt to rectify this in the supervised setting include a spatial entropy loss [69], shape distillation from convolutional networks [62] and skipping computation of self-attention [90].
We attempt to address these limitations and study pooling in convolutional networks, vision transformers, supervised and self-supervised alike. We derive a simple, attention-based, universal pooling mechanism, improving both performance and attention maps.
## 3 Method
We develop a generic pooling framework that encompasses many simple or more complex pooling methods, iterative or not, attention-based or not. We then examine a number of methods as instantiations of this framework. Finally, we discuss their properties and make particular choices in designing our solution.
### A generic pooling framework
**Preliminaries** Let \(\mathbf{X}\in\mathbb{R}^{d\times W\times H}\) be the \(3\)-dimensional _feature tensor_ obtained from the last layer of a network for a given input image, where \(d\) is the number of feature channels and \(W,H\) are the width and height. We represent the image by the _feature_ matrix \(X\in\mathbb{R}^{d\times p}\) by flattening the spatial dimensions of \(\mathbf{X}\), where \(p:=W\times H\) is the number of spatial locations. Let \(\mathbf{x}_{i}\in\mathbb{R}^{p}\) denote the \(i\)-th row of \(X\), that is, corresponding to the \(2\)-dimensional feature map in channel \(i\), and \(\mathbf{x}_{j}\in\mathbb{R}^{d}\) denote the \(j\)-th column of \(X\), that is, the feature vector of spatial location \(j\).
By \(\mathbf{1}_{n}\in\mathbb{R}^{n}\), we denote the all-ones vector. Given an \(m\times n\) matrix \(A\geq 0\), by \(\eta_{1}(A):=\operatorname{diag}(A\mathbf{1}_{n})^{-1}A\) we denote row-wise \(\ell_{1}\)-normalization; similarly, \(\eta_{2}(A):=A\operatorname{diag}(\mathbf{1}_{m}^{\top}A)^{-1}\) for column-wise.
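In code, these two normalisations are one line each (a small NumPy helper sketch, ours, for non-negative matrices):

```python
import numpy as np

def eta1(A):
    # row-wise l1-normalisation: diag(A 1_n)^(-1) A
    return A / A.sum(axis=1, keepdims=True)

def eta2(A):
    # column-wise l1-normalisation: A diag(1_m^T A)^(-1)
    return A / A.sum(axis=0, keepdims=True)
```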
**Pooling process** The objective of pooling is to represent the image by one or more vectors, obtained by interaction with \(X\), either in a single step or by an iterative process. We denote the pooling process by function \(\pi:\mathbb{R}^{d\times p}\to\mathbb{R}^{d^{\prime}\times k}\) and the output vectors by matrix \(U=\pi(X)\in\mathbb{R}^{d^{\prime}\times k}\), where \(d^{\prime}\) is the number of dimensions, possibly \(d^{\prime}=d\), and \(k\) is the number of vectors. In the most common case of a single vector, \(k=1\), we denote \(U\) by \(\mathbf{u}\in\mathbb{R}^{d^{\prime}}\). We discuss here the general iterative process; single-step pooling is the special case where the number of iterations is \(1\).
**Initialization** We define \(X^{0}:=X\) and make a particular choice for \(U^{0}\in\mathbb{R}^{d^{0}\times k}\), where \(d^{0}:=d\). The latter may depend on the input \(X\), in which case it is itself a simple form of pooling or not; for example, it may be random or a learnable parameter over the entire training set.
**Pairwise interaction** Given \(U^{t}\) and \(X^{t}\) at iteration \(t\), we define the _query_ and _key_ matrices
\[Q =\phi_{Q}^{t}(U^{t})\in\mathbb{R}^{n^{t}\times k} \tag{1}\] \[K =\phi_{K}^{t}(X^{t})\in\mathbb{R}^{n^{t}\times p}. \tag{2}\]
Here, functions \(\phi_{Q}^{t}:\mathbb{R}^{d^{t}\times k}\to\mathbb{R}^{n^{t}\times k}\) and \(\phi_{K}^{t}:\mathbb{R}^{d^{t}\times p}\to\mathbb{R}^{n^{t}\times p}\) may be the identity, linear or non-linear mappings to a space of the same (\(n^{t}=d^{t}\)) or different dimensions. We let \(K,Q\) interact pairwise by defining the \(p\times k\) matrix \(S(K,Q):=((s(\mathbf{k}_{\text{-}i},\mathbf{q}_{\text{-}j}))_{i=1}^{p})_{j=1}^{k}\), where \(s:\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}\) for any \(n\) is a similarity function. For example, \(s\) can be dot product, cosine similarity, or a decreasing function of some distance. In the case of dot product, \(s(\mathbf{x},\mathbf{y}):=\mathbf{x}^{\top}\mathbf{y}\) for \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{d}\), it follows that \(S(K,Q)=K^{\top}Q\in\mathbb{R}^{p\times k}\).
**Attention** We then define the _attention_ matrix
\[A=h(S(K,Q))\in\mathbb{R}^{p\times k}. \tag{3}\]
Here, \(h:\mathbb{R}^{p\times k}\to[0,1]^{p\times k}\) is a nonlinear function that may be elementwise, for instance \(\operatorname{relu}\) or \(\exp\), normalization over rows or columns of \(S(K,Q)\), or it may yield a form of correspondence or assignment between the columns of \(K\) and \(Q\), possibly optimizing a cost function.
**Attention-weighted pooling** We define the _value_ matrix
\[V=\phi_{V}^{t}(X^{t})\in\mathbb{R}^{n^{t}\times p}. \tag{4}\]
Here, function \(\phi_{V}^{t}:\mathbb{R}^{d^{t}\times p}\to\mathbb{R}^{n^{t}\times p}\) plays a similar role with \(\phi_{Q}^{t},\phi_{K}^{t}\). _Attention-weighted pooling_ is defined by
\[Z=f^{-1}(f(V)A)\in\mathbb{R}^{n^{t}\times k}. \tag{5}\]
Here, \(f:\mathbb{R}\to\mathbb{R}\) is a nonlinear elementwise function that determines the pooling operation, for instance, average or max-pooling. The product \(f(V)A\) defines \(k\) linear combinations over the columns of \(f(V)\), that is, the features at different spatial locations. If the columns of \(A\) are \(\ell_{1}\)-normalized, then those are convex combinations. Thus, matrix \(A\) defines the weights of an averaging operation.
**Output** Finally, we define the output matrices corresponding to image features and pooling,
\[X^{t+1} =\phi_{X}^{t}(X^{t})\in\mathbb{R}^{d^{t+1}\times p} \tag{6}\] \[U^{t+1} =\phi_{U}^{t}(Z)\in\mathbb{R}^{d^{t+1}\times k}. \tag{7}\]
Functions \(\phi_{X}^{t}:\mathbb{R}^{n^{t}\times p}\to\mathbb{R}^{d^{t+1}\times p}\) and \(\phi_{U}^{t}:\mathbb{R}^{n^{t}\times k}\to\mathbb{R}^{d^{t+1}\times k}\) play a similar role with \(\phi_{Q}^{t},\phi_{K}^{t},\phi_{V}^{t}\) but also determine the dimensionality \(d^{t+1}\) for the next iteration.
At this point, we may iterate by returning to the "pairwise interaction" step, or terminate, yielding \(U^{t+1}\) as \(U\) with \(d^{\prime}=d^{t+1}\). Non-iterative methods do not use \(\phi_{X}^{t}\).
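As a reading aid, the following NumPy sketch (ours) spells out a single iteration of this framework for one particular set of choices, namely dot-product similarity and column-wise softmax attention; the mappings \(\phi\) are left as replaceable arguments and default to the identity, and \(f\) to plain averaging.

```python
import numpy as np

def pooling_step(U, X, phi_Q=lambda u: u, phi_K=lambda x: x, phi_V=lambda x: x,
                 f=lambda v: v, f_inv=lambda z: z):
    """One iteration of the generic framework (eqs. 1-7).

    U : (d, k) current pooled vectors, X : (d, p) feature matrix."""
    Q = phi_Q(U)                          # (n, k) queries from the pooled vectors
    K = phi_K(X)                          # (n, p) keys from the features
    V = phi_V(X)                          # (n, p) values
    S = K.T @ Q / np.sqrt(K.shape[0])     # (p, k) scaled pairwise similarities
    E = np.exp(S - S.max(axis=0, keepdims=True))
    A = E / E.sum(axis=0, keepdims=True)  # attention, l1-normalised over locations
    return f_inv(f(V) @ A)                # (n, k) attention-weighted pooling
```

With a uniform attention column this step reduces to GAP, while a learnable \(U^{0}\), learned linear mappings and repeated application mimic the cls-based pooling stream of a vision transformer.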
### A pooling landscape
Table 1 examines a number of pooling methods as instantiations of our framework. The objective is to get insight into their basic properties. How this table was obtained is detailed in the appendix.
Group 1 consists of simple methods with \(k=1\) that are not attention-based and have been studied in category-level tasks [31, 72] or mostly in instance-level tasks [85, 75, 84]. Here, the attention is a vector \(\mathbf{a}\in\mathbb{R}^{p}\) and either is uniform or depends directly on \(X\), by pooling over channels [84]. Most important is the choice of pooling operation by function \(f\). Log-sum-exp [72] arises with \(f(x)=e^{rx}\) with
learnable scale \(r\). For the rest, we define \(f=f_{\alpha}\), where
\[f_{\alpha}(x):=\left\{\begin{array}{ll}x^{\frac{1-\alpha}{2}},&\text{if }\alpha\neq 1,\\ \ln x,&\text{if }\alpha=1.\end{array}\right. \tag{8}\]
As studied by Amari [1], function \(f_{\alpha}\) is defined for \(x\geq 0\) (\(\alpha\neq 1\)) or \(x>0\) (\(\alpha=1\)). It reduces to the maximum, quadratic mean (RMS), arithmetic mean, geometric mean, harmonic mean, and minimum for \(\alpha=-\infty,-3,-1,1,3,+\infty\), respectively. It has been proposed as a transition from average to max-pooling [7] and is known as GeM [75], with \(\gamma=(1-\alpha)/2>1\) being a learnable parameter.
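As a quick numerical check of these special cases (a sketch of ours, assuming uniform averaging weights), consider:

```python
import numpy as np

def power_mean(x, alpha):
    # mean implied by f_alpha in eq. (8), for uniform averaging weights
    if alpha == 1:
        return np.exp(np.mean(np.log(x)))      # geometric mean
    gamma = (1.0 - alpha) / 2.0
    return np.mean(x ** gamma) ** (1.0 / gamma)

x = np.array([1.0, 2.0, 4.0])
print(power_mean(x, -3))   # quadratic mean (RMS)  ~2.65
print(power_mean(x, -1))   # arithmetic mean       ~2.33
print(power_mean(x,  1))   # geometric mean         2.00
print(power_mean(x,  3))   # harmonic mean         ~1.71
```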
_Group 2_ incorporates iterative methods with \(k>1\), including standard \(k\)-means, the soft-clustering variant Slot Attention [55] and optimal transport between \(U\) and \(X\)[59]. The latter is not formally iterative according to our framework, but the Sinkhorn algorithm is iterative internally.
_Group 3_ refers to methods introduced as modules within the architecture rather than pooling mechanisms [34, 98]. An interesting aspect is initialization of \(U^{0}\) by _global average pooling_ (GAP) on \(X\):
\[\pi_{A}(X):=X\mathbf{1}_{p}/p=\frac{1}{p}\sum_{j=1}^{p}\mathbf{x}_{,j}\in \mathbb{R}^{d}, \tag{9}\]
where \(\mathbf{1}_{p}\in\mathbb{R}^{p}\) is the all-ones vector. Channel attention (\(\phi_{Q}(U)\)) and spatial attention (\(A\)) in CBAM [98] are based on a few layers followed by sigmoid, playing the role of a binary classifier (_e.g._ foreground/background); whereas, transformer-based attention uses directly the query and softmax normalization, respectively. Although not evident in the original formulation, we show in the appendix that there is pairwise interaction.
_Group 4_ refers to vision transformers [22, 86], which we reformulate in two separate streams, one for the cls token, \(U\), and another for the patch tokens, \(X\). We observe that, what happens to the cls token throughout the entire encoder, is an iterative pooling process. Moreover, although \(U\) is just one vector, multi-head attention splits it into \(m\) subvectors, where \(m\) is the number of heads. Thus, \(m\) is similar to \(k\) in \(k\)-means. The difference of CaiT [86] from ViT [22] is that this iteration happens only in the last couple of layers, with the patch embeddings \(X\) being fixed.
### SimPool
_Group 5_ of Table 1 is our method, SimPool. A schematic overview is given in Figure 2.
**Pooling process** We are striving for a simple design. While pooling into \(k>1\) vectors would yield a more discriminative representation, either these would have to be concatenated, as is the case of multi-head attention, or a particular similarity kernel would be needed beyond dot product, which we consider to be beyond the scope of this
\begin{table}
\begin{tabular}{l l l l l l l l l l l l l l} \# & Method & Cat & Iter & \(k\) & \(U^{0}\) & \(\phi_{Q}(U)\) & \(\phi_{K}(X)\) & \(s(\mathbf{x},\mathbf{y})\) & \(A\) & \(\phi_{V}(X)\) & \(f(x)\) & \(\phi_{X}(X)\) & \(\phi_{U}(Z)\) \\ \hline \multirow{4}{*}{1} & GAP [31] & ✓ & \multirow{4}{*}{\(\mathbf{1}\)} & \multirow{4}{*}{\(\mathbf{1}_{p}/p\)} & \multirow{4}{*}{\(\mathbf{X}\)} & \multirow{4}{*}{\(f_{-1}(x)\)} & \multirow{4}{*}{\(f_{-1}(x)\)} & \multirow{4}{*}{\(\mathbf{Z}\)} \\ & max [85] & & & & & & & & & \(\mathbf{1}_{p}\) & \(X\) & \(f_{-\alpha}(x)\) & & \(Z\) \\ & Gad [75] & & & & & & & & \(\mathbf{1}_{p}\)/p & \(X\) & \(f_{\alpha}(\mathbf{z})\) & & \(Z\) \\ & LSE [72] & ✓ & & & & & & & & \(\mathbf{1}_{p}\)/p & \(X\) & \(f_{\alpha}(\mathbf{z})\) & & \(Z\) \\ & HOW [84] & & & & & & & & & \(\mathbf{1}_{p}\)/p & \(X\) & \(f_{\alpha}(\mathbf{z})\) & & \(Z\) \\ \hline \multirow{4}{*}{2} & OTK [59] & ✓ & \multirow{4}{*}{\(k\)} & \multirow{4}{*}{\(U\)} & \multirow{4}{*}{\(U\)} & \multirow{4}{*}{\(X\)} & \multirow{4}{*}{\(-\|\mathbf{x}-\mathbf{y}\|^{2}_{\mathbf{Sinkhorn}}(e^{\beta/*})\)} & \multirow{4}{*}{\(\psi(X)\)} & \multirow{4}{*}{\(f_{-1}(x)\)} & \multirow{4}{*}{\(Z\)} \\ & \(k\)-means & & ✓ & \(k\) & random & & & \(U\) & \(X\) & \(-\|\mathbf{x}-\mathbf{y}\|^{2}_{\mathbf{\eta}}\) & \(\eta_{2}(\arg\max_{\mathbf{z}}(S))\) & \(X\) & \(f_{-1}(x)\) & \(X\) & \(Z\) \\ & Slot [55]* & ✓ & ✓ & \(k\) & \(U\) & \(W_{Q}U\) & \(W_{K}X\) & \(\mathbf{x}^{\top}\mathbf{y}\) & \(\boldsymbol{\sigma}_{2}(S/\sqrt{d})\) & \(W_{V}X\) & \(f_{-1}(x)\) & \(X\) & \(\mathrm{mlp}(\mathrm{gwt}(Z))\) \\ \hline \multirow{4}{*}{3} & SE [34] & ✓ & \multirow{4}{*}{\(\mathbf{1}\)} & \multirow{4}{*}{\(\pi_{A}(X)\)} & \multirow{4}{*}{\(\sigma(\mathrm{mlp}(U))\)} & \multirow{4}{*}{\(\mathrm{diag}(\mathbf{a})X\)} & \multirow{4}{*}{\(V\)} \\ & CBAM [98]* & ✓ & & & & & & & & & & & \\ \cline{1-1} \cline{6-12} & ViT [22]* & ✓ & & & & & & & & & & & \\ \cline{1-1} \cline{6-12} & CarT [86]* & ✓ & & & & & & & & & & & & \\ \cline{1-1} \cline{6-12} & \multicolumn{1}{c}{} & \multirow{4}{*}{\(\mathbf{1}\)} & \multirow{4}{*}{\(\pi_{A}(X)\)} & \multirow{4}{*}{\(\mathbf{1}\)} & \multirow{4}{*}{\(\pi_{A}(X)\)} & \multirow{4}{*}{\(\mathbf{1}\)} & \multirow{4}{*}{\(\pi_{A}(X)\)} & \multirow{4}{*}{\(\boldsymbol{\sigma}(\mathrm{mlp}(U))\)} & \multirow{4}{*}{\(X\)} & \multirow{4}{*}{\(\boldsymbol{\sigma}(\mathrm{conv}(Y))\)} & \multirow{4}{*}{\(\boldsymbol{\sigma}(\mathrm{conv}(S))\)} & \multirow{4}{*}{\(\mathrm{diag}(\mathbf{a})\)} \\ \end{tabular}
\end{table}
Table 1: A landscape of pooling methods. Cat: used in category-level tasks; Iter: iterative; *: simplified. \(\pi_{A}\): GAP; \(\sigma\): sigmoid; \(\boldsymbol{\sigma}_{2}\): softmax over columns; \(\eta_{2}\): column normalization; \(g_{m}\): partitioning in \(m\) groups (see appendix). Cyan: ours; gray: common choices with ours; green: learnable; red: hyperparameter; blue: detailed in the appendix.
work. We rather argue that it is the task of the encoder to learn a single vector representation of objects, even if those are composed of different parts. This argument is stronger when pre-training is performed on images mostly depicting one object, like ImageNet-1k.
We observe in Table 1 that only methods explicitly pooling into \(k>1\) vectors or implicitly using \(m>1\) heads are iterative. We explain why in the next paragraph. Following this insight, we perform pooling in a single step.
In summary, our solution is limited to a single vector \(\mathbf{u}\in\mathbb{R}^{d}\) for pooling, that is, \(k=1\), and is non-iterative.
**Initialization** We observe in Table 1 that single-step attention-based methods in Group 3 initialize \(\mathbf{u}^{0}\) by GAP. We hypothesize that, since attention is based on pairwise similarities, it is essential that \(\mathbf{u}^{0}\) is chosen such that its similarities with \(X\) are maximized on average, which would help to better discriminate between foreground (high similarity) and background (low similarity). Indeed, for \(s(\mathbf{x},\mathbf{y})=-\|\mathbf{x}-\mathbf{y}\|^{2}\), the sum of squared Euclidean distances of each column \(\mathbf{x}_{\centerdot i}\) of \(X\) to \(\mathbf{u}\in\mathbb{R}^{d}\)
\[J(\mathbf{u})=\frac{1}{2}\sum_{i=1}^{p}\|\mathbf{x}_{\centerdot i}-\mathbf{u} \|^{2} \tag{10}\]
is a convex distortion measure with unique minimum the average of vectors \(\{\mathbf{x}_{\centerdot i}\}\)
\[\mathbf{u}^{\star}:=\arg\min_{\mathbf{u}\in\mathbb{R}^{d}}J(\mathbf{u})=\frac {1}{p}\sum_{i=1}^{p}\mathbf{x}_{\centerdot i}=\pi_{A}(X), \tag{11}\]
which can be found in closed form. By contrast, for \(k>1\) vectors, distortion can only be minimized iteratively, _e.g._ by \(k\)-means. We therefore choose:
\[\mathbf{u}^{0}=\pi_{A}(X)=X\mathbf{1}_{p}/p. \tag{12}\]
**Pairwise interaction, attention** We follow the attention mechanism of transformers, in its simplest possible form. In particular, we use a single head, \(m=1\), like Slot Attention [55] (which however uses \(k\) vectors). We find that the query and key mappings are essential in learning where to attend as a separate task from learning the representation for the given task at hand. In particular, we use linear mappings \(\phi_{Q},\phi_{K}\) with learnable parameters \(W_{Q},W_{K}\in\mathbb{R}^{d\times d}\) respectively:
\[\mathbf{q} =\phi_{Q}(\mathbf{u}^{0})=W_{Q}\mathbf{u}^{0}\in\mathbb{R}^{d} \tag{13}\] \[K =\phi_{K}(X)=W_{K}X\in\mathbb{R}^{d\times p}. \tag{14}\]
As in transformers, we define pairwise similarities as dot product, that is, \(S(K,\mathbf{q})=K^{\top}\mathbf{q}\in\mathbb{R}^{p\times k}\), and attention as scaled softmax over columns (spatial locations), that is, \(h(S):=\boldsymbol{\sigma}_{2}(S/\sqrt{d})\):
\[\mathbf{a}=\boldsymbol{\sigma}_{2}\left(K^{\top}\mathbf{q}/\sqrt{d}\right) \in\mathbb{R}^{p}, \tag{15}\]
where \(\boldsymbol{\sigma}_{2}(S):=\eta_{2}(\exp(S))\) and \(\exp\) is taken elementwise.
**Attention-weighted pooling** As shown in Table 1, the average pooling operation (\(f=f_{-1}\)) is by far the most common. However, the more general function \(f_{\alpha}\) (8) has shown improved performance in instance-level tasks [75]. For \(\alpha<-1\) (\(\gamma>1\)) in particular, it yields an intermediate operation between average and max-pooling. The latter is clearly beneficial when feature maps are sparse, because it better preserves the non-zero elements.
We adopt \(f=f_{\alpha}\) for its genericity: the only operation that is not included as a special case in Table 1 is log-sum-exp [72]. This choice assumes \(X\geq 0\). This is common in networks ending in \(\mathrm{relu}\), like ResNet [31], which is also what makes feature maps sparse. However, vision transformers and modern convolutional networks like Conv-vNeXt [53] do not end in \(\mathrm{relu}\); hence \(X\) has negative elements and is not necessarily sparse. We therefore define
\[V=\phi_{V}(X)=X-\min X\in\mathbb{R}^{d\times p}, \tag{16}\]
where the minimum is taken over all elements of \(X\), such that \(f_{\alpha}\) operates only on non-negative numbers.
We also define \(\mathbf{u}=\phi_{U}(\mathbf{z})=\mathbf{z}\) and the output dimension is \(d^{\prime}=d\). Thus, the mappings \(\phi_{V},\phi_{U}\) are parameter-free. The argument is that, for average pooling for example (\(f=f_{-1}\) in (5)), any linear layers before or after pooling would commute with pooling, thus they would form part of the encoder rather than the pooling process. Moreover, Table 1 shows that \(\phi_{U}\) is non-identity only for iterative methods.
In summary, we define SimPool (sp) as
\[\mathbf{u}=\pi_{\texttt{sp}}(X):=f_{\alpha}^{-1}(f_{\alpha}(V)\mathbf{a})\in \mathbb{R}^{d}, \tag{17}\]
where \(V\in\mathbb{R}^{d\times p}\) is the value (16) and \(\mathbf{a}\in\mathbb{R}^{p}\) is the attention map (15). Parameter \(\alpha\) is learned in GeM [75], but we find that treating it as a hyperparameter better controls the quality of the attention maps.
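The complete mechanism fits in a few lines. The NumPy sketch below is ours and is not the official implementation linked in the abstract; it only makes the data flow of equations (12)-(17) explicit, omits batching and the LayerNorm placement discussed in subsection 4.4, and the small `eps` is added purely for numerical safety.

```python
import numpy as np

def simpool(X, W_q, W_k, gamma=2.0, eps=1e-6):
    """Single-image sketch of SimPool, eqs. (12)-(17).

    X   : (d, p) feature matrix from the encoder
    W_q : (d, d) learnable query projection
    W_k : (d, d) learnable key projection
    gamma = (1 - alpha) / 2; gamma = 1 recovers plain weighted averaging."""
    d, p = X.shape
    u0 = X.mean(axis=1)                           # (12) GAP initialisation
    q = W_q @ u0                                  # (13) query
    K = W_k @ X                                   # (14) keys
    s = K.T @ q / np.sqrt(d)                      # scaled dot-product similarities
    a = np.exp(s - s.max())
    a /= a.sum()                                  # (15) softmax attention over locations
    V = X - X.min() + eps                         # (16) shift to non-negative values
    u = (V ** gamma @ a) ** (1.0 / gamma)         # (17) attention-weighted GeM pooling
    return u, a                                   # pooled representation and attention map
```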
## 4 Experiments
### Datasets, networks and evaluation protocols
**Supervised pre-training** We train ResNet-18, ResNet-50 [31], ConvNeXt-S [53], ViT-S and ViT-B [22] for _image classification_ on ImageNet-1k. For the analysis subsection 4.2 and ablation subsection 4.4, we train ResNet-18 on the first 20% of training examples per class of ImageNet-1k [19] (called ImageNet-20%) for 100 epochs. For the benchmark of subsection 4.3, we train ResNet-50 for 100 and 200 epochs, ConvNeXt-S and ViT-S for 100 and 300 epochs and ViT-B for 100 epochs, all on the 100% of ImageNet-1k. We evaluate on the full validation set in all cases and measure top-1 classification accuracy. The baseline is the default per network, _i.e._ GAP for convolutional networks and cls token for transformers.
**Self-supervised pre-training** On the 100% of ImageNet-1k, we train DINO [9] with ResNet-50, ConvNeXt-S and ViT-S for 100 epochs. We evaluate on the validation set by \(k\)-NN and _linear probing_ on the training set. For _linear probing_, we train a linear classifier on top of features as in DINO [9]. For \(k\)-NN [101], we freeze the model and extract features, then use a \(k\)-nearest neighbor classifier with \(k=10\).
**Downstream tasks** We fine-tune supervised and self-supervised ViT-S on CIFAR-10 [45], CIFAR-100 [45] and Oxford Flowers [64] for _image classification_, measuring top-1 classification accuracy. We perform _object localization_ without fine-tuning using supervised and self-supervised ViT-S on CUB [92] and ImageNet-1k, measuring MaxBoxAccV2 [12]. We perform _object discovery_ without fine-tuning using self-supervised ViT-S with DINO-SEG [9] and LOST [79] on VOC07 [24], VOC12 [24] and COCO [50], measuring CorLoc [20]. We validate _robustness_ against background changes using ViT-S on ImageNet-9 [102] and its variations. We use the linear head and linear probe for supervised and self-supervised ViT-S, respectively, measuring top-1 classification accuracy.
In the appendix, we provide implementation details, more benchmarks, ablations and visualizations.
### Experimental Analysis
Figure 3 evaluates different methods in groups following Table 1, regardless of their original design for (a) pooling or not, (b) different tasks, _e.g_. instance-level or category-level, (c) different networks, _e.g_. convolutional or transformers.
_Group 1_ consists of simple pooling methods with: (a) no parameters: GAP [49], max [85], GAP+\(\max\)[48]; and (b) scalar parameter: GeM [75] and LSE [72]. HOW [84] is the only method to use (parameter-free) attention. GeM is performing the best, with LSE following second. These methods are inferior to those in other groups.
_Group 2_ incorporates methods with \(k>1\) vectors. We set \(k=3\) and take the maximum of the \(3\) logits per class. OTK and Slot use attention. Slot attention [55] works best, outperforming \(k\)-means by 1.3%.
_Group 3_ refers to parametric attention-based methods, weighting features based on their importance for the task: CBAM [98], Squeeze-Excitation [34] and Gather-Excite [33]. While originally designed as components within the architecture, we adapt them to pooling by GAP at the end. Gather-Excite [33] performs best.
_Group 4_ refers to parametric attention-based methods found in vision transformers. ViT [22] refers to multi-head self-attention learnable cls and four heads, which we incorporate as a single layer at the end of the model. CaiT [86] is the same but using only cross-attention between cls and patch embeddings. CaiT performs the best.
SimPool outperforms all other methods. Seeing this experiment as a tournament, we select the best performing method of each group and qualify it for the benchmark of subsection 4.3.
### Benchmark
**Image Classification** Table 2 compares SimPool with baseline and tournament winners per group of subsection 4.2 on supervised pre-training for classification. For 100 epochs, SimPool outperforms all methods, consistently improving the baseline by 0.6% using convolutional networks, 1.6% using ViT-S and 1.0% using ViT-B. Gather-Excite [33] improves over the baseline only on convolutional networks, while Slot [55] only on ViT-S. CaiT improves over the baseline only for ConvNeXt-S. By contrast, SimPool improves everywhere. For more than 100 epochs, SimPool improves the baseline by 0.5% using ResNet-50, 0.4% using ConvNeXt-S and 0.8% using ViT-S.
Table 3 evaluates self-supervised pre-training for 100 epochs. SimPool improves over the baseline by 2.0% \(k\)-NN and 1.4% linear probing on ResNet-50; 3.7% \(k\)-NN and 4.0% linear probing on ConvNeXt-S; and 0.9% \(k\)-NN and 1.3% linear probing on ViT-S.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Method & Ep & ResNet-50 & ConvNext-S & ViT-S & ViT-B \\ \hline Baseline & 100 & 77.4 & 81.1 & 72.7 & 74.1 \\ CaiT [86] & 100 & 77.3 & 81.2 & 72.6 & - \\ Slot [55] & 100 & 77.3 & 80.9 & 72.9 & - \\ GE [33] & 100 & 77.6 & 81.3 & 72.6 & - \\ SimPool & 100 & **78.0** & **81.7** & **74.3** & **75.1** \\ \hline Baseline & 300 & 78.1\({}^{\dagger}\) & 83.1 & 77.9 & - \\ SimPool & 300 & **78.7\({}^{\dagger}\)** & **83.5** & **78.7** & - \\ \hline \hline \end{tabular}
\end{table}
Table 2: _Image classification_ top-1 accuracy (%) on ImageNet-1k. Supervised pre-training for 100 and 300 epochs. Best competitors selected per group from Figure 3. Baseline: GAP for convolutional, cls for transformers; ep: epochs; \({}^{\dagger}\): 200 epochs.
Figure 3: _Image classification_ on ImageNet-20%. Supervised training of ResNet-18 for 100 epochs.
**Fine-tuning for classification** Table 4 evaluates fine-tuning for classification on different datasets of a supervised and a self-supervised ViT-S. SimPool brings small improvement over the baseline in all cases.
**Object localization** Accurate localization can have a significant impact on classification accuracy, particularly under multiple objects, complex scenes and background clutter. Table 5 evaluates localization accuracy under both supervision settings. SimPool significantly improves the baseline by up to 7% MaxBoxAccV2 when self-supervised and up to 14% when supervised. In the latter case, the gain is already up to 12% at epoch 20.
Table 9 studies the trade-off between performance and parameters: we remove blocks from the network (Base) when using SimPool and add blocks when using cls. We find that, to exceed the accuracy of Base SimPool, Base cls needs 5 extra blocks, _i.e._, 9M more parameters. Equally interestingly, removing 3 blocks from Base SimPool is only slightly worse than Base cls, having 5M fewer parameters.
### Ablation study
We ablate the design and components of SimPool. More ablations are found in the appendix. In particular, for function \(f_{\alpha}\) (8), we set \(\gamma=2\) for convolutional networks and \(\gamma=1.25\) for transformers by default, where \(\gamma=(1-\alpha)/2\) is a hyperparameter.
**Design** In Table 10 (left), we ablate (a) the attention function \(h\) (3); (b) the number of iterations with shared parameters at every iteration (Layers) or not (Iter); (c) the initialization \(U^{0}\); (d) the pairwise similarity function \(s\); (e) the number \(k\) of pooled vectors, obtained by \(k\)-means instead of GAP. We also consider queries and keys sharing the same mapping, \(W_{Q}=W_{K}\). We observe that multi-head, few iterations and initialization by \(\mathrm{diag}(X^{\top}X)\) perform slightly worse, without adding any extra parameters, while setting \(W_{Q}=W_{K}\) performs slightly worse, having 50% fewer parameters.
**Linear and LayerNorm layers** In Table 10 (right), we systematically ablate linear and LayerNorm (LN) [2] layers on query \(q\), key \(k\) and value \(v\). We strive for performance and quality while at the same time having a small number of components and parameters. In this sense, we choose the setup that includes linear layers on \(q,k\) and LN on \(k,v\), yielding 56.6 accuracy. We observe that having linear and LN layers everywhere performs best under classification accuracy. However, this setup has attention maps of lower quality and more parameters.
## 5 Conclusion
We have introduced SimPool, a simple, attention-based pooling mechanism that acts at the very last step of either convolutional or transformer encoders, delivering highly superior quantitative results on several benchmarks and downstream tasks. In addition, SimPool delivers decent attention maps in both convolutional and transformer networks under both supervision and self-supervision with remarkable improvement in delineating object boundaries for supervised transformers. Despite this progress, we believe that investigating why the standard cls-based attention fails under supervision deserves further study.
**Acknowledgements** This work was supported by the Hellenic Foundation for Research and Innovation (HFRI) under the BiCUBES project (grant: 03943). It was also supported by the RAMONES and iToBos EU Horizon 2020 projects, under grants 101017808 and 965221, respectively. NTUA thanks NVIDIA for the support with the donation of GPU hardware.
\begin{table}
\begin{tabular}{l c c|c c c} \hline \hline Ablation & Options & Acc & Linear & LN & Acc \\ \hline \multirow{3}{*}{\(h(S)\)} & \(\boldsymbol{\sigma}_{2}(S_{i}/\sqrt{d})_{i=1}^{m}\) & 56.6 & & \(Q\) & \(K\) & \(V\) & \multirow{3}{*}{\(\boldsymbol{Q}\)} & \multirow{3}{*}{\(K\)} & \multirow{3}{*}{\(V\)} \\ & \(\eta_{2}(\boldsymbol{\sigma}_{1}(S/\sqrt{d}))\) & 55.6 & ✓ & ✓ & ✓ & ✓ & ✓ & **57.0** \\ \hline \multirow{3}{*}{Layers} & 3 & 56.8 & ✓ & ✓ & ✓ & ✓ & **56.6** \\ & 5 & 55.9 & ✓ & & ✓ & ✓ & 56.5 \\ \hline \multirow{3}{*}{Iter} & 3 & 56.5 & & ✓ & ✓ & ✓ & 56.4 \\ & 5 & 56.4 & & & ✓ & ✓ & 55.6 \\ \hline \multirow{3}{*}{\(U^{0}\)} & \(U\) & 56.3 & ✓ & ✓ & ✓ & ✓ & 56.3 \\ & \(\mathrm{diag}(X^{\top}X)\) & 56.6 & ✓ & ✓ & ✓ & 56.0 \\ \hline \multirow{3}{*}{\(s(\mathbf{x},\mathbf{y})\)} & \(-\|\mathbf{x}-\mathbf{y}\|^{2}\) & 56.5 & ✓ & ✓ & ✓ & 56.2 \\ & cosine & 56.3 & ✓ & ✓ & ✓ & ✓ & **56.6** \\ \hline \multirow{3}{*}{\(k\) (max)} & 2 & 56.5 & ✓ & ✓ & & ✓ & 56.4 \\ & 5 & 56.4 & ✓ & ✓ & ✓ & ✓ & 56.2 \\ \cline{1-1} & 2 & 56.5 & & & & & 56.2 \\ \cline{1-1} & 5 & 55.9 & ✓ & ✓ & & & 54.4 \\ \hline \(\phi_{Q}\), \(\phi_{K}\) & \(W_{Q}=W_{K}\) & 56.4 & & & & & 54.5 \\ \hline SimPool & & **57.1** & GAP & & & 55.0 \\ \hline \hline \end{tabular}
\end{table}
Table 10: SimPool ablation on ImageNet-20% using ResNet-18 trained for 100 epochs. Ablation of (left) design; (right) linear and LayerNorm (LN) [2] layers. \(q,k,v\): query, key, value. \(\boldsymbol{\sigma}_{2}(S_{i}/\sqrt{d})_{i=1}^{m}\): same as our default, but with multi-head attention, \(m=4\) heads; \(k\) (max): maximum taken over output logits; \(k\) (concat): concatenation and projection to the same output dimensions \(d^{\prime}\). Green: learnable parameter; blue: winning choice per group of experiments; Cyan: Our chosen default. Using pooling operation \(f=f_{\alpha}\) (8) (left); \(f=f_{-1}\) (right).
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Network & Pooling & Depth & Init & Accuracy & \#params \\ \hline \hline Base & GAP & 12 & 12 & 73.3 & 22.1M \\ \hline Base & & 12 & 0 & 72.7 & 22.1M \\ Base \(+1\) & & 13 & 0 & 73.2 & 23.8M \\ Base \(+2\) & cls & 14 & 0 & 73.7 & 25.6M \\ Base \(+3\) & & 15 & 0 & 73.8 & 27.4M \\ Base \(+4\) & & 16 & 0 & 73.9 & 29.2M \\ Base \(+5\) & & 17 & 0 & **74.6** & 30.9M \\ \hline Base & & 12 & 12 & **74.3** & 22.3M \\ Base \(-1\) & SimPool & 11 & 11 & 73.9 & 20.6M \\ Base \(-2\) & & 10 & 10 & 73.6 & 18.7M \\ Base \(-3\) & & 9 & 9 & 72.5 & 17.0M \\ \hline \hline \end{tabular}
\end{table}
Table 9: _Trade-off between performance and parameters._ Supervised pre-training of ViT-S on ImageNet-1k for 100 epochs. Init: Initial layer of pooling token. Base: original network. Base\(+b\) (Base\(-b\)): \(b\) blocks added to (removed from) the network. |
2309.07231 | A new covariant formalism for kinetic plasma simulations in curved
spacetimes | Low density plasmas are characterized by a large scale separation between the
gyromotion of particles around local magnetic fields and the macroscopic scales
of the system, often making global kinetic simulations computationally
intractable. The guiding center formalism has been proposed as a powerful tool
to bridge the gap between these scales. Despite its usefulness, the guiding
center approach has been formulated successfully only in flat spacetimes,
limiting its applicability in astrophysical settings. Here, we present a new
covariant formalism that leads to kinetic equations in the guiding center limit
that are valid in arbitrary spacetimes. Through a variety of experiments, we
demonstrate that our equations capture all known gyro-center drifts while
overcoming one severe limitation imposed on numerical algorithms by the fast
timescales of the particle gyromotion. This formalism will enable explorations
of a variety of global plasma kinetic phenomena in the curved spacetimes around
black holes and neutron stars. | Tyler Trent, Pierre Christian, Chi-kwan Chan, Dimitrios Psaltis, Feryal Ozel | 2023-09-13T18:08:24Z | http://arxiv.org/abs/2309.07231v1 | # A new covariant formalism for kinetic plasma simulations in curved spacetimes
###### Abstract
Low density plasmas are characterized by a large scale separation between the gyromotion of particles around local magnetic fields and the macroscopic scales of the system, often making global kinetic simulations computationally intractable. The guiding center formalism has been proposed as a powerful tool to bridge the gap between these scales. Despite its usefulness, the guiding center approach has been formulated successfully only in flat spacetimes, limiting its applicability in astrophysical settings. Here, we present a new covariant formalism that leads to kinetic equations in the guiding center limit that are valid in arbitrary spacetimes. Through a variety of experiments, we demonstrate that our equations capture all known gyro-center drifts while overcoming one severe limitation imposed on numerical algorithms by the fast timescales of the particle gyromotion. This formalism will enable explorations of a variety of global plasma kinetic phenomena in the curved spacetimes around black holes and neutron stars.
## 1 Introduction
Rarefied plasmas are ubiquitous in a diverse range of astrophysical systems, from the heliosphere to the intracluster medium and the accretion flows around black holes. Magnetohydrodynamics (MHD), sometimes in its general relativistic formulation (GRMHD), is often the method of choice when modeling these systems. While useful for understanding their overall dynamics, MHD makes a number of assumptions and approximations that are often not valid in the low density regime. For example, fluid approaches cannot capture phenomena that arise from large mean-free-paths (\(\lambda_{\rm mfp}\)) of particles, pressure anisotropies, and non-ideal acceleration and dissipation effects. All of these effects determine the observational appearance of plasmas and impact the interpretation of astrophysical observations. Of recent interest are imaging observations of nearby black holes with the Event Horizon Telescope (Event Horizon Telescope Collaboration, 2019, 2022) and the electromagnetic counterparts of neutron-star mergers (LIGO Scientific Collaboration & Virgo Collaboration, 2017).
To resolve the shortcomings of the fluid models, one needs to rely on kinetic approaches and solve for the individual motions of charged particles and the fields they produce. While able to resolve much of the microphysics that fluid simulations are unable to, such approaches have been limited to local simulations that study regions of an astrophysical system (for plasmas around compact objects, see, e.g., Ball et al., 2018; Hakobyan et al., 2019; Ball et al., 2019) or to global simulations that use nonphysical parameters (Parfrey et al., 2019; Cinquand et al., 2022; Galishnikova et al., 2023). One of the critical limitations is the large scale separation between the microscopic radius of the charged particle gyro-motion \(\rho\) and the macroscopic size of the system \(R\), which severely constrains the dynamical range that can be numerically simulated. However, this same scale separation also provides an opportunity to reformulate the problem in a computationally tractable way using a guiding center approach.
The guiding center formalism decomposes the motion of a charged particle into a fast gyration around a guiding center (sometimes referred to as gyro-center) and the slower motion of the guiding center itself (sometimes referred to as the drift motion). This approach assumes that the electromagnetic field is slowly varying in space when compared to the gyro-radius and slowly varying in time when compared to the gyro-period. By solving for only the guiding center of the particle, kinetic models can use timesteps larger than the gyro-period and are thus able to solve for the guiding center motion over larger scales, overcoming the challenge of scale separation.
Figure 1 shows the physical conditions present in a variety of astrophysical, solar, and terrestrial plasmas. It also identifies regions for which the large mean free paths (\(\lambda_{\rm mfp}/R\gtrsim 1\)) necessitate a kinetic approach but the scale separation (\(\rho/R\ll 1\)) allows the application of
the guiding center approximation. Adopting the latter would enable some investigations that have not been possible to date.
In flat spacetimes, there exists a widely used set of guiding center equations (Northrop, 1963) that have been successfully implemented in studies, often in conjunction with a background MHD model (see, e.g, Gordovskyy et al., 2010; Threlfall, J. et al., 2016; Ripperda et al., 2017; Gordovskyy et al., 2020 for studies of solar and astrophysical flares). Despite the usefulness and the demonstrated accuracy of the guiding center approach in this regime, there has not been a successful covariant formalism to date, which is necessary for applications in relativistic and compact astrophysical systems. In the traditional formalism, one uses approximate methods to integrate analytically the acceleration equation for the particle once over a gyro-period and obtain an equation for the drift velocity of the guiding center, which can then be solved numerically. This expression typically contains terms related to the curvature and gradient of the \(B\) field, as well as the electric, \(\vec{E}\times\vec{B}\), and gravitational, \(\vec{g}\times\vec{B}\), drifts. However, the step of integrating analytically the acceleration equation cannot be generalized in the same manner to a covariant form because the geodesic equation has nonlinear terms in velocity. An alternate approach of solving the problem in the local Lorentz frame (as done, e.g., in Bacchini et al., 2020) also fails because terms that involve the gradients of the electromagnetic field tensor and the gravitational drift cannot be written in terms of quantities evaluated in the local Lorentz frame 1.
Footnote 1: One attempt by Beklemishev and Tessarotto (2004) leaves the equations in a closed form, but one which is computationally intractable. In Bacchini et al. (2020), the guiding center equations were solved in curved spacetimes, but only the \(\vec{E}\times\vec{B}\) drift was incorporated, in a manner that cannot be generalized to a fully covariant description that includes curvature and gravitational drifts.
In this letter, we derive a new set of fully covariant guiding-center equations of motion that incorporate all of the drift mechanisms captured in the standard guiding center set of equations (Northrop, 1963), including the gravitational drift. The key insight in this approach comes from the realization that, although it is not possible to obtain a covariant equation for the drift velocity of the guiding center, one can derive an equation for its acceleration while still integrating out the gyromotion. The resulting second order differential equation can then be solved numerically without significant increase in the computational cost.
Figure 2 illustrates with an example the guiding center approach. A charged particle travels in a dipole magnetic field around a compact star, both mirroring between the two magnetic poles and drifting in the azimuthal direction. The figure shows both the full gyrating motion of the particle (in red) as well as the motion of its guiding center that results from the approach we derive in this Letter. For visual purposes, the parameters were chosen such that gyrating motion is visible in the full motion of the particle, which is highly exaggerated compared to realistic systems (see Fig 1). Even in this case, the guiding center approach shows a remarkable agreement with the full solution.
## 2 The Covariant Guiding Center Equation of Motion
The covariant equation of motion for a charged particle in a general spacetime with an arbitrary electromagnetic field is given by
\[\frac{d^{2}x^{\alpha}}{d\tau^{2}}=-\Gamma^{\alpha}_{\mu\nu}\frac{dx^{\mu}}{d \tau}\frac{dx^{\nu}}{d\tau}+\frac{q}{m}F^{\alpha}_{\ \beta}\frac{dx^{\beta}}{d\tau}, \tag{1}\]
where \(x^{\alpha}\) is the position of the particle, \(\tau\) is the proper time, \(\Gamma^{\alpha}_{\mu\nu}\) is the Christoffel symbol, \(q/m\) is the charge-to-mass ratio of the particle, and \(F^{\alpha}_{\ \beta}\) is the electromagnetic field tensor. The goal is to decompose the motion of the particle into a fast gyromotion and a slow drift of the guiding center and integrate out the gyromotion analytically.
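For reference, a brute-force integration of Eq. (1) can be sketched as follows; the fourth-order Runge–Kutta step is our illustrative choice, and `christoffel` and `faraday` stand for user-supplied callables returning \(\Gamma^{\alpha}_{\mu\nu}\) and \(F^{\alpha}_{\ \beta}\) at a given position. Resolving every gyration in this way is precisely what becomes intractable when \(\rho/R\ll 1\), which motivates the guiding center reduction derived below.

```python
import numpy as np

def rhs(state, q_over_m, christoffel, faraday):
    # right-hand side of Eq. (1); state = (x^alpha, u^alpha), u = dx/dtau
    x, u = state[:4], state[4:]
    Gamma = christoffel(x)          # (4, 4, 4) array, Gamma[a, m, n]
    F = faraday(x)                  # (4, 4) mixed tensor F^a_b
    du = -np.einsum('amn,m,n->a', Gamma, u, u) + q_over_m * F @ u
    return np.concatenate([u, du])

def rk4_step(state, dtau, *args):
    # classical fourth-order Runge-Kutta step in proper time
    k1 = rhs(state, *args)
    k2 = rhs(state + 0.5 * dtau * k1, *args)
    k3 = rhs(state + 0.5 * dtau * k2, *args)
    k4 = rhs(state + dtau * k3, *args)
    return state + dtau / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```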
Expanding the electromagnetic field tensor around the position \(\chi^{\mu}\) of the guiding center, we obtain
\[\frac{d^{2}x^{\alpha}}{d\tau^{2}} = -\Gamma^{\alpha}_{\mu\nu}\big{|}_{\chi}\,\frac{dx^{\mu}}{d\tau} \frac{dx^{\nu}}{d\tau}+\frac{q}{m}\left.F^{\alpha}_{\ \beta}\right|_{\chi}\,\frac{dx^{\beta}}{d\tau} \tag{2}\] \[\quad+\frac{q}{m}\left.\frac{\partial F^{\alpha}_{\ \beta}}{ \partial x^{\mu}}\right|_{\chi}\left(x^{\mu}-\chi^{\mu}\right)\frac{dx^{\beta} }{d\tau}.\]
Figure 1: The physical conditions encountered in plasmas in a variety of astrophysical settings; \(\rho\) denotes the radius of gyromotions, \(\lambda_{\rm mfp}\) the collisional mean-free path of particles, and \(R\) the macroscopic scale of each system. A fluid approach is applicable for modeling systems in the gray shaded region. The conditions in the blue shaded region necessitate a kinetic approach. The guiding center formalism developed here enables accurate integrations of charged particle trajectories in all the systems in the white region. Dashes outline systems for which general relativistic effects require a covariant formulation.
The first and the third term in this equation are first order in \(\rho/R\) and can be neglected to zeroth order when the following three conditions are satisfied (see also Vandervoort, 1960):
1. The gyroradius \(\rho\) is significantly smaller than the characteristic scale over which the electromagnetic field varies
\[\rho\ll|F^{\alpha}_{\ \beta}|/\left|\frac{\partial F^{\alpha}_{\ \beta}}{ \partial x^{\mu}}\right|\;. \tag{3}\]
2. The particle can drift for many gyroperiods before the field changes considerably
\[\frac{1}{\omega}\left|\frac{\partial\chi^{\nu}}{\partial\tau}\right|\ll|F^{ \alpha}_{\ \beta}|/\left|\frac{\partial F^{\alpha}_{\ \beta}}{\partial x^{\mu}}\right|\;. \tag{4}\]
3. The effect of the spacetime curvature on the motion of the particle is weaker than that of the electromagnetic field
\[\left|\Gamma^{\alpha}_{\mu\nu}\frac{dx^{\mu}}{d\tau}\frac{dx^{\nu}}{d\tau} \right|\ll\frac{q}{m}\left|F^{\alpha}_{\ \beta}\frac{dx^{\beta}}{d\tau}\right|\;. \tag{5}\]
In the above expressions, the symbol \(|\ |\) denotes the magnitude of a typical component.
To zeroth order, the reduced Eq. (2) becomes a homogeneous differential equation with constant coefficients; we can write its full solution in terms of the eigenvectors and eigenvalues of the \(F^{\alpha}_{\ \beta}\) tensor (Vandervoort, 1960; Fradkin, 1978). There are two imaginary and two real eigenvalues, which we denote by \(\{i\omega,-i\omega,\lambda,-\lambda\}\), and their corresponding eigenvectors as \(\{\sigma,\delta,\psi,\Upsilon\}\). The eigenvalues can be conveniently written in terms of the field tensor invariants and the eigenvectors can then be solved using the Cayley-Hamilton theorem (Fradkin, 1978). The two imaginary eigenvalues correspond to the gyromotion of the particle with angular frequency \(\omega\), as expected, while the two real eigenvalues describe the drift of the guiding center.
In this limit, the full solution of the particle motion can be expressed as a linear combination of the four eigenvectors. Because of the comparatively small scale of the gyroradius, to first order, the presence of a curved spacetime and a spatially varying electromagnetic field will influence the motion of the guiding center but not the gyration. To obtain the solution in this more general case, we, therefore, only use the eigenvectors that correspond to the gyromotion but do not prescribe the motion of the guiding center in terms of the other two eigenvectors. Instead, we write
\[x^{\alpha}(\tau)=\rho_{0}\sqrt{\frac{\omega_{0}}{\omega}}e^{i\omega\tau}\, \sigma^{\alpha}+\rho_{0}^{*}\sqrt{\frac{\omega_{0}}{\omega}}e^{-i\omega\tau} \,\delta^{\alpha}+\chi^{\alpha}\;, \tag{6}\]
where \(\rho_{0}\equiv-(i\delta_{\beta}/\omega)(dx^{\beta}/d\tau)|_{\tau=0}\) is the gyro-radius; in this expression, \(\rho_{0}\) may be complex and thus does not correspond directly to the usual definition.
In order to obtain an equation for the guiding center position, we insert our ansatz (Eq. [6]) into the equation of motion given in Eq. (2) and expand all terms to first order in \(\rho/R\). Finally, we time average the differential equation over one gyro-period, zeroing out all terms that are oscillatory. The resulting equation of motion for the guiding center becomes2
Footnote 2: Equation (7) can be written in a manifestly covariant form by combining the second and fourth terms into \(i\omega_{0}\rho_{0}^{2}\frac{q}{m}\nabla_{\mu}F^{\alpha}_{\ \beta}(\sigma^{\beta}\delta^{\mu}-\sigma^{\mu}\delta^{\beta})\).
\[\frac{d^{2}\chi^{\alpha}}{d\tau^{2}}=-\Gamma^{\alpha}_{\ \mu\nu}\bigg{(}\frac{d\chi^{\mu}}{d\tau}\frac{d\chi^{\nu}}{d\tau}+2\omega\omega_{0}\rho_{0}^{2}\sigma^{\mu}\delta^{\nu}\bigg{)}+\frac{q}{m}F^{\alpha}_{\ \beta}\frac{d\chi^{\beta}}{d\tau}+i\omega_{0}\rho_{0}^{2}\frac{q}{m}\frac{\partial F^{\alpha}_{\ \beta}}{\partial x^{\mu}}(\sigma^{\beta}\delta^{\mu}-\sigma^{\mu}\delta^{\beta})\;. \tag{7}\]
The acceleration terms in Eq. (7) correspond to each of the familiar drift mechanisms. The first two terms containing the Christoffel symbols are responsible for the gravitational drift. The last term is responsible for drift due to a non-constant electromagnetic field which includes the \(\nabla B\) drift. Note that, because the eigenvectors are complex, the last term in Eq. (7) is indeed real.
## 3 Application and verification
Figure 2: The trajectory of a charged particle moving in a magnetic dipole in a Schwarzschild spacetime, calculated by solving the full equations of motion (red line) as well as the equations of the guiding center derived here (blue line). The particle experiences mirroring between the two magnetic poles and an azimuthal drift. Even for the exaggerated conditions shown here, where the gyration is clearly visible in the trajectory of the particle, the guiding center equations describe accurately the drift of the particle motion.
In order to demonstrate that the new covariant, guiding-center equations we derived above account for all known drift mechanisms, we devised a set of test problems in various configurations. Specifically, we consider the constant electromagnetic field in flat spacetime, which results in an \(\vec{E}\times\vec{B}\) drift, the dipole magnetic field in a flat spacetime, which yields a \(\nabla B\) drift, and a dipole magnetic field in curved spacetime, which also contains a gravitational drift. Because our goal here is to demonstrate the applicability of the guiding center equation and not to explore and optimize a numerical particle pusher for solving it, we use a simple fourth order Runge-Kutta integrator to solve eq. (7). We then compare the result to the solution of the full equation of motion (1) for the charged particle. We obtain the latter either analytically or numerically. Hereafter, we set \(G=c=1\) and absorb the value of \(q/m\) into the magnitude of the magnetic field.
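As an illustration of the kind of integrator used here, the sketch below implements a generic fixed-step fourth-order Runge-Kutta scheme for the guiding-center equation written in first-order form. It is a minimal sketch, not the authors' code: the right-hand side `guiding_center_rhs` that would evaluate Eq. (7) for a specific metric and field is left as a placeholder, and the function names are our own.

```python
import numpy as np

def rk4_step(f, tau, y, h):
    """One fixed-step RK4 update for dy/dtau = f(tau, y)."""
    k1 = f(tau, y)
    k2 = f(tau + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(tau + 0.5 * h, y + 0.5 * h * k2)
    k4 = f(tau + h, y + h * k3)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def integrate(f, y0, tau_max, h):
    """Integrate from proper time 0 to tau_max; y packs (chi^alpha, dchi^alpha/dtau)."""
    taus = np.arange(0.0, tau_max, h)
    ys = np.empty((len(taus), len(y0)))
    y = np.asarray(y0, dtype=float)
    for i, tau in enumerate(taus):
        ys[i] = y
        y = rk4_step(f, tau, y, h)
    return taus, ys

def make_rhs(acceleration):
    """Wrap a placeholder acceleration(tau, chi, u), standing in for Eq. (7)."""
    def rhs(tau, y):
        chi, u = y[:4], y[4:]
        return np.concatenate([u, acceleration(tau, chi, u)])
    return rhs
```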
## 4 Constant electromagnetic field in flat spacetime
We first study the motion of the guiding center of a charged particle in a constant electromagnetic field in a flat spacetime and compare it to the analytic solution. For this configuration, the guiding center is expected to drift with a velocity \(v_{E}=(\vec{E}\times\vec{B})/B^{2}\). We initialize the particle at an arbitrary position \(x^{\alpha}=(0,5\sqrt{2},5\sqrt{2},0)\) and with a velocity \(u^{\alpha}=(u^{t},0,u^{y},2)\). We set the components of the electric and magnetic field to \(E^{i}=(1,0,-0.05)\) and \(B^{i}=(0,0,B_{0})\), respectively. By varying the \(u^{y}\) component of the velocity and the \(B^{z}\) component of the magnetic field, we explore different sizes of the gyroradius and, therefore, test different regimes in scale separation. In each case, we infer the \(u^{t}\) component of the velocity from the requirement \(u^{a}u_{a}=-1\).
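Both the normalization of the initial four-velocity and the analytic drift velocity for this test are simple to evaluate. The short sketch below is our own illustration (the value of \(u^{y}\) is an arbitrary example, since it is varied in the tests); the field components are those stated above.

```python
import numpy as np

B0 = 1.0                          # example field strength; varied in the tests
E = np.array([1.0, 0.0, -0.05])   # electric field components E^i
B = np.array([0.0, 0.0, B0])      # magnetic field along z

# Analytic drift velocity v_E = (E x B) / B^2.
v_E = np.cross(E, B) / np.dot(B, B)

# In flat spacetime, u^a u_a = -1 with u^x = 0 fixes u^t from the spatial components.
u_y, u_z = 0.5, 2.0               # u^y is varied; u^z = 2 as in the setup above
u_t = np.sqrt(1.0 + u_y**2 + u_z**2)

print("E x B drift velocity:", v_E)
print("u^t from normalization:", u_t)
```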
With this initial velocity and under the influence of the parallel and perpendicular components of the electric field (see insert in Figure 3), the particle goes through an arc like motion before re-crossing the \(z=0\) plane. The horizontal displacement on this plane is due to the \(E\times B\) drift, which we estimate by dividing the guiding center displacement by the amount of elapsed coordinate time.
We show in Fig. 3 the fractional difference between the average drift velocity measured from the guiding center calculation and the analytic drift velocity, as a function of the gyroradius. As expected, the guiding center equations become increasingly more accurate with decreasing gyroradius. Because we kept terms to first order in gyroradius in the guiding center equations, we expect the truncation error to be of second order. However, the leading order term in the drift velocity is first order in gyroradius and, therefore, the truncation error in drift velocity is only one order higher. This is why Fig. 3 shows a linear dependence of the fractional error on gyroradius. The figure also shows that, as expected, the degree of approximation of the guiding center solution depends only on the magnitude of the gyroradius and not on the magnitude of the magnetic field or of the particle velocity.
## 5 Dipole magnetic field in flat and Schwarzschild spacetimes
Figure 4: Fractional difference between the average azimuthal drift velocity calculated in the guiding center limit and using the full equation of motion of a charge in a magnetic dipole in flat and Schwarzschild spacetimes (see Fig. 2 for the setup). Other details as in Fig. 3. In this configuration, the truncation error depends quadratically on gyroradius.
Figure 3: Fractional difference between the average drift velocity calculated numerically in the guiding center limit and the analytic drift velocity for a charged particle in a constant electromagnetic field, as a function of the gyroradius \(\rho\) divided by the macroscopic scale \(R\) of the system, in a flat spacetime (see insert for the setup). In this configuration, the only drift experienced by the particle is proportional to \(\vec{E}\times\vec{B}\). Different points show different magnetic field strengths (blue) and particle velocities (orange), both of which alter the gyroradius. The truncation error in the drift velocity introduced by the guiding-center equations depends linearly on gyroradius, as expected.
In this application, we consider the motion of a charged particle in a dipole magnetic field, both in a flat and in a Schwarzschild spacetime. In both settings the particle experiences magnetic mirroring between the two magnetic poles as well as azimuthal drift, due to the gradient in the magnetic field strength. This is the configuration shown earlier in Fig. 2. In the Schwarzschild case, the particle also experiences gravitational drift.
We implement the magnetic dipole in terms of its vector potential (Bacchini et al., 2019; Takahashi and Koyama, 2009)
\[A_{\phi}=\frac{3}{4}B_{0}\sin^{2}(\theta)\Bigg{[}2(r+1)-r^{2}\log\biggl{(} \frac{r}{r-2}\biggr{)}\Bigg{]}\;, \tag{8}\]
where all radii are written in terms of \(GM/c^{2}\) and \(M\) is the mass of the central object. For the flat spacetime configuration, we take the limit \(r\gg 1\).
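For concreteness, Eq. (8) can be transcribed directly; the helper below is our own sketch (with radii in units of \(GM/c^{2}\) as in the text, valid outside the horizon \(r>2\)). The flat-spacetime configuration of the text corresponds to evaluating the same potential at \(r\gg 1\).

```python
import numpy as np

def a_phi(r, theta, B0):
    """Covariant phi-component of the dipole vector potential, Eq. (8).

    r is in units of GM/c^2 and must satisfy r > 2 (outside the horizon).
    """
    return 0.75 * B0 * np.sin(theta)**2 * (
        2.0 * (r + 1.0) - r**2 * np.log(r / (r - 2.0))
    )

# Example evaluation at the initial radius used in the text, on the equatorial plane.
print(a_phi(10.0, np.pi / 2, 1.0))
```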
We choose an initial position of the particle on the equatorial plane at a radius \(r=10M\) and with an initial velocity at a pitch angle \(\Theta=\pi/4\), i.e., we set \(u^{\alpha}=[(-g_{tt})^{-1/2}\gamma_{0},(-g_{tt})^{1/2}\beta\gamma_{0}\sin \Theta,\beta\gamma_{0}r^{-1}\sin\Theta,0]\), where \(\beta=(1-1/\gamma_{0}^{2})^{1/2}\) and \(g_{tt}\) is the \(tt-\)component of the metric. The magnitude \(B_{0}\) of the magnetic field and the Lorentz factor \(\gamma_{0}\) determine, as before, the particle gyroradius.
In this configuration, there is no analytic solution for the drift velocity of the particle. In order to test the covariant guiding center equations and explore their convergence in \(\rho/R\), we compare our results to those obtained from integrating the full particle trajectory. In particular, we estimate numerically the drift velocity in the azimuthal direction by tracking the times and azimuths of the successive equatorial crossings of the particle in both solutions. In Fig. 4, we show the fractional difference between the two estimates of the azimuthal drift velocity as a function of the gyroradius. Even in this configuration, which incorporates all known drift mechanisms, the guiding center equations maintain high accuracy, with a truncation error that scales as \((\rho/R)^{2}\). This steeper dependence compared to the previous configuration likely originates from a hidden symmetry in the problem.
We emphasize here that the results shown in Figs. 3-4 aim to demonstrate the correct limiting behavior of our covariant guiding-center equations and are not convergence plots of a numerical solver. (The latter would have shown the difference between two solutions as a function of numerical resolution and not of the expansion parameter, \(\rho/R\), in the equations).
## 6 Conclusions
In this paper, we developed a new covariant guiding center formalism for the motion of charged particles in general spacetimes that accounts for electric, magnetic, and gravitational drifts. We showed that the solutions to these equations match those of the full equations of motion in the limit \(\rho/R\to 0\). Our approach allows integrating particle trajectories with time steps that are set by the macroscopic length scales \(R\) and not by the gyroradius \(\rho\). This leads to an increase of order \(R/\rho\) in the computational efficiency when numerically solving for particle trajectories in arbitrary spacetimes and electromagnetic field configurations.
The new guiding center formalism will allow us to explore a number of interesting plasma phenomena in astrophysics affected by the long mean-free paths of charges and the large scale separation between the system size and the gyroradii of charged particles. In particular, phenomena in settings where a background magnetic field is determined by currents elsewhere in the system can be fruitfully simulated with the guiding center formalism. Another application is the motion of charges that contribute negligibly to the overall dynamics and the generation of the electromagnetic field in the flow, but are important for determining the radiative and observational signatures of the systems.
The trapping of non-thermal particles in accretion flows is one such application. Non-thermal electrons in accretion flows are thought to originate from magnetic reconnection and can potentially explain the bright flares observed from low-luminosity systems, such as Sgr A*. Recent observations reveal that, despite their long mean-free paths, these non-thermal particles are trapped in quasi-coherent compact structures that appear to orbit around the black hole (GRAVITY Collaboration et al., 2018). Calculating the spatial distribution of non-thermal particles and understanding the mechanism that confines the flaring emission to a compact region necessitates a kinetic approach that will follow the trajectories of the individual non-thermal particles in the background GRMHD flow. The new guiding center formalism can provide an optimal tool for simulating and understanding such systems.
We thank Gabriele Bozzola, Dirk Heumann, and Matthew Golden for useful conversations. This work has been supported by NSF PIRE award OISE-1743747. T.T. acknowledges support from the Alfred P. Sloan Foundation and the Ford Foundation.
|
2309.17022 | Positionality in Σ_0^2 and a completeness result | We study the existence of positional strategies for the protagonist in
infinite duration games over arbitrary game graphs. We prove that
prefix-independent objectives in {\Sigma}_0^2 which are positional and admit a
(strongly) neutral letter are exactly those that are recognised by
history-deterministic monotone co-B\"uchi automata over countable ordinals.
This generalises a criterion proposed by [Kopczy\'nski, ICALP 2006] and gives
an alternative proof of closure under union for these objectives, which was
known from [Ohlmann, TheoretiCS 2023].
We then give two applications of our result. First, we prove that the
mean-payoff objective is positional over arbitrary game graphs. Second, we
establish the following completeness result: for any objective W which is
prefix-independent, admits a (weakly) neutral letter, and is positional over
finite game graphs, there is an objective W' which is equivalent to W over
finite game graphs and positional over arbitrary game graphs. | Pierre Ohlmann, Michał Skrzypczak | 2023-09-29T07:15:01Z | http://arxiv.org/abs/2309.17022v2 | # Positionality in \(\mathbf{\Sigma_{2}^{0}}\) and a completeness result
###### Abstract
We study the existence of positional strategies for the protagonist in infinite duration games over arbitrary game graphs. We prove that prefix-independent objectives in \(\mathbf{\Sigma}_{2}^{0}\) which are positional and admit a (strongly) neutral letter are exactly those that are recognised by history-deterministic monotone co-Buchi automata over countable ordinals. This generalises a criterion proposed by [Kopczynski, ICALP 2006] and gives an alternative proof of closure under union for these objectives, which was known from [Ohlmann, TheoretiCS 2023].
We then give two applications of our result. First, we prove that the mean-payoff objective is positional over arbitrary game graphs. Second, we establish the following completeness result: for any objective \(W\) which is prefix-independent, admits a (weakly) neutral letter, and is positional over finite game graphs, there is an objective \(W^{\prime}\) which is equivalent to \(W\) over finite game graphs and positional over arbitrary game graphs.
## 1 Introduction
### Context
**Games.** We study infinite duration games on graphs. In such a game, two players, Eve and Adam, alternate forever in moving a token along the edges of a directed, possibly infinite graph (called _arena_), whose edges are labelled with elements of some set \(C\). An _objective_ \(W\subseteq C^{\omega}\) is specified in advance; Eve wins the game if the label of the produced infinite path belongs to \(W\). A _strategy_ in such a game is called _positional_ if it depends only on the current vertex occupied by the token, regardless of the history of the play.
We are interested in _positional objectives_: those for which existence of a winning strategy for Eve entails existence of a winning positional strategy for Eve, on an arbitrary arena. Sometimes we also consider a weaker property: an objective is _positional over finite arenas_ if the above implication holds on any finite arena.
**Early results.** Although the notion of positionality is already present in Shapley's seminal work [29], the first positionality result for infinite duration games was established by Ehrenfeucht and Mycielski [10], and it concerns the mean-payoff objective
\[\text{Mean-Payoff}_{\leq 0}=\Big{\{}w_{0}w_{1}\cdots\in\mathbb{Z}^{\omega} \mid\limsup_{k}\frac{1}{k}\sum_{i=0}^{k-1}w_{i}\leq 0\Big{\}},\]
over finite arenas. Nowadays, many proofs are known that establish positionality of mean-payoff games over finite arenas.
Later, and in a different context, Emerson and Jutla [11] as well as Mostowski [23] independently established positionality of the parity objective
\[\text{Parity}_{d}=\Big{\{}p_{0}p_{1}\cdots\in\{0,1,\ldots,d\}^{\omega}\mid \limsup_{k}p_{k}\text{ is even}\Big{\}}\]
over arbitrary arenas. This result was used to give a direct proof of the possibility of complementing automata over infinite trees, which is the key step in Rabin's celebrated proof of decidability of S2S [27]. By now, several proofs are known for positionality of parity games, some of which apply to arbitrary arenas.
Both parity games and mean-payoff games have been the object of considerable attention over the past three decades; we refer to [12] for a thorough exposition. By symmetry, these games are positional not only for Eve but also for the opponent, a property we call _bi-positionality_. Parity and mean-payoff objectives, as well as the vast majority of objectives that are considered in this context, are _prefix-independent_, that is, invariant under adding or removing finite prefixes.
**Bi-positionality.** Many efforts were devoted to understanding positionality in the early 2000's. These culminated in Gimbert and Zielonka's work [15] establishing a general characterisation of bi-positional objectives over finite arenas, from which it follows that an objective is bi-positional over finite arenas if and only if it is the case for 1-player games. On the other hand, Colcombet and Niwinski [8] established that bi-positionality over arbitrary arenas is very restrictive: any prefix-independent objective which is bi-positional over arbitrary arenas can be recast as a parity objective.
Together, these two results give a good understanding of bi-positional objectives, both over finite and arbitrary arenas.
**Positionality for Eve.** In contrast, less is known about those objectives which are positional for Eve, regardless of the opponent (this is sometimes called half-positionality). This is somewhat surprising, considering that positionality is more in-line with the primary application in synthesis of reactive systems, where the opponent, who models an antagonistic environment, need not have structured strategies. The thesis of Kopczynski [19] proposes a number of results on positionality, but no characterisation. Kopczynski proposed two classes of prefix-independent objectives, _concave objectives_ and _monotone objectives_, which are positional respectively over finite and over arbitrary arenas. Both classes are closed under unions, which motivated the following conjecture.
**Conjecture 1** (Kopczynski's conjecture [19, 18]).: _Prefix-independent positional objectives are closed under unions._

This conjecture was disproved by Kozachinskiy in the case of finite arenas [20]; however, it remains open for arbitrary ones (even in the case of countable unions instead of unions).
**Neutral letters.** Many of the considered objectives contain a _neutral letter_, that is, an element \(\varepsilon\in C\) such that \(W\) is invariant under removing arbitrarily many occurrences of the letter \(\varepsilon\) from any infinite word. For instance, \(\varepsilon=0\) is a neutral letter of the parity objective \(\operatorname{Parity}_{d}\). There are two variants of this definition, _strongly neutral letter_ and _weakly neutral letter_, which are formally introduced in the preliminaries. It is unknown whether adding a neutral letter to a given objective may affect its positionality [19, 25].
**Borel classes.** To stratify the complexity of the considered objectives we use the Borel hierarchy [17]. This follows the classical approach to Gale-Stewart games [13], where the determinacy theorem was gradually proved for more and more complex Borel classes: \(\mathbf{\Sigma}^{0}_{2}\) in [31] and \(\mathbf{\Sigma}^{0}_{3}\) in [9]. This finally led to Martin's celebrated result on all Borel objectives [22].
To apply this technique, we assume for the rest of the paper that \(C\) is at most countable. Thus, \(C^{\omega}\) becomes a Polish topological space, with open sets of the form \(L\cdot C^{\omega}\) where \(L\subseteq C^{*}\) is arbitrary. Closed sets are those whose complement is open. The class \(\mathbf{\Sigma}^{0}_{2}\) contains all sets which can be obtained as a countable union of some closed sets.
**Recent developments.** A step forward in the study of positionality (for Eve) was recently made by Ohlmann [25] who established that an objective admitting a (strongly) neutral letter is positional over arbitrary arenas if and only if it admits well-ordered monotone universal graphs. Note that this characterisation concerns only positionality over arbitrary arenas. This allowed Ohlmann to prove closure of prefix-independent positional objectives (over arbitrary arenas) admitting a (strongly) neutral letter under finite lexicographic products, and, further assuming membership in \(\mathbf{\Sigma}^{0}_{2}\), under finite unions1.
Footnote 1: In [25], an assumption called “non-healing” is used. This assumption is in fact implied by membership in \(\mathbf{\Sigma}^{0}_{2}\).
Bouyer, Casares, Randour, and Vandenhove [2] also used universal graphs to characterise positionality for objectives recognised by deterministic Buchi automata. They observed that for such an objective \(W\) finiteness of the arena does not impact positionality: \(W\) is positional over arbitrary arenas if and only if it is positional over finite ones.
Going further, Casares [4] recently proposed a characterisation of positionality for all \(\omega\)-regular objectives. As a by-product, it follows that Conjecture 1 holds for \(\omega\)-regular
objectives2, and that again finiteness of the arena does not impact positionality.
Footnote 2: In fact, Casares proved a strengthening of the conjecture when only one objective is required to be prefix-independent.
### Contributions
**Positionality in \(\mathbf{\Sigma}^{0}_{2}\).** As mentioned above, Kopczynski introduced the class of _monotonic objectives_, defined as those of the form \(C^{\omega}\setminus L^{\omega}\), where \(L\) is a language recognised by a finite linearly-ordered automaton with certain monotonicity properties on transitions. He then proved that monotonic objectives are positional over arbitrary arenas. Such objectives are prefix-independent and belong to \(\mathbf{\Sigma}^{0}_{2}\); our first contribution is to extend Kopczynski's result to a complete characterisation (up to neutral letters) of positional objectives in \(\mathbf{\Sigma}^{0}_{2}\).
**Theorem 2**.: _Let \(W\subseteq C^{\omega}\) be a prefix-independent \(\mathbf{\Sigma}^{0}_{2}\) objective admitting a strongly neutral letter. Then \(W\) is positional over arbitrary arenas if and only if it is recognised by a countable history-deterministic well-founded monotone co-Buchi automaton._
The proof of Theorem 2 is based on Ohlmann's _structuration_ technique which is the key ingredient to the proof of [25]. As an easy by-product of the above characterisation, we reobtain the result that Kopczynski's conjecture holds for countable unions of \(\mathbf{\Sigma}^{0}_{2}\) objectives (assuming that the given objectives all have strongly neutral letters).
**Corollary 3**.: _If \(W_{0},W_{1},\ldots\) are all positional prefix-independent \(\mathbf{\Sigma}^{0}_{2}\) objectives, each admitting a strongly neutral letter, then the union \(\bigcup_{i\in\mathbb{N}}W_{i}\) is also positional._
**From finite to arbitrary arenas.** The most important natural example of an objective which is positional over finite arenas but not on infinite ones is Mean-Payoff\({}_{\leq 0}\), as defined above. However, as a straightforward consequence of their positionality [3, Theorem 3], it holds that over finite arenas, Mean-Payoff\({}_{\leq 0}\) coincides with the energy condition
\[\text{Bounded}=\Big{\{}w_{0}w_{1}\cdots\in\mathbb{Z}^{\omega}\mid\sup_{k} \sum_{i=0}^{k-1}w_{i}\text{ is finite}\Big{\}},\]
which turns out to be positional even over arbitrary arenas [25].
Applying Corollary 3, we establish that with strict threshold, the mean-payoff objective
\[\text{Mean-Payoff}_{<0}=\Big{\{}w_{0}w_{1}\cdots\in\mathbb{Z}^{\omega}\mid \limsup_{k}\frac{1}{k}\sum_{i=0}^{k-1}w_{i}<0\Big{\}}\]
is in fact positional over arbitrary arenas.
Now say that two prefix-independent objectives are _finitely equivalent_, written \(W\equiv W^{\prime}\), if they are won by Eve over the same finite arenas. As observed above, \(\text{Mean-Payoff}_{\leq 0}\equiv\text{Bounded}\), which is positional over arbitrary arenas. Likewise, its complement
\[\mathbb{Z}^{\omega}\setminus\text{Mean-Payoff}_{\leq 0}=\Big{\{}w_{0}w_{1}\cdots\in\mathbb{Z}^{\omega}\mid\limsup_{k}\frac{1}{k}\sum_{i=0}^{k-1}w_{i}>0\Big{\}}\]
is, up to changing each weight \(w\in\mathbb{Z}\) by the opposite one \(-w\in\mathbb{Z}\), isomorphic to
\[\Big{\{}w_{0}w_{1}\cdots\in\mathbb{Z}^{\omega}\mid\liminf_{k}\frac{1}{k}\sum_ {i=0}^{k-1}w_{i}<0\Big{\}}.\]
The latter condition is finitely equivalent to Mean-Payoff\({}_{<0}\) (where the liminf is replaced with a limsup), which, as explained above, turns out to be positional over arbitrary arenas.
Thus, both Mean-Payoff\({}_{\leq 0}\) and its complement are finitely equivalent to objectives that are positional over arbitrary arenas. This brings us to our main contribution, which generalises the above observation to any prefix-independent objective admitting a (weakly) neutral letter which is positional over finite arenas.
**Theorem 4**.: _Let \(W\subseteq C^{\omega}\) be a prefix-independent objective which is positional over finite arenas and admits a weakly neutral letter. Then there exists an objective \(W^{\prime}\equiv W\) which is positional over arbitrary arenas._
#### Structure of the paper
Section 2 introduces all necessary notions, including Ohlmann's structuration results. Section 3 proves our characterisation result Theorem 2 and its consequence Corollary 3, and provides a few examples. Then we proceed in Section 4 with establishing positionality of Mean-Payoff\({}_{<0}\) over arbitrary arenas, and proving Theorem 4.
## 2 Preliminaries
**Graphs.** We fix a set of letters \(C\), which we assume to be at most countable. A _\(C\)-graph_ \(G\) is comprised of a (potentially infinite) set of _vertices_ \(V(G)\) together with a set of _edges_ \(E(G)\subseteq V(G)\times C\times V(G)\). An edge \(e=(v,c,v^{\prime})\in E(G)\) is written \(v\xrightarrow{c}v^{\prime}\), with \(c\) being the _label_ of this edge. We say that \(e\) is _outgoing_ from \(v\), that it is _incoming_ to \(v^{\prime}\), and that it is _adjacent_ to both \(v\) and to \(v^{\prime}\). We assume that each vertex \(v\in V(G)\) has at least one outgoing edge (we call this condition being _sinkless_).
We say that \(G\) is _finite_ (resp. _countable_) if both \(V(G)\) and \(E(G)\) are finite (resp. countable). The _size_ of a graph is defined to be \(|G|=|V(G)|\).
A (finite) _path_ is a (finite) sequence of edges with matching endpoints, meaning of the form \(v_{0}\xrightarrow{c_{0}}v_{1},v_{1}\xrightarrow{c_{1}}v_{2},\dots\), which we conveniently write as \(v_{0}\xrightarrow{c_{0}}v_{1}\xrightarrow{c_{1}}\dots\). We say that \(\pi\) is a _path from \(v_{0}\) in \(G\)_, and that vertices \(v_{0},v_{1},v_{2},\dots\) appearing on the path are _reachable_ from \(v_{0}\). We use \(G[v_{0}]\) to denote the restriction of \(G\) to vertices reachable from \(v_{0}\). The _label_ of a path \(\pi\) is the sequence \(c_{0}c_{1}\dots\) of labels of its edges; it belongs to \(C^{\omega}\) if \(\pi\) is infinite and to \(C^{*}\) otherwise. We sometimes write \(v\stackrel{w}{\rightsquigarrow}\) to say that \(w\) labels an infinite path from \(v\), or \(v\stackrel{w}{\rightsquigarrow}v^{\prime}\) to say that \(w\) labels a finite path from \(v\) to \(v^{\prime}\). We write \(\operatorname{L}(G,v_{0})\subseteq C^{\omega}\) for the set of labels of all infinite paths from \(v_{0}\) in \(G\), and \(\operatorname{L}(G)\subseteq C^{\omega}\) for the set of labels of all infinite paths in \(G\), that is the union of \(\operatorname{L}(G,v_{0})\) over all \(v_{0}\in V(G)\).
A _graph morphism_ from \(G\) to \(G^{\prime}\) is a map \(\phi\colon V(G)\to V(G^{\prime})\) such that for every edge \(v\xrightarrow{c}v^{\prime}\in E(G)\), it holds that \(\phi(v)\xrightarrow{c}\phi(v^{\prime})\in E(G^{\prime})\). We write \(G\xrightarrow{\phi}G^{\prime}\). We sometimes say that \(G\)_embeds_ in \(G^{\prime}\) or that \(G^{\prime}\)_embeds_\(G\), and we write \(G\to G^{\prime}\), to say that there exists a morphism from \(G\) to \(G^{\prime}\). Note that \(G\to G^{\prime}\) implies \(\operatorname{L}(G)\subseteq\operatorname{L}(G^{\prime})\).
A graph \(G\) is _\(v_{0}\)-rooted_ if it has a distinguished vertex \(v_{0}\in V(G)\) called the _root_. A _tree_\(T\) is a \(t_{0}\)-rooted graph such that all vertices in \(T\) admit a unique finite path from the root \(t_{0}\).
#### Games.
A _\(C\)-arena_ is given by a \(C\)-graph \(A\) together with a partition of its vertices \(V(A)=V_{\operatorname{Eve}}\sqcup V_{\operatorname{Adam}}\) into those controlled by Eve \(V_{\operatorname{Eve}}\) and those controlled by Adam \(V_{\operatorname{Adam}}\). A _strategy_ (for Eve) \((S,\pi)\) in an arena \(A\) is a graph \(S\) together with a surjective morphism \(\pi\colon S\to A\) satisfying that for every vertex \(v\in V_{\operatorname{Adam}}\), every outgoing edge \(v\xrightarrow{c}v^{\prime}\in E(G)\), and every \(s\in\pi^{-1}(v)\), there is an outgoing edge \(s\xrightarrow{c}s^{\prime}\in E(S)\) with \(\pi(s^{\prime})=v^{\prime}\). Recall that under our assumptions every vertex needs to have at least one outgoing edge, thus for every \(v\in V_{\operatorname{Eve}}\) and every \(s\in\pi^{-1}(v)\) there must be at least one outgoing edge from \(s\) in \(S\).
The example arenas in this work are drawn following the standard notation, where circles denote vertices controlled by Eve and squares denote those controlled by Adam. Vertices with a single outgoing edge are denoted by a simple dot, it does not matter who controls them.
A strategy is _positional_ if \(\pi\) is injective. In this case, we can assume that \(V(S)=V(A)\) and \(E(S)\subseteq E(A)\), with \(\pi\) being identity.
An _objective_ is a set \(W\subseteq C^{\omega}\) of infinite sequences of elements of \(C\). In this paper, we will always work with _prefix-independent_ objectives, meaning objectives which satisfy \(cW=W\) for all \(c\in C\); this allows us to simplify many of the definitions. We say that a graph \(G\)_satisfies_ an objective \(W\) if \(\operatorname{L}(G)\subseteq W\). A _game_ is given by a \(C\)-arena \(A\) together with an objective \(W\). It is _winning_ (for Eve) if there is a strategy \((S,\pi)\) such that \(S\) satisfies \(W\). In this case, we also say that Eve _wins_ the game \((A,W)\) with strategy \((S,\pi)\). We say that an objective \(W\) is _positional_ (over finite arenas or over arbitrary arenas) if for any (finite or arbitrary) arena \(A\), if Eve wins the game \((A,W)\) then she wins \((A,W)\) with a positional strategy.
**Neutral letters.** A letter \(\varepsilon\in C\) is said to be _weakly neutral_ for an objective \(W\subseteq C^{\omega}\) if for any word \(w\in C^{\omega}\) decomposed into \(w=w_{0}w_{1}\dots\) with non-empty words \(w_{i}\in C^{+}\),
\[w\in W\iff\varepsilon w_{0}\varepsilon w_{1}\varepsilon\dots\in W.\]
A weakly neutral letter \(\varepsilon\in C\) is _strongly neutral_ if in the above, the \(w_{i}\) can be chosen empty, and moreover, \(\varepsilon^{\omega}\in W\). A few examples: for the parity objective, the priority \(0\) is strongly neutral; for Bounded, the weight \(0\) is strongly neutral; for Mean-Payoff\({}_{\leq 0}\), the letter \(0\) is only weakly neutral (because \(1^{\omega}\notin\text{Mean-Payoff}_{\leq 0}\) however \(010010001\dots\in\text{Mean-Payoff}_{\leq 0}\)), and likewise for Mean-Payoff\({}_{<0}\) because \(0^{\omega}\notin\text{Mean-Payoff}_{<0}\).
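The parenthetical claim about Mean-Payoff\({}_{\leq 0}\) is easy to check numerically: the constant word \(1^{\omega}\) has all prefix averages equal to \(1\), while inserting longer and longer blocks of \(0\)'s drives the averages to \(0\). The snippet below is our own illustration of this computation, not part of the source.

```python
import itertools

def prefix_averages(word, n):
    """Averages of the first 1..n letters of an infinite word given as an iterator."""
    total, out = 0, []
    for k, w in enumerate(itertools.islice(word, n), start=1):
        total += w
        out.append(total / k)
    return out

def ones():
    while True:
        yield 1

def padded_ones():
    # 0 1 0 0 1 0 0 0 1 ...: the k-th block has k zeros followed by a single 1.
    k = 1
    while True:
        for _ in range(k):
            yield 0
        yield 1
        k += 1

print(prefix_averages(ones(), 10_000)[-1])         # 1.0: 1^omega is not in Mean-Payoff_{<=0}
print(prefix_averages(padded_ones(), 10_000)[-1])  # close to 0: the averages tend to 0
```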
**Monotone and universal graphs.** An _ordered graph_ is a graph \(G\) equipped with a total order \(\geq\) on its set of vertices \(V(G)\). We say that it is _monotone_ if
\[v\geq u\xrightarrow{c}u^{\prime}\geq v^{\prime}\text{ in }G\qquad\text{ implies}\qquad v\xrightarrow{c}v^{\prime}\in E(G).\]
Such a graph is _well founded_ if the order \(\geq\) on \(V(G)\) is well founded.
We will use a variant of universality called (uniform) _almost-universality_ (for trees), which is convenient when working with prefix-independent objectives. A \(C\)-graph \(U\) is _almost \(W\)-universal_, if \(U\) satisfies \(W\), and for any tree \(T\) satisfying \(W\), there is a vertex \(t\in V(T)\) such that \(T[t]\to U\). We will rely on the following inductive result from [25].
**Lemma 5** (follows from Theorem 3.2 and Lemma 4.5 in [25]).: _Let \(W\subseteq C^{\omega}\) be a prefix-independent objective such that there is a graph which is almost \(W\)-universal. Then \(W\) is positional over arbitrary arenas._
**Structuration results.** The following results were proved in Ohlmann's PhD thesis (Theorems 3.1 and 3.2 in [24]); the two incomparable variants stem from two different techniques.
**Lemma 6** (Finite structuration).: _Let \(W\) be a prefix-independent objective which is positional over finite arenas and admits a weakly neutral letter, and let \(G\) be a finite graph satisfying \(W\). Then there is a monotone graph \(G^{\prime}\) satisfying \(W\) such that \(G\to G^{\prime}\)._
**Lemma 7** (Infinite structuration).: _Let \(W\) be a prefix-independent objective which is positional over arbitrary arenas and admits a strongly neutral letter, and let \(G\) be any graph satisfying \(W\). Then there is a well-founded monotone graph \(G^{\prime}\) satisfying \(W\) such that \(G\to G^{\prime}\)._
Note that in both results, we may assume that \(|G^{\prime}|\leq|G|\), simply by restricting to the image of \(G\). Details of the proof of Lemma 7 can be found in [25, Theorem 3]; Lemma 6 appears only in Ohlmann's PhD thesis [24], so we give details in Appendix A for completeness.
#### Automata.
A _co-Buchi automaton over \(C\)_ is a \(q_{0}\)-rooted \(C\times\{\mathcal{N},\mathcal{F}\}\)-graph \(A\). In this context, vertices \(V(A)\) are called _states_, edges \(E(A)\) are called _transitions_, and the root \(q_{0}\) is called the _initial state_. Moreover, transitions of the form \(q\xrightarrow{(c,\mathcal{N})}q^{\prime}\) are called _normal transitions_ and simply denoted \(q\xrightarrow{c}q^{\prime}\), while transitions of the form \(q\xrightarrow{(c,\mathcal{F})}q^{\prime}\) are called _co-Buchi_ transitions and denoted \(q\xrightarrow{c}q^{\prime}\). For simplicity, we assume automata to be _complete_ (for any state \(q\) and any letter \(c\), there is at least one outgoing transition labelled \(c\) from \(q\)) and _reachable_ (for any state \(q\) there is some path from \(q_{0}\) to \(q\) in \(A\)).
A path \(q_{0}\xrightarrow{(c_{0},a_{0})}q_{1}\xrightarrow{(c_{1},a_{1})}\dots\) in \(A\) is _accepting_ if it contains only finitely many co-Buchi transitions, meaning that only finitely many of \(a_{i}\) equal \(\mathcal{F}\). If \(q\in V(A)\) is a state then define the _language_\(\operatorname{L}(A,q)\subseteq C^{\omega}\) of a co-Buchi automaton _from a state \(q\in V(A)\)_ as the set of infinite words which label accepting paths from \(q\) in \(A\). The _language_ of \(A\) denoted \(\operatorname{L}(A)\) is \(\operatorname{L}(A,q_{0})\). Note that in this paper, automata are not assumed to be finite.
We say that an automaton is _monotone_ if it is monotone as a \(C\times\{\mathcal{N},\mathcal{F}\}\)-graph. Likewise, morphisms between automata are just morphisms of the corresponding \(C\times\{\mathcal{N},\mathcal{F}\}\)-graphs that moreover preserve the initial state. Note that \(A\to A^{\prime}\) implies \(\operatorname{L}(A)\subseteq\operatorname{L}(A^{\prime})\). A co-Buchi automaton is _deterministic_ if for each state \(q\in V(A)\) and each letter \(c\in C\) there is exactly one transition labelled by \(c\) outgoing from \(q\).
A _resolver_ for an automaton \(A\) is a deterministic automaton \(R\) with a morphism \(R\to A\). Note that the existence of this morphism implies that \(\operatorname{L}(R)\subseteq\operatorname{L}(A)\). Such a resolver is _sound_ if additionally \(\operatorname{L}(R)\supseteq\operatorname{L}(A)\) (and thus \(\operatorname{L}(R)=\operatorname{L}(A)\)). A co-Buchi automaton is _history-deterministic_ if there exists a sound resolver \(R\). Our definition of history-determinism is slightly non-standard, but it fits well with our overall use of morphisms and of possibly infinite automata. This point of view was also adopted by Colcombet (see [6, Definition 13]). For more details on history-determinism of co-Buchi automata, we refer to [21, 1, 28].
We often make use of the following simple lemma, which follows directly from the definitions and the fact that composing morphisms results in a morphism.
**Lemma 8**.: _Let \(A\), \(A^{\prime}\) be automata such that \(A\to A^{\prime}\), \(A\) is history-deterministic, and \(\operatorname{L}(A)=\operatorname{L}(A^{\prime})\). Then \(A^{\prime}\) is history-deterministic._
Say that an automaton \(A\) is _saturated_ if it has all possible co-Buchi transitions: \(V(A)\times(C\times\{\mathcal{F}\})\times V(A)\subseteq E(A)\). The _saturation_ of an automaton \(A\) is obtained from \(A\) by adding all possible co-Buchi transitions. Similar techniques of saturating co-Buchi automata have been previously used to study their structure [21, 16, 28].
Note that languages of saturated automata are always prefix-independent. The lemma below states that co-Buchi transitions are somewhat irrelevant in history-deterministic automata recognising prefix-independent languages.
**Lemma 9**.: _Let \(A\) be a history-deterministic automaton recognising a prefix-independent language and let \(A^{\prime}\) be its saturation. Then \(\operatorname{L}(A)=\operatorname{L}(A^{\prime})\) and \(A^{\prime}\) is history-deterministic. Moreover, \(\operatorname{L}(A^{\prime})=\operatorname{L}(A^{\prime},q)\) for any \(q\in V(A^{\prime})\)._
Proof.: Clearly \(A\to A^{\prime}\) thus \(\operatorname{L}(A)\subseteq\operatorname{L}(A^{\prime})\); it suffices to prove \(\operatorname{L}(A^{\prime})\subseteq\operatorname{L}(A)\) and conclude by Lemma 8. Let \(w_{0}w_{1}\dots\in\operatorname{L}(A^{\prime})\) and let \(q_{0}\xrightarrow{(w_{0},a_{0})}q_{1}\xrightarrow{(w_{1},a_{1})}\dots\) be an accepting path for \(w\) in \(A^{\prime}\). Then for some \(i\), \(q_{i}\xrightarrow{(w_{i},a_{i})}q_{i+1}\xrightarrow{(w_{i+1},a_{i+1})}\dots\) is comprised only of normal
transitions. Therefore, it is an accepting path in \(A\). We conclude that \(w_{i}w_{i+1}\cdots\in\mathrm{L}(A)\) and thus \(w\in\mathrm{L}(A)\) by prefix-independence.
The claim that \(\mathrm{L}(A^{\prime},q)\) is independent of \(q\) follows directly from prefix-independence and the fact that \(A^{\prime}\) is saturated.
## 3 Positional prefix-independent \(\mathbf{\Sigma}^{0}_{2}\) objectives
### A characterisation
Recall that \(\mathbf{\Sigma}^{0}_{2}\) objectives are countable unions of closed objectives; for the purpose of this paper it is convenient to observe that these are exactly those objectives recognised by (countable) deterministic co-Buchi automata (see for instance [30]).
The goal of the section is to prove Theorem 2, which we now restate for convenience.
**Theorem 2** (restated).: _Let \(W\subseteq C^{\omega}\) be a prefix-independent \(\mathbf{\Sigma}^{0}_{2}\) objective admitting a strongly neutral letter. Then \(W\) is positional over arbitrary arenas if and only if it is recognised by a countable history-deterministic well-founded monotone co-Buchi automaton._
Before moving on to the proof, we proceed with a quick technical statement that allows us to put automata in a slightly more convenient form.

**Lemma 10**.: _Let \(A\) be a history-deterministic automaton recognising a non-empty prefix-independent language. There exists a history-deterministic automaton \(A^{\prime}\) with \(\mathrm{L}(A^{\prime})=\mathrm{L}(A)\) and such that from every state \(q^{\prime}\in V(A^{\prime})\), there is an infinite path comprised only of normal transitions. Moreover, if \(A\) is countable, well founded, and monotone, then so is \(A^{\prime}\)._
Proof.: Let \(V\subseteq V(A)\) be the set of states \(q\in V(A)\) from which there is an infinite path of normal transitions. Note that \(V\neq\varnothing\) since \(\mathrm{L}(A)\) is non-empty. First, since every path from \(V(A)\setminus V\) visits at least one co-Buchi transition, we turn all normal transitions adjacent to states in \(V(A)\setminus V\) into co-Buchi ones; this does not affect \(\mathrm{L}(A)\) or history-determinism. Next, we saturate \(A\) and restrict it to \(V\). Call \(A^{\prime}\) the resulting automaton; if \(q_{0}\notin V\) then we pick the initial state \(q^{\prime}_{0}\) of \(A^{\prime}\) arbitrarily in \(V\). It is clear that restricting \(A\) to some subset of states, changing the initial state, as well as saturating, are operations that preserve being countable, well founded, and monotone.
We claim that \(\mathrm{L}(A)=\mathrm{L}(A^{\prime})\). The inclusion \(\mathrm{L}(A^{\prime})\subseteq\mathrm{L}(A)\) follows from the proof of Lemma 9, so we focus on the converse: let \(w=w_{0}w_{1}\cdots\in\mathrm{L}(A)\) and take an accepting path \(\pi\) for \(w\). Then there is a suffix of \(\pi\) which remains in \(V\) and therefore defines a path in \(A^{\prime}\); we conclude thanks to prefix-independence of \(\mathrm{L}(A^{\prime})\).
It remains to see that \(A^{\prime}\) is history-deterministic. For this, we observe that any transition adjacent to states in \(V(A)\setminus V\) is a co-Buchi transition; therefore the map \(\phi:V(A)\to V(A^{\prime})=V\) which is identity on \(V\) and sends \(V(A)\setminus V\) to the initial state of \(A^{\prime}\) defines a morphism \(A\to A^{\prime}\). We conclude by Lemma 8.
To prove Theorem 3, we separate both directions so as to provide more precise hypotheses. Let \(W\) be a prefix-independent \(\mathbf{\Sigma}^{0}_{2}\) objective admitting a strongly neutral letter. Then \(W\) is recognised by a countable history-deterministic monotone well-founded automaton.
Proof.: If \(W=\varnothing\) then the saturated automaton with a single state and no normal transitions gives the wanted result; therefore we assume \(W\) to be non-empty. Let \(A\) be a history-deterministic co-Buchi automaton recognising \(W\) with initial state \(q_{0}\); thanks to Lemma 10 we assume that every state in \(A\) participates in an infinite path of normal transitions. Let \(G\)
be the \(C\)-graph obtained from \(A\) by removing all the co-Buchi transitions. The fact that \(G\) is sinkless (and therefore, \(G\) is indeed a graph) follows from the assumption on \(A\). Since \(W\) is prefix-independent, it holds that \(G\) satisfies \(W\).
Apply the infinite structuration result (Lemma 7) to \(G\) to obtain a well-founded monotone graph \(G^{\prime}\) satisfying \(W\) and such that \(G\xrightarrow{\phi}G^{\prime}\). Note that we may restrict \(V(G^{\prime})\) to the image of \(\phi\). Due to the fact that \(C\) is countable, this guarantees that \(G^{\prime}\) is countable.
Now let \(A^{\prime}\) be the co-Buchi automaton obtained from \(G^{\prime}\) by turning every edge into a normal transition, setting the initial state to be \(q^{\prime}_{0}=\phi(q_{0})\), and saturating. Note that \(A^{\prime}\) is countable monotone and well-founded; we claim that \(A^{\prime}\) is history-deterministic and recognises \(W\), as required.
Let \(w\in\mathrm{L}(A^{\prime})\). Then \(w=uw^{\prime}\) where \(w^{\prime}\in\mathrm{L}(G^{\prime})\subseteq W\). It follows from prefix-independence that \(w\in W\). Conversely, let \(w_{0}w_{1}\dotsm\in W\) as witnessed by an accepting path \(\pi=q_{0}\xrightarrow{(w_{0},a_{0})}\)\(q_{1}\xrightarrow{(w_{1},a_{1})}\dotsm\) from \(q_{0}\) in \(A\). This path has only finitely many co-Buchi transitions.
Then consider the path \(\pi^{\prime}=\phi(q_{0})\xrightarrow{w_{0}}\phi(q_{1})\xrightarrow{w_{1}}\dotsm\) in \(A^{\prime}\), where we use co-Buchi transitions only when necessary, meaning when there is no normal transition \(\phi(q_{i})\xrightarrow{w_{i}}\phi(q_{i+1})\) in \(A^{\prime}\). Since \(\pi\) visits only finitely many co-Buchi transitions, it is eventually a path in \(G\), and thus since \(\phi\) is a morphism, \(\pi^{\prime}\) is eventually a path in \(G^{\prime}\), and hence it sees only finitely many co-Buchi transitions in \(A^{\prime}\). Hence \(\mathrm{L}(A^{\prime})=W\).
It remains to show that \(A^{\prime}\) is history-deterministic. But since \(A^{\prime}\) is saturated and \(G\to G^{\prime}\) we have \(A\to A^{\prime}\) and thus Lemma 8 concludes.
For the converse direction, we do not require a neutral letter.
**Lemma 12**.: _Let \(W\) be a prefix-independent objective recognised by a countable history-deterministic monotone well-founded co-Buchi automaton. Then \(W\) is positional over arbitrary arenas._
Proof.: As previously, if \(W\) is empty then it is trivially positional, so we assume that \(W\) is non-empty, and we take an automaton \(A\) satisfying the hypotheses above and apply Lemma 10 so that every state participates in an infinite path of normal transitions. Let \(U\) be the \(C\)-graph obtained from \(A\) by removing all co-Buchi transitions and turning normal transitions into edges; thanks to Lemma 10, \(U\) is sinkless so it is indeed a graph. We prove that \(U\) is almost \(W\)-universal for trees. Let \(T\) be a tree satisfying \(W\) and let \(t_{0}\) be its root.
Since \(A\) is history-deterministic, there is a mapping \(\phi:V(T)\to V(A)\) such that for each edge \(t\xrightarrow{c}t^{\prime}\in E(T)\), there is a transition \(\phi(t)\xrightarrow{(c,a)}\phi(t^{\prime})\) in \(A\) with some \(a\in\{\mathcal{N},\mathcal{F}\}\), and such that for all infinite paths \(t_{0}\xrightarrow{w_{0}}t_{1}\xrightarrow{w_{1}}\dotsm\) in \(T\), there are only finitely many co-Buchi transitions on the path \(\phi(t_{0})\xrightarrow{(w_{0},a_{0})}\phi(t_{1})\xrightarrow{(w_{1},a_{1})}\dotsm\) in \(A\).
\(\rhd\) Claim: There is a vertex \(t^{\prime}_{0}\in V(T)\) such that for all infinite paths \(t^{\prime}_{0}\xrightarrow{w_{0}}t^{\prime}_{1}\xrightarrow{w_{1}}\dotsm\) from \(t^{\prime}_{0}\) in \(T\), there is no co-Buchi transition on the path \(\phi(t^{\prime}_{0})\xrightarrow{w_{0}}\phi(t^{\prime}_{1})\xrightarrow{w_{1}}\dotsm\) in \(A\).
Proof.: Assume towards contradiction that no such vertex exists. Then starting from the root \(t_{0}\), we build an infinite path \(t_{0}\xrightarrow{w_{0}}t_{1}\xrightarrow{w_{1}}\dotsm\) in \(T\) such that \(\phi(t_{0})\xrightarrow{w_{0}}\phi(t_{1})\xrightarrow{w_{1}}\dotsm\) has infinitely many co-Buchi transitions in \(A\). Indeed, assuming the path built up to \(t_{i}\), we simply pick \(t_{i}\xrightarrow{w_{i}}t_{i+1}\) such that there is a co-Buchi transition in \(A\) on the corresponding path \(\phi(t_{i})\xrightarrow{w_{i}}\phi(t_{i+1})\). Thus, we constructed a path contradicting the above property of \(\phi\): this path has infinitely many co-Buchi transitions in \(A\).
It remains to observe that \(\phi\) maps \(T[t^{\prime}_{0}]\) to \(U\), and thus \(U\) is almost \(W\)-universal for trees. We conclude by applying Lemma 5.
### A few examples
**Kopczynski-monotonic objectives.** In our terminology, Kopczynski's monotonic objectives correspond to the prefix-independent languages that are recognised by finite monotone co-Buchi automata. Note that such automata are of course well-founded, but they are also history-deterministic (even determinisable by pruning): one should always follow a transition to a maximal state. Therefore our result proves that such objectives are positional over arbitrary arenas. A very easy example is the co-Buchi objective
\[\text{co-Büchi}=\{w\in\{\mathcal{N},\mathcal{F}\}^{\omega}\mid w\text{ has finitely many occurrences of }\mathcal{F}\},\]
which is recognised by a (monotone) automaton with a single state. Some more advanced examples are given in Figure 1.
**Finite support.** The finite support objective is defined over \(\omega\) by
\[\text{Finite}=\{w\in\omega^{\omega}\mid\text{ finitely many distinct letters appear in }w\}\]
Consider the automaton \(A\) over \(V(A)=\omega\) with
\[v\xrightarrow{w}v^{\prime}\in E(A)\iff w,v^{\prime}\leq v,\]
co-Buchi transitions everywhere, and initial state \(0\) (see Figure 2).
It is countable, history-deterministic, well-founded, and monotone and recognises \(\operatorname{L}(A)=\text{Finite}\). Details of the proof are easy and left to the reader. Positionality of Finite can also be established by Corollary 3, as it is a countable union of the safety languages \(F^{\omega}\subseteq\omega^{\omega}\), where \(F\) ranges over finite subsets of \(\omega\). As far as we are aware, this result is novel.3
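Since the text leaves the details to the reader, we spell out one natural resolver as an illustration (the specific choice is ours, not taken from the source): the resolver keeps as its state the maximum letter seen so far. Reading \(w\) from state \(v\), it takes the normal transition \(v\xrightarrow{w}v\) if \(w\leq v\), and otherwise a co-Buchi transition to \(w\). A word with finite support makes the state increase only finitely often, so only finitely many co-Buchi transitions are taken.

```python
def run_resolver(word_prefix):
    """Simulate the 'running maximum' resolver on a finite prefix of a word over omega.

    Returns the visited states and the number of co-Buchi transitions taken.
    """
    state, cobuchi = 0, 0
    states = [state]
    for w in word_prefix:
        if w <= state:
            # normal transition v --w--> v: allowed since w <= v and v <= v
            pass
        else:
            # no normal transition from state on w, escape with a co-Buchi transition
            state, cobuchi = w, cobuchi + 1
        states.append(state)
    return states, cobuchi

# A word using only the letters {0, 3, 5}: at most two co-Buchi transitions are taken.
print(run_resolver([0, 3, 1, 5, 2, 5, 3, 0, 5]))
```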
Figure 1: Two finite monotone co-Büchi automata recognising prefix-independent languages. For clarity, the co-Büchi transitions are not depicted but connect every pair of states; likewise, edges following from monotonicity (such as the dashed ones for example), are omitted. The automaton on the left recognises words with finitely many \(aab\) infixes. The automaton on the right recognises words with finitely many infixes in \(c(a^{*}cb^{*})^{+}c\).
Figure 2: An automaton \(A\) for objective Finite. Co-Büchi edges, as well as some edges following from monotonicity (such as the dashed one) are omitted for clarity.
**Energy objectives.** Recall the energy objective
\[\text{Bounded}=\Big{\{}w_{0}w_{1}\dots\in\mathbb{Z}^{\omega}\mid\sup_{k}\sum_{i=0 }^{k-1}w_{i}\text{ is finite}\Big{\}},\]
which is prefix-independent and belongs to \(\mathbf{\Sigma}^{0}_{2}\). Consider the automaton \(A\) whose set of states is \(\omega\), with the initial state \(0\) and with all possible co-Buchi transitions, and normal transitions of the form \(v\xrightarrow{w}v^{\prime}\) where \(w\leq v-v^{\prime}\). Note that \(A\) is well-founded and monotone, so we should prove that it is history-deterministic and recognises Bounded.
Note that any infinite path of normal edges \(v_{0}\xrightarrow{w_{0}}v_{1}\xrightarrow{w_{1}}\dots\) in \(A\) is such that for all \(i\), \(w_{i}\leq v_{i}-v_{i+1}\), and therefore
\[\sum_{i=0}^{k-1}w_{i}\leq v_{0}-v_{k}\leq v_{0}\]
and thus \(\operatorname{L}(A)\subseteq\text{Bounded}\).
A resolver for \(A\) works as follows: keep a counter \(c\) (initialised to zero), and along the run, from a vertex \(v\) and when reading a letter \(w\),
* if \(v\geq w\) then take the normal transition \(v\xrightarrow{w}v-w\);
* otherwise, take the co-Buchi transition \(v\xrightarrow{w}c\) and increment the counter.
A formal description of this resolver and a proof of its soundness are given in Appendix C.
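The sketch below is our own transcription of the two bullet points above, simulated on a finite prefix; it is only an illustration, the formal resolver and its soundness proof being in Appendix C of the text.

```python
def run_energy_resolver(weights, initial_state=0):
    """Simulate the counter-based resolver described above on a finite prefix.

    States are natural numbers; reading weight w from state v:
      - if v >= w, take the normal transition v --w--> v - w (valid since w <= v - v');
      - otherwise, take a co-Buchi transition to the current counter value
        and increment the counter.
    """
    state, counter, cobuchi = initial_state, 0, 0
    for w in weights:
        if state >= w:
            state = state - w      # normal transition
        else:
            state = counter        # co-Buchi transition
            counter += 1
            cobuchi += 1
    return state, cobuchi

# Prefix of a word whose partial sums stay bounded: few co-Buchi transitions are taken.
print(run_energy_resolver([1, -1, 2, -2, 1, 1, -3, 2, -2]))
```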
**Eventually non-increasing objective.** Over the alphabet \(\omega\), consider the objective
\[\text{ENI}=\big{\{}w_{0}w_{1}\dots\in\omega^{\omega}\mid\text{there are finitely many $i$ such that $w_{i+1}>w_{i}$}\big{\}}.\]
Note that since \(\omega\) is well-founded, a sequence belongs to ENI if and only if it is eventually constant. Consider the automaton \(A\) over \(\omega\) with the initial state \(0\), with all possible co-Buchi transitions, and with normal transitions \(v\xrightarrow{w}v^{\prime}\) if and only if \(v\geq w\geq v^{\prime}\). Note that \(A\) is countable, well-founded, and monotone, so we should prove that it recognises ENI and is history-deterministic.
First, note that any infinite path of normal edges \(v_{0}\xrightarrow{w_{0}}v_{1}\xrightarrow{w_{1}}\dots\) in \(A\) is such that \(v_{0}\geq w_{0}\geq v_{1}\geq w_{1}\geq\dots\), and therefore \(\operatorname{L}(A)\subseteq\text{ENI}\). A sound resolver for \(A\) simply goes to the state \(w\) when reading a letter \(w\), using a normal transition if possible, and a co-Buchi transition otherwise. We leave the formal definition to the reader.
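Again as an illustration (the formal definition is left to the reader in the text, so the code below is our own rendering), the resolver just described can be simulated on a finite prefix; for an eventually constant word it takes a co-Buchi transition only at the finitely many positions where the letter exceeds the current state.

```python
def run_eni_resolver(word_prefix, initial_state=0):
    """Simulate the resolver that always moves to the state equal to the letter read."""
    state, cobuchi = initial_state, 0
    for w in word_prefix:
        if state >= w:
            state = w                           # normal transition: v >= w >= v' with v' = w
        else:
            state, cobuchi = w, cobuchi + 1     # co-Buchi transition
    return state, cobuchi

# An eventually constant word: only finitely many co-Buchi transitions are needed.
print(run_eni_resolver([2, 5, 1, 4, 3, 3, 3, 3]))
```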
**Eventually non-decreasing objective.** In contrast, the objective
\[\text{END}=\{w_{0}w_{1}\dots\in\omega^{\omega}\mid\text{there are finitely many $i$ such that $w_{i+1}<w_{i}$}\}\]
is not positional over arbitrary arenas, as witnessed by Figure 3.
### Closure under countable unions
We now move on to Corollary 3, which answers Kopczyński's conjecture in the affirmative in the case of \(\mathbf{\Sigma}^{0}_{2}\) objectives.
If \(W_{0},W_{1},\dots\) are all positional prefix-independent \(\mathbf{\Sigma}^{0}_{2}\) objectives, each admitting a strongly neutral letter, then the union \(\bigcup_{i\in\mathbb{N}}W_{i}\) is also positional.
Proof.: Let \(W_{0},W_{1},\dots\) be a family of countably many prefix-independent \(\mathbf{\Sigma}_{2}^{0}\) objectives admitting strongly neutral letters. Using Theorem 2 we get countable history-deterministic well-founded monotone co-Buchi automata \(A_{0},A_{1},\dots\) for the respective objectives; without loss of generality we assume that they are saturated (Lemma 9).
Then consider the automaton \(A\) obtained from the disjoint union of the \(A_{i}\)'s by adding all possible co-Büchi transitions, and all normal transitions from \(A_{i}\) to \(A_{j}\) with \(i>j\). The initial state in \(A\) can be chosen arbitrarily. Note that \(A\) is well-founded, monotone, and countable, so it remains to prove that it recognises \(W=\bigcup_{i}W_{i}\) and is history-deterministic.
Note that any infinite path in \(A\) which visits finitely many co-Buchi transitions eventually remains in some \(A_{i}\), and thus by prefix-independence, \(\operatorname{L}(A)\subseteq W\).
It remains to prove history-determinism of \(A\). Let \(R_{0},R_{1},\dots\) be resolvers for \(A_{0},A_{1},\dots\) witnessing that these automata are history deterministic. Consider a resolver which stores a sequence of states \((r_{0},r_{1},\dots)\), with \(r_{i}\) being a state of \(R_{i}\). Initially these are all initial states of the respective resolvers and the transitions follow the transitions of all the resolvers synchronously. Additionally, we store a round-robin counter, which indicates one of the resolvers, following the sequence \(R_{0};R_{0},R_{1};R_{0},R_{1},R_{2};R_{0},R_{1},R_{2},R_{3};\dots\) If we see a normal transition in the currently indicated resolver, then we also see a normal transition in \(R\), and otherwise, we increment the counter to indicate the next resolver and see a co-Buchi transition in \(R\). (For completeness, we give a formal definition of \(R\) in Appendix B.)
It remains to prove that the above resolver is sound. For that, consider a word \(w\) which belongs to \(\operatorname{L}(A_{n})\) for some \(n\). Assume for the sake of contradiction that the path in \(A\) constructed by the above resolver reading \(w\) contains infinitely many co-Büchi transitions. Then the round-robin counter is incremented infinitely often, and since the index \(n\) appears infinitely often in the round-robin sequence, the resolver \(R_{n}\) takes infinitely many co-Büchi transitions in \(A_{n}\) while reading \(w\). But this contradicts the assumption that \(R_{n}\) is sound. We conclude that \(W\) is positional by applying Lemma 12.
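For intuition, the composite resolver can be sketched in code for finitely many component resolvers (the proof uses countably many; restricting the round-robin schedule to the available indices preserves the property that every index is indicated infinitely often, which is all the argument needs). The encoding of a component as a pair of an initial state and a step function is our own illustrative convention.

```python
from itertools import count

def round_robin(num):
    """The schedule R0; R0,R1; R0,R1,R2; ... restricted to `num` resolvers.
    The only property the soundness argument uses is that every index is
    indicated infinitely often, which the restriction preserves."""
    for n in count(1):
        yield from range(min(n, num))

def composite_run(resolvers, prefix):
    """Illustrative simulation of the composite resolver: all components
    advance synchronously on each letter; the composite takes a normal edge
    when the currently indicated component does, and otherwise advances the
    round-robin pointer and takes a co-Buchi edge.  Each component is a pair
    (initial_state, step) with step(state, letter) -> (new_state, is_cobuchi)."""
    states = [init for init, _ in resolvers]
    schedule = round_robin(len(resolvers))
    indicated = next(schedule)
    cobuchi = 0
    for letter in prefix:
        moves = [step(s, letter) for s, (_, step) in zip(states, resolvers)]
        states = [new for new, _ in moves]
        if moves[indicated][1]:        # indicated component saw co-Buchi
            indicated = next(schedule)
            cobuchi += 1
    return cobuchi

# Two toy components, purely to exercise the scheduling: the first flags
# co-Buchi on odd letters, the second on even letters.
odd = (0, lambda s, w: (0, w % 2 == 1))
even = (0, lambda s, w: (0, w % 2 == 0))
print(composite_run([odd, even], [2, 4, 6, 8, 2, 4]))   # -> 0
```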
## 4 From finite to arbitrary arenas
In this section we study the difference between positionality over finite versus arbitrary arenas.
### Mean-payoff games
There are, in fact, four non-isomorphic variants of the mean-payoff objective. Three of them fail to be positional over arbitrary arenas (even over bounded degree arenas), as expressed by the following facts.
**Proposition 14**.: _The mean-payoff objective Mean-Payoff\({}_{\leq 0}\) over \(w_{0}w_{1}\dots\in\mathbb{Z}^{\omega}\) with the condition \(\limsup_{k}\frac{1}{k}\sum_{i=0}^{k-1}w_{i}\leq 0\) is not positional over arbitrary arenas._
Figure 3: An arena over which Eve requires a non-positional strategy in order to produce a sequence which is eventually non-decreasing.
Proof.: Consider the arena depicted on Figure 4. Eve can win by following bigger and bigger loops which reach arbitrarily far to the right. This strategy brings the average of the weights closer and closer to \(0\).
However, each positional strategy of Eve either moves infinitely far to the right (resulting in \(\lim_{k}\frac{1}{k}\sum_{i=0}^{k-1}w_{i}=1\)) or repeats some finite loop which results in a fixed positive limit \(\lim_{k}\frac{1}{k}\sum_{i=0}^{k-1}w_{i}>0\). In both cases it violates Mean-Payoff\({}_{\leq 0}\).
Consider two \(\liminf\) variants of the mean-payoff objective over \(w_{0}w_{1}\cdots\in\mathbb{Z}^{\omega}\): one where we require that \(\liminf_{k}\frac{1}{k}\sum_{i=0}^{k-1}w_{i}\leq 0\), and the other where that same quantity is \(<0\). Neither of these objectives is positional over arbitrary arenas.
Proof.: Consider the arena depicted on Figure 5. Again, Eve has a winning strategy for both these objectives by always going sufficiently far to the left, to ensure that the average drops below for instance \(-\frac{1}{2}\).
However, each positional strategy of Eve either moves infinitely far to the left (resulting again in \(\lim_{k}\frac{1}{k}\sum_{i=0}^{k-1}w_{i}=1\)), or repeats some finite loop, reaching a minimal negative weight \(-2^{n}\) for some \(n>0\). Now, Adam can win against this strategy by repeating a loop going to the right, in such a way to reach a weight \(2^{n+1}\). The label of such a path satisfies \(\lim_{k}\frac{1}{k}\sum_{i=0}^{k-1}w_{i}=\frac{2^{n+1}-1}{4n+4}>0\), violating both objectives.
The remaining fourth variant of the mean-payoff objective is "\(\limsup<0\)":
\[\text{Mean-Payoff}_{<0}=\Big{\{}w_{0}w_{1}\cdots\in\mathbb{Z}^{\omega}\mid \limsup_{k}\frac{1}{k}\sum_{i=0}^{k-1}w_{i}<0\Big{\}}.\]
The objective Mean-Payoff\({}_{<0}\) is positional over arbitrary arenas.
Proof.: Consider the tilted boundedness objective with parameter \(n\geq 1\), defined as
\[\text{Tilted-Bounded}_{n}=\Big{\{}w_{0}w_{1}\cdots\in\mathbb{Z}^{\omega}\mid \sup_{k}\sum_{i=0}^{k-1}(w_{i}+1/n)\text{ is finite}\Big{\}}\]
Figure 4: The arena used in the proof of Proposition 14.
Figure 5: The arena used in the proof of Proposition 15.
Note that renaming weights by \(w\mapsto nw+1\) maps \(\mathrm{Tilted-Bounded}_{n}\) to \(\mathrm{Bounded}\cap(n\mathbb{Z}+1)^{\omega}\), therefore it follows easily that \(\mathrm{Tilted-Bounded}_{n}\) is positional over arbitrary arenas. Note also that for every \(n\) the objective \(\mathrm{Tilted-Bounded}_{n}\) belongs to \(\mathbf{\Sigma}_{2}^{0}\), as a union ranging over \(N\in\mathbb{N}\) of closed (in other words safety) objectives \(\big{\{}w_{0}w_{1}\cdots\in\mathbb{Z}^{\omega}\mid\forall_{k\in\mathbb{N}}\sum_{i=0}^{k-1}(w_{i}+1/n)\leq N\big{\}}\).
It holds that \(\mathrm{Mean-Payoff}_{<0}=\bigcup_{n\geq 1}\mathrm{Tilted-Bounded}_{n}\).
Proof of Claim 17.: Write \(\mathrm{mp}(w)=\limsup_{k}1/k\sum_{i=0}^{k-1}w_{i}\). If
\[w=w_{0}w_{1}\cdots\in\mathrm{Tilted-Bounded}_{n}\]
then there is a bound \(N\) such that for all \(k\), \(\sum_{i=0}^{k-1}(w_{i}+1/n)\leq N\), therefore \(1/k\sum_{i=0}^{k-1}w_{i}\leq N/k-1/n\) and thus \(\mathrm{mp}(w)\leq-1/n<0\), so \(w\in\mathrm{Mean-Payoff}_{<0}\). Conversely, if \(w\in\mathrm{Mean-Payoff}_{<0}\) and \(n\) is large enough so that \(1/n<-\mathrm{mp}(w)\) (such an \(n\) exists since \(\mathrm{mp}(w)<0\)), then \(\sum_{i=0}^{k-1}(w_{i}+1/n)=\sum_{i=0}^{k-1}w_{i}+k/n\to-\infty\), so in particular these partial sums are bounded above, and \(w\in\mathrm{Tilted-Bounded}_{n}\).
Now, positionality of \(\mathrm{Mean-Payoff}_{<0}\) follows from the claim together with Corollary 3, as all \(\mathrm{Tilted-Bounded}_{n}\) are prefix-independent, admit a strongly neutral letter, are positional, and belong to \(\mathbf{\Sigma}_{2}^{0}\).4
Footnote 4: We thank Lorenzo Clemente for suggesting to use closure under union. A direct proof (constructing a universal graph) is available in the unpublished preprint [26].
### A completeness result
Equivalence over finite arenas. Recall that two prefix-independent objectives \(W,W^{\prime}\subseteq C^{\omega}\) are said to be _finitely equivalent_, written \(W\equiv W^{\prime}\), if for all finite \(C\)-arenas \(A\),
\[\mathrm{Eve\ wins}\ (A,W)\quad\iff\quad\mathrm{Eve\ wins}\ (A,W^{\prime}).\]
Since one may view strategies as games controlled by Adam, we obtain the following motivating result.
If \(W\equiv W^{\prime}\) and \(W\) is positional over finite arenas then so is \(W^{\prime}\).
Proof.: Let \(A\) be a finite \(C\)-arena such that Eve wins \((A,W^{\prime})\). Then Eve wins \((A,W)\), so she wins with a positional strategy \(S\). Looking at \(S\) as a finite \(C\)-arena controlled by Adam yields that Eve wins \((S,W^{\prime})\), thus \(S\) satisfies \(W^{\prime}\) and is therefore a positional winning strategy for Eve in \((A,W^{\prime})\).
We now move on to the proof of our completeness result.
Let \(W\subseteq C^{\omega}\) be a prefix-independent objective which is positional over finite arenas and admits a weakly neutral letter. Then there exists an objective \(W^{\prime}\equiv W\) which is positional over arbitrary arenas.
We start with the following observation, which is a standard topological argument based on König's lemma. Its proof is given in Appendix D. Note that the assumption of finiteness of \(G\) is essential here.
Let \(G\) be a finite \(C\)-graph and \(v\in G\). Then \(\mathrm{L}(G,v)\) is a closed subset of \(C^{\omega}\).
We may now give the crucial definition. Given a prefix-independent objective \(W\subseteq C^{\omega}\), we define its finitary substitute to be
\[W_{\mathrm{fin}}=\{w\in C^{\omega}\mid w\text{ labels a path in some finite graph $G$ which satisfies $W$}\}.\]
Note that \(W_{\text{fin}}\subseteq W\). Now observe that
\[W_{\text{fin}}=\bigcup_{\begin{subarray}{c}G\text{ finite graph}\\ G\text{ satisfies }W\end{subarray}}\operatorname{L}(G)=\bigcup_{\begin{subarray}{c}G\text{ finite graph satisfying }W\\ v\in V(G)\end{subarray}}\operatorname{L}(G,v),\]
and since there are (up to isomorphism) only countably many finite graphs, it follows from Lemma 3 that \(W_{\text{fin}}\in\mathbf{\Sigma}_{2}^{0}\).
Let \(W\subseteq C^{\omega}\) be a prefix-independent objective which is positional over finite arenas. Then \(W_{\text{fin}}\equiv W\).
Proof.: Let \(A\) be a finite \(C\)-arena. Since \(W_{\text{fin}}\subseteq W\), it is clear that if Eve wins \((A,W_{\text{fin}})\) then she wins \((A,W)\). Conversely, assume Eve wins \((A,W)\). Then she has a positional strategy \(S\) in \(A\) which is winning for \(W\). Since \(S\) is a finite graph, it is also winning for \(W_{\text{fin}}\) and therefore Eve wins \((A,W_{\text{fin}})\).
We should make the following sanity check.
If \(W\) is prefix-independent, then so is \(W_{\text{fin}}\).
Proof.: Take a letter \(c\in C\); we aim to show that, for every word \(w\), \(cw\in W_{\text{fin}}\) if and only if \(w\in W_{\text{fin}}\). Let \(w\) be such that \(cw\in W_{\text{fin}}\), and let \(G\) be a finite graph satisfying \(W\) such that \(cw\) labels a path from \(v\in V[G]\) in \(G\). Then \(w\) labels a path from a \(c\)-successor of \(v\) in \(G\), thus \(w\in W_{\text{fin}}\).
Conversely, let \(w\in W_{\text{fin}}\), and let \(G\) be a finite graph satisfying \(W\) such that \(w\) labels a path from \(v\in V[G]\) in \(G\). Let \(G^{\prime}\) be the graph obtained from \(G\) by adding a fresh vertex \(v^{\prime}\) with a unique outgoing \(c\)-edge towards \(v\). Since \(W\) is prefix-independent, \(G^{\prime}\) satisfies \(W\). Since \(cw\) labels a path from \(v^{\prime}\) in \(G^{\prime}\), it follows that \(cw\in W_{\text{fin}}\).
We are now ready to prove Theorem 3.
Proof of Theorem 3.: Let \(W\) be a prefix-independent objective which is positional over finite arenas and admits a weakly neutral letter \(\varepsilon\). We show that \(W_{\text{fin}}\) is positional over arbitrary arenas; together with Lemma 3, which gives \(W_{\text{fin}}\equiv W\), this concludes the proof of Theorem 3.
Thanks to Lemma 3, any finite graph \(H\) satisfying \(W\) can be embedded into a monotone finite graph \(G\) which also satisfies \(W\); note that \(\operatorname{L}(H)\subseteq\operatorname{L}(G)\). Therefore
\[W_{\text{fin}}=\bigcup_{\begin{subarray}{c}H\text{ finite graph}\\ H\text{ satisfies }W\end{subarray}}\operatorname{L}(H)=\bigcup_{ \begin{subarray}{c}G\text{ finite monotone graph}\\ G\text{ satisfies }W\end{subarray}}\operatorname{L}(G).\]
Let \(G_{0},G_{1},\dots\) be an enumeration (up to isomorphism) of all finite monotone graphs satisfying \(W\). Then consider the automaton \(A\) obtained from the disjoint union of the \(G_{i}\)'s by adding all normal transitions from \(G_{i}\) to \(G_{j}\) for \(i>j\), and saturating with co-Büchi transitions. The initial state \(q_{0}\) is chosen to be \(\max V(G_{0})\), the maximal state in \(G_{0}\). Note that \(A\) is countable, monotone, and well-founded, so it remains to prove that \(\operatorname{L}(A)=W_{\text{fin}}\) and that \(A\) is history-deterministic.
Clearly for any monotone graph \(G\) satisfying \(W\), it holds that \(\operatorname{L}(G)\subseteq\operatorname{L}(A)\), and thus \(W_{\text{fin}}\subseteq\operatorname{L}(A)\). Conversely, let \(w\in\operatorname{L}(A)\), and consider an accepting run \(\pi\) of \(A\) over \(w\). Then eventually, \(\pi\) visits only normal edges, and therefore eventually, \(\pi\) remains in some \(G_{i}\). Thus \(w=uw^{\prime}\) with \(w^{\prime}\in\operatorname{L}(G_{i})\subseteq W_{\text{fin}}\); we conclude by prefix-independence of \(W_{\text{fin}}\) (Lemma 3).
To prove that \(A\) is history-deterministic we now build a resolver: intuitively, we deterministically try to read in \(G_{0}\), then if we fail, go to \(G_{1}\), then \(G_{2}\) and so on. The fact that reading
in each \(G_{i}\) can be done deterministically follows from monotonicity: for each \(v\in V(G_{i})\) and each \(c\in C\), the set \(\{v^{\prime}\in V(G_{i})\mid v\xrightarrow{c}v^{\prime}\in E(G_{i})\}\) of \(c\)-successors of \(v\) is downward closed. We let \(\delta_{i}(v,c)\) denote the maximal \(c\)-successor of \(v\) in \(G_{i}\) if it exists, and \(\delta_{i}(v,c)=\bot\) if \(v\) does not have a \(c\)-successor. It is easy to see that in a monotone graph \(G\), \(v\leq v^{\prime}\) implies \(\operatorname{L}(G,v)\subseteq\operatorname{L}(G,v^{\prime})\); in words, more continuations are available from bigger states.
Now we define the resolver \(R\) by \(V(R)=V(A)\), \(r_{0}=q_{0}=\max V(G_{0})\) and for any \(q,q^{\prime}\in V(A)\) and \(c\in C\),
\[\begin{array}{lcl}q\xrightarrow{c}q^{\prime}\text{ is a normal edge of }R&\iff&\exists i,\ q,q^{\prime}\in V(G_{i})\text{ and }q^{\prime}=\delta_{i}(q,c)\neq\bot\\ q\xrightarrow{c}q^{\prime}\text{ is a co-Büchi edge of }R&\iff&\exists i,\ q\in V(G_{i})\text{ and }\delta_{i}(q,c)=\bot\text{ and }q^{\prime}=\max V(G_{i+1}).\end{array}\]
Clearly \(R\) is deterministic and \(R\to A\) so it is indeed a resolver; it remains to prove soundness. Take \(w\in\operatorname{L}(A)\) and let \(i\) be such that \(w\in\operatorname{L}(G_{i})\). Let \(\pi\) denote the unique path from \(r_{0}=\max V(G_{0})\) in \(R\) labelled by \(w\). We claim that \(\pi\) remains in \(\bigcup_{j\leq i}V(G_{j})\) and therefore it can only visit at most \(i\) co-Büchi transitions, so it is accepting. Assume for contradiction that \(\pi\) reaches \(V(G_{i+1})\).
Then it is of the form \(\pi=\pi_{0}\pi_{1}\ldots\pi_{i}\pi^{\prime}\) where each \(\pi_{j}\) is a path from \(\max(V(G_{j}))\) in \(G_{j}\) and \(\pi^{\prime}\) starts from \(\max(G_{i+1})\). Let \(w_{0},w_{1},\ldots,w_{i}\) and \(w^{\prime}\) be the words labelling the paths, so that \(w=w_{0}w_{1}\ldots w_{i}w^{\prime}\). Denote \(q=\max(V(G_{i}))\). Then \(w_{i}\) is not a label of a finite path from \(q\) in \(G_{i}\), therefore \(w_{i}w^{\prime}\notin\operatorname{L}(G_{i},q)=\operatorname{L}(G_{i})\). However \(w\in\operatorname{L}(G_{i})\) thus \(q\stackrel{{ w_{0}\ldots w_{i-1}}}{{\leadsto}}q^{\prime}\stackrel{{ w_{i}w^{\prime}}}{{\leadsto}}\) for some \(q^{\prime}\in V(G_{i})\). But then \(w_{i}w^{\prime}\in L(G_{i},q^{\prime})\subseteq L(G_{i},q)\), a contradiction.
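The resolver built in this proof is essentially a greedy reading procedure, which the following sketch illustrates for a finite list of monotone graphs; the dictionary-based graph encoding is ours, and the sketch assumes the input word can be read before the list of graphs is exhausted.

```python
def max_successor(edges, v, c):
    """Largest c-successor of v, or None.  `edges` maps (state, letter) to an
    iterable of successors; in a monotone graph this set is downward closed,
    so keeping only its maximum loses no continuations."""
    succ = edges.get((v, c), ())
    return max(succ) if succ else None

def wfin_resolver_run(graphs, word):
    """Illustrative run of the resolver above for a finite list of monotone
    graphs, each given as (states, edges): read greedily in G_i via maximal
    successors and fall through to max V(G_{i+1}) -- the only co-Buchi
    transitions ever taken -- when no successor exists."""
    i, v, cobuchi = 0, max(graphs[0][0]), 0
    for c in word:
        nxt = max_successor(graphs[i][1], v, c)
        if nxt is None:                  # stuck in G_i: co-Buchi transition
            i, cobuchi = i + 1, cobuchi + 1
            v = max(graphs[i][0])
        else:
            v = nxt                      # normal transition inside G_i
    return cobuchi
```

The greedy choice of the maximal successor is what makes the reading deterministic while losing no words, by the monotonicity property recalled above.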
## 5 Conclusion
We gave a characterisation of prefix-independent \(\mathbf{\Sigma}^{0}_{2}\) objectives which are positional over arbitrary arenas as being those recognised by countable history-deterministic well-founded monotone co-Büchi automata. We moreover deduced that this class is closed under countable unions. We proved that, with a proper definition, mean-payoff games are positional over arbitrary arenas. Finally, we showed that any prefix-independent objective which is positional over finite arenas is finitely equivalent to an objective which is positional over arbitrary arenas.
Open questions. There are many open questions on positionality. Regarding \(\mathbf{\Sigma}^{0}_{2}\) objectives, the remaining step would be to lift the prefix-independence assumption; this requires some new techniques, as the proofs presented here do not immediately adapt to this case. Another open question is whether the 1-to-2 player lift holds in \(\mathbf{\Sigma}^{0}_{2}\): is there a \(\mathbf{\Sigma}^{0}_{2}\) objective which is positional on arenas controlled by Eve, but not on two-player arenas?
As mentioned in the introduction, Casares [4] obtained a characterisation of positional \(\omega\)-regular objectives, while we characterised (prefix-independent) \(\mathbf{\Sigma}^{0}_{2}\) positional objectives. A common generalisation, which we see as a far-reaching open question, would be to characterise positionality within \(\mathbf{\Delta}^{0}_{3}\), hopefully establishing closure under union for this class.
Another interesting direction would be to understand finite memory for prefix-independent \(\mathbf{\Sigma}^{0}_{2}\) objectives; useful tools (such as structuration results) are already available [5]. A related (but independent) path is to develop a better understanding of (non-prefix-independent) closed objectives, which so far has remained elusive. |
2302.14659 | The restframe ultraviolet of superluminous supernovae -- I. Potential as
cosmological probes | Superluminous supernovae (SLSNe) have been detected to $z\sim4$ and can be
detected to $z\gtrsim15$ using current and upcoming facilities. SLSNe are
extremely UV luminous, and hence objects at $z\gtrsim7$ are detected
exclusively via their rest-frame UV using optical and infrared facilities.
SLSNe have great utility in multiple areas of stellar and galactic evolution.
Here, we explore the potential use of SLSNe type-I as high-redshift
cosmological distance indicators in their rest-frame UV. Using a SLSNe-I sample
in the redshift range $1\lesssim z\lesssim 3$, we investigate correlations
between the peak absolute magnitude in a synthetic UV filter centered at 250 nm
and rise time, colour and decline rate of SLSNe-I light curves. We observe a
linear correlation between $M_0(250)$ and the rise time with an intrinsic
scatter of 0.29. Interestingly, this correlation is further tightened
($\sigma_{int} \approx 0.2$) by eliminating those SLSNe which show a pre-peak
bump in their light curve. This result hints at the possibility that the
"bumpy" SLSNe could belong to a different population. Weak correlations are
observed between the peak luminosity and colour indices. No relationship is
found between UV peak magnitude and the decline rate in contrast to what is
typically found in optical band. The correlations found here are promising, and
give encouraging insights for the use of SLSNe as cosmological probes at high
redshifts using standardising relations in the UV. We also highlight the
importance of early, and consistent, photometric data for constraining the
light curve properties. | Nandita Khetan, Jeff Cooke, Marica Branchesi | 2023-02-28T15:28:33Z | http://arxiv.org/abs/2302.14659v1 | # The restframe ultraviolet of superluminous supernovae - I. Potential as cosmological probes
###### Abstract
Superluminous supernovae (SLSNe) have been detected to \(z\sim 4\) and can be detected to \(z\gtrsim 15\) using current and upcoming facilities. SLSNe are extremely UV luminous, and hence objects at \(z\gtrsim 7\) are detected exclusively via their rest-frame UV using optical and infrared facilities. SLSNe have great utility in multiple areas of stellar and galactic evolution. Here, we explore the potential use of SLSNe type-I as high-redshift cosmological distance indicators in their rest-frame UV. Using a SLSNe-I sample in the redshift range \(1\lesssim z\lesssim 3\), we investigate correlations between the peak absolute magnitude in a synthetic UV filter centered at 250 nm and rise time, colour and decline rate of SLSNe-I light curves. We observe a linear correlation between \(M_{0}(250)\) and the rise time with an intrinsic scatter of 0.29. Interestingly, this correlation is further tightened (\(\sigma_{int}\approx 0.2\)) by eliminating those SLSNe which show a pre-peak bump in their light curve. This result hints at the possibility that the "bumpy" SLSNe could belong to a different population. Weak correlations are observed between the peak luminosity and colour indices. No relationship is found between UV peak magnitude and the decline rate in contrast to what is typically found in optical band. The correlations found here are promising, and give encouraging insights for the use of SLSNe as cosmological probes at high redshifts using standardising relations in the UV. We also highlight the importance of early, and consistent, photometric data for constraining the light curve properties.
keywords: supernovae: general - cosmology: distance scale - ultraviolet: general
## 1 Introduction
The newer generation of wide-format time-domain surveys over the past 15 years has discovered a rare class of highly luminous transients termed "superluminous supernovae" (SLSNe). These events are 10-100 times brighter at peak compared to classical type Ia and core-collapse supernova events, with total radiated energies of about \(10^{51}\) ergs (e.g., Smith and McCray, 2007; Pastorello et al., 2010; Gal-Yam, 2012; Quimby et al., 2018; Angus et al., 2019). SLSNe are exceptionally blue events and are characterised by slowly evolving light curves that remain optically detectable, within several magnitudes from peak, for 100s of days. SLSNe (type I) have been observed to have a preference to occur in low-metallicity, star-forming dwarf galaxies (Lunnan et al., 2014; Leloudas et al., 2015; Perley et al., 2016; Schulze et al., 2018; Hatsukade et al., 2018). A recent comprehensive review of SLSNe is given by Gal-Yam (2019).
Previously, SLSNe were primarily defined by an arbitrary peak absolute magnitude cutoff of \(M<-21\) mag in optical filters (Gal-Yam, 2012); however, fainter events have since been discovered which show similar spectroscopic and photometric behaviour (De Cia et al., 2018; Lunnan et al., 2018; Angus et al., 2019). Therefore, this limit has been relaxed and SLSNe are now identified based on their unique spectral properties (Quimby et al., 2018). SLSNe are UV-luminous explosions, with the majority of their spectral energy distribution (SED) emitted in the UV. This, combined with their extreme luminosities, makes SLSNe detectable up to very high redshifts (to \(z\sim 20\)) with current and upcoming optical and near-infrared space- and ground-based telescopes. Moreover, studies detecting SLSNe at \(z\sim 1.5\)-4 suggest that their rate is higher than that at lower redshifts (Neill et al., 2011; Cooke et al., 2012; Howell et al., 2013; Prajs et al., 2017). Therefore, SLSNe offer an appealing tool to study the high redshift Universe.
Investigations of SLSN rest-frame UV light curves and key features found in their UV spectra help to understand their progenitors, explosion mechanisms, and physics behind their enormous energies. Moreover, high redshift SLSNe and their rest-frame UV
emission make it possible to investigate both stellar and galactic evolution and physics. SLSNe are typically brighter near peak magnitude than their host galaxies and are one of our only means to probe the \(z\gtrsim 10\) Universe. As bright background beacons, SLSNe provide internal probes of their host proto-galaxies and the intervening material in absorption in the circumgalactic medium (CGM) and the intergalactic medium (IGM) along the line of sight. SLSN detections can trace star formation in high redshift dwarf galaxies (and potentially arrest star formation), which are believed to have contributed the most to cosmic reionisation. SLSN number-counts in well-defined volumes, as is done with SLSN detection, can place direct constraints on the high-mass end of the stellar initial mass function. SLSN ejecta and pre-explosion mass loss lend insight into the chemical enrichment of their host galaxies and the CGM and IGM over cosmic time. Finally, the very high redshifts at which SLSNe can be detected enable the study of the deaths of Population III stars, while providing our best chance to detect pair-instability supernovae. Here, we explore another potential utility of SLSNe as standardisable candles to probe the Universe from \(z\sim 0\)-20.
Reviewing the diversity in the population to date, SLSNe have been broadly classified into two classes (Gal-Yam, 2012) based on their optical spectroscopic and photometric properties. SLSNe type I (SLSNe-I) are hydrogen poor and exhibit a blue continuum, with a distinctive "W"-shaped or 'comb'-shaped feature from OII absorption around \(\sim 4200\) A during early epochs. At later times, the SLSN-I spectrum transforms into a SN Ic-like spectrum (Pastorello et al., 2010; Quimby et al., 2011). SLSNe type II (SLSNe-II), on the other hand, show hydrogen emission lines and are likely related to Type IIn supernovae. The energy source for most SLSNe-II has been modelled as ejecta interaction with hydrogen-rich circumstellar material (CSM; Smith and McCray, 2007; Ofek et al., 2014; Benetti et al., 2014; Inserra et al., 2018). However, the power engine of SLSNe-I remains under debate, as radioactive decay of several solar masses of nickel fails to fully explain their light curve evolution and points to additional central energy input. Some proposed mechanisms include: central engine models, such as magnetar spin-down (e.g., Kasen and Bildsten, 2010; Woosley, 2010), the pair-instability process for stars with massive cores (Kasen et al., 2011; Kozyreva et al., 2017), and fallback accretion onto a black hole, which can also explain observed light curve undulations (Dexter and Kasen, 2013; Kasen et al., 2016). Another interesting feature that has recently been brought to light by Nicholl and Smartt (2016) is the presence of a small 'bump' before the main peak in the light curves of some of the observed SLSNe-I (e.g., Leloudas et al., 2012; Nicholl et al., 2015; Smith et al., 2016; Angus et al., 2019). In this work, we focus on SLSNe type I, including those events with a pre-peak bump.
Although the physical understanding of the SLSNe-I explosion mechanisms and progenitor scenarios is still emerging, their fairly homogeneous observational behaviour has attracted significant attention for their potential use as cosmological probes for the local to high redshift universe (King et al., 2014; Inserra and Smartt, 2014; Wei et al., 2015; Scovacricchi et al., 2016; Inserra et al., 2020). Driven by the fact that they show a relatively small dispersion in their peak optical magnitudes (Quimby et al., 2013), SLSNe-I have recently been proposed as standardisable distance indicators to constrain cosmological parameters. Inserra and Smartt (2014) (hereafter, IS14) studied this prospect for the first time with a sample of 13 SLSNe-I over the redshift range of \(0.1<z<1.2\) to develop a method of standardisation analogous to SNe Ia. They find a linear relationship between the peak absolute magnitude and decline rate of the light curves (over 10, 20, and 30 days after peak), measured in a synthetic filter centred at 400 nm. This correlation reduces the scatter in peak magnitudes from \(\sim 0.4\) mag to around 0.25 mag (see also Papadopoulos et al., 2015). They also find a similar relation with the change in colour of SLSNe over 30 days after maximum. This work gives a promising proof-of-concept that SLSNe-I could be standardised for measuring distances and that larger data samples could increase the accuracy to be competitive with SNe Ia. However, De Cia et al. (2018) explored similar correlations with various light curve properties of their sample of SLSNe-I but they did not confirm the above results. Recently, Inserra et al. (2020) (hereafter, I20) built an updated sample and used a novel technique that classifies SLSNe-I based on their photometric properties in a 4-dimensional parameter space (Inserra et al., 2018). With this more homogenised sample, I20 obtained similar scatters as IS14 for the decline rate-magnitude and colour-magnitude relationships, thus further encouraging the exploration of SLSNe as standardisable candles.
Overall, attempts to standardise SLSNe have been promising; however, current small data samples greatly hamper these efforts. Larger data sets are necessary not only to reduce statistical errors but also to understand the population diversity and their physics to consequently reduce systematic uncertainties. Scovacricchi et al. (2016) showed that even an addition of \(\sim\)100 SLSNe-I to current SNe Ia samples could significantly improve the cosmological constraints by extending the Hubble diagram into the deceleration epoch of the Universe (i.e. \(z>1\)). Upcoming transient surveys are expected to significantly increase the numbers and the redshift range of the detected SLSNe. I20 predict the detection of \(\sim\)900 SLSNe-I with the Vera C. Rubin Observatory1 in optical-NIR filters (_ugrizy_) to redshift \(z\sim 4\), with a majority of the detections around redshift 2. Villar et al. (2018) also performed simulations of SLSNe-I for LSST and predict a 10 times larger number (\(10^{4}\) SLSNe per year) with most (90%) detections at \(z\lesssim 3\). It is expected that such large samples would constrain \(\Omega_{M}\) and \(w\) to 2% and 4%, respectively (Scovacricchi et al., 2016). Inserra et al. (2018) show that the _Euclid_ satellite2 should detect approximately 140 (lower limit) high-quality SLSNe-I to \(z\sim 3.5\) over the first five years of the mission. The Nancy Grace Roman Space Telescope3 will also perform deep wide IR surveys that can detect SLSNe to \(z\sim 13\). Finally, the high sensitivity and long wavelength range of the _James Webb Space Telescope_ (JWST) could enable a powerful survey for high-redshift transients and SLSN detections to \(z\sim 20\) in the near- and mid-IR bands (e.g., Wang et al., 2017). However, since JWST has a relatively small FOV, it will be highly useful for acquisition of spectra of high redshift SLSNe (and their host galaxies) detected by other facilities.
Footnote 1: Legacy Survey of Space and Time (LSST) of the Vera Rubin Observatory is an 8.2 m (\(\sim\)6.7 m effective) diameter telescope with a 9.6 deg\({}^{2}\) field-of-view and will conduct several 10-year wide-field surveys across the Southern hemisphere in the _ugrizy_ filters.
Footnote 2: _Euclid_ is a 1.2 m optical and near-infrared (NIR) satellite (550 - 2000 nm) designed to probe the early Universe (Laureijs et al., 2011). Euclid Deep Survey (EDS) is particularly suited for long SLSNe light curves.
Footnote 3: The Roman Space Telescope is an infrared observatory based on a 2.4 m primary mirror. One of its two instruments is the Wide-Field Instrument (WFI), which has a 300-megapixel infrared camera giving it a field of view a hundred times larger than that of the Hubble Space Telescope.
For the redshifts investigated here, i.e., \(z\sim 1\)-3, that span the peak of cosmic star formation, current and future optical and NIR surveys will observe the SLSN rest-frame far-UV (FUV) and near-UV (NUV) emission. For example, the optical filters \(g\), \(r\) and \(i\) of the Rubin Observatory will detect \(\sim 160\) nm, 200 nm and 250 nm central rest-frame wavelengths, respectively, for an object at \(z=2\). At higher redshifts, even NIR telescopes will sample rest-frame
UV wavelengths for \(z\gtrsim 6\) SLSNe. In terms of using SLSNe as cosmological probes, the major advantage is their detectability at redshifts beyond those possible for SNe Ia, i.e., redshifts \(\gtrsim 1.5\). Thus, not only would SLSNe be used as a secondary check on SNe Ia results at \(z\lesssim 2\), but SLSNe would complement SNe Ia by extending the Hubble diagram to potentially \(z\sim 10\) and higher, enabling a distinction between various dark energy models well past the deceleration epoch. Therefore, perhaps the most powerful use of SLSNe for cosmology, using data from future instruments, lies at higher redshifts where they will be observed at their rest-frame FUV/NUV wavelengths. In summary, in order to exploit SLSNe for studying stellar and galaxy formation and evolution, and for their potential use as cosmological probes, it is crucial to characterise their UV behaviour.
In this work, we investigate correlations among SLSNe-I UV light curve properties and explore their use as standardisable candles for the high redshift Universe. Past works (IS14; I20) investigated SLSN peak magnitude correlations with their decline rates and colours with a 400 nm filter. This work primarily focuses on the rising properties of SLSNe light curves, such as the rise time and colour evolution during rise, and explores corresponding luminosity correlations in rest-frame UV synthetic filters.
There are a few physical and practical motivations to explore the rising part of the light curve instead of (or in addition to) the decline for standardising relations. Firstly, with the assumption that there is some uniformity in the underlying physics of SLSNe-I, one might expect more consistent evolution of the light curve at epochs right after the explosion and expansion compared to the post-peak phase, where potential interaction with circumstellar material and unknown mechanisms (e.g., the SLSNe exhibiting post-peak bumps and undulations), the effects/efficiency of magnetar energy transfer, and other aspects could make the light curve more complicated. Secondly, since we attempt to study the evolution properties of SLSNe at bluer wavelengths, the effects from dust creation might be smaller at early times. Thirdly, work on SNe Ia standardisation has examined the peak magnitude correlations with the rising part of the light curve and has indicated its importance for cosmological measurements (e.g., Hayden et al., 2010; Firth et al., 2015; Zheng et al., 2018; Hayden et al., 2019), making it worthwhile to explore similar relations for SLSNe-I standardisation. Fourthly, we aim to use a phenomenological approach because of the lack of understanding of the explosion mechanisms and details behind SLSNe. Finally, the limited non-uniform sampling and fragmented nature of the light curve data does not permit a study of the decline beyond \(\sim\)15 d for most cases for SLSNe at high redshift. Nevertheless, we do explore the decline part with the limited data, thus enabling a search for relationships between rise and decline with respect to peak magnitudes.
Besides the reasons presented earlier for exploring SLSNe in the UV for their cosmological use, characterising their blue wavelength behaviour is also important more generally for their detection, to help understand their explosion physics, their nature, and classification. For example, I20 measured the pseudo equivalent width (pEW) of the C iii/C ii/Ti iii and Mg ii/C ii blended lines at \(\sim 2200\) A and \(\sim 2800\) A respectively, to search for a more quantitative way of distinguishing between Fast and Slow SLSNe detected at high redshifts.
In this work, we assemble all the available UV and NUV SLSNe-I data from the literature, and include all events with data coverage on the rising part for our analysis. We measure their light curve properties and determine peak magnitude correlations in order to probe their potential for cosmological use. Given the presence of SLSNe with pre-peak bumps in their light curves (Nicholl and Smartt, 2016) and the debate over their ubiquity, we identify them separately within our data sample to look for differences, if any, in their trends from those of a general SLSNe-I sample. The intention is to help determine if there is different physics driving these explosions and whether or not they are a distinct population, providing insight on their nature. Additionally, since these events can be identified via photometry alone, we explore the prospect of using them as standardisable candles, or a population that can be easily eliminated from the full SLSN population, to enable 'clean' photometric events for standardisation. Although the number of objects involved in this study is statistically small and the available data is sparse, this work provides an important first investigation of the evolutionary properties and potential cosmological use of the UV light curves of SLSNe.
We describe the SLSN data sample used in this work in Section 2. In Section 3, we outline the methodology for light curve fitting and present the estimated light curve parameters along with explaining various analytical techniques used to determine the correlation relations. Section 4 presents the observed relationships for various light curve properties of the two data samples. Finally, we discuss our findings and draw our conclusions in Section 5.
## 2 Data
The primary goal of this work is to investigate the rest-frame UV behaviour of SLSNe-I light curves. The subset of existing high redshift SLSN data which includes rest-frame UV wavelengths is relatively small. From among the published events, we select all the SLSNe-I having rest-frame FUV/NUV photometric coverage with sufficient data to reliably measure their light curve properties (such as peak magnitude, rise time, colour evolution, etc.). This data set is referred to as the _'Literature'_ sample because it includes all objects published to date that have photometric data in the rest-frame UV. Among this sample, we select SLSNe which pass certain data completeness and quality cuts in order to define a data set which allows us to measure the light curve properties with no, or negligible, extrapolations. This sub sample is called the _'SLSN-UV test'_ sample and all correlations presented in this work are measured using only this sub-sample. The literature sample objects are shown only for completeness and comparison purposes. Additionally, as mentioned earlier, a lesser understood phenomenon observed in some SLSNe is the presence of a pre-peak bump in their light curves. We separately identify and mark such objects in our data sample, and refer to them as _'Bumpy'_ SLSNe. Below, we describe in detail our data set.
### The Literature Sample
We take all published SLSNe type I from the literature having \(z\gtrsim 1\) redshifts in order to assemble all SLSNe with available FUV/NUV data. This redshift cut is chosen such that the observed optical filters are blue-shifted to UV filters in the rest frame of the SLSN (\(\lambda_{eff}=\lambda_{obs}/(z+1)\); for example, a SLSN at the lowest redshift of \(z=1\), the \(g\) band effectively samples the spectral region around 2400 A). Exceptions here are the lower redshift events that have UV data coverage with space-based telescopes such as the _Neil Gehrels Swift Observatory (Swift)_. Secondly, in order to explore the rising phase of the light curve, we require photometric coverage from several days before the maximum to post maximum so as to efficiently constrain the rise time and the peak magnitude/epoch. Thirdly, often the data is not available in all optical filters, so we
require data specifically in those observer-frame filters that coincide with UV/NUV bands in the SLSN rest frame.
Finding a statistically significant sample which passes all three criteria is challenging since there are only a few tens (\(\sim 30\)) of detected SLSNe-I at high redshift, and even fewer with adequate photometric coverage during the rising phase. Although acquiring high-cadence light curves for events at higher redshifts may be easier than at low redshift owing to their slowly evolving nature combined with the effect of time dilation, surprisingly few objects have coverage to later times (+30 days and beyond). Detecting these events on their rise is demanding because of their faintness at such high redshifts, and the fact that the temporal coverage of surveys with sufficiently deep and wide fields is comparatively sparse. High resource outlay also often prevents getting early spectral confirmation of these high redshift events since exposures of several hours on highly competitive 8m-class telescopes are generally required for the m\({}_{r}\gtrsim 24\)-25 near-peak magnitudes of SLSNe at \(z\gtrsim 2\) (Smith et al., 2018; Curtin et al., 2019).
Another challenge associated with SLSN cosmology, as pointed out by I20, is to find a uniform classification scheme which relies on SLSN progenitor scenarios and explosion mechanisms. Building a homogeneous sample is important in the context of using SLSNe as standardised candles because variations in the underlying physics of SLSNe may increase the intrinsic scatter in the correlations. In the absence of a robust definition of SLSNe-I subclasses and understanding of their physics and explosion mechanisms, we do not make any distinctions in our data sample based on the light curve evolution of the SLSNe, unlike I20, where the objects are categorised into Fast and Slow types. We build the sample used in this work based solely on the availability of the photometric data and the required redshift cut (\(z\gtrsim 1\) to reach the rest-frame UV/NUV), and study their light curve properties as a whole. The SLSN-UV test sample (see below) is also selected based only on the data quality and cadence. Furthermore, objects with early peaks (Nicholl et al., 2015) are not excluded if they pass our redshift and data quality criteria; instead, we identify them as a separate sub-set (see Section 2.1.2).
Among the \(\sim 30\) published high redshift events, there are 22 SLSNe which pass our three filtering criteria: (1) spectroscopic redshifts of \(z\gtrsim 1\), (2) data coverage on the rising phase of the light curve, and (3) photometric coverage in the rest-frame UV/NUV filters, and hence these form our Literature Sample. Exceptions to the imposed redshift cut are the two events SN2017egm and SN2015bn at \(z=0.030\) and \(z=0.114\), respectively. These two objects have UV data coverage with the _Swift_ Ultraviolet and Optical Telescope (UVOT). Figure 1 shows the redshift distribution for the Literature Sample. The correlations presented in this work are built using only the SLSN-UV test sample which is a sub-set of the Literature Sample (Section 2.1.1).
The Literature Sample is composed of events discovered and followed up by several surveys. Of the 22 SLSNe-I in the Literature Sample, 8 are discovered with the Dark Energy Survey (DES, Dark Energy Survey Collaboration et al., 2016), an optical imaging survey using the Dark Energy Camera (DECam, Flaugher et al., 2015) on the 4 m Blanco Telescope at the Cerro Tololo Inter-American Observatory (CTIO) in Chile. These SLSNe were discovered during the DES-SN programme (Kessler et al., 2015; Diehl et al., 2018) which surveyed 10 DECam pointings, imaging 27 deg\({}^{2}\) in \(g,r,i\) and \(z\) filters with an approximate 7-day cadence. These objects are reported and analysed in Angus et al. (2019).
Another 6 objects come from the Pan-STARRS1 Medium Deep Survey (PS1 MDS, Chambers & Pan-STARRS Team, 2016), using the PS1 telescope on the summit of Haleakala in Hawaii, a wide-field survey instrument with a 1.8 m primary mirror. PS1 MDS observed in the \(g_{\rm P1},r_{\rm P1},i_{\rm P1},z_{\rm P1}\) filters with a typical 3-day cadence. These PS1 SLSNe are presented in various papers including Chomiuk et al. (2011), McCrum et al. (2015) and Lunnan et al. (2018).
Three of the very distant SLSNe (\(z\gtrsim 2\)) in our sample were discovered with the Subaru High-Z SUpernova CAmpaign (SHIZUCA) that uses the Hyper-SuprimeCam (HSC, Miyazaki et al., 2018; Kawanomoto et al., 2018) on the 8.2 m Subaru telescope on the summit of Maunakea in Hawaii. HSC has a 1.8 deg\({}^{2}\) field of view and the necessary sensitivity required to find high-redshift SNe. The Type-I classification of the three HSC SLSNe is uncertain owing to their low signal-to-noise spectra, and they are assumed to be SLSNe-I in this work. These objects have been reported in Moriya et al. (2019).
Two SLSNe-I are included from the Supernova Legacy Survey (SNLS, Perrett et al., 2010) that was based on the Deep Survey of the Canada-France-Hawaii Telescope Legacy Survey 4 (CFHT-LS). CFHT-LS Deep Fields imaged four fields in \(g,r,i\) and \(z\) filters with a cadence of 3-5 days. These are presented and analysed in Howell et al. (2013). Additionally, another CFHT object in the sample is SN2213-1745 (\(z=2.046\)), discovered by Cooke et al. (2012) in the CFHT-LS Deep Fields using an image stacking technique. We note that this work discovered another SLSN, SN1000+0213, at \(z\approx 4\), which is the highest redshift SLSN detected to date and also exhibits a pre-peak bump. However, this object is excluded from the SLSN-UV sample because the ground-based optical data probes too blue (rest-frame \(gri\) data coverage of 850 A to 1700 A; z-band data are too shallow for this work) compared to the rest of the sample.
Footnote 4: Canada-France-Hawaii Telescope Legacy Survey
Finally, we include two low redshift SLSN events: SN2017egm at \(z=0.03\) and SN2015bn at \(z=0.114\). SN2017egm was discovered by the _Gaia_ satellite (presented in Nicholl et al. (2017)) and SN2015bn (presented in Nicholl et al. (2015)) was first discovered by the Catalina Sky Survey. These SLSNe are included in our sample as they have rising phase data coverage with the _Swift_-UVOT UV filters.
Table 1 lists all the 22 objects that comprise the Literature Sample along with their redshifts and photometric data references. Redshifts of all the SLSNe-I used in this work were determined spectroscopically, with the exception of HSC16apuo, where the host galaxy redshift is estimated photometrically as a distribution between \(z\simeq 2.8\) and 3.5 (Moriya et al., 2019), with the most probable value from that work, 3.22, adopted here.
#### 2.1.1 The SLSN-UV test sample: SLSNe used here for testing correlations
Given the poor physical understanding of SLSNe and their classifications, coupled with the paucity of UV data, the selection criteria adopted here to build the literature data sample rely more on the availability of the UV data than on the physical properties of the SLSNe. However, not all the SLSNe-I available in the literature have data quality which allows us to measure light curve properties with small uncertainties and without biases on the light curve shape. For example, some objects have no early data with which to measure the rise time, or miss the peak because of limited observing-season visibility during the year. This happens particularly for high redshift SNe, where time dilation stretches the transient's visibility in the observer frame such that
the full light curve evolution (rise and decline) is not covered in one observing season.
Taking into account the data quality and cadence, we select SLSNe from the Literature sample that have (1) a well defined main peak with at least one data point before and after maximum and (2) rising light curve data from at least 1 magnitude before maximum so that the rise time can be measured reliably. These criteria are used to enable peak magnitude and rise time measurements with negligible extrapolations. Thirteen of 22 SLSNe from the Literature sample pass the above criteria and comprise the SLSN-UV test sample (Table 1), which is used for all the cosmological correlations explored in this work.
#### 2.1.2 Bumpy SLSNe
Several SLSNe detected to date have been observed to exhibit multiple peaks in their light curves. Some objects show rebrightening at later times during the decline of the main peak, and others (with sufficient early and deep data) have been observed to exhibit bumps prior to the main peak of the light curve (e.g., Leloudas et al., 2012; Nicholl et al., 2015; Smith et al., 2016; Angus et al., 2019). Identifying such pre-peak bumps requires early and sufficiently deep photometric coverage, which is not available for every object in our sample. Within our data set, we therefore label as 'bumpy' those SLSNe with a clearly detected pre-peak bump, and as 'may-be bumpy' those for which the available early data cannot exclude the presence of a bump. The remaining objects
are simply assumed to have no pre-peak bumps. They are called 'non-bumpy' to differentiate them from the former.
There are 10 SLSNe among the full Literature Sample of 22 which either have a confirmed pre-peak bump (i.e. 4 bumpy SLSNe) or for which the possibility of a pre-peak bump in their light curves could not be excluded (i.e. 6 may-be bumpy SLSNe); these objects are listed in Table 1.
## 3 Method
This work aims at testing standardisation of SLSNe in the UV to enable their use as cosmological probes from low to high redshift, in particular for \(z\geq 1\). As such, for SLSNe at redshifts \(z\approx 1\)-6, standard optical filters probe the rest-frame from \(\sim 2500\) A to the far-UV. An immediate challenge is the small number of high redshift SLSNe-I detected to date with well-sampled light curves in a specific optical filter. With this in mind, we attempt to develop an optimised framework that interpolates the observed SLSNe light curves at any epoch, even if they are sparse, and measures their properties and evolution. We aim to estimate the peak magnitude, the epoch of peak magnitude, and to characterise their colour and light curve behaviour. Furthermore, in order to compare peak magnitudes in a synthetic filter, we also explore a viable solution for \(K\)-correcting the estimated peak magnitudes in the observed filters into a synthetic band in UV (described in section 3.3), in the absence of UV spectra at peak for each SLSN.
Apparent magnitudes at all epochs are converted to absolute magnitudes using flat \(\Lambda\) cold dark matter (\(\Lambda\)CDM) cosmology with \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1},\Omega_{\Lambda}=0.7\) and \(\Omega_{\rm M}=0.3\). Correction for cosmological expansion is applied to all the absolute magnitudes, hence absolute magnitude \(M\) at any observed epoch is given by \(M=m-5\log_{10}(d_{L}/10pc)+2.5\log_{10}(1+z)\). Light curves are corrected for time dilation and all timescales are given in the SLSN restframe throughout the paper. Photometric data has been corrected for Milky Way extinction (Schlafly & Finkbeiner, 2011), however, no correction is applied for the extinction in SLSN host galaxies, which is assumed to be small (Nicholl et al., 2015; Leloudas et al., 2015; Inserra et al., 2020). Several factors support this assumption, including that SLSNe-I have been observed to typically occur in dwarf, low metallicity hosts (Lunnan et al., 2014; Chen et al., 2017; Perley et al., 2016; Angus et al., 2016; Izzo et al., 2018; Hatsukade et al., 2018) and often on the outskirts of the galaxies in space-based imaging (Curtin et al., 2019, see also Angus et al., 2016 and Lunnan et al., 2015). Furthermore, low scatter has been observed in the SLSNe UV peak magnitude distribution (Smith et al., 2018) and in the colour distribution in the optical and UV (Inserra et al., 2018; Smith et al., 2018).
The uncertainties on the absolute magnitudes are directly propagated from the uncertainties on the observed apparent magnitudes. We do not include any errors on the distances arising from the redshift measurements and the assumed cosmological model, as they are negligible.
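For concreteness, the conversion described above can be written in a few lines using astropy with the adopted cosmology; the numerical values in the example below are hypothetical and purely illustrative.

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

# Flat LambdaCDM cosmology adopted in the text.
cosmo = FlatLambdaCDM(H0=70 * u.km / u.s / u.Mpc, Om0=0.3)

def absolute_magnitude(m_obs, z):
    """Convert an observed apparent magnitude into an absolute magnitude,
    M = m - 5 log10(d_L / 10 pc) + 2.5 log10(1 + z),
    i.e. the distance modulus plus the (1 + z) expansion-correction term."""
    d_l_pc = cosmo.luminosity_distance(z).to(u.pc).value
    return m_obs - 5.0 * np.log10(d_l_pc / 10.0) + 2.5 * np.log10(1.0 + z)

# Hypothetical example: an object observed at m = 24.0 at z = 2.0
# (no K-correction applied at this stage).
print(absolute_magnitude(24.0, 2.0))   # roughly -20.8
```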
### Synthetic photometric bands
The intention here is to explore SLSN-I FUV/NUV peak magnitude correlations, however, the details of the spectra of SLSNe-I in the UV are poorly known. Currently, _Gaia16apd_ is one of only two SLSNe-I that have UV spectral data at/near the peak light (Yan et al., 2017, 2018) and we use it here as a template for the near-peak spectral behaviour of SLSNe-I to evaluate \(K\)-correction (See section 3.3). Strong absorption features are observed in the rest-frame FUV and, although these absorption features are key to understanding SLSN physical processes, they could also introduce additional photometric scatter in SLSN light curves. One would ideally want to explore light curve correlations in a continuum region devoid of strong absorption features, however, this becomes difficult blueward of \(\sim\)3000A (Figure 2). Below, we motivate our choice to use synthetic photometric bands, keeping in mind the above considerations and to maximise the sample size.
For our data sample, we build a synthetic passband with a width of 500 A centred at 2500 A. This band contains two moderately strong absorption features that appear relatively consistent in SLSNe-I spectra obtained to date. This band is referred to as the 250 nm band throughout this work and is shown in Figure 2. We define another similar synthetic band centred at 3100 A (Fig.2), referred to as the 310 nm band, and it is used to compute colour (250-310) of the SLSNe. This synthetic band probes a relatively featureless portion of the spectrum and has good observed photometric coverage in the data, while still remaining in the NUV spectral region. Table 1 presents the observed filters which are \(K\)-corrected to corresponding synthetic filters for the whole Literature Sample.
As the aim here is to explore SLSNe in the rest-frame UV also for higher redshift objects for cosmology, we define a bluer synthetic filter with a width of 500 A centred at 1900 A (see Figure 2). This 190 nm band has large absorption features and only 6 of the highest redshift SLSNe have the required data.
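A simple way to see which observed filter samples a given synthetic rest-frame band is to blueshift the filter's effective wavelength by \(1+z\), as in the following illustrative sketch (the filter central wavelengths are approximate and only indicative).

```python
# Approximate observer-frame central wavelengths (in angstrom) of common
# optical filters; the values are illustrative only.
FILTERS = {"g": 4800.0, "r": 6200.0, "i": 7500.0, "z": 8700.0}

def filters_in_band(z, band_centre=2500.0, band_width=500.0):
    """Return the observed filters whose effective rest-frame wavelength,
    lambda_obs / (1 + z), falls inside a synthetic rest-frame band
    (default: the 250 nm band, 500 A wide)."""
    lo, hi = band_centre - band_width / 2.0, band_centre + band_width / 2.0
    return [name for name, lam in FILTERS.items()
            if lo <= lam / (1.0 + z) <= hi]

# At z = 2 the observed i band samples ~2500 A in the rest frame and so
# falls inside the synthetic 250 nm band.
print(filters_in_band(2.0))   # -> ['i']
```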
### Light Curve fitting
In order to estimate the peak brightness of SLSNe-I and interpolate the photometric observations to characterise their light curve behaviour, we employ the Gaussian Process Regression (GPR) technique (Rasmussen & Williams, 2006; Bishop, 2006). Being a non-parametric Bayesian approach, this method removes any assumptions on the SLSN light curve shape which may be introduced with polynomial fitting. Comparing polynomial fitting with Gaussian Processes, Inserra et al. (2018c) show that a Gaussian Process fit represents the observed data better than a polynomial fit. GPR works well on the small data sets common in transient astronomy and
Figure 1: Redshift distributions of the SLSNe in the Literature Sample (22 SLSNe) and the sub-set SLSN-UV Test sample (13 SLSNe).
has the ability to provide uncertainties on the predicted values intrinsically, which is particularly important in this work.
This technique has the advantage over other methods of including the uncertainty information of the observed data, thus producing less-biased interpolated values. Additionally, this method is very powerful for SN light curves having incomplete or noisy photometric data. Therefore, GPs are being successfully used in astronomy (Mahabal et al., 2008; Way et al., 2009; Gibson et al., 2012) and in supernova analyses (Mandel et al., 2009; Kim et al., 2013; Scalzo et al., 2014; Lochner et al., 2016). The use of Gaussian Process techniques allows the user to marginalise over systematic sources of noise within a data set, which might otherwise not be captured in an astrophysical model.
A Gaussian Process (GP) is a probability distribution over all admissible functions that can model the correlated noise within a set of temporal or spatial data. It is a collection of possibly infinitely many random variables and hence termed non-parametric. Any finite subset of these variables has a multivariate Gaussian distribution. Just like a Gaussian distribution, a GP is defined by (1) a mean function \(\mu(x)\), which determines the mean at any point of the input space, and (2) a covariance function, a.k.a. the 'kernel' \(K(x,x^{\prime})\), which sets the covariance between data points \(x\) and \(x^{\prime}\).
\[f(x)=GP(\mu(x),K(x,x^{\prime})) \tag{1}\]
For any input point \(x\), the reconstruction function \(f(x)\) has a normal distribution with its mean value given by the mean function \(\mu(x)\) and a covariance between two points given by the kernel function \(K(x,x^{\prime})\). The kernel determines the kind of relationship between adjacent data points, making one point dependent on the other. Both \(\mu\) and \(K\) may be parameterised, and the parameters of the latter are referred to as hyper-parameters since they describe the function scatter rather than the function itself. There are two hyper-parameters: the vertical scale \(\sigma\), which describes how much the function can span vertically, and the horizontal scale \(l\), which tells how quickly the correlation between two points drops as the distance between them increases. A high \(l\) gives a _smooth_ function, while a lower \(l\) results in a _wiggly_ function. For time-series data as dealt with here, these two hyper-parameters respectively become the uncertainties in the measured magnitudes and the timescale over which significant changes occur within the data. The functional form of the covariance function or 'kernel' can be selected/constructed such that it represents any periodic tendencies within the data.
The likelihood function of a GP is a multivariate Gaussian distribution of dimension equal to the number of measurements \(n\). The functional form of the covariance function defines the relationship between the measurements and it absorbs any intrinsic systematics which are unknown to the user. The kernel hyper-parameters may be optimised to reach the best convergence of the distribution.
We implement GPR to interpolate the observed light curves of all the SLSNe in our sample. We use a flexible python library called GEORGE (Ambikasaran et al., 2015) and employ a Matérn 3/2 kernel already implemented in this library. A Matérn 3/2 kernel is mathematically similar to the squared exponential function and can be written as:
\[k(r)=\sigma^{2}\left(1+\frac{\sqrt{3}r}{l}\right)\exp\left(\frac{-\sqrt{3}r}{ l}\right) \tag{2}\]
where \(\sigma\) is the vertical span _i.e._, error in an observation, \(l\) is the horizontal scale over which variations happen in the data, and \(r\) is the separation between observations.
This kernel provides greater flexibility for fluctuations over short time scales and has been shown to best represent the form of SLSN light curves (Inserra et al., 2018; Angus et al., 2019). Our GPR fitting method determines the best-fitting hyper-parameters for each light curve via a gradient-based optimisation. We interpolate the light curves over the full temporal range of measurements and estimate their evolution parameters. The fitting is done for the chosen observed band(s) for each SLSN in the Literature Sample.
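To illustrate the procedure, the sketch below fits a single light curve with GEORGE and a Matérn 3/2 kernel; the helper name `fit_light_curve` and the initial length scale are illustrative assumptions rather than values from the original analysis.

```python
import numpy as np
import george
from george import kernels
from scipy.optimize import minimize

def fit_light_curve(t, mag, mag_err, t_pred):
    """GP interpolation of one light curve; returns predicted magnitudes and 1-sigma errors."""
    # Matern 3/2 kernel: amplitude (vertical scale) times a squared length scale (days^2)
    kernel = np.var(mag) * kernels.Matern32Kernel(100.0)
    gp = george.GP(kernel, mean=np.mean(mag))
    gp.compute(t, mag_err)  # observational errors enter the GP covariance

    def neg_ln_like(p):
        gp.set_parameter_vector(p)
        return -gp.log_likelihood(mag)

    def grad_neg_ln_like(p):
        gp.set_parameter_vector(p)
        return -gp.grad_log_likelihood(mag)

    # gradient-based optimisation of the kernel hyper-parameters
    res = minimize(neg_ln_like, gp.get_parameter_vector(), jac=grad_neg_ln_like)
    gp.set_parameter_vector(res.x)

    mu, var = gp.predict(mag, t_pred, return_var=True)
    return mu, np.sqrt(var)
```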
Figure 3 presents the mean and 1\(\sigma\) uncertainties of the GP fits for all 22 objects in the Literature Sample. The 13 SLSNe constituting the SLSN-UV test sample are marked with an orange star on their light curve plot. The figure shows photometric measurements for all observed bands for each SLSN, while the GP fit is shown only for the one chosen filter which is closest to the 250 nm band (as given in Table 1) based on the SLSN redshift. For those SLSNe which have an early bump in their light curve, the data points in the bump phase are excluded from the fitting, unless they are too few to skew the fitting process. For all GP interpolated fits, we observe that the light curves are less flexible or 'tighter' in areas with high cadence data, while interpolations over large data gaps are more uncertain or 'loose'. The optimised kernel defines the relationship between successive points; greater data density results in fewer degrees of freedom for the fit, while sparsely spaced data generate more uncertainty.
#### 3.2.1 Light Curve properties
We can estimate SLSN magnitudes and uncertainties at any epoch from the GP fitted light curves, regardless of the data cadence. For each SLSN, we quantify its peak and evolutionary behaviour using certain quantities measured with the interpolated light curve. Below, we define each of these quantities along with their notation which are used throughout this work.
1. _Peak magnitude_ - The maximum absolute magnitude in a photometric band \(X\), written as \(M_{0}(X)\), where 0 indicates the peak epoch. For example, the peak magnitude in the 250 nm band is written as \(M_{0}(250)\).
2. _Rise time_ - The time in rest-frame days that a SLSN takes to rise from 1 magnitude below peak to the peak magnitude. It is denoted as \(\tau_{\rm rise}^{\rm\Delta 1m}\). Rise time here is measured in the
Figure 2: _Gaia16apd_ spectrum at peak (Yan et al., 2017) and the blackbody function at 15 000 K. Also shown are the four synthetic filters, 190 nm, 250 nm, 310 nm and 400 nm, used in this work.
Figure 3: Light curve fits for the 22 SLSNe in the Literature Sample. Objects marked with an orange star denote that they are part of the SLSN-UV test sample. All phases are given in SLSN rest-frame days relative to the peak magnitude in the fitted observed band. The bands are chosen such that they are closest to the 250 nm filter in the rest frame, as given in Table 1. _(continued on the next page)_
250 nm rest-frame filter, except in Figure 15 where it is measured in the 310 nm filter.
3. _Rise rate_ - The change in magnitude from 15 rest-frame days before the peak magnitude to the peak magnitude. The rise rate is denoted as \(\Delta M_{-15}\).
4. _Decline rate_ - The change in magnitude from the peak magnitude to 15 rest-frame days after the peak magnitude. It is denoted as \(\Delta M_{15}\).
5. _Colour at peak_ - The colour index of a SLSN at the epoch of peak magnitude, defined as \(M_{0}(X)-M_{0}(Y)\) and written as \((X-Y)_{0}\). For example, the colour at peak magnitude in the 250 nm and 310 nm bands is denoted as \((250-310)_{0}\).
6. _Delta colour_ - The change in colour from 15 days before peak magnitude to the peak magnitude, denoted as \(\Delta(X-Y)_{-15}\). For example, \(\Delta(250-310)_{-15}\).
All quantities are measured using the interpolated light curves. To measure the changes, the epoch of 15 days before peak is chosen because many SLSNe in our sample do not have very early data, and estimating/extrapolating to earlier epochs would incur assumptions on the light curve shape. In a few cases where there were no data 15 days before/after the peak in the required observer band (for example, HSC16apuo, SN2015bn), we use extrapolated values with associated uncertainties that are larger than those of interpolated values, and those objects are not included in the SLSN-UV test sample used to determine the peak magnitude correlations. Table 2 lists the measured light curve properties for the full Literature Sample.
Figure 3: _continued._
#### 3.2.2 Error on Rise time
While the errors on interpolated magnitudes at peak or at any other epoch are directly estimated by the GP fitting, depending on the cadence and uncertainties in the observed data, the time stamp of any observed data point has essentially zero error. Though the gaps in photometric data cause uncertainty in predicting the time of maximum, this uncertainty in the peak epoch is not explicitly measured with the GPR. Quantifying the error on the estimated peak epoch is important since we want to measure the rise time (here, the time taken to rise from 1 magnitude below peak to the peak). We derive the uncertainties in the peak epoch and the rise time by using a data resampling technique with a Monte Carlo method (see e.g., Burtscher et al., 2009).
This approach is an intuitive form of error estimation in the sense that it simulates repeated measurements of the light curve. We assume that the error distribution of the measured data is Gaussian. For each data point \(x_{n}\) on the light curve, we invoke a Gaussian with mean \(x_{n}\) and standard deviation \(\sigma_{n}\), the measured uncertainty. We randomly sample a new data point \(x^{\prime}_{n}\) from this distribution. Doing this for all data points, we generate an _alternative_ light curve with the same temporal sampling as the observed light curve. We then estimate the evolution parameters for this resampled light curve, namely the peak magnitude, the peak epoch and the epoch 1 magnitude before peak, using GPR. Repeating this resampling process 500 times, and estimating the light curve parameters each time, we obtain a frequency distribution of each parameter. We can then infer the uncertainty in that parameter as the spread of the frequency distribution. Figure 4 presents an example of the resampled light curves (left panel) and the resulting distribution of the time of maximum (right panel) for SLSN DES15E2mlf. We estimate the errors on the peak epoch and on the epoch 1 magnitude before the peak for each SLSN in our data sample. These two errors are then added in quadrature to give the final error on the rise time.
This method provides an upper limit on the uncertainty because, even though we use the measured error as the standard deviation of the Gaussian, we centre this error distribution on the measured value instead of the unknown true value. This introduces additional scatter and leads us to overestimate the uncertainty. Nevertheless, it still provides a conservative estimate of the error, which is useful for the purpose of this work.
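A minimal sketch of this resampling loop is shown below; it reuses the illustrative `fit_light_curve` helper defined earlier, and the grid spacing and number of realisations (500, as in the text) are the only inputs.

```python
import numpy as np

def rise_time_error(t, mag, mag_err, n_iter=500):
    """Monte Carlo estimate of the uncertainties on the peak epoch and the rise time."""
    t_grid = np.arange(t.min(), t.max(), 0.1)  # fine grid of rest-frame epochs (days)
    t_peak, t_rise_start = [], []
    for _ in range(n_iter):
        mag_new = np.random.normal(mag, mag_err)            # 'alternative' light curve
        mu, _ = fit_light_curve(t, mag_new, mag_err, t_grid)
        i_peak = np.argmin(mu)                              # brightest = smallest magnitude
        t_peak.append(t_grid[i_peak])
        # first epoch at which the curve is within 1 mag of its peak
        pre_peak = mu[: i_peak + 1]
        t_rise_start.append(t_grid[np.argmax(pre_peak <= mu[i_peak] + 1.0)])
    # the two epoch errors are added in quadrature to give the rise-time error
    return np.std(t_peak), np.hypot(np.std(t_peak), np.std(t_rise_start))
```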
### Cross-filter \(K\) corrections
A comparative study of peak magnitudes for a set of objects having a wide redshift range (See Figure 1) requires _K_-correcting the peak magnitudes to a single common synthetic band. The method of _K_-correction requires the spectrum of each object at or around the peak epoch. As described in section 3.1, peak absolute magnitudes for our data sample are determined in the 250 nm and 310 nm (used for calculating colour) rest-frame bands. Additionally, in order to reach further blue in the spectrum, we compute the peak magnitudes also in the 190 nm band for higher redshift SLSNe.
In order to _K_-correct the estimated peak magnitude of a SLSN in an observed filter into a fiducial UV filter, one requires the SLSN spectrum taken at or near the peak epoch with coverage in the UV. We intend to use spectra at peak or within \(\pm\)5 days from peak to calculate K-corrections. However, we found only 5 objects out of 13 in our SLSN-UV test sample (\(<40\%\)) which have a UV/NUV spectrum taken within \(\pm\)5 days, with none of them at peak. This number significantly limits any analysis. Therefore, we resort to _K_-correcting our peak magnitudes by adopting either a 'standard' template spectrum or a blackbody (BB) curve, depending on the synthetic filter in question. SLSN spectra have been observed to be largely featureless redward of \(\sim\)3000 Å, and are well represented by a blackbody at those wavelengths. However, they show significant absorption features at UV wavelengths below \(\sim\)3000 Å, which become even stronger shortward of \(\sim\)2000 Å. The strength of these UV absorption features is generally highest around SLSN peak magnitude (i.e., when the photosphere is hottest), and decreases as the supernova cools down (Angus et al., 2019).
As mentioned before in Section 3.1, _Gaia16apd_ is one of two SLSNe-I that have UV spectra at/near peak (Yan et al., 2017, 2018). We use this spectrum for _K_-correcting our peak absolute magnitudes in the UV filters, with the assumption that it represents a 'standard' SLSN-I spectrum over this wavelength range (Smith et al., 2018). Besides _Gaia16apd_ providing the only UV spectrum at peak, using it as a template spectrum provides a baseline against which future measurements can be compared. Additionally, we minimise the _K_-corrections as much as possible by choosing observer-frame filters closest to the synthetic filters (as given in Table 1). We further test this assumption, as described below, to make sure that it does not compromise any results. Figure 2 shows the _Gaia16apd_ spectrum at peak, the BB fit to the spectrum, and the synthetic bands used in this work.
Among our synthetic filters, the 310 nm band is the bluest wavelength range that has few absorption features and follows a BB function well. The 250 nm band has two relatively moderate-strength broad absorption features. Shortward of \(\sim\)2200 Å, there are a number of broad absorption features, including those near \(\sim\)2100, 1700, and 1400 Å. We note that although the absorption features strongly affect the flux as compared to a BB spectrum, measuring magnitudes over these features is useful if the features are shown to be consistent amongst SLSNe-I. Moreover, with a larger sample of near-peak spectra, the flux scatter in these UV bands can be computed and tested for their usefulness. With the discovery of higher redshift SLSNe-I, one or more synthetic bands blueward of 190 nm and redward of Lyman-\(\alpha\) may likely be required.
We adopt a hybrid approach for calculating _K_-corrections to our synthetic filters. Since the 310 nm band (used for calculating the colour at peak) is continuum dominated and closely follows a blackbody, the \(K\)-corrections to this band have been computed using a constant temperature BB spectrum of 15 000 K. This value was adopted as a mean value among the various SLSN-I peak temperatures evaluated in the literature. To verify this, we also measure the peak temperatures of all SLSNe in our sample using the available photometry and find them to be scattered around 15 000 K. Comparing \(K\)-corrections computed using individual-temperature
Figure 4: Resampled light curves simulated from the measured data of SLSN DES15E2mlf are shown in blue (left panel), along with the density distribution of the measured peak epoch for each of these light curves (right panel). The error on the peak epoch is estimated as the standard deviation of the distribution.
BB curves with those computed using the 15 000 K BB curve, we find the differences to be smaller than the individual errors on the peak absolute magnitude. Additionally, owing to the high redshifts of the objects, the majority of the photometry is in rest-frame NUV bands (200 nm - 450 nm), where the spectral region bluer than 300 nm has strong absorption features. Hence, fitting a BB to this blue photometry does not give accurate results. Arbitrarily removing bluer-band data from the BB fits often leaves us with only two or even one data point, which is not enough for fitting. Hence, for consistency and to minimise arbitrary assumptions, we decide to use a constant-temperature BB curve at 15 000 K for calculating the \(K\)-corrections to the 310 nm band.
However, the 250 nm filter contains moderate-strength features and therefore we use the _Gaia16apd_ spectrum at peak (Yan et al., 2017) for calculating those magnitudes. We test this by comparing the 5 available UV near-peak spectra with the _Gaia16apd_ spectrum. Figure 5 shows the normalised near-peak spectra of the 5 SLSNe along with the _Gaia16apd_ peak spectrum and the 250 nm synthetic band used in the analysis. Visually, we find that the existing data are sufficiently consistent with the assumption of a template spectrum at this wavelength. Furthermore, we calculated the \(K\)-corrections for these 5 objects using their own spectra and found that the mean difference between them and those from the _Gaia16apd_ spectrum is about 0.07 mag. As explained later in Section 4.2, we add an error floor of 0.15 mag to the computed errors on the peak absolute magnitude from GPR, to account for uncertainties in the \(K\)-correction. Hence, using the _Gaia16apd_ spectrum as a template enables us to have a statistically significant number of objects, provides a baseline for future comparison, and does not alter the results. We note that this work does not aim to make a precise cosmological measurement, but rather explores the possible use of high redshift SLSNe-I as standardisable candles at UV wavelengths. We also use the _Gaia16apd_ spectrum for the 190 nm filter used in our higher redshift exploratory test.
For each SLSN, we \(K\)-correct the estimated peak magnitude in an observer frame filter to one of the synthetic filters. The observer frame filter is chosen such that its central wavelength in the SLSN rest frame is closest to the synthetic band. We calculate the average integrated flux in the rest-frame filter (i.e., blue-shifted observed filter) and then compare it with the flux in the _Gaia16apd_ spectrum (250 nm) or blackbody function (310 nm) synthetic band. The difference between the two is added as correction to the peak magnitude estimated with the GPR interpolation. As a result, the absolute peak magnitude in synthetic filters can be written as:
\[M_{0}(250)=M_{0}(X)+K_{X\to 250} \tag{3}\]
\[M_{0}(310)=M_{0}(X)+K_{X\to 310} \tag{4}\]
where \(M_{0}(X)\) is the peak magnitude in the chosen observer frame filter \(X\) for SLSNe, such that \(X\) after accounting for the cosmological redshift (\(1+z\)) is closest to the target synthetic filter and \(K_{X\to 250}\) is the \(K\)-correction from observed band \(X\) to the synthetic band.
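The flux comparison described above can be sketched as follows, using idealised top-hat passbands on a rest-frame wavelength grid of the template spectrum; zero-point and bandwidth normalisation details are ignored, and the function names are our own.

```python
import numpy as np

def band_flux(wave, flux, centre, width):
    """Mean flux density of a spectrum through an ideal top-hat band (angstroms)."""
    mask = np.abs(wave - centre) < width / 2.0
    return flux[mask].mean()

def k_correction(wave, flux, obs_centre, obs_width, z, syn_centre=2500.0, syn_width=500.0):
    """K_{X->250}: observed band X, blueshifted to the rest frame, versus the 250 nm band."""
    f_rest_obs = band_flux(wave, flux, obs_centre / (1.0 + z), obs_width / (1.0 + z))
    f_syn = band_flux(wave, flux, syn_centre, syn_width)
    # magnitude offset between the synthetic band and the blueshifted observed band
    return 2.5 * np.log10(f_rest_obs / f_syn)
```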
### Bayesian Inference
We perform linear fits to the correlations studied here employing a Bayesian approach for a weighted linear regression using Markov Chain Monte Carlo (MCMC) sampling. This method provides posterior distributions of our correlation parameters and enables us to quantify the uncertainties in our estimates. Additionally, this method is termed weighted because the variance of the likelihood can be tailored to allow for the uncertainties in both the \(x\) and \(y\) variables, along with the intrinsic scatter.
We use linear models to correlate the peak absolute magnitude with each of the light curve properties (Section 3.2.1). For a light curve parameter \(x\), we model the peak absolute magnitude in a synthetic filter \(\gamma\) (250 nm or 310 nm band) as follows,
\[M_{\gamma}=b_{0}+b_{1}\,x \tag{5}\]
where \(b_{0}\) and \(b_{1}\) are the model parameters. Bayes's theorem gives the posterior probability distribution of the model parameters as
\[P(\Theta|D)\propto P(D|\Theta)P(\Theta), \tag{6}\]
where D is the vector for the observed SLSN light curve data and \(\Theta\) denotes the vector for the model parameters (\(b_{0}\), and \(b_{1}\)). For a sample of \(N\) SLSNe, model parameters for each SLSN are marginalised over and the likelihood probability distribution \(P(D|\Theta)\) can be written as
\[P(D|\Theta)=\prod_{i=1}^{N}P(D_{i}|\Theta) \tag{7}\]
where \(i\) is the index for the \(N\) SLSNe of the sample (here \(N\)=13 for the key SLSN-UV sample) and \(D_{i}\) is the observed light curve data for the \(i\)th SLSN. Assuming normally distributed errors and treating the peak absolute magnitude (\(M_{\gamma}\)) as the target variable, the log likelihood can be written as
\[\ln\mathcal{L}=-\frac{1}{2}\sum_{i=1}^{N}\frac{(M_{\gamma}^{i}-M_{\gamma}^{T})^{2}}{\sigma_{i}^{2}}-\frac{1}{2}\sum_{i=1}^{N}\ln 2\pi\sigma_{i}^{2} \tag{8}\]
where \(M_{\gamma}^{i}\) is the measured peak absolute magnitude of the \(i\)th SLSN in the synthetic filter \(\gamma\) and \(M_{\gamma}^{T}\) is the true magnitude given by the model in equation 5. The variance \(\sigma_{i}\) for a SLSN is computed as the quadrature sum of the errors on the light curve data. We also include an additional intrinsic scatter term \(\sigma_{int}\). This term is added to the variance and is left as a free parameter in the analysis, accounting for any _"unexplained"_ dispersion observed in the peak absolute
Figure 5: Normalised _Gaia16apd_ spectrum at peak compared with the available near-peak spectra of 5 SLSNe from the SLSN-UV test sample, along with the 250 nm synthetic band.
magnitudes. The errors on the peak magnitudes are estimated from the GPR fits and the errors on the rise time are calculated by resampling, as explained in Section 3.2.2. The variance of the likelihood is then given as
\[\sigma_{i}^{2}=\sigma_{M_{\gamma},i}^{2}+(b_{1}\,\sigma_{\tau_{\rm rise},i})^{2}+\sigma_{int}^{2} \tag{9}\]
The term \(P(\Theta)\) in equation 6 is the prior on the model parameters. We adopt normal priors for the correlation parameters and a Half-Cauchy distribution for the intrinsic scatter. The MCMC sampling is implemented using the "No U-Turn Sampler" (NUTS) provided in PyMC3\({}^{6}\) (Salvatier et al., 2016), a python probabilistic programming package, with \(10^{5}\) iterations. For all linear fits performed in this work, we use the observed light curve data parameters as input and estimate the posterior distributions for the correlation parameters \(b_{0}\) and \(b_{1}\), along with the intrinsic scatter. All best-fit values provided in this work are the posterior means and the errors on the parameters are the standard deviations of their posteriors.
Footnote 6: See: [https://docs.pymc.io/](https://docs.pymc.io/)
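A compact sketch of such a fit in PyMC3 for the peak magnitude versus rise time relation is given below; the prior widths are illustrative placeholders rather than the values adopted in the paper.

```python
import pymc3 as pm

def fit_relation(tau_rise, tau_rise_err, peak_mag, peak_mag_err):
    """Weighted linear fit M = b0 + b1*tau with free intrinsic scatter, sampled with NUTS."""
    with pm.Model():
        b0 = pm.Normal("b0", mu=-22.0, sigma=5.0)        # illustrative priors
        b1 = pm.Normal("b1", mu=0.0, sigma=1.0)
        sigma_int = pm.HalfCauchy("sigma_int", beta=1.0)
        # total variance of the likelihood, cf. equation (9)
        var = peak_mag_err**2 + (b1 * tau_rise_err)**2 + sigma_int**2
        pm.Normal("M_obs", mu=b0 + b1 * tau_rise,
                  sigma=pm.math.sqrt(var), observed=peak_mag)
        trace = pm.sample(draws=100000, tune=2000, target_accept=0.9)
    return trace
```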
## 4 Results
We measure the peak magnitudes and the light curve evolution properties using the GPR interpolated light curves for the Literature Sample. We proceed to investigate their peak magnitude correlations, modelled using Bayesian regression. We note that all correlations are measured using only the SLSNe in the SLSN-UV test sample.
### Peak Magnitude distribution
As a first check, we plot the distributions of _uncorrected_ peak magnitudes \(M_{0}(250)\) for our Literature Sample and its subset, the SLSN-UV test sample. Figure 6 shows the \(M_{0}(250)\) histograms and density distributions from the GP interpolated light curves. The Literature Sample has an absolute magnitude mean of \(-21.30\) with a standard deviation of \(0.55\) and the SLSN-UV test sample has very similar values with a mean of \(-21.25\) and standard deviation of \(0.55\). We also measure the mean and spread for the bumpy (and may-be bumpy) population in our sample. These 10 SLSNe from the Literature Sample have a mean absolute magnitude of \(-21.25\) with standard deviation of \(0.54\), and \(7\) of these 10 included in the SLSN-UV test sample have a mean of \(-21.39\) and standard deviation of \(0.51\). These suggest that the various sub-samples are a good representation of the whole literature set and their selection has not introduced any arbitrary biases.
The scatter in the uncorrected \(250\,\mathrm{nm}\) band peak magnitude distributions for the full Literature Sample as well as the SLSN-UV test sample is higher than that measured by IS14 in the \(400\,\mathrm{nm}\) band. Differences between these two works include the rest-frame filters probed, the redshift range of the samples (IS14 has \(0.1<z<1.2\)), and the sample selection criteria. Notably, Lunnan et al. (2018), in the PS1 SLSN sample, looked at the peak magnitude distribution at \(260\,\mathrm{nm}\) rest frame and found an even higher scatter of 1.15.
### Peak magnitude - Rise time relation
The maximum brightness of a SLSN has been shown to be dependent on the shape of its optical light curve or, more specifically, the rate of decline of the optical light curve. This correlation reduces the scatter observed in the _uncorrected_ peak magnitude distributions and has been used to demonstrate that SLSNe have the potential to be standardised for measuring cosmological distances (IS14; I20). As described in Section 4.1, the scatter in the raw \(250\,\mathrm{nm}\) band peak absolute magnitudes is \(\sim\)0.55 mag. Here we investigate whether this scatter decreases using the correlation between the peak magnitude and the rise time in the rest-frame UV.
Previous works on SLSN-I standardisation (e.g., IS14; I20) use the declining part of the SLSN light curve to characterise its shape and correlate the peak magnitude in the \(400\,\mathrm{nm}\) band with the decline rate over different time scales (10, 20, and 30 days). In this work, we choose instead to study the rising behaviour of the SLSN, characterised by the rise time \(\tau_{\rm rise}^{\rm\Delta 1m}\) (see Section 3.2.1), and determine the relationships in the UV bands (see Section 3.1). The rise time here is defined as the time elapsed in the rest frame as the SLSN rises from 1 magnitude below the peak magnitude to the peak. In order to better characterise the rising trend of the light curve, it would have been useful also to measure the rise time further from the peak, for instance 2 magnitudes before peak. However, the data at hand do not accommodate this investigation, since only a few objects (a statistically small number) have such early data.
The interpolated light curves are used to measure the peak absolute magnitudes and \(\tau_{\rm rise}^{\rm\Delta 1m}\) for all the SLSNe. The errors on the peak absolute magnitude are estimated with GPR and are of the order of \(\sim 0.05\) mag. As this error is small compared to the photometric errors, we add in quadrature an error floor of 0.15 mag to the peak magnitude errors. This will help account for uncertainties from the \(K\)-correction method adopted here (e.g., Smith et al., 2018). The error floor is added to the peak magnitude error term in the variance of the likelihood. We use only the SLSN-UV test sample (Section 2.1.1) to perform the final correlation fits. The fitting method employs a Bayesian approach as described in Section 3.4.
Figure 7 plots the peak absolute magnitude \(M_{0}(250)\) versus the rise time \(\tau_{\rm rise}^{\rm\Delta 1m}\) for the literature sample. The errors on both
Figure 6: Uncorrected peak absolute magnitude (\(M_{0}(250)\)) distributions of the full Literature Sample (22 objects) and the SLSN-UV test sample (13 objects). Normal distributions with the respective means and standard deviations are plotted in dashed lines.
parameters are estimated as described in Section 3. Since the SLSN-UV test sample is a subset of the literature sample, its members are highlighted in dark blue while the remaining SLSNe are shown in light blue. SLSNe which show a bump (or a possible bump) are marked with red circles (or yellow squares).
The literature sample includes SLSNe with large peak epoch uncertainties, as seen in Figure 7, and the events with the most complete data, i.e., the SLSN-UV test sample, are clustered in the short rise-time region of the plot. Focusing on this sample of 13 SLSNe, we observe a correlation where the brighter SLSNe in the 250 nm band rise faster. To quantify this relation and its scatter, we fit a linear function as in equation 5, where \(x\) is \(\tau_{\rm rise}^{\rm\Delta 1m}\), using Bayesian inference.
The results are shown in Figure 8 that presents the linear fit to the SLSN-UV test sample, along with the posterior lines and sigma limits. The intrinsic scatter of the correlation is measured to be 0.29 mag (roughly half the scatter for uncorrected magnitudes) with root mean square error (RMSE) of 0.35. The scatter in the 250 nm band rise time correlation is comparable to the scatter in the decline rate relationship in the 400 nm band measured by Inserra et al. (2020). The Pearson's \(r\) coefficient is found to be 0.80.
Figure 8 also shows the 3 bumpy (red crosses) and 4 may-be bumpy (yellow squares) SLSNe that are present in the SLSN-UV test sample. These seven objects are observed to reside toward one side of the linear fit, possibly suggesting a similar relationship for bumpy objects but with longer rise times and/or brighter peak magnitudes (i.e., they appear shifted along the x-axis or y-axis, or both, compared to the ones without a pre-peak bump), perhaps due to the added luminosity of the pre-peak bump. Under the assumption that bumpy SLSNe are a different population, potentially adding extra scatter to the relationship, we fit a linear relation excluding these 7 bumpy objects using our Bayesian framework, as shown in Figure 9. We see that while the correlation parameters (\(b_{0}\) and \(b_{1}\)) remain similar (annotated in Figures 8 & 9), the intrinsic scatter reduces from 0.29 to 0.20 and the RMSE reduces from 0.35 to 0.15. This is a significant improvement if the underlying hypothesis is true, i.e., that bumpy SLSNe are a different population from those observed without a pre-peak bump. If so, these SLSNe can be eliminated from any cosmological sample based on their observed photometric behaviour. Finally, we fit a linear relation to the set of 7 bumpy SLSNe separately, as plotted in Figure 10. We find that the correlation parameters are similar to those of the two relations above, with an intrinsic scatter of 0.22 and an RMSE of 0.25, which is tighter than the values obtained with the full SLSN-UV test sample.
As mentioned in Section 2, the classification of three HSC SLSNe (HSC16apuo, HSCadga, HSCauzg) as type I is unclear. We also evaluate the results excluding them, which gives a linear fit scatter of 0.30 and an RMSE of 0.35. Hence, the fit results do not change significantly.
### Peak magnitude - Colour relation
Past studies have shown SLSN peak magnitudes in rest-frame optical bands to be dependent on their colour at peak and on the rate of change of colour (Inserra and Smartt, 2014). We make measurements of the colour evolution of the SLSNe in our data sample and investigate their correlation with the peak absolute magnitude in the 250 nm band. Due to the paucity in data cadence of the SLSNe, the colour estimation is done using the GPR fitted curves that allows for a uniform method for measuring the colour and provides the errors on the values. Table 2 lists the colour parameters for all the SLSNe in the Literature Sample.
Section 3.2.1 gives the definition of _colour at peak_ and _Delta colour_, a quantity to measure the change of colour between two epochs in a light curve. The colours here are computed as \(M_{d}(250)-M_{d}(310)\) where \(d\) is the epoch (0 for peak). The peak magnitudes in the 310 nm band for our sample are estimated in the same way as for the 250 nm band using the observed filter closest to 310 nm in the rest frame of the SLSN. The \(K\)-corrections for the peak magnitude in 310 nm band (\(M_{0}(310)\)) are calculated using the
Figure 8: \(M_{0}(250)\) vs. \(\tau_{\rm rise}^{\rm\Delta 1m}\) correlation for the SLSN-UV test sample. The black solid line is the linear fit obtained using Bayesian regression. The light grey lines show the posterior fits, with dashed lines representing the 1 and 2 \(\sigma\) confidence intervals. The fit parameters, RMSE and Pearson's \(r\) coefficient are given in the lower legend.
Figure 7: Absolute peak magnitudes estimated in the 250 nm synthetic band versus rise time \(\tau_{\rm rise}^{\rm\Delta 1m}\) for the literature sample. The SLSN-UV test sample, comprising 13 of the 22 SLSNe, are highlighted with dark blue circles. The bumpy and may-be bumpy SLSNe are marked with red crosses and yellow squares, respectively.
15 000 K blackbody curve. The colour at peak is then measured as \(M_{0}(250)-M_{0}(310)\). We also calculate the colour at 15 days before the peak (\(d=-15\)) in a similar way. Furthermore, the _Delta colour_ on the rising curve is calculated as the difference between the colour at peak and the colour at \(-15\) days, \((250-310)_{-15}-(250-310)_{0}\), and is written as \(\Delta(250-310)_{-15}\). All the objects in the literature sample have sufficient data to calculate the colours. However, there are two peculiar cases at high redshift (\(z>2\)) where the available filters were bluer than required: HSC16apuo (\(z=3.22\)) and SN2213-1745 (\(z=2.05\)) have effective rest-frame colours of approximately 185 nm \(-\) 212 nm and 205 nm \(-\) 252 nm, respectively. We discuss the results below.
Figure 11 plots the peak absolute magnitude \(M_{0}(250)\) versus the colour at peak \((250-310)_{0}\) (first panel), colour at 15 days before peak \((250-310)_{-15}\) (second panel), and _Delta colour_\(\Delta(250-310)_{-15}\) (last panel), for the 13 SLSNe in the key SLSN-UV sample. The plots also show the Bayesian linear fits to these objects along with their \(1\sigma\) and \(2\sigma\) confidence intervals. The intrinsic scatter and RMSE of all the three relations are annotated on their respective plots. The bumpy (may-be bumpy) SLSNe in the sample are marked with red cross (yellow squares).
We observe a weak correlation between the peak magnitude \(M_{0}(250)\) and the colour at peak, with a model intrinsic scatter of 0.46 mag and an RMSE of 0.43. Fainter objects in the 250 nm filter appear to be redder at their peak. This is similar to the corresponding observation by IS14 in the 400 nm band. For the colour at 15 days before peak (second panel), the intrinsic scatter from the fit is found to be 0.44 mag with an RMSE of 0.64. The large errors in the colour data may contribute to the lower intrinsic scatter compared to the RMSE. The colour at 15 days before peak has larger errors because at early epochs many objects have either very sparse data or no data, leading to inflated errors on the interpolated magnitudes. The correlation of the peak magnitude with the change in colour over 15 days, \(\Delta(250-310)_{-15}\), is shown in the third panel. We find that during the rising phase, brighter objects tend to become redder faster. This fit has an intrinsic scatter of 0.31 mag and an RMSE of 0.54. For the decline phase, IS14 found an opposite relationship: fainter objects become redder faster over 30 days after peak. We note that HSC16apuo (number 17 on the plots) is not included in the last two correlation fits because it does not have data 15 days before the peak, and hence no reliable data points are available for the analysis.
### Rise rate and Decline rate
Here we measure the rate of the SLSN light curve evolution during its rising and declining phases to study their relationship with the peak absolute magnitude. Rise rate (\(\Delta M_{-15}\)) is measured as change in the magnitude over 15 days before the maximum, and decline rates (\(\Delta M_{15}\), \(\Delta M_{30}\)) are measured analogously over 15 or 30 days after the peak (see section 3.2.1). As described in section 4.2, we observed a fairly good correlation (\(\sigma_{int}=0.29\)) of peak magnitudes in UV (\(M_{0}(250)\)) with the rise time (\(\tau_{\rm rise}^{\rm\Delta 1m}\)) for the key SLSN-UV sample. Therefore a correlation with the rate of rise is an expected result. On the other hand, one of the most important SLSNe-I cosmological correlations explored by IS14 and I20 is the dependence of 400 nm band peak magnitude on the decline rate. We explore a similar relation here, but in the 250 nm band.
Figure 12 plots the SLSN-UV test sample peak magnitude against the rise rate \(\Delta M_{-15}\) (first panel), the decline rate over 15 days \(\Delta M_{15}\) (second panel), and the decline rate over 30 days \(\Delta M_{30}\) (last panel). We observe a correlation between the rise rate and the peak absolute magnitude for the SLSN-UV test sample. Given that we observe a linear relationship between the rise time and the peak magnitude, and that the rise rate has dimensions of 1/time, we fit a function of the form \(M=b_{0}+b_{1}\log(\Delta M_{-15})\) instead of a linear fit. The best-fit function is shown in Figure 12 along with the \(1\sigma\) and \(2\sigma\) confidence intervals. The intrinsic scatter of the relation is 0.41 mag. HSC16apuo (number 17) is not included in this fit, as it does not have sufficient early data, as mentioned earlier.
The second and third panels of Figure 12 show the decline rate plots for 15 and 30 days after maximum, respectively. For the peak magnitude in the UV (\(M_{0}(250)\)), we do not observe any correlation with the decline rate measured over 15 or 30 days. This is in contrast with the tight correlation (\(\sigma=0.33\)) observed between the peak magnitude and the decline rate by IS14 and I20 in the optical 400 nm band. In addition, our analysis does not find a correlation between the SLSNe-I rise and decline rates in the UV. Figure 13 shows the rise rate versus the decline rate for the SLSN-UV test sample.
Figure 10: Same as Figure 8, but for the bumpy (and may-be bumpy) SLSNe from the SLSN-UV test sample.
Figure 9: Same as Figure 8, but for the SLSNe without a pre-peak bump among the SLSN-UV test sample.
### Exploring the data for the highest redshift SLSNe-I
Following our primary aim to investigate SLSN correlations in the rest-frame UV, a sub-sample of the highest redshift SLSNe-I in our data enables an exploration of the rise time relation at wavelengths bluer than 250 nm. Studying the highest-energy wavelength behaviour of SLSNe-I is vital to understand the physics behind these extreme events. A standardisation at wavelengths as close to Lyman-\(\alpha\) as possible would enable the cosmological use of the highest redshift SLSNe-I.
As mentioned earlier, one challenge for our investigation at wavelengths shorter than \(\sim 3000\) Å is the presence of strong broad absorption features. These features can potentially introduce scatter in any peak magnitude relation. In the absence of a large sample of SLSN UV spectra, we make the same assumption as for the 250 nm band, namely that all SLSNe-I exhibit similar spectral features near peak. Considering the SLSNe-I spectral libraries of Quimby et al. (2018) and Yan et al. (2018), this assumption is reasonable and provides an important first step towards investigating high redshift SLSNe as cosmological probes in the FUV.
For our current SLSNe sample (in terms of their redshift and available data), the bluest regions which guarantee a statistically useful number of SLSNe cover a synthetic filter centred at 190 nm (see Section 3.1 for details). Six SLSNe in the literature SLSN-UV sample have photometric data coverage close to 190 nm in their rest frame, and their data quality in the bluest filter passes the quality criteria defined in Section 2.1.1. Similar to other synthetic bands, we fit light curves in the observed filters that are closest to 190 nm in the supernova rest-frame and employ GPR to estimate the peak
Figure 11: Absolute peak magnitude \(M_{0}(250)\) vs. colour at peak \((250-310)_{0}\) (first panel), with colour at 15 days before the peak \((250-310)_{-15}\) (second panel), and with change in colour over 15 days before the peak \(\Delta(250-310)_{-15}\) (third panel) for the SLSN-UV Test sample. Dashed lines show the linear fit functions estimated using Bayesian regression and the shaded regions mark the 1 and 2 \(\sigma\) confidence intervals. The bumpy (may-be bumpy) SLSNe in the sample are marked with red cross (yellow squares).
Figure 12: Absolute peak magnitude \(M_{0}(250)\) vs. rise rate over 15 days \(\Delta M_{-15}\) (first panel), decline rate over 15 days \(\Delta M_{15}\) (second panel), and decline rate over 30 days \(\Delta M_{30}\) (last panel) for the SLSN-UV test sample. The dashed line in the left panel shows the best fit function estimated using Bayesian regression and the shaded regions mark the 1 and 2\(\sigma\) confidence intervals.
absolute magnitude \(M_{0}(190)\) and the rise time \(\tau_{\rm rise}^{\rm\Delta 1m}\). The error on the rise time is calculated by Monte Carlo resampling (Section 3.2.2). We calculate cross-filter \(K\)-corrections for the peak magnitudes using the _Gaia16apd_ spectrum as a standard. Any results obtained here can be scaled and corrected to a better standard in the future with a larger spectral sample.
Figure 14 shows the peak magnitude in the 190 nm filter, \(M_{0}(190)\), versus the rise time (\(\tau_{\rm rise}^{\rm\Delta 1m}\)) for these six SLSNe-I. Supernovae have been observed to typically evolve faster at bluer wavelengths, and the 190 nm rise times of about \(\sim\)5-10 days confirm this expectation when compared to the redder bands, i.e., 250 nm, where the rise times are longer (\(\sim\)5-20 days).
The light curve peak absolute magnitude is observed to be correlated with the rise time, however with a steeper slope as compared to 250 nm. The fit function obtained with the Bayesian regression analysis is shown as the dashed line in Figure 14, along with the 1\(\sigma\) and 2\(\sigma\) confidence interval limits. The intrinsic scatter of the fit is 0.87 mag and the posteriors of the correlation coefficients are very sensitive to the prior information. We provide normal priors motivated by the values of the correlation parameters estimated with the least-squares fitting method. The slope of the relation is \(0.37\pm 0.19\), a steeper value as compared to the \(0.09\pm 0.02\) estimated for the \(M_{0}(250)\) - \(\tau_{\rm rise}^{\rm\Delta 1m}\) relation. This result, along with the other results presented here, albeit with small data sets, motivates further exploration of SLSNe as cosmological tools at high redshift.
### Are Bumpies different?
Nicholl and Smartt (2016) first suggested that pre-peak bumps may be ubiquitous in SLSN light curves and that SLSNe with and without observed bumps may not be different populations. That is, the non-detection of pre-peak bumps in some light curves may be the result of insufficient depth and/or a lack of early data. However, there has not been any conclusive study on this topic to date. Angus et al. (2019) confirmed the existence of SLSNe without any pre-peak bump within the DES data. We assess the peak magnitude vs. rise time correlation in the 250 nm band (Section 4.2), where we separately mark the bumpy SLSNe-I in the SLSN-UV test sample. In Figure 8, bumpy (and may-be bumpy) SLSNe show a similar peak magnitude vs. rise time relationship to the non-bumpy objects in the 250 nm filter. However, we note an apparent offset between the bumpy and non-bumpy \(M_{0}(250)\)-\(\tau_{\rm rise}^{\rm\Delta 1m}\) relationships, where the former appear slightly shifted to the upper right, suggesting that they are either brighter, slower evolving, or both. This is indicated in the measured scatters when the correlation is determined separately for each of them, as shown in Figures 9 and 10. No clear trends are found in correlations with other parameters, such as colour or decline rate (Figures 11 and 12). A more luminous peak and/or a slower rise time can be attributed to the addition of a separate pre-peak bump convolved with the main burst light curve, if indeed the main bursts follow a standard relation.
To explore the populations further, we measure the peak absolute magnitude and rise times in the 310 nm synthetic band for the SLSN-UV test sample, owing to the data availability (Figure 15). Two objects, namely HSCadga and HSCapuo with redshifts 2.4 and 3.2 respectively, are not included here because they lack data in the rest-frame 310 nm filter. The bumpy (may-be bumpy) SLSNe are marked with red crosses (yellow squares). In Figure 15, SLSNe-I identified as having a confirmed pre-peak bump are shifted to the upper right, similar to what was observed for the 250 nm correlation, although the may-be-bumpy SLSNe do not stand out. Additionally, the whole population has a relatively broader distribution compared to \(M_{0}(250)\). Given the very limited data set, one cannot draw a strong conclusion on whether or not pre-peak bumps in SLSNe-I are ubiquitous, but the present analysis suggests that they could be a different sub-class.
A KS test on the two samples gives a p-value \(\gg\) 0.05, indicating that we cannot reject the hypothesis that the SLSN-UV test sample and the bumpy SLSNe are drawn from the same distribution. However, we do observe a significant reduction in scatter in the peak magnitude-rise time correlation when bumpy SLSNe are treated separately, suggesting two distinct sub-populations. Larger samples from future surveys with
Figure 14: \(M_{0}(190)\) versus \(\tau_{\rm rise}^{\rm\Delta 1m}\) for \(z>1.5\) SLSNe with sufficiently blue photometry in the rest frame. The dashed line is the linear fit to the data obtained using Bayesian regression and the shaded regions show the 1\(\sigma\) and 2\(\sigma\) confidence intervals. The fit parameters and Pearson's \(r\) coefficient are given in the legend.
Figure 13: Rise rate (\(\Delta M_{-15}\)) vs. decline rate (\(\Delta M_{15}\)) plot for the SLSN-UV test sample.
deep photometric sensitivities and early and consistent cadence data are needed to properly test this hypothesis.
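For reference, the two-sample comparison mentioned above can be reproduced with a standard two-sample KS test; the array names below are placeholders for the bumpy and non-bumpy peak magnitudes.

```python
from scipy.stats import ks_2samp

def compare_populations(peak_mag_bumpy, peak_mag_nonbumpy):
    """Two-sample KS test on the M_0(250) distributions of bumpy vs. non-bumpy SLSNe."""
    stat, p_value = ks_2samp(peak_mag_bumpy, peak_mag_nonbumpy)
    return stat, p_value  # p >> 0.05: a common parent distribution cannot be rejected
```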
## 5 Summary and perspectives
High redshift SLSNe are, and will be, detected by their rest-frame UV emission with current and future optical/NIR surveys. This work presents a preliminary attempt to explore the rest-frame UV of SLSNe type I in the context of their use as cosmological probes at high redshift. If standardisable in the UV, SLSNe would provide an effective tool for measuring cosmological parameters from \(z\sim 0\)-20. SLSNe would provide a complementary method to SNe Ia for measuring the Hubble diagram up to \(z\sim 1.2\), as well as a means to extend it beyond that, well into the epoch of deceleration, enabling us to distinguish between various dark energy models.
The data sample compiled for this work is a set of 22 published SLSNe-I in the redshift range \(z\sim 1\)-3, referred to as the Literature Sample. We apply data quality cuts to the Literature Sample in order to select objects with the best and/or most complete available UV/NUV data, and this sub-set of 13 SLSNe-I is called the SLSN-UV test sample. All the cosmological correlations are determined using the SLSN-UV test sample. We also identify SLSNe which have, or most likely have, an early pre-peak bump in their light curve, and refer to them as bumpy SLSNe. With the aim of exploring the rest-frame UV, and given the redshift range of the sample and the available data, we chose a synthetic filter centred at 250 nm to analyse the peak magnitude correlations. All the SLSN light curves are fit using GPR to avoid any assumption on the light curve shape (see Section 3.2), and these GP interpolated light curves are used to estimate the peak absolute magnitudes (\(M_{0}(250)\)) and the light curve properties (along with their uncertainties). We apply cross-filter \(K\)-corrections to the estimated peak magnitudes from the observer-frame filter into the 250 nm band using the _Gaia16apd_ UV spectrum at peak, and into the 310 nm band (used for calculating colours) with a 15 000 K blackbody function, assuming these as standards. Peak magnitude correlations are modelled using a Bayesian framework which constrains the posteriors of the correlation coefficients along with the intrinsic scatter in the relation. The main results of the work are summarised in the following.
* The peak absolute magnitude \(M_{0}(250)\) of the SLSN-UV test sample correlates with the rise time \(\tau_{\rm rise}^{\rm\Delta 1m}\), such that brighter SLSNe rise faster, with an intrinsic scatter of 0.29 mag. Excluding the bumpy (and may-be bumpy) SLSNe-I, the \(M_{0}(250)\)-\(\tau_{\rm rise}^{\rm\Delta 1m}\) correlation becomes tighter, with \(\sigma_{int}=0.2\) and RMSE = 0.15. With or without the bumpy SLSNe-I, this result strongly encourages further investigation into their use as high redshift cosmological probes in the rest-frame UV.
* For the colour evolution of SLSN light curves, we correlate the peak magnitude with three quantities: the colour at peak, the colour at 15 days before peak, and the rate of change of colour between these two epochs. The colour terms are calculated using the 250 nm and 310 nm magnitudes. We observe correlations, albeit weak, for the three colour quantities with the peak magnitude \(M_{0}(250)\) for the SLSN-UV test sample. A relatively stronger relationship is seen between \(M_{0}(250)\) and the delta colour \(\Delta(250-310)_{-15}\), with a scatter of 0.37.
* The peak magnitude versus rise rate relation (\(\Delta M_{-15}\)) reproduces a result similar to that observed for the rise time. More interestingly though, we do not observe any correlation of the peak magnitudes in the 250 nm band with the decline rate over 15 and 30 days (\(\Delta M_{15}\) and \(\Delta M_{30}\)). This is contrary to what has been observed for the peak magnitude in the 400 nm band (IS14; I20).
* Six high redshift SLSNe from the SLSN-UV test sample have photometric data that enable an analysis in a synthetic band centred at 190 nm in their rest frame. We explore this band in order to investigate the peak magnitude versus rise time relation at bluer wavelengths, relevant for using SLSNe at very high redshifts. We observe a correlation with an intrinsic scatter \(\sigma_{int}\) = 0.87 (RMSE = 0.67).
* We also perform the peak magnitude-rise time correlation in the 310 nm band for the test sample SLSNe which have the required data. This is done to examine whether bumpy SLSNe show any distinguishing trend in this band similar to the 250 nm band. We find that the three SLSNe with confirmed pre-peak bumps in their light curve are shifted to the upper right corner of the plot, similar to what was observed for the 250 nm correlation. This indicates that pre-peak bumps might not be a universal phenomenon among SLSNe-I; however, it is difficult to draw a conclusive result given the small data set. Additionally, comparing the rise times at three different wavelength bands, we observe that \(\tau_{\rm rise}^{\rm\Delta 1m}\) shows a consistent decrease with decreasing wavelength, with rise times of \(\sim\)10-30 days at 310 nm, \(\sim\)10-20 days at 250 nm, and \(\sim\)5-10 days at 190 nm.
The results obtained in this work are highly promising for the use of SLSNe-I as high redshift cosmological probes in their rest-frame UV. We have adopted an unbiased approach for data selection, using all available data with appropriate photometry. However, the data set is still too small for a very robust analysis. Larger samples in the future are strongly encouraged to test the observed correlations. Furthermore, SLSNe-I UV spectroscopy is critical to test for absorption-line consistency at wavelengths from 1216 Å (Lyman-\(\alpha\)) to \(\sim\)3000 Å, where SLSNe-I continua show absorption.
We note that if the bumpy and non-bumpy populations are indeed different, the peak magnitude vs. rise time relationship is significantly improved when eliminating bumpy (and may-be-bumpy) SLSNe-I and is almost comparable to that of SNe Ia, as demonstrated in Section 4.2, with the important caveat of a small sample.
Figure 15: \(M_{0}(310)\) versus \(\tau_{\rm rise}^{\rm\Delta 1m}\), also measured in the 310 nm band, for the SLSN-UV test sample. The bumpy (may-be bumpy) SLSNe in the sample are marked with red crosses (yellow squares).
With sufficiently deep photometry, bumpy SLSNe-I can be identified by their distinct light curves and removed from cosmological samples. Additionally, we would like to highlight an interesting inference: at UV wavelengths, SLSN peak magnitudes exhibit a relatively tighter correlation with the rising phase of the light curve (here \(\tau_{\rm rise}^{\rm\Delta 1m}\)) than with the decline, which is the common parameter used in the literature for SLSN relations in optical bands. With the present sample, we do not observe any trend of \(M_{0}(250)\) with the decline rate. Due to data limitations, we cannot perform this analysis in the 400 nm band for a direct comparison with IS14 and I20. It would be natural to explore rise time correlations at optical wavelengths with a larger sample in the future.
Finally, a key challenge for surveys that aim to detect very high redshift SLSNe is the very long \((1+z)\) time-dilated search baseline needed to detect their evolution. For example, an assumed rest-frame 50-day rise and overall 200-day evolution at \(z\sim 15\) become \(\sim\)2 and \(\sim\)9 years, respectively. Supernovae are observed to typically evolve faster at shorter wavelengths and this work confirms this relation into the NUV and FUV, with rise timescales becoming increasingly shorter, and as short as \(\sim\)5-10 days for the rise from 1 magnitude below peak to peak at wavelengths near \(\sim\)1900 Å. The comparatively shorter evolution timescales for SLSNe-I in the UV compared to the optical will help mitigate the expected extremely long duration evolution of SLSNe at the highest redshifts (\(z\sim 6\)-20), making surveys more practical and enabling their detection in surveys with cadences designed for other types of lower redshift supernovae.
Surveys for high redshift SLSNe require both deep imaging capability (m \(\gtrsim\) 26 per filter, per epoch) and wide fields to discover these relatively rare events. However, their high utility outlined in Section 1, including galaxy and stellar evolution, ISM, CGM, and IGM probes, detection of the deaths of Population III stars, searches for pair-instability events, and cosmological probes, warrants such surveys. SLSNe can be detected to \(z\sim 6\) using optical facilities such as the CTIO Dark Energy Camera, the NAOJ Subaru Hyper Suprime-Cam, and the future Keck Wide-Field Imager. SLSNe to \(z\sim 20\) can be detected by facilities such as the Nancy Grace Roman Space Telescope, Euclid, the University of Tokyo Atacama Observatory SWIMS, and the Kunlun Dark Universe Survey Telescope (KDUST).
## Acknowledgements
NK is supported by a grant from VILLUM FONDEN (project number 16599). NK would like to thank Charlotte Angus, Mat Smith, Luca Izzo for the very helpful discussions. We also thank L.Yan for providing us the reduced spectra for Gaia16apd. Part of this research was funded by the Australian Research Council Centre of Excellence for Gravitational Wave Discovery (OzGrav), CE170100004 and the Australian Research Council Centre of Excellence for All-sky Astrophysics in 3 Dimensions (ASTRO-3D), CE170100013. MB acknowledges financial support from MIUR (PRIN 2017 grant 20179ZF5KS).
## Data Availability
All data underlying this article is available within the article and enlisted in Table 1 and Table 2. In addition, individual SLSNe-I light curves and spectra analysed in this article are available to the public via their corresponding papers which are referenced in Table 1.
|
2309.10995 | Statistical analysis of the gravitational anomaly in {\it Gaia} wide
binaries | The exploration of the low acceleration $a<a_{0}$ regime, where $a_{0}=1.2
\times 10^{-10}$m s$^{-2}$ is the acceleration scale of MOND around which
gravitational anomalies at galactic scale appear, has recently been extended to
the much smaller mass and length scales of local wide binaries thanks to the
availability of the {\it Gaia} catalogue. Statistical methods to test the
underlying structure of gravity using large samples of such binary stars and
dealing with the necessary presence of kinematic contaminants in such samples
have also been presented. However, an alternative approach using binary samples
carefully selected to avoid any such contaminants, and consequently much
smaller samples, has been lacking a formal statistical development. In the
interest of having independent high quality checks on the results of wide
binary gravity tests, we here develop a formal statistical framework for
treating small, clean, wide binary samples in the context of testing
modifications to gravity of the form $G \to \gamma G$. The method is validated
through extensive tests with synthetic data samples, and applied to recent {\it
Gaia} DR3 binary star observational samples of relative velocities and internal
separations on the plane of the sky, $v_{2D}$ and $r_{2D}$, respectively. Our
final results for a high acceleration $r_{2D}<0.01$pc region are of
$\gamma=1.000 \pm 0.096$, in full accordance with Newtonian expectations. For a
low acceleration $r_{2D}>0.01$pc region however, we obtain $\gamma=1.5 \pm
0.2$, inconsistent with the Newtonian value of $\gamma=1$ at a $2.6 \sigma$
level, and much more indicative of MOND AQUAL predictions of close to
$\gamma=1.4$. | X. Hernandez, V. Verteletskyi, L. Nasser, A. Aguayo-Ortiz | 2023-09-20T01:29:04Z | http://arxiv.org/abs/2309.10995v2 | # Statistical analysis of the gravitational anomaly in \(Gaia\) wide binaries.
###### Abstract
The exploration of the low acceleration \(a<a_{0}\) regime, where \(a_{0}=1.2\times 10^{-10}\)m s\({}^{-2}\) is the acceleration scale of MOND around which gravitational anomalies at galactic scale appear, has recently been extended to the much smaller mass and length scales of local wide binaries thanks to the availability of the _Gaia_ catalogue. Statistical methods to test the underlying structure of gravity using large samples of such binary stars and dealing with the necessary presence of kinematic contaminants in such samples have also been presented. However, an alternative approach using binary samples carefully selected to avoid any such contaminants, and consequently much smaller samples, has been lacking a formal statistical development. In the interest of having independent high quality checks on the results of wide binary gravity tests, we here develop a formal statistical framework for treating small, clean, wide binary samples in the context of testing modifications to gravity of the form \(G\rightarrow\gamma G\). The method is validated through extensive tests with synthetic data samples, and applied to recent _Gaia_ DR3 binary star observational samples of relative velocities and internal separations on the plane of the sky, \(v_{2D}\) and \(r_{2D}\), respectively. Our final results for a high acceleration \(r_{2D}<0.01\)pc region are of \(\gamma=1.000\pm 0.096\), in full accordance with Newtonian expectations. For a low acceleration \(r_{2D}>0.01\)pc region however, we obtain \(\gamma=1.5\pm 0.2\), inconsistent with the Newtonian value of \(\gamma=1\) at a \(2.6\sigma\) level, and much more indicative of MOND AQUAL predictions of close to \(\gamma=1.4\).
keywords: gravitation -- stars: kinematics and dynamics -- binaries: general -- statistics
## 1 Introduction
In the context of the debate surrounding the identification of low acceleration gravitational astronomical anomalies as either the result of a change in gravity at those scales, or as indication of the existence of a dominant dark matter component, wide binaries have been identified as capable of providing relevant independent insights, Hernandez et al. (2012). Solar mass star binaries on circular orbits with separations larger than 0.035 pc (7000 au) lie in the regime where accelerations fall below \(a_{0}\), where \(a_{0}=1.2\times 10^{-10}\)m s\({}^{-2}\) is the characteristic acceleration scale of MOND, an indicative threshold at which observed galactic dynamics show the above mentioned gravitational anomalies, e.g. Milgrom (1983), Lelli (2017).
Under a modified gravity interpretation one expects the appearance of gravitational anomalies at accelerations larger than \(a_{0}\) by a factor of a few, due to the presence of a smooth transition between regimes. In the particular case of the wide binaries treated, gravitational anomalies would be expected at separations smaller than the 0.035 pc mentioned above for an additional reason: the mean total masses per binary system are of only 1.5 \(M_{\odot}\). Indeed, recently Hernandez et al. (2022), Chae (2023a) and Hernandez (2023) using _Gaia_ wide binaries have reported gravitational anomalies appearing at separations above 0.01 pc (2000 au).
Given the inferred local volume density of dark matter, its total content expected within a wide binary orbit is negligible in comparison to the masses of the stars themselves. Further, given the assumed velocity dispersion of the hypothetical dark matter particles of the Milky Way halo of
160 km s\({}^{-1}\), clustering on scales with dynamical equilibrium velocities of \(<1\)km s\({}^{-1}\), as applies to local wide binaries, would require orders of magnitude of cooling, for a component which by construction must be dissipationless. Thus, any gravitational anomaly of the type encountered at galactic scales and beyond, found in wide binary stars, cannot comfortably be ascribed to the presence of dark matter.
There are details to be taken into account when performing such a wide binary gravity test. Crucially, because the orbital timescales of the systems in question are many thousands of years, the test can only be undertaken statistically by examining large samples of wide binaries and comparing observed distributions of relative internal velocities to various competing models, e.g. Hernandez et al. (2012), Pittordis & Sutherland (2018), Banik & Zhao (2018), Hernandez et al. (2019), Acedo (2020), Pittordis & Sutherland (2023), Hernandez (2023) and Chae (2023a). Another key concern is the presence of kinematic contaminants in local wide binary samples: cases where the two stars of a candidate binary do not in fact form a bound pair but are merely undergoing a close flyby event, e.g. Pittordis & Sutherland (2019), and hidden tertiaries, cases where one or both of the stars in a bound binary might in fact be an unresolved binary itself, e.g. Banik & Zhao (2018), Clarke (2020). In either of the above cases, the observed relative velocity between both identified components will be the result of the internal gravitational attraction between both stars, and also of unrelated physical ingredients: the initial conditions of the hyperbolic flyby or the internal dynamics of the unresolved binary.
One approach has been to attempt to model one (e.g. hidden tertiaries in Chae 2023a) or both of these kinematic contaminants (e.g. Pittordis & Sutherland 2023) and account for their effects so as to identify the underlying behaviour of gravity after having modelled out the contribution of kinematic contaminants. These studies have in one case recently reported the presence of a gravitational anomaly consistent with MOND appearing for separations on the plane of the sky, \(r_{2D}\), larger than \(r_{2D}>0.01\) pc, while calibrating a hidden tertiary model in the Newtonian high acceleration \(r_{2D}<0.01\) pc regime and carefully excluding from the sample flyby events using isolation and relative radial velocity cuts, Chae (2023a). In the other case, attempting to model simultaneously both sources of kinematic contaminants, and not considering the high acceleration Newtonian region for calibration or consistency checks, Pittordis & Sutherland (2023) report a better fit to Newtonian gravity than to a modified gravity model tested, looking only at the \(r_{2D}>0.01\)pc regime.
An independent approach is to attempt a thorough cleaning of all kinematic contaminants before performing any gravity test with a wide binary sample, where very careful selection strategies are required, leading to much smaller samples than the ones mentioned above. This has been reported in Hernandez et al. (2022) and Hernandez (2023), showing the appearance of a gravitational anomaly at the same \(r_{2D}\) threshold as reported by Chae (2023a), although lacking any formal statistical analysis of the details of such an anomaly.
The stringent requirements of a clean sample strongly limit the final numbers of binaries considered, yielding almost two orders of magnitude fewer binaries than those used in large samples where removal of kinematic contaminants is much less thorough, e.g. Pittordis & Sutherland (2023) or Chae (2023a). However, at the expense of numbers, there is a significant gain in certainty that the binaries included do in fact represent the physics one is trying to assess, e.g. Hernandez (2023), Chae (2023b). This makes both approaches complementary, largely independent and valuable avenues towards a final answer on this subject.
The present paper develops and presents an application of a formal statistical method to infer a gravity model where Newton's constant is re-scaled by a fixed factor, as expected for Solar Neighbourhood wide binaries under MONDian models (e.g. Banik & Zhao 2018), parameterised as \(G\rightarrow\gamma G\), using a small clean sample of local wide binaries from the _Gaia_ DR3. Optimal values of \(\gamma\) relevant to both the high acceleration \(r_{2D}<0.01\)pc and the low acceleration \(r_{2D}>0.01\)pc regimes are obtained. This analysis includes a full probabilistic treatment of the probability density functions (PDFs) for the two projection angles of the wide binary orbits involved, the sampling of a distribution of semi-major axes, ellipticities, orbital phase angles and relative velocity errors.
Section (2) summarises the sample selection strategy and first order results, Section (3) develops the probabilistic treatment of the problem, which is applied in Section (4) to the _Gaia_ DR3 sample previously described. Lastly, Section (5) includes a final discussion of the results obtained and their implications.
Figure 1: Log-Log plot of 1D relative velocities between the two components of each wide binary in the sample on the plane of the sky as a function of the observed separation of the two components on the same plane, small grey dots, RA and Dec. measurements appear separately. The large dots with error bars give binned averages for the 1D relative velocities, green and red for RA and Dec., respectively. Horizontal bars show the extent of the bins and vertical ones the \(1\sigma\) confidence intervals on the quantities given. The red line shows a \(v_{2D}\propto r_{2D}^{-1/2}\) fit, which is followed by the data in the \(r_{2D}<0.01\)pc range. In the low acceleration \(r_{2D}>0.01\) pc range, a similar fit is shown which appears a factor of 1.62 above the previous one. If this factor is interpreted as due to an effective change in \(G\rightarrow\gamma G\), one would infer \(\gamma=1.27\) from this rough comparison.
## 2 Sample selection and preliminary results
The DR3 _Gaia_ wide binary sample analysed in this paper is a small extension of the sample described and treated in Hernandez (2023), where all the details describing this sample can be found. In summary, the sample comprises 667 wide binaries within a distance of \(D<125\) pc from Earth and with a minimum signal-to-noise ratio in parallax of \((S/N)_{\varpi}=100\), where an initial binary candidate selection criterion, following El-Badry & Rix (2018), selects a list of main sequence stellar pairs such that the separation along the line of sight is smaller than twice the separation on the plane of the sky, \(2r_{2D}\), to within three times the \(1\sigma\) confidence interval of the line-of-sight separation. Such binary companion candidates are sought up to a separation on the plane of the sky \(r_{2D}<0.5\) pc. This initial list returns many binary candidates with shared stars, which are removed to construct a catalogue where each binary system is isolated from all other _Gaia_ sources to within 0.5 pc, almost an order of magnitude larger than the largest internal separation used of \(r_{2D}=0.06\) pc. Then, quality cuts are imposed to leave only binaries where both stars have \(R_{p}\), \(G\) and \(B_{p}\) signal-to-noise values \(>20\), and where both stars have a reported radial velocity measurement in the catalogue.
Requiring a radial velocity measurement for all stars makes it possible to calculate all astrometric corrections, including not only full spherical geometry corrections but also perspective effects, e.g. Smart (1968), and also ensures each individual star has a high quality single star spectroscopic, photometric and astrometric _Gaia_ solution, something which strongly suppresses the presence of hidden tertiaries. Indeed, many sources lack a reported radial velocity measurement precisely because of a poor single stellar solution.
Next, a series of cuts are introduced to further reduce to a minimum the probability of any kinematic contamination remaining in the sample. Following Belokurov et al. (2020) and Penoyre et al. (2020) a careful selection of a region of the main sequence in the CMD diagram below the old turn-off points of the stars obtained is performed (see Hernandez 2023), for all stars involved. This excludes photometric binaries, and minimises the probability of keeping unresolved hidden binaries. Indeed, the two authors above estimate through extensive simulations reproducing _Gaia_ DR2 observational constraints, that the probability of keeping unresolved hidden tertiaries in samples out to 1kpc is below 5%, after selecting the CMD region described above and imposing a _Gaia RUWE_ single star solution quality index cut of \(<1.4\). Here we impose a much more stringent _RUWE_\(<1.2\) limit, and remain within a much smaller distance of only \(D<125\)pc, using the more accurate DR3.
Finally, a cut in the upper allowed value of the CLASSPROB_DSC_COMBMOD_BINARYSTAR _Gaia_ DR3 parameter, henceforth \(B_{P}\), of \(B_{P}<0.4\) is introduced. This parameter gives an assessment of the likelihood that a single _Gaia_ source might in fact be a binary star, not an actual statistical probability, but at present only a qualitative assessment (G. Gilmore, private communication). For this reason this last cut was relaxed from the \(B_{P}<0.2\) used in Hernandez (2023), which allows for an increase of about 50% on the total final numbers of binary systems included. Still, the use of all the above parameters sequentially ensures that final average data quality values are well above the individual thresholds introduced. The relevant observational parameters of the sample used are given in Table (1), where we can see, for example, final mean signal-to-noise in parallax of close to 900, mean values of _RUWE_ of 1.01 and mean values of \(B_{P}\) of 0.12 for the samples used. All cuts on individual stars are implemented such that if either one or both of the components of a candidate binary fail the test, the binary candidate is removed from consideration.
Regarding the exclusion of flybys, having radial velocities for all stars, we exclude from consideration any binary candidate where the difference between the radial velocity measurements of both components exceeds 4 km s\({}^{-1}\). Binaries with relative internal velocities on the plane of the sky, \(v_{2D}\), above 4 km s\({}^{-1}\) are likewise excluded. Given the pairwise relative velocity distribution of field stars in the Solar Neighbourhood is a Gaussian with a \(1\sigma\) value close to 60 km s\({}^{-1}\), and that the average interstellar separation is close to 1pc, the expected number of flybys satisfying simultaneously \(r_{2D}<0.06\)pc and relative velocities both along the line of sight and on the plane of the sky below 4 km s\({}^{-1}\), is negligible in our final sample.
Then, a signal-to-noise quality cut of 1.5 is applied to the resulting binary relative velocity values, with binaries where the velocity signal-to-noise ratio in either RA or Dec is below this threshold being removed. This excludes cases where either of the two velocity components is poorly measured, which occurs in 15% of cases for each of RA and Dec. As this percentage is small, the chance of both RA and Dec components having poorly measured relative velocities is quite small. This ensures no small relative velocity cut is being introduced, just a filter on cases where either component of \(v_{2D}\) is suspect. In total, close to 34% of cases were removed through this cut. After this cut, the average \(v_{2D}\) signal-to-noise values are of \(<v_{2D}/\sigma_{v}>=18.4\) when \(r_{2D}<0.01\)pc, and \(<v_{2D}/\sigma_{v}>=7.9\) when \(r_{2D}>0.01\)pc, much larger than the 1.5 quality cut filter. To further exclude the possibility that any non-Newtonian signal in the low acceleration
Figure 2: The figure illustrates the projection of the elliptical orbit described by the relative separation, \(\vec{r}\) and relative velocity, \(\vec{v}\), of both stars in the plane of their orbit, yellow, onto the plane of the sky, shown in blue, see text.
\(r_{2D}>0.01\)pc region might be the result of kinematic contaminants or noise, any binaries with \(v_{2D}>1\)km s\({}^{-1}\) are also removed, in this region only, following e.g. Chae (2023a).
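To make the sequence of cuts concrete, the following is a schematic implementation of the kinematic-contaminant filters just described; the data-frame column names are placeholders of our own (not actual _Gaia_ archive field names) and the thresholds are the ones quoted above.

```python
import numpy as np
import pandas as pd

def apply_kinematic_cuts(df: pd.DataFrame) -> pd.DataFrame:
    """Schematic version of the contamination cuts of Section (2); columns are illustrative."""
    keep = (
        (np.abs(df["rv_1"] - df["rv_2"]) < 4.0)          # radial velocity difference < 4 km/s (flybys)
        & (df["v_2d"] < 4.0)                              # plane-of-sky relative velocity < 4 km/s
        & (df["ruwe_1"] < 1.2) & (df["ruwe_2"] < 1.2)     # stringent single-star astrometric quality
        & (df["bp_1"] < 0.4) & (df["bp_2"] < 0.4)         # CLASSPROB_DSC_COMBMOD_BINARYSTAR cut
        & (df["v_ra"] / df["sigma_v_ra"] > 1.5)           # velocity signal-to-noise > 1.5 in RA ...
        & (df["v_dec"] / df["sigma_v_dec"] > 1.5)         # ... and in Dec
    )
    # extra safeguard in the low acceleration region only: v_2D < 1 km/s for r_2D > 0.01 pc
    keep &= ~((df["r_2d"] > 0.01) & (df["v_2d"] > 1.0))
    return df[keep]
```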
Resulting relative velocities in both RA and Dec. as a function of \(r_{2D}\) are shown in Fig.(1). Binned mean values in these quantities are shown by the circles with error bars, for RA and Dec. measurements, green and red, respectively. The red line gives a \(v\propto r_{2D}^{-1/2}\) scaling fitted to the \(r_{2D}<0.01\)pc range, which as shown for a very similar clean sample in Hernandez (2023), is an accurate fit to Newtonian expectations of Jiang & Tremaine (2010). However, we see a regime change on crossing \(r_{2D}=0.01\) pc, 2000 au, where the averaged binned velocity values shift to another \(v\propto r_{2D}^{-1/2}\) scaling, which appears slightly above. Reading this boost factor from the graph suggests an underlying model where \(G\rightarrow\gamma G\) on reaching the low acceleration \(r_{2D}>0.01\)pc regime, with a value of \(\gamma=1.27\). This estimate is suggestive of the \(\gamma=1.43\pm 0.06\) reported by Chae (2023a), in good agreement also with MOND AQUAL expectations.
The estimate of \(\gamma\) described above is crude for a number of reasons; the details of the fit are somewhat subjective, depending on exactly which sets of mean points are used for each fit, points which in turn depend on the details of the binning performed. Also, this potentially crucial gravitational anomaly is being inferred from a discrete parameter of a complex velocity distribution, with the necessary lack of robustness and loss of information intrinsic to any binning procedure.
For this reason in the following section we develop a formal probabilistic model to carefully include all details of the inherent probability density functions (PDFs) at play: two for the relevant projection angles, one for a sampling of an orbital phase, one for a distribution of semi-major axes and one for a sampling of an ellipticity distribution, all for a given observed set of \(r_{2D}\), \(v_{2D}\) values, and particular total binary masses, \(M_{T}\). This will allow a formal testing of the \(G\rightarrow\gamma G\) hypothesis and return best fit inferred values of \(\gamma\), both in the high acceleration \(r_{2D}<0.01\) pc and in the low acceleration \(r_{2D}>0.01\) pc regimes, paying attention to the details of un-binned distributions of relative velocities, and no longer focusing on specific moments of these distributions. Full statistical, \(\sigma_{st}\), resolution, \(\sigma_{re}\), and systematic, \(\sigma_{sy}\), \(1\sigma\) confidence intervals on inferred values of \(\gamma\), will be developed and presented.
## 3 Statistical framework
The model which we shall test is one where gravity is purely Newtonian, but where the actual value of the gravitational constant is allowed to vary by a scale factor, such that \(G\rightarrow\gamma G\). A full probabilistic model will be presented such that use of all information content of the data is what determines the inferred value of \(\gamma\) and its corresponding confidence interval, under a flat prior assumption which neither enhances nor diminishes the probability of obtaining \(\gamma=1\) either in the high acceleration \(r_{2D}<0.01\) pc region, or in the low acceleration \(r_{2D}>0.01\) pc one. The orbits of the binary stars will hence be assumed to be Keplerian ellipses, and the sample will be assumed to be free of kinematic contaminants, in accordance to the strict sample selection criteria described in the previous section. Systematics regarding a scenario where this last assumption could be invalid, will be considered in the final section.
Each observed binary star, as described in the previous section, consists of two inferred masses, \(m_{1j}\) and \(m_{2j}\), from which a total mass per binary of \(M_{Tj}=m_{1j}+m_{2j}\) follows, a measured separation on the plane of the sky, \(r_{2Dj}\), a relative velocity between the two components on the plane of the sky, \(v_{2Dj}\), and an error on this last quantity, \(\sigma_{vj}\). With use of _Gaia_ FLAME masses for most of the stars included, and of magnitude-mass scalings calibrated using _Gaia_ FLAME masses, uncertainties in the masses will be below 10%, close to 5% on the average, Hernandez (2023), Chae (2023a). The upper distance limit of the sample of only 125 pc yields average signal-to-noise values for the parallax of our _Gaia_ sample of close to 1000, with medians of 764.4 and 740.6 for the primaries and secondaries, respectively (see Hernandez 2023), implying that the errors on \(r_{2D}\) and \(M_{T}\) will be much smaller than those on \(v_{2D}\). This is particularly relevant in the critical \(r_{2D}>0.01\)pc region where final mean \(<v_{2Dj}/\sigma_{vj}>=7.9\), i.e. 13% errors. Further, since the velocity depends only on the square roots of separation and mass, when adding errors in quadrature the velocity based inferences will be dominated, by well over an order of magnitude, by the satellite reported \(\sigma_{vj}\) quantities. Hence, we shall consider these last errors fully and consistently in the statistical and probabilistic model constructed, and ignore the errors on both \(r_{2D}\) and \(M_{T}\). A full validation of the entire scheme through the extensive use of synthetic samples will also be included.
From a Bayesian perspective, our first step is to calculate the probability that a given data point might arise from a particular model, i.e., from a particular value of \(\gamma\). We hence have to calculate the probability density distributions for both \(r_{2D}\) and \(v_{2D}\), given a value of \(\gamma\). These are obtained from the probability density functions determining
Figure 3: \(P(\tilde{v})\) curves from 1.5\(\times 10^{10}\) samples of equation (10) presented using 5000 \(\tilde{v}\) bins for \(\gamma=1\). The various curves are for different assumed ellipticity distributions parameterised using \(P(e)=(1+\alpha)e^{\alpha}\): \(\alpha=0.6\), yellow, \(\alpha=0.8\), orange, \(\alpha=1.0\), thermal, red, \(\alpha=1.2\), purple and \(\alpha=1.4\), blue, respectively. For any arbitrary value of \(\gamma\), these same curves can be trivially re-scaled using equation (10).
the details of a binary orbit, and the projection of the relative velocity and separation on the plane of the sky of both components, as follows:
For an inclination angle of the orbital plane of a binary system of \(i\) to the line of sight, the inclination of the relative velocity vector between both components, \(\vec{v}\) will be \(i_{v}\), where \(\sin i_{v}=\sin i|\cos(\beta+\phi-\phi_{0})|\), and that of the relative position of both components, \(\vec{r}\), will be \(i_{r}\), where \(\sin i_{r}=\sin i|\cos(\phi-\phi_{0})|\), with \(\beta\) the angle between \(\vec{r}\) and \(\vec{v}\), and \(\phi_{0}\) the phase angle of the radius vector having the largest inclination. The geometric set-up of the problem is summarised in Fig. (2), with the plane of the orbit shown in yellow, and the plane of the sky in blue. For a system with semi-major axis, \(a\), total mass \(M_{T}=m_{1}+m_{2}\) and ellipticity \(e\), the instantaneous relative velocity in 3D between the two components will be given by:
\[v^{2}=\frac{\gamma GM_{T}}{a}\left(\frac{2a}{r}-1\right), \tag{1}\]
where \(r\) is given by:
\[r=\frac{a(1-e^{2})}{1+e\cos\phi}, \tag{2}\]
where \(\phi\) is the true anomaly, the orbit phase angle measured from the pericentre. Using the last equation, equation (1) can be written as:
\[v^{2}=\frac{\gamma GM_{T}}{a}\left(\frac{1+e^{2}+2e\cos\phi}{1-e^{2}}\right). \tag{3}\]
Two auxiliary constant quantities which will be of use are the expressions for the magnitude of the angular momentum of the orbit, and the orbital period:
\[L=rv\sin\beta=[\gamma GM_{T}a(1-e^{2})]^{1/2}, \tag{4}\]
\[\tau=2\pi a\left(\frac{a}{\gamma GM_{T}}\right)^{1/2}. \tag{5}\]
We can now write the 2D projections of \(\vec{v}\) and \(\vec{r}\) using the inclination angle of the orbital plane, \(i\), as:
\[r_{2D}=r\cos i_{r}=\frac{a(1-e^{2})}{1+e\cos\phi}\left[1-\sin^{2}i\cos^{2}( \phi-\phi_{0})\right]^{1/2} \tag{6}\]
and,
\[v_{2D}=v\cos i_{v}=\left(\frac{\gamma GM_{T}}{a}\right)^{1/2}\left[\frac{(1+e^{2}+2e\cos\phi)\left(1-\sin^{2}i\cos^{2}(\beta+\phi-\phi_{0})\right)}{1-e^{2}}\right]^{1/2}. \tag{7}\]
For the angle between \(\vec{v}\) and \(\vec{r}\) we have:
\[\sin\beta=\frac{L}{rv}=\frac{1+e\cos\phi}{(1+e^{2}+2e\cos\phi)^{1/2}}. \tag{8}\]
Since we are assuming \(r_{2D}\) is an observed quantity with very little uncertainty for each nearby _Gaia_ binary pair, we can eliminate the dependence of \(v_{2D}\) on \(a\) by writing \(v_{2D}\) in terms of \(r_{2D}\):
\[v_{2D}=\left(\frac{\gamma GM_{T}}{r_{2D}}\right)^{1/2}\left[1-\sin^{2}i\cos^{2}(\phi-\phi_{0})\right]^{1/4}\left[\frac{1+e^{2}+2e\cos\phi-\sin^{2}i\left(e\sin\phi_{0}-\sin(\phi-\phi_{0})\right)^{2}}{1+e\cos\phi}\right]^{1/2}. \tag{9}\]

Introducing \(\tilde{v}=(GM_{T}/r_{2D})^{-1/2}v_{2D}\) we can write:

\[\tilde{v}\gamma^{-1/2}=\left[1-\sin^{2}i\cos^{2}(\phi-\phi_{0})\right]^{1/4}\left[\frac{1+e^{2}+2e\cos\phi-\sin^{2}i\left(e\sin\phi_{0}-\sin(\phi-\phi_{0})\right)^{2}}{1+e\cos\phi}\right]^{1/2}. \tag{10}\]
A distribution function for \(\tilde{v}\) can now be obtained from the above equation since the distribution functions for the angles involved are well known, with isotropy implying \(P(i)\propto\sin i\), \(\phi_{0}\) being uniformly distributed between 0 and \(2\pi\) and \(e\) having a distribution function which we take from the parametric form given in Hwang et al. (2022), \(P(e)=(1+\alpha)e^{\alpha}\). The distribution function for the angle \(\phi\) can now be obtained through the time spent at each phase interval using \(L=r^{2}d\phi/dt\) as follows:
\[\tau=\int_{0}^{\tau}dt=\int_{0}^{2\pi}\frac{r^{2}d\phi}{L}=\int_{0}^{2\pi} \frac{a^{2}(1-e^{2})^{2}d\phi}{L(1+e\cos\phi)^{2}}. \tag{11}\]
Using equations(4) and (5) for \(L\) and \(\tau\) we get:
\[1=\int_{0}^{2\pi}\frac{(1-e^{2})^{3/2}d\phi}{2\pi(1+e\cos\phi)^{2}}=\int_{0}^{ 2\pi}P(\phi)d\phi \tag{12}\]
and therefore,
\[P(\phi)=\frac{(1-e^{2})^{3/2}}{2\pi(1+e\cos\phi)^{2}} \tag{13}\]
Hence, distribution functions for the two projection angles \(i\) and \(\phi_{0}\), for the true anomaly \(\phi\), and for the ellipticity, \(e\), fully determine the probability density distribution of \(\tilde{v}\gamma^{-1/2}\), from which an observed \(r_{2D}\), an observed \(M_{T}\) and an assumed value of \(\gamma\) will yield a PDF for \(v_{2D}\). Or inversely, a set of observed \(r_{2D}\) and \(v_{2D}\), observed \(M_{T}\) values and an assumed value of \(\gamma\) will yield an empirical \(\tilde{v}\gamma^{-1/2}\) distribution. The PDF for \(a\) will necessarily be the one present in the data, through the elimination of \(a\) in favour of \(r_{2D}\) included in the use of eq.(2).
The PDF resulting from equation (10) is the master equation of the problem, and the one against which empirically inferred \(P_{I}(\tilde{v})\) PDFs will be compared. Unfortunately, this PDF does not seem to be analytical, in particular given the continuous dependence of \(P(e)\) on \(\alpha\), which we wish to leave as a free parameter given recent evidence of continuous variations of the ellipticity distribution of _Gaia_ wide binaries with \(r_{2D}\) reported by Hwang (2022). Thus, we perform high quality numerical samplings of \(P(\phi_{0})\), \(P(i)\), \(P(e)\) and \(P(\phi)\) to obtain large samples of \(5.1\times 10^{10}\) values of \(\tilde{v}\) from equation (10), which are then binned with a resolution of \(\sqrt{2}/5000\) for values of \(\alpha\) of 0.6, 0.8, 1.0 (corresponding to a thermal ellipticity distribution), 1.2 and 1.4, covering the
range of values of \(\alpha\) found by Hwang (2022) for _Gaia_ wide binaries in the \(r_{2D}\) range covered by our sample. These curves are shown in Fig. (3) for \(\gamma=1\), with a colour code which will be maintained throughout. For any arbitrary value of \(\gamma\), these same curves can be trivially re-scaled using equation (10). Notice the two critical points in \(\tilde{v}\) values where all \(\alpha\) curves closely cross. This feature can be used in wide binary gravity tests when large samples are involved, to eliminate systematic uncertainties due to unknown details in the ellipticity distribution and its possible \(r_{2D}\) dependences.
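For concreteness, the numerical construction of these theoretical curves can be sketched as below; this is a minimal Python illustration of the sampling just described, with our own function and variable names and a sample size far smaller than the \(5.1\times 10^{10}\) draws quoted above.

```python
import numpy as np

def sample_true_anomaly(e, rng):
    """Rejection-sample the true anomaly phi from eq. (13): P(phi) = (1-e^2)^{3/2} / (2 pi (1 + e cos phi)^2)."""
    pmax = (1.0 - e**2) ** 1.5 / (2.0 * np.pi * (1.0 - e) ** 2)  # maximum of P(phi), reached at phi = pi
    while True:
        phi = rng.uniform(0.0, 2.0 * np.pi)
        p = (1.0 - e**2) ** 1.5 / (2.0 * np.pi * (1.0 + e * np.cos(phi)) ** 2)
        if rng.uniform(0.0, pmax) < p:
            return phi

def sample_vtilde(n, alpha, rng):
    """Draw n values of v_tilde * gamma^{-1/2} from eq. (10), assuming P(i) ~ sin(i),
    phi_0 uniform in [0, 2 pi) and P(e) = (1 + alpha) e^alpha."""
    i = np.arccos(rng.uniform(0.0, 1.0, n))                 # isotropic inclinations
    phi0 = rng.uniform(0.0, 2.0 * np.pi, n)                 # phase of the maximum-inclination radius vector
    e = rng.uniform(0.0, 1.0, n) ** (1.0 / (1.0 + alpha))   # inverse-CDF sampling of P(e) = (1+alpha) e^alpha
    phi = np.array([sample_true_anomaly(ei, rng) for ei in e])
    proj = 1.0 - np.sin(i) ** 2 * np.cos(phi - phi0) ** 2
    num = (1.0 + e**2 + 2.0 * e * np.cos(phi)
           - np.sin(i) ** 2 * (e * np.sin(phi0) - np.sin(phi - phi0)) ** 2)
    return proj ** 0.25 * np.sqrt(num / (1.0 + e * np.cos(phi)))

# Example: an approximate P(v_tilde) histogram for a thermal (alpha = 1) ellipticity distribution.
rng = np.random.default_rng(0)
vt = sample_vtilde(100_000, alpha=1.0, rng=rng)
pdf, edges = np.histogram(vt, bins=5000, range=(0.0, np.sqrt(2.0)), density=True)
```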
Next we construct inferred \(P_{I}(\tilde{v})\) functions as described below. Assuming Gaussian errors, the observation of a \(v_{2Dj}\) value and its accompanying \(\sigma_{vj}\) confidence interval has to be viewed as a Gaussian PDF centred on \(v_{2Dj}\) and having a standard deviation of \(\sigma_{vj}\). Hence, each observed binary will have associated to it a \(\tilde{v}\) distribution given by:
\[P_{Ij}(\tilde{v})=\frac{1}{\sigma_{\tilde{v}j}\sqrt{2\pi}}e^{-(\tilde{v}-\tilde{v}_{j})^{2}/2\sigma_{\tilde{v}j}^{2}}, \tag{14}\]
where \(\tilde{v}_{j}=(GM_{Tj}/r_{2Dj})^{-1/2}v_{2Dj}\) and \(\sigma_{\tilde{v}j}=(GM_{Tj}/r_{2Dj})^{-1/2}\sigma_{vj}\), for a binary with an observed \(r_{2Dj}\) value and inferred masses. A first empirical \(P_{I}(\tilde{v})\) can now be constructed as:
\[P_{I}(\tilde{v})=\sum_{j=1}^{N_{tot}}P_{Ij}(\tilde{v}). \tag{15}\]
The above procedure also has the advantage of eliminating any need for \(\tilde{v}\) binning (e.g. Pittordis & Sutherland 2023) when comparing observed wide binary samples and theoretical \(\tilde{v}\) distributions.
Since we are testing the hypothesis of Keplerian orbits and a fixed value of \(\gamma\), the above function will be truncated at \(\tilde{v}\gamma^{1/2}<0\) and \(\tilde{v}>\sqrt{2}\gamma^{1/2}\) and then normalised, as the Gaussian extensions of both low and high \(\tilde{v}\) values will lead to unphysical tails at \(\tilde{v}<0\) and \(\tilde{v}>\sqrt{2}\gamma^{1/2}\). Finally, in the interest of sampling the range of \(v_{2D}\) values consistent with the reported \(\sigma_{vj}\) values, Gaussian re-samplings of the original \(v_{2Dj}\) values are performed to obtain alternative sets of \(v_{2Dj}\) values at the same fixed \(\sigma_{vj}\), \(r_{2Dj}\) and \(M_{Tj}\) values, with the final inferred \(P_{I}(\tilde{v})\) curve being the average of a large sample of 500 such \(v_{2Dj}\) re-samplings. This last step produces a mild smoothing of the \(P_{I}(\tilde{v})\) curves, as the average signal-to-noise of the relative velocity values in our sample is of 15.7, necessary to obtain well defined final goodness-of-fit curves, particularly for the small \(N_{tot}\)\(r_{2D}>0.01\) pc region where \(<v_{2Dj}/\sigma_{vj}>=7.9\) and total numbers are of only \(N_{tot}=108\). Convergence was tested and negligible changes in all of the reported parameters resulted, for a range of 100-1000 such error re-samplings.
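As an illustration of equations (14)-(15) and of the truncation just described, a minimal sketch is given below; it omits the 500 Gaussian error re-samplings, uses our own variable names, and expresses \(G\) in pc (km s\({}^{-1}\))\({}^{2}\) \(M_{\odot}^{-1}\) so that velocities are in km s\({}^{-1}\), separations in pc and masses in solar masses.

```python
import numpy as np

G_PC = 4.3009e-3  # Newton's constant in pc (km/s)^2 / M_sun

def empirical_pdf(v2d, sigma_v, r2d, m_tot, gamma, grid):
    """Sum-of-Gaussians estimate of P_I(v_tilde), eqs. (14)-(15), truncated to the
    physical range 0 < v_tilde < sqrt(2) * gamma^(1/2) and renormalised to unit area."""
    grid = np.asarray(grid)
    v2d, sigma_v = np.asarray(v2d), np.asarray(sigma_v)
    scale = np.sqrt(G_PC * np.asarray(m_tot) / np.asarray(r2d))   # (G M_T / r_2D)^{1/2}
    vt, svt = v2d / scale, sigma_v / scale                        # v_tilde_j and sigma_vtilde_j
    gauss = np.exp(-0.5 * ((grid[None, :] - vt[:, None]) / svt[:, None]) ** 2)
    pdf = np.sum(gauss / (svt[:, None] * np.sqrt(2.0 * np.pi)), axis=0)
    pdf[(grid < 0.0) | (grid > np.sqrt(2.0 * gamma))] = 0.0       # remove unphysical tails
    return pdf / np.trapz(pdf, grid)                              # renormalise
```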
Once a \(P_{I}(\tilde{v})\) has been constructed as described above for a given observed sample and an assumed value of \(\gamma\), it can be compared to the theoretical curves of \(P(\tilde{v})\), for any desired value of \(\alpha\). A sweep of values of \(\gamma\) will then be performed for each relevant value of \(\alpha\), to obtain comparisons of the empirical \(P_{I}(\tilde{v})\) curves to the master equation of the problem as a function of both \(\gamma\) and \(\alpha\). The comparison between an empirical \(P_{I}(\tilde{v})\) and a theoretical \(P(\tilde{v})\) one will be carried out through a Kolmogorov-Smirnov test, which yields a goodness-of-fit parameter as \((N_{tot})^{1/2}D_{KS}\), where \(D_{KS}\) is the largest vertical difference at a fixed \(\tilde{v}\) value between the cumulative distributions being compared and \(N_{tot}\) is the total number of observed wide binaries involved in any particular comparison. This comparison was in practice performed using a \(\sqrt{2}/5000\)\(\tilde{v}\gamma^{-1/2}\) resolution.
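A sketch of the corresponding comparison, assuming the empirical and theoretical PDFs have both been tabulated on the same \(\tilde{v}\) grid, could read:

```python
import numpy as np

def ks_statistic(pdf_emp, pdf_model, n_tot):
    """sqrt(N_tot) * D_KS, with D_KS the maximum vertical distance between the two
    cumulative distributions built from PDFs tabulated on a common v_tilde grid."""
    cdf_emp = np.cumsum(pdf_emp) / np.sum(pdf_emp)
    cdf_model = np.cumsum(pdf_model) / np.sum(pdf_model)
    return np.sqrt(n_tot) * np.max(np.abs(cdf_emp - cdf_model))
```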
Thus a maximum goodness-of-fit \(\gamma_{BF}\) is obtained for every observed sample treated and every assumed value of \(\alpha\). To obtain an internal confidence interval on \(\gamma_{BF}\) we proceed again through a Monte Carlo method, producing a syn
Figure 4: Left(a): the red curve shows the inferred \(P_{I}(\tilde{v}\gamma^{-1/2})\) curve for the 466 data points in the \(r_{2D}<0.01\)pc range, for the parameters minimising the KS test distance, \(\alpha=0.6\) and \(\gamma=1.000\). The blue curve gives the theoretical \(P(\tilde{v}\gamma^{-1/2})\) curve of this optimal comparison, at \(\alpha=0.6\). Right(b): the red curve shows the inferred \(P_{I}(\tilde{v}\gamma^{-1/2})\) curve for one particular of the 50 synthetic data samples produced, in the \(r_{2D}<0.01\)pc range, for the parameters minimising the KS test distance for the data, \(\alpha=0.6\) and \(\gamma=1.000\). All synthetic samples have identical sets of \(r_{2Dj}\), \(M_{Tj}\) and \(\sigma_{vj}\) values as the observed sample and differ only in having \(v_{2Dj}\) values obtained from a random sampling of equation (10). The blue curve is the same as in the left panel. All curves have a \(\tilde{v}\gamma^{-1/2}\) resolution of \(\sqrt{2}/5000\).
thetic wide binary sample having exactly the same number of binaries, the exact same set of \(r_{2Dj}\) values, same \(M_{Tj}\) values, and same \(\sigma_{vj}\) values, but having \(v_{2Dj}\) values produced from a random sampling of equation (10), at the maximum-goodness-of-fit \(\gamma\) and \(\alpha\) obtained for the observed sample. Hence, a statistical sample with the same data and error structure of the best fit solution is produced, and treated exactly the same as the original data sample, yielding a corresponding synthetic \(\gamma_{SBF}\) best fit solution. This process is repeated 50 times to obtain a set of \(\gamma_{SBFk}\) values, which are then found to be well described by a Gaussian distribution, whose standard deviation becomes the statistical \(1\sigma\) confidence interval of the original \(\gamma_{BF}\) obtained for the observed sample.
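Schematically, and reusing `sample_vtilde` and `G_PC` from the sketches above, one such synthetic realisation could be generated as follows (again with illustrative variable names only):

```python
import numpy as np

def synthetic_sample(r2d, m_tot, sigma_v, gamma, alpha, rng):
    """One synthetic realisation keeping the observed (r_2D, M_T, sigma_v) structure:
    v_2D is drawn from eq. (10) at the best-fit (gamma, alpha), with Gaussian noise of
    width equal to the reported sigma_v added."""
    vt = np.sqrt(gamma) * sample_vtilde(len(r2d), alpha, rng)        # v_tilde = gamma^{1/2} x (eq. 10 draw)
    v_circ = np.sqrt(G_PC * np.asarray(m_tot) / np.asarray(r2d))     # (G M_T / r_2D)^{1/2} in km/s
    return vt * v_circ + rng.normal(0.0, np.asarray(sigma_v))

# Each of the 50 realisations is then pushed through exactly the same gamma sweep as the data;
# the standard deviation of the recovered gamma_BF values gives sigma_st.
```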
## 4 Results
### The \(r_{2D}<0.01\) pc Sample
We now describe the application of the statistical framework introduced in the previous section to the observed _Gaia_ wide binary sample presented in Section (2). We begin with the \(r_{2D}<0.01\) pc high acceleration sub-sample, where equations (14) and (15) are used to produce an empirical \(P_{I}(\tilde{v}\gamma^{-1/2})\) PDF, for an assumed value of \(\gamma\). This function is then compared through the KS test against the theoretical \(P(\tilde{v}\gamma^{-1/2})\) curves of Fig. (3), with the process repeated using a sweep of 50 evenly spaced values of \(\gamma\) in the range \(0.7<\gamma<1.3\). An overall best fit value of \(\gamma_{BF}=1.000\) is found for this sample at \(\alpha=0.6\) for the theoretical curve.
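In terms of the sketches introduced in Section (3), the sweep amounts to a loop over trial \(\gamma\) values; the snippet below is our own schematic version, reusing `empirical_pdf` and `ks_statistic` and assuming the theoretical curve for each trial \(\gamma\) (the \(\gamma=1\) curve of Fig. (3) stretched by \(\gamma^{1/2}\)) has been pre-tabulated on the same grid.

```python
import numpy as np

gammas = np.linspace(0.7, 1.3, 50)                     # 50 evenly spaced trial values of gamma
vt_grid = np.linspace(0.0, np.sqrt(2.0 * 1.3), 5000)   # v_tilde grid up to sqrt(2) * gamma_max^{1/2}

def best_fit_gamma(v2d, sigma_v, r2d, m_tot, model_pdf, n_tot):
    """Return the trial gamma minimising sqrt(N_tot) * D_KS; model_pdf(g) must return the
    theoretical P(v_tilde) for gamma = g tabulated on vt_grid."""
    scores = [ks_statistic(empirical_pdf(v2d, sigma_v, r2d, m_tot, g, vt_grid), model_pdf(g), n_tot)
              for g in gammas]
    k = int(np.argmin(scores))
    return gammas[k], scores[k]
```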
The empirical \(P_{I}(\tilde{v}\gamma^{-1/2})\) curve is shown by the red curve in the left panel of Fig.(4). The underlying discreteness of the sample is still evident, despite having \(N_{tot}=466\) binary pairs, with cases where the inferred confidence interval in \(v_{2Dj}\), \(\sigma_{vj}\), through reported _Gaia_ parameters is very small resulting in very narrow Gaussian distributions through equation (14) and hence the sharp peaks appearing in the curve shown. The blue curve gives the best fit theoretical model at \(\alpha=0.6\) and \(\gamma_{BF}=1.000\).
The Kolmogorov-Smirnov comparison at the optimal parameters found is shown in the left panel of Fig.(5), which gives the cumulative distributions corresponding to the curves shown in the left panel of Fig.(4), using matching colours; a good fit is obtained with a \(\sqrt{N_{tot}}D_{KS}=0.628\). The KS \(\sqrt{N_{tot}}D_{KS}\) parameters of each of the 50-value \(\gamma\) sweeps for the values of \(\alpha\) considered in the theoretical curves are presented in the left panel of Fig.(6), where different colour curves correspond to different values of \(\alpha\) in the theoretical \(P(\tilde{v}\gamma^{-1/2})\) curves used in the KS comparison against the inferred \(P_{I}(\tilde{v}\gamma^{-1/2})\) coming from the data. We see all curves showing extremely well defined minima, in all cases very close to the Newtonian value of \(\gamma=1\). We also see a small systematic offset appearing, where assuming larger values of \(\alpha\) leads to higher inferred values of \(\gamma\). As one assumes ellipticity distributions moving from sub-thermal to the \(\alpha=1\) thermal case and beyond, higher ellipticities appear, leading to more elongated orbits. These more elongated orbits also imply stars spend longer periods of time at the large-distance, slow-moving phases of their orbits, and hence the small shift in the \(P(\tilde{v}\gamma^{-1/2})\) curves of Fig.(3) towards smaller values of \(\tilde{v}\) as one goes to higher values of \(\alpha\). This effect in turn leads to larger values of inferred \(\gamma\), as for a given observed set of \(v_{2Dj}\) values, larger assumed values of \(\gamma\) will lead to smaller inferred values of \(\tilde{v}\gamma^{-1/2}\).
There are strong theoretical expectations favouring the thermal ellipticity distributions of \(\alpha=1\), e.g. Kroupa (2008), but also recent direct observational determinations of this parameter precisely for _Gaia_ wide binaries by Hwang et al. (2022). This last study, see their Fig. (7), finds a value of
Figure 5: Left(a): the red curve shows the cumulative distribution corresponding to the inferred \(P_{I}(\tilde{v}\gamma^{-1/2})\) curve for the 466 data points in the \(r_{2D}<0.01\)pc range, for the parameters minimising \(D_{KS}\), \(\alpha=0.6\) and \(\gamma=1.000\). The blue curve gives the cumulative distribution corresponding to the theoretical \(P(\tilde{v}\gamma^{-1/2})\) curve of this optimal comparison, at \(\alpha=0.6\). Right(b): the red curve shows the cumulative distribution corresponding to the inferred \(P_{I}(\tilde{v}\gamma^{-1/2})\) curve for one particular of the 50 synthetic data samples produced, in the \(r_{2D}<0.01\)pc range, for the parameters minimising \(D_{KS}\) for the data, \(\alpha=0.6\) and \(\gamma=1.000\). All synthetic samples have identical sets of \(r_{2Dj}\), \(M_{Tj}\) and \(\sigma_{vj}\) values as the observed sample and differ only in having \(v_{2Dj}\) values obtained from a random sampling of equation (10). The blue curve is the same as in the left panel. All curves have a \(\tilde{v}\gamma^{-1/2}\) resolution of \(\sqrt{2}/5000\).
\(\alpha\) which varies from \(\alpha=0.6\) to \(\alpha=1.2\) for the \(r_{2D}\) range covered by our \(r_{2D}<0.01\) pc sample. Not wishing to overinterpret this last result, which is the first published reference on the subject, we prefer not to modify the probabilistic model presented to include an explicit variation of \(\alpha\) with \(r_{2D}\), but rather keep the ranges reported by Hwang et al. (2022) as an uncertainty range for this parameter and add a systematic confidence interval due to uncertainties in \(\alpha\) to our inference procedure. This \(\sigma_{sy}\) will be defined as half the range in \(\gamma_{BF}\) obtained over the range in \(\alpha\) covered by the Hwang et al. (2022) results for the range in \(r_{2D}\) covered by a particular data set. For this first case, this systematic uncertainty will be of \(\sigma_{sy}=0.036\). To this we must also add an uncertainty due to the resolution in the implementation of the \(\gamma\) sweep undertaken, of \(\sigma_{re}=(0.6/50)/2=0.006\).
Lastly, to estimate the statistical uncertainty internal to the method, we turn to the Monte Carlo method as described in the previous section. Using as input parameters the best fit \(\gamma_{BF}=1.000\), \(\alpha=0.6\) parameters found, and keeping the full \(N_{tot}\), \(r_{2Dj}\), \(\sigma_{vj}\) and \(M_{Tj}\) sets of parameters of the observed sample, a set of 50 synthetic observations is produced where \(v_{2D}\) is produced by sampling the best fit \(P(\bar{v}\gamma^{-1/2})\) curve at \(\alpha=0.6\) and assuming \(\gamma=1\). Each of these synthetic samples is then treated exactly as the original observational sample, to yield a synthetic \(\gamma_{BFS}\) value. These 50 different \(\gamma_{BFSk}\) values have a distribution which is well fitted by a Gaussian with a standard deviation of 0.054, which hence becomes the internal statistical \(1\sigma\) confidence interval of our method for the case considered, fully accounting for the PDFs of the two projection angles of the problem, the sampling of the ellipticity distribution, the true anomaly distribution and the distribution inherent to the velocity errors of the observed sample. This last sequence of obtaining synthetic observations is also repeated for the best fit \(\gamma_{BF}\) values at the other \(\alpha\) parameters considered, the resulting internal statistical errors are reported in Table (2).
As a final consistency check on the full method one can now check that the centroids of the \(\gamma_{BFSk}\) distributions are consistent with the input \(\gamma_{BF}\) values to within the \(1\sigma\) internal statistical confidence intervals found, which can be checked to be the case in the last column of Table (2). The right panels of Figs.(4), (5) and (6) are analogous to the left panels, but show results for one particular synthetic sample, at the overall best fit parameters of \(\gamma=1.000\) and \(\alpha=0.6\). These curves are seen to be qualitatively and quantitatively consistent with the previous ones of the left panels, but do exhibit significant variations within this sample of 50 synthetic realisations, e.g. the actual values of \(\tilde{v}\gamma^{-1/2}\) at which particular sharp peaks occur shift from realisation to realisation, sometimes overlapping more, or less, while maintaining an overall consistency, as seen in Fig.(5), where the real data curve of the left panel is actually a slightly better fit to the theoretical model than the particular synthetic data set which was produced directly from sampling the model itself. In the right panel of Fig.(6) we see the particular synthetic sample presented having an overall best fit at \(\alpha=1.2\), with optimal KS values occurring at positions slightly displaced from those of the actual data sample shown in the left panel of this figure. These small deviations are what give rise to the internal statistical and systematic errors inferred as described above.
As a final result for the inference of \(\gamma\) for the \(r_{2D}<0.01\)pc region we hence obtain: \(\gamma=1.000\pm\sigma_{st}\pm\sigma_{sy}\pm\sigma_{re}\) where \(\sigma_{st}=0.054\), \(\sigma_{sy}=0.036\) and \(\sigma_{re}=0.006\) and therefore
Figure 6: Left(a): the figure shows KS test goodness-of-fit \(\sqrt{N_{tot}}D_{KS}\) parameters for the 466 observed data point \(P_{I}(\tilde{v}\gamma^{-1/2})\) curves in the \(r_{2D}<0.01\)pc range, as a function of the assumed value of \(\gamma\), for four different assumptions on the ellipticity distribution present, parameterised through \(P(e)=(1+\alpha)e^{\alpha}\): \(\alpha=0.6\), yellow, \(\alpha=0.8\), orange, \(\alpha=1.0\), thermal, red, and \(\alpha=1.2\), purple, respectively. The data very clearly identify the Newtonian value of \(\gamma=1\) as the optimal fit parameter. Right(b): the figure shows KS test goodness-of-fit \(\sqrt{N_{tot}}D_{KS}\) parameters for \(P_{I}(\tilde{v}\gamma^{-1/2})\) curves for one particular synthetic sample produced using the best fit \(\alpha=0.6\) and \(\gamma=1.000\) parameters obtained for the observed data, in the \(r_{2D}<0.01\)pc range, as a function of the assumed value of \(\gamma\), for four different assumptions on the ellipticity distribution present, parameterised through \(P(e)=(1+\alpha)e^{\alpha}\): \(\alpha=0.6\) – yellow, \(\alpha=0.8\) – orange, \(\alpha=1.0\) (thermal) – red and \(\alpha=1.2\) – purple. The range of recovered best fit parameters from the full sample of 50 synthetic data realisations yields the internal statistical confidence interval for the \(\gamma\) parameter recovered from the observed data of \(\sigma_{st}=0.054\), and of \(\sigma_{sy}=0.036\) due to systematic uncertainties in the ellipticity distribution, for the inferred \(\gamma\). All curves have a \(\gamma\) resolution of 0.012.
\(\gamma=1.000\pm 0.096\), a result fully consistent with Newtonian expectations.
### The \(r_{2D}>0.01\) pc Sample
We now turn to the \(r_{2D}>0.01\) pc sample, which will be treated in the same way as the previous one, with the only important difference being the smaller number of observed binaries, which is now of only \(N_{tot}=108\). Figs.(7), (8) and (9) are analogous to Figs.(4), (5) and (6), and show the inferred \(P_{I}(\tilde{v})\) for the real data, compared to the optimal model curve, the same comparison for the corresponding cumulative distributions, and the \(\sqrt{N_{tot}}D_{KS}\) values of all the \(\gamma\) sweeps undertaken in the left panels. Fig. (7) differs slightly from Fig. (4) in that the Newtonian model has been added in purple, separately from the best fit one in blue. One extra step in this case, where smaller signal-to-noise \(v_{2D}\) values are present, is to take care that the overall data structure of the observed wide binary distribution is maintained. It can happen during the velocity noise re-sampling phase that a small number of \(v_{2D}\) values are shifted above the 1 km s\({}^{-1}\) upper limit imposed on the data in this low acceleration \(r_{2D}>0.01\) pc range as a safeguard against the inclusion of kinematic contaminants mentioned in section 2. Whenever this happens, the data point in question is simply removed from consideration.
Given the results of Hwang et al. (2022), this time only three values of \(\alpha\) were considered, which are the ones relevant for the \(r_{2D}\) range our second sample covers, \(\alpha=1.0\), \(\alpha=1.2\) and \(\alpha=1.4\). Again, variations due to this range of ellipticity distributions will be considered a systematic on the final results. As is natural due to the much reduced numbers involved, both the empirical \(P_{I}(\tilde{v}\gamma^{-1/2})\) curve and the example synthetic one, show stronger variations with respect to the underlying models than the previous sample. However, it is still the case that synthetic curves produced from sampling the assumed underlying model are qualitatively and quantitatively analogous to the inferred curve produced from the real data, as can be seen from the cumulative distributions of Fig.8.
In Fig.(9) we see \(\sqrt{N_{tot}}D_{KS}\) curves which are much noisier, but which still retain clearly defined optimal values, showing the same systematic drift with assumed \(\alpha\) as described for the previous sample. Indeed, as seen in the last column of Table (2), the consistency check on the method is still positive, as the centroids of recovered \(\gamma_{BFSk}\) samples are still well within the internal \(1\sigma\) statistical confidence intervals of the input parameters.
This time we obtain \(\sigma_{st}=0.143\), \(\sigma_{sy}=0.048\) and \(\sigma_{re}=0.008\), for an overall best fit value of \(\gamma_{BF}=1.5\pm 0.2\). This very substantial offset from Newtonian expectations is significantly larger than the uncertainties due to \(\alpha\), as is evident when comparing the small difference between \(P(\tilde{v})\) curves for different values of \(\alpha\) in Fig. (3) to Fig. (7), where the offset between the best fit model and the Newtonian one is significantly larger. Thus, after fully accounting for all resolution, systematic and statistical confidence intervals, we see inferred values of \(\gamma\) which are inconsistent with Newtonian expectations, at a \(2.6\sigma\) level. It is interesting that this result is consistent with the value for this effective boost in \(G\) recently reported by Chae (2023a) in the same \(r_{2D}>0.01\)pc range, using an independent approach where hidden tertiaries are not removed from the sample, but included in the modelling of the observed internal relative velocities for a much larger sample of close to 10,000 _Gaia_ wide binaries. Chae (2023a) reports an inferred value of \(\gamma=1.43\pm 0.06\), in full consistency with results presented here. As mentioned by Chae (2023a), these results are in close accordance with AQUAL expectations, a suggestion which is strongly rein
Figure 7: Left(a): the red curve shows the inferred \(P_{I}(\tilde{v}\gamma^{-1/2})\) curve for the 108 data points in the \(r_{2D}>0.01\)pc range, for the parameters minimising \(D_{KS}\), \(\alpha=1.2\) and \(\gamma=1.512\). The blue curve gives the theoretical \(P(\tilde{v}\gamma^{-1/2})\) curve of this optimal fit, at \(\alpha=1.2\). For this same value of \(\alpha\) the Newtonian model appears in purple, for comparison. Right(b): the red curve shows the inferred \(P_{I}(\tilde{v}\gamma^{-1/2})\) curve for one particular of the 50 synthetic data samples produced, in the \(r_{2D}>0.01\)pc range, for the parameters minimising \(D_{KS}\) for the data, \(\alpha=1.2\) and \(\gamma=1.512\). All synthetic samples have identical sets of \(r_{2Dj}\), \(M_{Tj}\) and \(\sigma_{vj}\) values as the observed sample and differ only in having \(v_{2Dj}\) values obtained from a random sampling of equation (10). The blue and purple curves are the same as in the left panel. All curves have a \(\tilde{v}\gamma^{-1/2}\) resolution of \(\sqrt{2}/5000\).
forced by the agreement of our results with those of Chae (2023a).
The use of full Monte Carlo simulations re-sampling the velocities while keeping the \(r_{2D}\) values, masses, and crucially, the Gaia inferred \(v_{2D}\) errors, explicitly probes the effects of the actual velocity errors present on the inference obtained. As seen in the second section of Table 2, inferred values of \(\gamma\) for the best fit parameters, particularly for the best fit \(\alpha=1.2\) (\(r_{2D}>0.01\) pc), are always well within \(1\sigma_{st}\) of the input values, showing that the error structure of the data does not introduce any bias in the inference of \(\gamma\). The only effect of the small numbers and larger errors present in this region is a larger statistical confidence interval - 2.4 times larger on average than what results for the high acceleration region, where the sample is larger and relative errors smaller.
Notice that the overall velocity distributions, for both the high and low acceleration regions, remain consistent with the model expectations, see Figs. 4, 5, 7 and 8, where no deficiency of low \(\tilde{v}\) cases is seen. The inferred values of \(\gamma\) obtained are the result of the overall distribution match, not of a low \(\tilde{v}\) truncation. Having removed binaries with substantially higher noise level than the average has not biased the velocity distributions away from the model expectations of elliptical orbits, but has indeed removed systems with poorly determined velocity parameters. One of the causes of a poor proper motion determination is a poor single-stellar spectroscopic, astrometric or photometric fit, which in turn can be due to the presence of hidden tertiaries. Hence, all the data quality cuts introduced serve to limit the presence of any such contaminants. Notice also the very close consistency of our results with the recent study of Chae (2023b), who also treats a small sample relatively cleared of hidden tertiaries. Although the sample details of this last study differ from ours, the results are consistent.
Another consistency check can now be performed comparing results of the two samples presented, as summarised in Table (2). From fundamental statistical scalings, one expects the ratio between the \(\sigma_{st}\) values obtained to scale approximately as the inverse of the square root of the ratio of the \(N_{tot}\) numbers of the two samples. The ratio of statistical \(1\sigma\) confidence intervals for the two best fit parameters of the samples treated is of \(0.143/0.054=2.65\), while the square root of the ratio of wide binaries in these two samples is of \(\sqrt{466/108}=2.08\), not far from the number above.
We end this section with Fig.(10), which summarises the results for the two samples considered, showing \(\gamma_{BF}\) values and the sum of their internal statistical and resolution \(1\sigma\) confidence intervals, as a function of the assumed values of \(\alpha\), for both samples discussed.
## 5 Discussion
Given the validation of the method presented through thorough checks using synthetic data samples and the perfect agreement with well established Newtonian gravity \(\gamma=1\) in the high acceleration \(r_{2D}<0.01\)pc regime, the results obtained for the \(r_{2D}>0.01\)pc region become important. These show a clear \(\gamma=1.5\pm 0.2\) non-Newtonian behaviour of gravity in the low acceleration \(r_{2D}>0.01\) pc region, corresponding to an \(a\approx 4a_{0}\) threshold for the mean binary masses of the sample, Hernandez (2023). Though not strongly conclusive at a \(2.6\sigma\) level, our results become compelling given the very close qualitative and quantitative agreement with the independent assessment of Chae (2023a) and Chae (2023b), and are highly suggestive of MOND, given the AQUAL expectation for \(\gamma\simeq 1.4\) for the wide binaries treated.
Despite the stringent clearing of all kinematic contaminants from the sample used, a procedure which has been validated in Hernandez et al. (2022) and Hernandez (2023), and
Figure 8: Left(a): the red curve shows the cumulative distribution corresponding to the inferred \(P_{I}(\tilde{v}\gamma^{-1/2})\) curve for the 108 observed data points in the \(r_{2D}>0.01\)pc range, for the parameters minimising \(D_{KS}\), \(\alpha=1.2\) and \(\gamma=1.512\). The blue curve gives the cumulative distribution corresponding to the theoretical \(P(\tilde{v}\gamma^{-1/2})\) curve of this optimal comparison, at \(\alpha=1.2\). Right(b): the red curve shows the cumulative distribution corresponding to the inferred \(P_{I}(\tilde{v}\gamma^{-1/2})\) curve for one particular of the 50 synthetic data samples produced, in the \(r_{2D}>0.01\)pc range, for the parameters minimising \(D_{KS}\) for the data, \(\alpha=1.2\) and \(\gamma=1.512\). All synthetic samples have identical sets of \(r_{2Dj}\), \(M_{Tj}\) and \(\sigma_{vj}\) values as the observed sample and differ only in having \(v_{2Dj}\) values obtained from a random sampling of equation (10). The blue curve is the same as in the left panel. All curves have a \(\tilde{v}\gamma^{-1/2}\) resolution of \(\sqrt{2}/5000\).
the recent direct observational results of Hwang et al. (2022), one can explore to what level our low acceleration \(\gamma>1\) result might arise from a failure of said cleaning methods and/or from the validity of the ellipticity distributions used. To this end we now produce a synthetic \(\gamma=1\) sample using the data structure of our 108 binary \(r_{2D}>0.01\) pc sample, using an ellipticity distribution parameter of \(\alpha=0.6\), and run the inference method assuming an \(\alpha=1.4\) model. Thus, we explore the maximum systematic offset that reasonable uncertainties in the \(\alpha\) parameter might induce. Also, allowing for some presently unknown mistake in the _Gaia_ catalogue which might result in the current reported confidence intervals being underestimated, we take \(v_{2Dj}\) errors four times larger than what results from standard error propagation analysis, \(\sigma_{vj}\rightarrow 4\times\sigma_{vj}\). This will boost the average \(v_{2D}\) values of the sample, as small \(v_{2Dj}\) cases will mostly end up at higher \(|v_{2D}|\) values, and again bias the reconstruction procedure towards a higher \(\gamma_{BF}\) region.
The results of this experiment are shown in Fig.(11) where the three panels are analogous to the left panels of Figs.(4), (5) and (6). An inferred value of \(\gamma_{BF}=1.23\pm 0.148\) resulted. Not only is this still inconsistent with results from the real data sample in the low acceleration region of \(\gamma=1.5\pm 0.2\), but even though a higher value of \(\gamma\) resulted, the qualitative and quantitative structure of the inferred \(P_{I}(\tilde{v})\) is inconsistent with what is seen for the data. In the left panel of Fig. (11) we see the synthetic curve deviates more strongly from the best-fit model than those in the preceding section, the much enhanced noise level shifting the curve towards a flatter distribution with a much less well defined peak. This is confirmed in the next two panels, where the comparison is markedly poorer than when using the real data sample, with \(\sqrt{N_{tot}}D_{KS}\) values which grow by a factor of more than 3. Therefore, we see that the detailed distribution comparisons performed make it possible to distinguish and flag samples where the input assumptions deviate from the model. This final test is much more discordant in the details than either the data sample, or any of the synthetic samples produced in the previous section, and even so, pushing this \(\gamma=1\) model to the limit only allows it to reach \(\gamma=1.23\).
Although not explicitly included in this test, the presence of hidden tertiaries acts in a very similar way to noise, since the extra velocity component of an inner binary sometimes adds and sometimes subtracts from the wide binary relative velocity, depending on the orientation of the inner binary orbit with respect to the wide binary one. Hidden tertiaries hence increase mean \(v_{2D}\) values while modifying the overall velocity distributions, much as noise does. Indeed, Pittordis & Sutherland (2023) explicitly note the degeneracy between an assumed hidden tertiary fraction and an assumed flyby fraction, where flybys are modelled through a random sampling of asymptotic hyperbolic relative velocities, again, much like noise, when comparing results against observed \(\tilde{v}\) distributions.
A last caveat to mention is the possible presence of a fraction of hidden tertiaries in our sample, which cannot be conclusively rejected at this point, despite the very careful exclusion strategies implemented. Any such presence would bias results towards larger values of \(\gamma\), growing in relevance towards larger binary separations. We do stress that the consistency of the full \(\tilde{v}\) distributions found with theoretical expectations for pure elliptical binaries, see Figs. 4, 5, 7 and 8, argues against any significant remaining hidden tertiary fraction. Similarly, the recent results of Chae (2023b),
Figure 9: Left(a): the figure shows KS test \(\sqrt{N_{tot}}D_{KS}\) parameters for the 108 observed data points \(P_{I}(\tilde{v}\gamma^{-1/2})\) curves in the \(r_{2D}>0.01\)pc range, as a function of the assumed value of \(\gamma\), for three different assumptions on the ellipticity distribution present, parameterised through \(P(e)=(1+\alpha)e^{\alpha}\): \(\alpha=1.0\), thermal, red, \(\alpha=1.2\), purple and \(\alpha=1.4\), blue, respectively. The data very clearly identify non-Newtonian values of \(\gamma>1.3\) as optimal fit parameters. Right(b): the figure shows KS test \(\sqrt{N_{tot}}D_{KS}\) parameters for \(P_{I}(\tilde{v}\gamma^{-1/2})\) curves for one particular synthetic sample produced using the best fit \(\alpha=1.2\) and \(\gamma=1.512\) parameters obtained for the observed data, in the \(r_{2D}>0.01\)pc range, as a function of the assumed value of \(\gamma\), for three different assumptions on the ellipticity distribution present, \(\alpha=1.0\) (thermal) – red, \(\alpha=1.2\) – purple and \(\alpha=1.4\) – blue. The range of recovered best fit parameters from the full sample of 50 synthetic data realisations yields the internal statistical confidence interval for the parameters recovered from the observed data, of \(\sigma_{st}=0.143\), and of \(\sigma_{sy}=0.048\) due to systematic uncertainties in the ellipticity distribution, for the inferred \(\gamma\). All curves have a \(\gamma\) resolution of 0.016.
where an independent hidden tertiary cleansing scheme was applied to a similar _Gaia_ wide binary sample, yielding results consistent with no remaining hidden tertiaries in the high acceleration \(r_{2D}<0.01\)pc region, and consistent with our results of \(\gamma=1.5\pm 0.2\) for \(r_{2D}>0.01\)pc, argue in the same direction. Notice the lack of any evidence for a change in the hidden tertiary fraction with binary separation in the regime of relevance, e.g. Tokovinin et al. (2002) and Tokovinin, Hartung & Hayward (2010). It is fortunate that the recent results of Manchanda et al. (2023) show that any such hidden tertiary fraction can be found or discarded with currently available observational follow-up techniques, through either future astrometric accelerations in the 10-year _Gaia_ data and/or speckle or coronagraphic imaging on 8m telescopes, making a definitive settling of this point possible in the near future.
The final test described in this section strongly suggests that we are seeing a modified gravity phenomenology. Whilst the assumption of Newtonian or GR gravity refers to very precise theories, modified gravity models, particularly covariant extensions to GR, appear in a great variety of forms and flavours. To cite but a few, Milgrom (1983), Bekenstein (2004), Moffat & Toth (2008), Zhao & Famaey (2010), Capozziello & De Laurentis (2011), Verlinde (2016), Barrientos & Mendoza (2018), Hernandez et al. (2019b) or Skordis & Zlosnik (2021), all following distinctly different theoretical approaches.
One interesting conclusion of our results is that the transition between the Newtonian and the modified gravity regimes appears to be fairly abrupt. Even for a discontinuous transition in the gravitational regime with acceleration, this transition will appear smoother in the data analysed, due to the unavoidable presence of wide binary stars with orbits that cross this transition. Yet, in the data analysed no intermediate transition regime is evident. This could well be due to the poor sampling given the small numbers of wide binaries remaining in the very clean samples used, or to an actual abrupt transition, e.g. as suggested by schemes where the change in gravity at low accelerations stems from quantum effects, e.g. Capozziello & De Laurentis (2011). In terms of particular models, it is clear that our results are closely consistent with MOND AQUAL \(\gamma=1.4\) predictions, e.g. Chae (2023a).
Two points are identified as crucial towards increasing the precision of our inferences: an increase in the numbers of wide binaries used, and a reduction of the systematic uncertainties through an improved empirical and theoretical understanding of the ellipticity distribution of the binaries used, and its possible variations with \(r_{2D}\). It is thus clear that future _Gaia_ data releases will help significantly towards a definitive answer from wide binary gravity tests, which presently lean towards modified gravity scenarios where there is no need to invoke hypothetical dark matter components to understand galactic dynamics.
## Acknowledgements
The authors acknowledge the input of the referee, Will Sutherland, as important towards having reached a more balanced and complete final version. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ (https://www.cosmos.esa.int/gaia), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. _Gaia_ data retrieval and initial processing (up to figure 1) was performed using software developed jointly with Stephen Cookson. Xavier Hernandez
Figure 10: Left(a): the figure shows best fit values of \(\gamma\) for the \(r_{2D}<0.01\) pc observed data sample as a function of the assumed value of \(\alpha\). The vertical bars give \(1\sigma\) confidence intervals through the addition of the standard deviation of the sample of recovered values of \(\gamma\) from 50 Monte Carlo synthetic data samples constructed at the best fit parameters found for the data, and one half of the \(\gamma\) resolution of the implementation. The horizontal blue line gives the Newtonian value of \(\gamma=1\). Full consistency of our inferences with this value is evident for this first data sample. Right(b): the figure is analogous to the left panel, but for the \(r_{2D}>0.01\) pc data sample. Our inferences are inconsistent with the horizontal blue line showing the Newtonian value of \(\gamma=1\) at a \(2.6\sigma\) level, and much more suggestive of AQUAL expectations of close to \(\gamma=1.4\).
and Alex Aguayo acknowledge financial assistance from CONAHCYT and PAPIIT IN102624. L. Nasser gratefully acknowledges the support from the NSF award PHY-2110425.
## Data availability
All data used in this work will be shared on reasonable request to the author.
|
2307.16690 | Model predictive control for the prescription of antithyroid agents | Although hyperthyroidism is a common disease, the pharmaceutical therapy is
based on a trial-and-error approach. We extend a mathematical model of the
pituitary-thyroid feedback loop such that the intake of one antithyroid agent,
namely methimazole (MMI), can be considered and use a model predictive control
(MPC) scheme to determine suitable dosages. | Maylin Menzel, Tobias M. Wolff, Johannes W. Dietrich, Matthias A. Müller | 2023-07-31T14:03:13Z | http://arxiv.org/abs/2307.16690v1 | # Model predictive control for the prescription of antithyroid agents
###### Abstract
Although hyperthyroidism is a common disease, the pharmaceutical therapy is based on a trial-and-error approach. We extend a mathematical model of the pituitary-thyroid feedback loop such that the intake of one antithyroid agent, namely methimazole (MMI), can be considered and use a model predictive control (MPC) scheme to determine suitable dosages.
## 1 Introduction
Thyroid disorders are widespread medical conditions. The cause for these conditions is often related to a disturbance of the pituitary-thyroid feedback loop, an important control loop of the endocrine system. Under normal circumstances, the pituitary gland releases thyroid-stimulating hormone (\(TSH\)) upon stimulation of thyrotropin-releasing hormone (\(TRH\)). Inside the thyroid gland, a stimulation of \(TSH\) leads to the production and secretion of the thyroid hormones triiodothyronine (\(T_{3}\)) and thyroxine (\(T_{4}\)). In turn, \(T_{4}\) inhibits the release of \(TSH\) from the pituitary gland.
In the case of hyperthyroidism, the thyroid gland produces too much \(T_{3}\) and/or \(T_{4}\), and the increased hormone concentrations lead to symptoms like heart palpitations, weight loss, and goiter and may even be life-threatening in the case of thyroid storm. Treatment of hyperthyroidism aims to reduce the thyroid function and normalize the hormone concentrations. The current treatment options for patients are thyroidectomy, radioiodine therapy and antithyroid agents. Definitive therapy reduces the amount of functional thyroid tissue by either removing parts of it or damaging it with radioactive radiation. These forms of therapy are irreversible and, apart from the obvious risks of surgery and exposure to radioactive radiation, they often lead to hypothyroidism. Antithyroid agents, which are necessary for pretreatment of definitive therapy, reversibly inhibit the production of thyroid hormones and are the only treatment option that rarely induces permanent hypothyroidism.
## 2 Model extensions and controller design
The results discussed within this abstract are based on [1]. In order to render this work as self-contained as possible, in the following we recall the details of the model extensions and the controller design from [1].
The mechanisms of the pituitary-thyroid feedback loop are modeled in [1] as a system of six nonlinear first-order differential equations which describe different hormone concentrations. To model hyperthyroidism, we increase the secretory capacity of the thyroid gland (named \(G_{T}\)) in the first term of the differential equation of \(T_{4,th}\), i.e., (compare also [1, Eq. (A.1)])
\[G_{T}\frac{TSH(t)}{TSH(t)+D_{T}}. \tag{1}\]
The antithyroid agent \(MMI\) decreases the production of thyroid hormones by inhibiting the enzyme thyroid peroxidase (\(TPO\)) which catalyzes an important step in the production process, the conversion of inorganic iodide (\(I_{C}\)) to organic bound iodide (\(I_{Tg}\)). To consider this mechanism, we multiply the term in equation (1) of the differential equation related to \(T_{4,th}\) with the state \(I_{Tg}\). The differential equation for \(I_{Tg}\) is shown in [1, Eq. (A.7)].
To model the effect of \(MMI\) inside the thyroid gland, we first have to determine the plasma concentration \(MMI_{Plas}\) after an \(MMI\) intake. In this abstract, we consider the most common form of application, which is the oral intake of \(MMI\). The plasma concentration resulting from a single orally taken dosage \(u(t_{o})\) can be described as
\[MMI_{Plas}(t)=\frac{f\,u(t_{o})\,k_{a}}{V(k_{a}-k_{e})}\left(e^{-k_{e}t}-e^{-k_{a}t}\right). \tag{2}\]
According to [2], the bioavailability \(f\) of \(MMI\) is 93 %. The remaining parameters denote the volume of distribution \(V\) as well as the elimination constant \(k_{e}\) and the absorption constant \(k_{a}\); based on [3], \(V=28.8\) L, \(k_{e}=0.1857\) h\({}^{-1}\) and \(k_{a}=11\) h\({}^{-1}\).
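For illustration, the single-dose plasma concentration of Eq. (2) can be evaluated directly. The following Python sketch uses the parameter values quoted above, while the dose and the time grid are hypothetical choices made only for demonstration.

```python
import numpy as np

# Pharmacokinetic parameters quoted in the text
f_bio = 0.93     # bioavailability
V = 28.8         # volume of distribution [L]
k_e = 0.1857     # elimination constant [1/h]
k_a = 11.0       # absorption constant [1/h]

def mmi_plasma(t, dose_mg):
    """Plasma concentration [mg/L] after a single oral dose taken at t = 0, Eq. (2)."""
    return dose_mg * f_bio * k_a / (V * (k_a - k_e)) * (np.exp(-k_e * t) - np.exp(-k_a * t))

t = np.linspace(0.0, 24.0, 97)            # one day with 15-minute resolution [h]
conc = mmi_plasma(t, dose_mg=10.0)        # hypothetical 10 mg dose
print(f"peak concentration {conc.max():.3f} mg/L at t = {t[conc.argmax()]:.2f} h")
```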
Next, we determine the resulting concentration of \(MMI_{th}\), denoting the intrathyroid \(MMI\) concentration. We choose heuristically
\[\frac{dMMI_{1}}{dt}(t)=MMI_{2}(t) \tag{3}\]
\[\frac{dMMI_{2}}{dt}(t)=-a_{0}MMI_{1}(t)-a_{1}MMI_{2}(t)+MMI_{Plas}(t) \tag{4}\]
\[MMI_{th}(t)=b_{0}MMI_{1}(t)+b_{1}MMI_{2}(t) \tag{5}\]
to model this process. The parameters are estimated in a least-squares sense based on data from [4] and result in \(b_{1}=690.3\cdot 10^{-6}\), \(b_{0}=37\cdot 10^{-9}\), \(a_{1}=92.2\cdot 10^{-6}\) and \(a_{0}=2.5\cdot 10^{-9}\).
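A minimal numerical sketch of Eqs. (3)-(5) is given below, driving the intrathyroidal compartment with the plasma concentration of Eq. (2). The single 10 mg dose and the simulation window are hypothetical and serve only to illustrate the structure of the model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters of Eqs. (3)-(5) as quoted above, plus the PK constants of Eq. (2)
a0, a1 = 2.5e-9, 92.2e-6
b0, b1 = 37e-9, 690.3e-6
f_bio, V, k_e, k_a = 0.93, 28.8, 0.1857, 11.0

def mmi_plasma(t, dose_mg=10.0):          # hypothetical single 10 mg dose at t = 0
    return dose_mg * f_bio * k_a / (V * (k_a - k_e)) * (np.exp(-k_e * t) - np.exp(-k_a * t))

def rhs(t, x):
    """States x = (MMI_1, MMI_2) of the second-order compartment, Eqs. (3)-(4)."""
    mmi1, mmi2 = x
    return [mmi2, -a0 * mmi1 - a1 * mmi2 + mmi_plasma(t)]

sol = solve_ivp(rhs, (0.0, 48.0), [0.0, 0.0], max_step=0.1, dense_output=True)
t_grid = np.linspace(0.0, 48.0, 200)
mmi_th = b0 * sol.sol(t_grid)[0] + b1 * sol.sol(t_grid)[1]   # output equation (5)
print(f"maximum of the intrathyroidal MMI signal: {mmi_th.max():.4g}")
```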
Third, we determine the remaining activity of \(TPO\) (\(TPO_{a}\)) in relation to the concentration of \(MMI_{th}\) as well as its substrate \(I_{c}\). To this end, we choose heuristically
\[TPO_{a}(t)=c_{0}\left(1+\exp\left(-c_{1}\left(-MMI_{th}(t)^{\frac{1}{c_{2}}}+c_{3}\right)\right)\right)^{-1}. \tag{6}\]
and identify the parameters in a least-squares sense based on data from [5]. When considering a typical concentration of \(I_{c}\), the resulting values are \(c_{0}=0.9\), \(c_{1}=84.1\cdot 10^{3}\), \(c_{2}=1.3\) and \(c_{3}=80.5\cdot 10^{-6}\). An increased intrathyroid \(I_{c}\) concentration, which could, e.g., occur after the intake of contrast agents for radiographic imaging, results in the parameters \(c_{0}=1\), \(c_{1}=175.8\cdot 10^{3}\), \(c_{2}=5\) and \(c_{3}=97.6\cdot 10^{-3}\). Then, we multiply equation (6) with \(TPO(t)\) within the state equation of \(I_{Tg}\).
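The resulting dose-response behaviour of Eq. (6) can be inspected numerically (reading the exponent parameter as \(c_{2}\)); in the short sketch below the chosen \(MMI_{th}\) values are arbitrary illustration points.

```python
import numpy as np

# Fitted parameters of Eq. (6) for a typical intrathyroidal iodide concentration
c0, c1, c2, c3 = 0.9, 84.1e3, 1.3, 80.5e-6

def tpo_activity(mmi_th):
    """Remaining relative TPO activity as a function of the intrathyroidal MMI level."""
    return c0 / (1.0 + np.exp(-c1 * (-mmi_th ** (1.0 / c2) + c3)))

for mmi_th in (0.0, 1e-6, 1e-5, 1e-4):    # arbitrary illustration points
    print(f"MMI_th = {mmi_th:.0e}  ->  TPO activity = {tpo_activity(mmi_th):.3f}")
```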
Finally, we implement a model predictive control (MPC) scheme to determine the dosages of \(MMI\). At each sampling instance \(t\) (with \(t\in\mathbb{N}_{0}\)), we measure the system's state \(x(t)\) and determine the optimal input sequence over some control horizon \(T\), in this abstract 15 days, with respect to a cost function. We then apply the first element in the optimal sequence to the system. The cost function chosen for this abstract penalizes deviations from the targeted state, variations from the last input and the magnitude of the input. Additionally, we consider input constraints to limit the maximal dosage. A more detailed mathematical description of the MPC can be found in [1].
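The following Python sketch illustrates the generic receding-horizon logic described above. It is not the controller of [1]: the scalar surrogate plant, the weights, the dose bound and the 30-day simulation are hypothetical placeholders standing in for the full pituitary-thyroid model and its tuning.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical scalar surrogate plant: x is a normalized hormone deviation that is
# driven towards the target by the daily dose u (a stand-in for the thyroid model).
def plant_step(x, u):
    return 0.9 * x - 0.05 * u

def mpc_cost(u_seq, x_init, x_ref, u_last, w_x=1.0, w_du=0.1, w_u=0.01):
    """Penalize deviation from the target, input variation and input magnitude."""
    cost, x, u_prev = 0.0, x_init, u_last
    for u in u_seq:
        x = plant_step(x, u)
        cost += w_x * (x - x_ref) ** 2 + w_du * (u - u_prev) ** 2 + w_u * u ** 2
        u_prev = u
    return cost

horizon, u_max = 15, 40.0                 # 15-day horizon, maximum admissible daily dose
x, x_ref, u_applied = 1.0, 0.0, 0.0       # normalized hyperthyroid initial state
for day in range(30):                     # receding-horizon loop
    res = minimize(mpc_cost, np.full(horizon, u_applied),
                   args=(x, x_ref, u_applied),
                   bounds=[(0.0, u_max)] * horizon, method="L-BFGS-B")
    u_applied = res.x[0]                  # apply only the first element of the sequence
    x = plant_step(x, u_applied)          # the state would be re-measured the next day
print(f"state after 30 days: {x:.3f}, last applied dose: {u_applied:.2f}")
```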
## 3 Simulation Results
We simulate two different clinically relevant scenarios. For each of these, we execute two simulations, once for a nominal case (dotted lines) and a second time for a case with disturbances (continuous lines). In the latter, we include a measurement noise and an exemplary model-plant mismatch. The added measurement noise follows a Gaussian distribution with \(\mu=0\) and \(\sigma=0.11\) and is truncated at \(\pm 0.3\) to avoid negative hormone concentrations. To simulate the model-plant mismatch, we increase the values of the maximum activity of \(5\)'-deiodinase type I (\(G_{D1}\)) and the maximum activity of the direct \(T_{3}\) synthesis (\(G_{T3}\)) by 15% and the value of the maximum activity of \(5\)'-deiodinase type II (\(G_{D2}\)) by 5% (see [1, Eq. (1)], [1, Eq. (A.3)], and [1, Eq. (A.4)]). Furthermore, we consider disturbances in the form of forgotten dosages at days 4, 12 and 34 as well as accidental doubled intakes at days 13 and 26. The targeted hormone concentrations are represented by dashed lines. In comparison to [1], we simulate a more challenging and practically relevant situation. Here, we consider a higher noise level, a more severe model-plant mismatch, accidental doubled dosages, as well as forgotten dosages in the steady state.
Fig. 1 represents a normal case of hyperthyroidism where the patient is treated with one daily oral dosage. In Fig. 2, we consider a patient with constantly increased levels of iodide concentrations, where the patient also takes one oral dosage per day. The systems start in their hyperthyroid steady states.
In both cases, the determined dosages are in line with clinical guidelines, e.g., that higher \(MMI\) dosages are necessary to normalize the hormone concentrations in the case of increased iodide concentrations [6].
## 4 Conclusion
In this abstract, we determined optimal medication dosages for the treatment of hyperthyroidism with a model of the pituitary-thyroid feedback loop and an MPC yielding promising results. Currently, we assume that all states are measurable, which is not the case in clinical practice. Therefore, an interesting subject of future research is the implementation of a nonlinear observer.
## 5 Author's Statement
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 948679).
|
2309.05432 | Improved method to determine the $Ξ_c-Ξ_c'$ mixing | We develop an improved method to explore the $\Xi_c- \Xi_c'$ mixing which
arises from the flavor SU(3) and heavy quark symmetry breaking. In this method,
the flavor eigenstates under the SU(3) symmetry are at first constructed and
the corresponding masses can be nonperturbatively determined. Matrix elements
of the mass operators which break the flavor SU(3) symmetry sandwiched by the
flavor eigenstates are then calculated. Diagonalizing the corresponding matrix
of Hamiltonian gives the mass eigenstates of the full Hamiltonian and
determines the mixing. Following the previous lattice QCD calculation of
$\Xi_c$ and $\Xi_c'$, and estimating an off-diagonal matrix element, we extract
the mixing angle between the $\Xi_c$ and $\Xi_c'$. Preliminary numerical
results for the mixing angle confirm the previous observation that such mixing
is incapable to explain the large SU(3) symmetry breaking in semileptonic
decays of charmed baryons. | Hang Liu, Wei Wang, Qi-An Zhang | 2023-09-11T13:08:20Z | http://arxiv.org/abs/2309.05432v2 | # An improved method to determine the \(\Xi_{c}-\Xi^{\prime}_{c}\) mixing
###### Abstract
We develop an improved method to explore the \(\Xi_{c}-\Xi^{\prime}_{c}\) mixing which arises from the flavor SU(3) and heavy quark symmetry breaking. In this method, the flavor eigenstates under the SU(3) symmetry are at first constructed and the corresponding masses can be nonperturbatively determined. Matrix elements of the mass operators which break the flavor SU(3) symmetry sandwiched by the flavor eigenstates are then calculated. Diagonalizing the corresponding matrix of Hamiltonian gives the mass eigenstates of the full Hamiltonian and determines the mixing. Following the previous lattice QCD calculation of \(\Xi_{c}\) and \(\Xi^{\prime}_{c}\), and estimating an off-diagonal matrix element, we extract the mixing angle between the \(\Xi_{c}\) and \(\Xi^{\prime}_{c}\). Preliminary numerical results for the mixing angle confirm the previous observation that such mixing is incapable to explain the large SU(3) symmetry breaking in semileptonic decays of charmed baryons.
## I Introduction
Remarkably, recent experimental measurements of decay widths of semileptonic charmed baryon decays have revealed a significant breakdown of flavor SU(3) symmetry [1; 2; 3; 4], a pivotal tool extensively employed for deciphering weak decays of heavy mesons. This pattern is in contradiction with the data on heavy bottom meson and baryon decays [5], which to a good accuracy respect the flavor SU(3) symmetry. In the pursuit of understanding this phenomenon, mechanisms were explored in the work [6], with a very compelling contender being the incorporation of \(\Xi_{c}-\Xi^{\prime}_{c}\) mixing [7]. Subsequently, very interesting works [7; 8; 9; 10] have explored the impact of \(\Xi_{c}-\Xi^{\prime}_{c}\) mixing on weak decays of charmed and doubly-charmed baryons, and some interesting phenomena were discussed [11].
In a recent analysis to determine \(\Xi_{c}-\Xi^{\prime}_{c}\) mixing [12], four kinds of two-point correlation functions constructed by two kinds of baryonic operators are calculated using the technique of lattice QCD. Via the lattice data, two distinct methods are employed to extract the \(\Xi_{c}-\Xi^{\prime}_{c}\) mixing angle which is determined as \(\theta=(1.2\pm 0.1)^{\circ}\). This small value is consistent with a previous lattice investigation in Ref. [13], and determinations using QCD sum rules [14; 15].
In this work, we will not concentrate on the inconsistency in the angles obtained from the nonperturbative determination and the global fit. Instead, we focus on one ambiguity in defining the mixing angle between \(\Xi_{c}\) and \(\Xi^{\prime}_{c}\) in the lattice simulation, which is equivalent to the construction of flavor SU(3) eigenstates in the simulation. The previous lattice QCD determination [12] made use of the two-point correlation functions, in which an ambiguity exists in choosing the interpolating operators and accordingly in the extraction of the mixing angle. In this work, we will develop an improved method to explore the \(\Xi_{c}-\Xi^{\prime}_{c}\) mixing. In this method, the flavor eigenstates under the SU(3) symmetry are first constructed and the corresponding masses are nonperturbatively determined. Three-point correlation functions made of the mass operator that breaks the SU(3) symmetry and the interpolating operators are then calculated. Taking a ratio with respect to the two-point correlation function removes the dependence on the interpolating operators, and diagonalizing the corresponding matrix of the Hamiltonian unambiguously gives the mass eigenstates of the full Hamiltonian and determines the corresponding mixing. Using the previous lattice QCD calculation of \(\Xi_{c}\) and \(\Xi^{\prime}_{c}\), and updating an off-diagonal matrix element, we extract the mixing angle between the \(\Xi_{c}\) and \(\Xi^{\prime}_{c}\). Though a sign ambiguity is left, preliminary numerical results for the mixing angle confirm the previous observation that such mixing is unable to explain the large SU(3) symmetry breaking in semileptonic charmed baryon decays. This leaves the problem of large SU(3) symmetry breaking observed in charmed baryon decays unresolved.
The rest of this paper is organized as follows. In Sec. II, we will give the theoretical formalism and the numerical results are collected in Sec. III. We summarize this work in the last section.
## II Theoretical formalism
### \(\Xi_{c}\) and \(\Xi^{\prime}_{c}\) in SU(3) symmetry and mixing
In the QCD Lagrangian
\[\mathcal{L} = \bar{\psi}(i\not{D}-M)\psi \tag{1}\]
with
\[\psi=\ \left(\begin{array}{c}u\\ d\\ s\end{array}\right),\quad M=\ \left(\begin{array}{ccc}m_{u}&0&0\\ 0&m_{d}&0\\ 0&0&m_{s}\end{array}\right), \tag{2}\]
the masses of the three quarks are different and explicitly break the flavor SU(3) symmetry. In this work we will assume isospin symmetry and adopt \(m_{u}=m_{d}\neq m_{s}\). That way, \({\cal L}\) can be divided into two parts: the \(SU(3)_{F}\) symmetry conserving term \({\cal L}_{0}\) and the breaking term \(\Delta{\cal L}\), where the latter comes from the deviation between the \(u/d\) and \(s\) quark masses:
\[\Delta{\cal L}\ =\ -\bar{s}(m_{s}-m_{u})s. \tag{3}\]
Therefore, the Hamiltonian is correspondingly derived as
\[H = \int d^{3}\vec{x}\left[\frac{\partial{\cal L}}{\partial\dot{ \psi}(\vec{x})}\dot{\psi}(\vec{x})+\frac{\partial{\cal L}}{\partial\dot{\bar{ \psi}}(\vec{x})}\dot{\bar{\psi}}(\vec{x})-{\cal L}\right] \tag{4}\] \[\equiv H_{0}+\Delta H,\]
with
\[\Delta H=(m_{s}-m_{u})\int d^{3}\vec{x}\bar{s}s(\vec{x}). \tag{5}\]
In the heavy quark limit with \(m_{c}\rightarrow\infty\), the heavy quark decouples from the light quark system. The interpolating operator for a \(J^{P}=(1/2)^{+}\)_usc_-type baryon can be defined as
\[O\ =\ \epsilon^{abc}(q^{Ta}C\Gamma s^{b})\Gamma^{\prime}P_{+}\tilde{c}^{c}, \tag{6}\]
where \(\tilde{c}\) denotes the heavy quark field in heavy quark effective theory (HQET) satisfying \(\gamma^{0}\tilde{c}=\tilde{c}\). \(P_{+}=(1+\gamma^{0})/2\) is the positive parity projector. The totally antisymmetric tensor \(\epsilon^{abc}\) is used to sum over all color indices and guarantee the antisymmetric color wavefunction. The transposition \(T\) acts on a Dirac spinor, and \(C=\gamma^{0}\gamma^{2}\) is the charge conjugation matrix. The Dirac matrix \(\Gamma\) and \(\Gamma^{\prime}\) are related to the internal spin structures of the heavy baryon.
Neglecting \(\Delta H\), the heavy baryon can be classified according to the flavor SU(3) symmetry as \(3\otimes 3=\bar{3}\oplus 6\), in which \(\bar{3}\) denotes the antisymmetric combination of the light quark pair with angular momentum \(J_{qs}=0\), and 6 denotes the symmetric case with \(J_{qs}=1\). Then the interpolating operators for the \(J^{P}=(1/2)^{+}\)_usc_-type baryon can be chosen as [16]:
\[O^{\bar{3}}_{SU(3)}\ =\ \epsilon^{abc}(q^{Ta}C\gamma_{5}s^{b})P_{+}\tilde{c}^{c} \tag{7}\]
\[O^{6}_{SU(3)}\ =\ \epsilon^{abc}(q^{Ta}C\tilde{\gamma}s^{b})\cdot\tilde{ \gamma}\gamma_{5}P_{+}\tilde{c}^{c}. \tag{8}\]
These operators unambiguously define the corresponding flavor eigenstates \(|\Xi^{\bar{3}}_{c}\rangle\) and \(|\Xi^{6}_{c}\rangle\), which also act as the eigenstates of \(H_{0}\):
\[H_{0}|\Xi^{\bar{3}/6}_{c}(\vec{p}=0)\rangle\ =\ m_{\Xi^{\bar{3}/6}_{c}}|\Xi^{ \bar{3}/6}_{c}(\vec{p}=0)\rangle, \tag{9}\]
where \(m_{\Xi^{\bar{3}/6}_{c}}\) are the mass eigenvalues in the case \(\vec{p}=0\).
When adding the \(SU(3)_{F}\) breaking term \(\Delta H\), the mixing between \(\Xi_{c}\) and \(\Xi^{\prime}_{c}\) can occur (actually, in the charmed baryon system, generating the \(\Xi_{c}-\Xi^{\prime}_{c}\) mixing also requires breaking the heavy quark symmetry). One can easily see that the breaking effect is characterized by \(\Delta m_{s}=m_{s}-m_{u}\). Here we assume that the effects of \(\Delta H\) form the mass eigenstates \(|\Xi_{c}\rangle\) and \(|\Xi^{\prime}_{c}\rangle\):
\[\left(\begin{array}{c}|\Xi_{c}\rangle\\ |\Xi^{\prime}_{c}\rangle\end{array}\right)=\left(\begin{array}{cc}\cos\theta &\sin\theta\\ -\sin\theta&\cos\theta\end{array}\right)\left(\begin{array}{c}|\Xi^{\bar{3} }_{c}\rangle\\ |\Xi^{\bar{6}}_{c}\rangle\end{array}\right), \tag{10}\]
and in reverse, one has
\[\left(\begin{array}{c}|\Xi^{\bar{3}}_{c}\rangle\\ |\Xi^{\bar{6}}_{c}\rangle\end{array}\right)\ =\ \left(\begin{array}{cc}\cos\theta&-\sin \theta\\ \sin\theta&\cos\theta\end{array}\right)\left(\begin{array}{c}|\Xi_{c}\rangle \\ |\Xi^{\prime}_{c}\rangle\end{array}\right), \tag{11}\]
where \(\theta\) is the mixing angle, and the mass eigenstates are orthogonal:
\[H\left|\Xi_{c}\right\rangle=m_{\Xi_{c}}\left|\Xi_{c}\right\rangle,\quad H \left|\Xi^{\prime}_{c}\right\rangle=m_{\Xi^{\prime}_{c}}\left|\Xi^{\prime}_{ c}\right\rangle, \tag{12}\]
\(m_{\Xi_{c}}\) and \(m_{\Xi^{\prime}_{c}}\) denote the physical baryon masses.
### Determination of the mixing angle
In the following we will give the method to extract the mixing through the calculation of Hamiltonian's matrix elements. Let us start from the spin-averaged matrix of mass eigenstates
\[M_{E}(\vec{p})\equiv\int\frac{d^{3}\vec{p^{\prime}}}{(2\pi)^{3}} \tag{13}\] \[\times\left(\begin{array}{cc}\langle\Xi_{c}(\vec{p})|H|\Xi_{c}( \vec{p^{\prime}})\rangle&\langle\Xi_{c}(\vec{p})|H|\Xi^{\prime}_{c}(\vec{p^{ \prime}})\rangle\\ \langle\Xi^{\prime}_{c}(\vec{p})|H|\Xi_{c}(\vec{p^{\prime}})\rangle&\langle \Xi^{\prime}_{c}(\vec{p})|H|\Xi^{\prime}_{c}(\vec{p^{\prime}})\rangle\end{array} \right).\]
Since the \(\Xi_{c}\) and \(\Xi^{\prime}_{c}\) are the eigenstates of the full Hamiltonian, the above matrix is diagonal. In particular, if \(\vec{p}=0\), \(E^{2}_{\vec{p}}=m^{2}\), one has
\[M_{E}(\vec{p}=0)\ \equiv\ \left(\begin{array}{cc}2m^{2}_{\Xi_{c}}&0\\ 0&2m^{2}_{\Xi^{\prime}_{c}}\end{array}\right). \tag{14}\]
When one rotates the external states from energy eigenstates to \(SU(3)_{F}\) flavor eigenstates, the nondiagonal terms will be nonzero due to the mixing effect
\[M_{F}(\vec{p}) \equiv \int\frac{d^{3}\vec{p^{\prime}}}{(2\pi)^{3}}\left(\begin{array}{cc }\langle\Xi_{c}^{3}(\vec{p})|H|\Xi_{c}^{3}(\vec{p^{\prime}})\rangle&\langle\Xi_ {c}^{3}(\vec{p})|H|\Xi_{c}^{6}(\vec{p^{\prime}})\rangle\\ \langle\Xi_{c}^{6}(\vec{p})|H|\Xi_{c}^{3}(\vec{p^{\prime}})\rangle&\langle\Xi _{c}^{6}(\vec{p})|H|\Xi_{c}^{6}(\vec{p^{\prime}})\rangle\end{array}\right) \tag{15}\] \[= \int\frac{d^{3}\vec{p^{\prime}}}{(2\pi)^{3}}\left(\begin{array}[] {cc}\langle\Xi_{c}^{3}(\vec{p})|(H_{0}+\Delta H)|\Xi_{c}^{3}(\vec{p^{\prime}} )\rangle&\langle\Xi_{c}^{3}(\vec{p})|\Delta H|\Xi_{c}^{6}(\vec{p^{\prime}}) \rangle\\ \langle\Xi_{c}^{6}(\vec{p})|\Delta H|\Xi_{c}^{3}(\vec{p^{\prime}})\rangle& \langle\Xi_{c}^{6}(\vec{p})|(H_{0}+\Delta H)|\Xi_{c}^{6}(\vec{p^{\prime}}) \rangle\end{array}\right).\]
The contributions from \(H_{0}\) vanish in the nondiagonal terms due to the orthogonality between \(|\Xi_{c}^{3}\rangle\) and \(|\Xi_{c}^{6}\rangle\). When considering the conservation of momentum and the external states are rest (\(\vec{p}=0\)), above matrix can be reduced to
\[M_{F} (\vec{p}=0)=\left(\begin{array}{cc}2m_{\Xi_{c}^{3}}^{2}&0\\ 0&2m_{\Xi_{c}^{6}}^{2}\end{array}\right)+(m_{s}-m_{u}) \tag{16}\] \[\times \left(\begin{array}{cc}\langle\Xi_{c}^{3}|\bar{s}s(\vec{x}=0)|\Xi_{c}^{3}\rangle&\langle\Xi_{c}^{3}|\bar{s}s(\vec{x}=0)|\Xi_{c}^{6}\rangle\\ \langle\Xi_{c}^{6}|\bar{s}s(\vec{x}=0)|\Xi_{c}^{3}\rangle&\langle\Xi_{c}^{6}|\bar{s}s(\vec{x}=0)|\Xi_{c}^{6}\rangle\end{array}\right),\]
where we have omitted the momentum in external states \(\Xi_{c}^{3}(\vec{p}=0)\) and \(\Xi_{c}^{6}(\vec{p}=0)\) and the space coordinate in the scalar operator \(\bar{s}s(\vec{x}=0)\).
It is necessary to point out that all the elements of the above matrix can be calculated using nonperturbative tools like lattice QCD. The off-diagonal terms should be equal, and in total there are five quantities (including two masses and three independent matrix elements within Eq.(16)) to be calculated. Diagonalizing this matrix provides us a straightforward way to extract the mixing angle.
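As a schematic illustration of this strategy, the Python sketch below assembles the matrix of Eq. (16) from five input numbers and extracts the mixing angle by diagonalization; the numerical values used here are hypothetical order-of-magnitude placeholders, not lattice results.

```python
import numpy as np

# Hypothetical order-of-magnitude inputs (all in GeV); in an actual analysis these
# five numbers are determined nonperturbatively on the lattice.
m3, m6 = 2.30, 2.46                  # flavor-eigenstate masses of Xi_c^3bar and Xi_c^6
dm_q = 0.12                          # m_s - m_u
M33, M66, M36 = 1.0, 1.0, 0.155      # <3bar|sbar s|3bar>, <6|sbar s|6>, <3bar|sbar s|6>

M_F = np.array([[2 * m3**2 + dm_q * M33, dm_q * M36],
                [dm_q * M36,             2 * m6**2 + dm_q * M66]])

eigval, eigvec = np.linalg.eigh(M_F)         # eigenvalues are 2 m_{Xi_c}^2 and 2 m_{Xi_c'}^2
m_Xic, m_Xicp = np.sqrt(eigval / 2.0)
v = eigvec[:, 0] * np.sign(eigvec[0, 0])     # fix the arbitrary eigenvector sign
theta = np.degrees(np.arctan2(v[1], v[0]))   # rotation angle in the convention of Eq. (10)
print(f"m_Xic = {m_Xic:.3f} GeV, m_Xic' = {m_Xicp:.3f} GeV, theta = {theta:.2f} deg")
```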
Interestingly, physical masses can be experimentally measured or numerically determined from lattice QCD. In this case, one can actually determine the mixing angle by only calculating the off-diagonal matrix elements. To show this feasibility, one can perform a rotation from the mass eigenstates basis to the flavor eigenstates basis and obtain the relations between the elements of matrix \(M_{F}\):
\[M_{F,11} = 2\cos^{2}\theta m_{\Xi_{c}}^{2}+2\sin^{2}\theta m_{\Xi_{c}^{\prime}}^{2},\] \[M_{F,12} = 2\cos\theta\sin\theta(m_{\Xi_{c}}^{2}-m_{\Xi_{c}^{\prime}}^{2}),\] \[M_{F,21} = 2\cos\theta\sin\theta(m_{\Xi_{c}}^{2}-m_{\Xi_{c}^{\prime}}^{2}),\] \[M_{F,22} = 2\sin^{2}\theta m_{\Xi_{c}}^{2}+2\cos^{2}\theta m_{\Xi_{c}^{\prime}}^{2}, \tag{17}\]
where only the \(\vec{p}=0\) case is considered. Therefore, one can establish a relation between the correlation functions and Eq. (17):
\[M_{F,11} = 2m_{\Xi_{c}^{3}}^{2}+(m_{s}-m_{u})M_{\bar{s}s}^{\bar{3}-\bar{3}}\] \[= 2\cos^{2}\theta m_{\Xi_{c}}^{2}+2\sin^{2}\theta m_{\Xi_{c}^{ \prime}}^{2},\] \[M_{F,22} = 2m_{\Xi_{c}^{6}}^{2}+(m_{s}-m_{u})M_{\bar{s}s}^{6-6}\] \[= 2\sin^{2}\theta m_{\Xi_{c}}^{2}+2\cos^{2}\theta m_{\Xi_{c}^{ \prime}}^{2},\] \[M_{F,12} = (m_{s}-m_{u})M_{\bar{s}s}^{\bar{3}-6}\] \[= 2\cos\theta\sin\theta(m_{\Xi_{c}}^{2}-m_{\Xi_{c}^{\prime}}^{2}),\] \[M_{F,21} = (m_{s}-m_{u})M_{\bar{s}s}^{\bar{6}-\bar{3}}\] \[= 2\cos\theta\sin\theta(m_{\Xi_{c}}^{2}-m_{\Xi_{c}^{\prime}}^{2}).\]
with the abbreviated matrix elements as
\[M_{\bar{s}s}^{F-I} \equiv \langle\Xi_{c}^{F}(\vec{p}=0)|\bar{s}s(x=0)|\Xi_{c}^{I}(\vec{p^{ \prime}}=0)\rangle, \tag{19}\]
where \(I,F=\bar{3},6\) denotes the \(SU(3)_{F}\) representation of initial/final states. It is clear that the mixing angle can be extracted through the off-diagonal terms of \(M_{F}\) once the \(M_{s\bar{s}}^{3-6}\) or \(M_{\bar{s}s}^{6-\bar{3}}\) is obtained from lattice QCD and \(m_{\Xi_{c}}^{2}\) and \(m_{\Xi_{c}^{\prime}}^{2}\) are experimentally determined.
Before closing this section, we wish to stress again that the masses \(m_{\Xi_{c}^{3/6}}\) are eigenvalues of \(H_{0}\) under the \(SU(3)_{F}\) symmetry while the \(m_{\Xi_{c}}/m_{\Xi_{c}^{\prime}}\) are the physical masses of \(\Xi_{c}\) and \(\Xi_{c}^{\prime}\).
### Lattice determination of the matrix elements
On the lattice, the masses \(m_{\Xi_{c}^{3,6}}\) can be determined through the simulation of the two-point function (2pt) with _usc_-type in Euclidean space, which is defined as:
\[C_{2}^{\bar{3}/6}(t)=\int d^{3}\vec{y}\,T^{\prime}_{\gamma^{\prime}\gamma}\left\langle O_{\gamma,SU(3)}^{\bar{3}/6}(\vec{y},t)\bar{O}_{\gamma^{\prime},SU(3)}^{\bar{3}/6}(\vec{0},0)\right\rangle. \tag{20}\]
Here \(\gamma\) and \(\gamma^{\prime}\) are spinor indices and \(T^{\prime}\) is a projection matrix. The interpolating operators for the anti-triplet and sextet baryons are used as [16]:
\[O_{SU(3)}^{\bar{3}} = \epsilon^{abc}(q^{Ta}C\gamma_{5}s^{b})P_{+}c^{c}, \tag{21}\] \[O_{SU(3)}^{G} = \epsilon^{abc}(q^{Ta}C\bar{\gamma}s^{b})\cdot\bar{\gamma}\gamma_{5}P _{+}c^{c}. \tag{22}\]
It should be noticed that in the above definition, we have used the charm quark field defined in QCD, not in HQET. This will not affect the flavor SU(3) symmetry.
Inserting the hadronic states, keeping the lowest two hadrons and using \(T^{\prime}=I\), one can obtain:
\[C_{2}^{\bar{3}/6}(t) = f_{\Xi_{c}^{\bar{3}/6}}^{2}\,m_{\Xi_{c}^{\bar{3}/6}}^{4}\,e^{-m_{\Xi_{c}^{\bar{3}/6}}t}\left(1+d_{i}\,e^{-\Delta m_{\Xi_{c}^{\bar{3}/6}}t}\right). \tag{23}\]
where \(f_{\Xi_{c}^{3/6}}\) denotes the decay constant of \(\Xi_{c}^{\bar{3}}\) or \(\Xi_{c}^{6}\):
\[\langle\bar{k}|\bar{O}_{SU(3)}^{\bar{3}/6}(0,0)|0\rangle = f_{\Xi_{c}^{3/6}}m_{\Xi_{c}^{3/6}}^{2}\bar{u}(\vec{k}). \tag{24}\]
and \(\Delta m_{\Xi_{c}^{3/6}}\) describes the mass difference between the first excited states and ground states, and \(d_{i}\) characterizes the excited contributions to the two-point correlation.
The \(M_{\bar{s}s}^{F-I}\) can be extracted through the analysis of the three-point function (3pt) as
\[C_{3}^{F-I}(t_{\rm seq},t)=\int\frac{d^{3}\vec{q}}{(2\pi)^{3}}\int d^{3}\vec{y}d^{3 }\vec{y}^{\prime}d^{3}\vec{x}e^{i\vec{q}\cdot\vec{x}}T_{\gamma^{\prime}\gamma} \left\langle O_{\gamma,SU(3)}^{F}(\vec{y},t_{\rm seq})\bar{s}s(\vec{x},t)\bar{O }_{\gamma^{\prime},SU(3)}^{I}(\vec{y^{\prime}},0)\right\rangle, \tag{25}\]
where we choose \(T_{\gamma^{\prime}\gamma}\) as the identity matrix to simplify the expressions and the superscript \(F/I\) mean the final state and the initial state which can be \(\Xi_{c}^{3}/\Xi_{c}^{6}\). The momentum transfer \(\vec{q}=0\) comes from the conservation of momentum of the rest initial and final state. An illustration of the three-point correlation function is shown in Fig. 1.
By inserting a complete set of eigenstates of the Hamiltonian \(H_{0}\) between the operators, we can simplify Eq. (25) as
\[C_{3}^{F-I}(t_{\rm seq},t) = \frac{M_{\bar{s}s}^{F-I}}{\sqrt{4m_{\Xi_{c}^{I}}m_{\Xi_{c}^{F}}}}f_{\Xi_{c}^{I}}f_{\Xi_{c}^{F}}m_{\Xi_{c}^{I}}^{2}m_{\Xi_{c}^{F}}^{2}e^{-\left(m_{\Xi_{c}^{I}}-m_{\Xi_{c}^{F}}\right)t}e^{-m_{\Xi_{c}^{F}}t_{\rm seq}}\left(1+c_{1}e^{-\Delta m_{\Xi_{c}^{I}}t}\right)\left(1+c_{2}e^{-\Delta m_{\Xi_{c}^{F}}(t_{\rm seq}-t)}\right),\]
where \(m_{\Xi_{c}^{I}}\) and \(m_{\Xi_{c}^{F}}\) are the ground-state energies of \(\Xi_{c}^{3}\) and \(\Xi_{c}^{6}\) and \(c_{i}\) are parameters decoding the excited state contaminations. \(\Delta m_{\Xi_{c}^{I}}\) and \(\Delta m_{\Xi_{c}^{F}}\) describe the mass differences between the first excited states and ground states.
Combining the 3pt and 2pt, one can remove the dependence on nonperturbative decay constants. However there is a remnant ambiguity in determining the sign of the \(M_{\bar{s}s}^{F-I}\). From Eq. (23), one can notice that the two-point correlation contains the square of decay constant, while the three-point function in Eq. (III.2) is proportional to the decay constant for the initial state and final state. Thus if the initial state and final states are different, the determination of \(M_{\bar{s}s}^{F-I}\) and accordingly the \(\theta\) has a sign problem from the 3pt.
Keeping in mind this ambiguity, one can make use of the following ratio to suppress the contributions from the excited states:
\[R = \sqrt{\frac{C_{3}^{FI}(t_{\rm seq},t)C_{3}^{FI}(t_{\rm seq},t_{ \rm seq}-t)}{C_{2}^{I}(t_{\rm seq})C_{2}^{F}(t_{\rm seq})}}. \tag{27}\]
Combining Eq.(23) and (III.2), \(R\) can be parameterized as
\[R = \frac{\left|M_{\bar{s}s}^{F-I}\right|}{2\sqrt{m_{\Xi_{c}^{I}}m_{\Xi_{c}^{F}}}}\left(\frac{(1+c_{1}e^{-\Delta m_{\Xi_{c}^{I}}t})(1+c_{1}e^{-\Delta m_{\Xi_{c}^{I}}(t_{\rm seq}-t)})(1+c_{2}e^{-\Delta m_{\Xi_{c}^{F}}t})(1+c_{2}e^{-\Delta m_{\Xi_{c}^{F}}(t_{\rm seq}-t)})}{(1+d_{1}e^{-\Delta m_{\Xi_{c}^{I}}t_{\rm seq}})(1+d_{2}e^{-\Delta m_{\Xi_{c}^{F}}t_{\rm seq}})}\right)^{1/2} \tag{28}\] \[\simeq \frac{\left|M_{\bar{s}s}^{F-I}\right|}{2\sqrt{m_{\Xi_{c}^{I}}m_{\Xi_{c}^{F}}}}\left(\frac{(1+c_{1}e^{-\Delta m_{\Xi_{c}^{I}}t}+c_{2}e^{-\Delta m_{\Xi_{c}^{F}}(t_{\rm seq}-t)})(1+c_{1}e^{-\Delta m_{\Xi_{c}^{I}}(t_{\rm seq}-t)}+c_{2}e^{-\Delta m_{\Xi_{c}^{F}}t})}{(1+d_{1}e^{-\Delta m_{\Xi_{c}^{I}}t_{\rm seq}})(1+d_{2}e^{-\Delta m_{\Xi_{c}^{F}}t_{\rm seq}})}\right)^{1/2},\]
where the nonperturbative decay constants have been eliminated and temporal dependence of \(R\) becomes symmetric under \(t\leftrightarrow(t_{\rm seq}-t)\), which allows one to extract the values of \(\left|M_{\bar{s}s}^{F-I}\right|\) conveniently.
In practice, we adopt the initial state \(I=\bar{3}\) and final state \(F=6\) to generate the correlation functions related to the off-diagonal term of \(M_{F}\), and then extract \(\left|M_{\bar{s}s}^{6-\bar{3}}\right|\) numerically. Based on Eq.(18), the mixing angle can be evaluated from the formula
\[\sin 2\theta=\pm\frac{(m_{s}-m_{u})M_{\bar{s}s}^{6-\bar{3}}}{m_{\Xi_{c}^{\prime}}^{2}-m_{\Xi_{c}}^{2}}, \tag{29}\]
Figure 1: An illustration of the three-point correlation functions Eq.(III.2) on the lattice.
where the \(\pm\) reveals the sign ambiguity from 3pt, and cannot be uniquely fixed for the time being.
## III Numerical results
As shown in the previous section, one can determine the mixing angle by calculating the five quantities in Eq. (16). In addition, one can also make use of \(m_{\Xi_{c}}\) and \(m_{\Xi_{c}}^{\prime}\) and obtain the mixing angle through the simulation of the off-diagonal matrix element. In the following estimate, we will adopt the latter strategy for an illustration.
Our numerical calculations are based on the lattice QCD calculations with the gauge configurations generated by the Chinese Lattice QCD (CLQCD) collaboration with \(N_{f}=2+1\) flavor stout smeared clover fermions and Symanzik gauge action. These configurations have been applied to explore different physical quantities as in Refs. [17; 18; 19; 20].
For the estimation of the off-diagonal matrix element, we choose one set of lattice ensembles with the lattice spacing \(a=0.108\)fm. The detailed parameters of the ensemble are listed in Table 1. The bare strange quark mass is determined such that the mass of \(\eta_{s}\) is around \(700\)MeV, and the bare charm quark mass is tuned to accommodate the spin-average value of the \(J/\psi\) and \(\eta_{c}\) masses. More details can be found in Ref. [12]. The quark propagators are computed using the Coulomb gauge fixed wall source at one source time slice. By choosing different reference time slices, we perform \(432\times 20\) measurements on C11P29S ensemble.
The masses for the \(\Xi_{c}^{\bar{3}}\) and \(\Xi_{c}^{6}\) states are extracted by fitting the 2pt via the two-state parametrization in Eq. (23), and the corresponding results are shown in Fig. 2. Choosing the time-slices \([7a,20a]\), we obtain good fits with \(\chi^{2}/\text{d.o.f}=0.55\) and \(\chi^{2}/\text{d.o.f}=0.33\), and obtain \(m_{\Xi_{c}^{\bar{3}}}=2.2986\pm 0.0057\)GeV and \(m_{\Xi_{c}^{6}}=2.4557\pm 0.0091\)GeV.
We simulate the three-point function \(C_{3}^{6-\bar{3}}(t_{\rm seq},t)\), and adopt the parameterization in Eq. (28) to determine the parameter \(|M_{\bar{s}s}^{6-\bar{3}}|\) with the fitted results shown in Fig. 3. To determine the mixing angle, we quote the masses of \(\Xi_{c}\) and \(\Xi_{c}^{\prime}\) as \(m_{\Xi_{c}}=2.468\)GeV and \(m_{\Xi_{c}^{\prime}}=2.578\)GeV from the Particle Data Group [5]. For the quark masses, their results depend on the scale, which should be compensated by the renormalization of the \(\bar{s}s\) operator. Since the aim of this paper is to demonstrate the improved method used in this work, we take two values for the quark masses and include their differences as a systematic uncertainty, which in principle could be removed by a more sophisticated analysis on the lattice. The Particle Data Group gives \(m_{s}-m_{u}\simeq 0.090\)GeV, which corresponds to \(\mu=2\) GeV, while the running effects from 2 GeV to 1 GeV approximately give a factor 1.35 [5]. So we take \(m_{s}-m_{u}\simeq 0.12\)GeV with \(\mu=1\) GeV as the central value in the estimate. The result for the mixing angle is shown in Tab. 2, from which one can see that the mixing angle is about \(1^{\circ}\). The first uncertainty in \(\theta\) is statistical, and the second uncertainty arises from the quark mass difference.
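As a cross-check of Eq. (29), the central values quoted above reproduce the mixing angle of Tab. 2 up to rounding; a short Python evaluation reads:

```python
import numpy as np

m_Xic, m_Xicp = 2.468, 2.578     # physical masses [GeV] quoted from the PDG above
dm_q = 0.12                      # m_s - m_u at mu = 1 GeV [GeV]
M36 = 0.155                      # |M_sbars^{6-3bar}| from the joint fit [GeV]

sin2theta = dm_q * M36 / (m_Xicp**2 - m_Xic**2)     # Eq. (29), up to the overall sign
theta_deg = 0.5 * np.degrees(np.arcsin(sin2theta))
print(f"|theta| ~ {theta_deg:.2f} deg")              # about 0.96 deg
```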
\begin{table}
\begin{tabular}{|c|c|c|} \hline & \(|M_{\bar{s}s}^{6-\bar{3}}|\) & \(|\theta|\) \\ \hline C11P29S & \(0.155(14)\)GeV & \((0.97\pm 0.08\pm 0.25)^{\circ}\) \\ \hline \end{tabular}
\end{table}
Table 2: Results for \(|M_{\bar{s}s}^{6-\bar{3}}|\) from the joint fit, and the corresponding result for the mixing angle \(\theta\).
A few remarks are given in order.
* It is necessary to point out that the lattice renormalization of the 3pt and the scale dependence in the quark masses are not taken into account in the above estimate.
* Despite the undetermined sign, the absolute value for \(\theta\) indicates that it is insufficient to account for the large SU(3) symmetry breaking effects in semileptonic weak decays of charmed baryons [1; 2; 3; 4], and leaves the large SU(3) symmetry breaking problem unresolved.
* Numerical results show that the three-point function \(C_{3}^{6-3}(t_{\rm seq},t)\) is negative. From Eq. (26), one can see that if the decay constants for \(\Xi_{c}^{3}\) and \(\Xi_{c}^{6}\) have the same sign, the obtained mixing angle will be positive.
* One can calculate the diagonal matrix elements of the Hamiltonian, namely \(M_{F,11}\) and \(M_{F,22}\), which do not contain the sign ambiguity in the determination of \(M_{\bar{s}s}^{F-I}\). However, from Eq. (18) one can see that only the squares of the cosine and sine of \(\theta\) appear in these relations, and thus the sign still cannot be uniquely determined.
## IV Summary
In this work, we have developed an improved method to explore the \(\Xi_{c}-\Xi_{c}^{\prime}\) mixing which arises from the flavor SU(3) and heavy quark symmetry breaking effects. The recipe in this method is summarized as follows.
* First, the flavor eigenstates are constructed under the flavor SU(3) symmetry. The corresponding masses can be determined via an explicit nonperturbative calculation using lattice QCD simulation or QCD sum rules.
* The SU(3) symmetry breaking contributions are treated as perturbative corrections. Matrix elements of the mass operators which breaks the flavor SU(3) symmetry sandwiched by the flavor eigenstates are then calculated.
* Diagonalizing the corresponding matrix of Hamiltonian gives the mass eigenstates of the full Hamiltonian and determines the corresponding mixing.
* Using the physical masses from data, one can actually determine the mixing angle by only calculating the off-diagonal matrix elements.
Following the previous lattice QCD calculation of \(\Xi_{c}\) and \(\Xi_{c}^{\prime}\)[12], and estimating an off-diagonal matrix element, we have extracted the mixing angle between the \(\Xi_{c}\) and \(\Xi_{c}^{\prime}\), with a sign ambiguity. Preliminary numerical results for the mixing angle confirm the previous observation that such mixing is not able to explain the large SU(3) symmetry breaking in semileptonic charmed baryon decays.
It should be pointed out that in this method only the leading order contributions from the symmetry breaking terms are taken into account, and it is based on a perturbative expansion in terms of \((m_{s}-m_{u})/\Lambda\) with \(\Lambda\) being the hadronic scale. In the \(\Xi_{c}-\Xi_{c}^{\prime}\) mixing the heavy quark symmetry also needs to be broken, introducing a factor \(\Lambda/m_{c}\). Other interesting examples, such as the \(K_{1}(1270)\)-\(K_{1}(1400)\) mixing, which is also due to flavor SU(3) symmetry breaking, can be analyzed similarly.
Though in our illustration lattice QCD has been used to calculate the matrix element, this method can also be applied with other nonperturbative approaches such as QCD sum rules [15].
## Acknowledgements
We thank Liuming Liu, Peng Sun, Wei Sun, Jin-Xin Tan, Yi-Bo Yang for the collaboration on Ref. [12] and valuable discussions, and CLQCD for providing the lattice ensembles. W. Wang would like to thank Feng-Kun Guo, Jia-Jun Wu and Qiang Zhao for inspiring discussions. This work is supported in part by Natural Science Foundation of China under grant No.U2032102, 12125503, 12061131006, 12335003 and 12375069. The computations in this paper were run on the Siyuan-1 cluster supported by the Center for High Performance Computing at Shanghai Jiao Tong University, and Advanced Computing East China Sub-center. The LQCD calculations were performed using the Chroma software suite [21] and QUDA [22; 23; 24] through HIP programming model [25]. |
2309.15621 | A City-centric Approach to Estimate and Evaluate Global Urban Air
Mobility Demand | Urban Air Mobility (UAM) is expected to effectively complement the existing
transportation system by providing fast and safe travel options, contributing
to decarbonization, and providing benefits to citizens and communities. A
preliminary estimate of the potential global demand for UAM, the associated
aircraft movements, and the required vehicles is essential for the UAM industry
for their long-term planning, but also of interest to other stakeholders such
as governments and transportation planners to develop appropriate strategies
and actions to implement UAM. This paper proposes a city-centric forecasting
methodology that provides preliminary estimates of the potential global UAM
demand for intra-city air taxi services for 990 cities worldwide. By summing
all city-specific results, an estimate of the global UAM demand is obtained. By
varying the parameters of the UAM system, sensitivity studies and different
market scenarios are developed and analyzed. Sensitivity analyses show how
strongly demand decreases when air taxi ticket prices increase. Considering low
ticket prices and high vertiport densities, possible market development
scenarios show that there is a market potential for UAM in over 200 cities
worldwide by 2050. The study highlights the significant impact of low ticket
prices and the need for high vertiport densities to drive UAM demand. This
highlights the need for careful optimization of system components to minimize
costs and increase the quality of UAM services. | Lukas Asmer, Roman Jaksche, Henry Pak, Petra Kokus | 2023-09-27T12:38:22Z | http://arxiv.org/abs/2309.15621v1 | # A City-Centric Approach to Estimate and Evaluate Global Urban Air Mobility Demand
###### Abstract
Urban Air Mobility (UAM) is expected to effectively complement the existing transportation system by providing fast and safe travel options, contributing to decarbonization, and providing benefits to citizens and communities. A preliminary estimate of the potential global demand for UAM, the associated aircraft movements, and the required vehicles is essential for the UAM industry for their long-term planning, but also of interest to other stakeholders such as governments and transportation planners to develop appropriate strategies and actions to implement UAM. This paper proposes a city-centric forecasting methodology that provides preliminary estimates of the potential global UAM demand for intra-city air taxi services for 990 cities worldwide. By summing all city-specific results, an estimate of the global UAM demand is obtained. By varying the parameters of the UAM system, sensitivity studies and different market scenarios are developed and analyzed. Sensitivity analyses show how strongly demand decreases when air taxi ticket prices increase. Considering low ticket prices and high vertiport densities, possible market development scenarios show that there is a market potential for UAM in over 200 cities worldwide by 2050. The study highlights the significant impact of low ticket prices and the need for high vertiport densities to drive UAM demand. This highlights the need for careful optimization of system components to minimize costs and increase the quality of UAM services.
Urban Air Mobility, Global UAM demand, Market potential, Market development
## 1 Introduction
Due to the increasing advances in new vehicle concepts and technologies, Urban Air Mobility (UAM) is expected to effectively complement the existing transportation system by providing fast and safe travel options, contributing to decarbonization, and providing benefits to citizens and communities. The concept of UAM is not entirely new. A first wave of urban air mobility took place between the 1950s and the 1980s, mainly enabled by the emergence of turbine-powered helicopters. But the use of helicopters as an urban transportation mode failed to take off due to a lack of profitability and social acceptance [1-3]. Currently, however, the integration of UAM into existing urban transportation systems as a complementary component is becoming more and more conceivable.
The term UAM covers several applications to meet different transport needs, such as intra-city, airport shuttle or suburban commuter [4]. In general, the term UAM is associated with an air transportation system based on a high-density vertiport network and air taxi services within
an urban environment, enabled by new technologies and integrated into multimodal transportation systems. The transportation is performed by electric aircraft taking off and landing vertically, remotely piloted or with a pilot on board [5]. UAM has the potential to offer various advantages and benefits for different stakeholders. In this respect, UAM is expected to enable safer, cleaner and faster mobility within urban agglomerations. Studies have shown that the use of air taxis can save 15 to 40 minutes on an average standard urban travel time [5]. The European Aviation Safety Agency (EASA) considers UAM as a new safe, secure and more sustainable air transportation system for passengers and cargo in urban environments, enabled by new technologies and integrated into multimodal transportation systems. However, the introduction of UAM is associated with technological, regulatory, infrastructural, social and economic challenges, which require a holistic approach as well as close cooperation between the various stakeholders in order to unlock the full potential of this new mode of transportation. In this process, the UAM system components must be designed to ensure that the system is accessible, affordable, safe, secure, and sustainable for users as well as profitable for operators [6].
While initial eVTOL manufacturers plan to have UAM vehicle certification by 2023 and start first operations in 2024 [7-9], the global market potential of UAM is still unclear. A preliminary estimate of the potential UAM demand, the associated number of flight movements, and the required number of vehicles would be helpful for e.g. manufacturers to plan upcoming production in advance. At the same time, the estimates can be useful for authorities, service providers or research institutions to assess the impact and effects of a potential UAM development from an overall system perspective at an early stage.
One of the main challenges in estimating the potential demand for UAM is that cities and urban agglomerations differ in various aspects (area, population, geographic characteristics, wealth level, cultural background, etc.). From a global perspective, the development of specific transport models for each city, taking into account all transport-related parameters (e.g. mobility patterns, existing transportation infrastructure or transportation policies), is not feasible due to time and cost constraints.
This paper proposes a city-centric forecasting methodology based on a limited set of input parameters relevant for UAM to provide first estimates of the potential global UAM demand, aircraft movements and fleet size for intra-city air taxi services. This model is part of a holistic view of UAM, and is among a set of forecasting models that exist for each of the above use cases.
The remainder of the paper is structured as follows:
Section 2 provides a review of the research project's background. The importance of forecasting methods for estimating the global UAM demand is explored, along with an overview of existing literature in the field. Section 3 outlines the methodology and underlying assumptions of the study, to ensure transparency and replicability. The steps taken to estimate the global UAM demand for intra-city air taxi services are described in detail. The results of the research are presented in section 4. 990 cities worldwide were analyzed using the methodology. The section contains sensitivity analysis as well as multiple market development scenarios highlighted as part of the study. Section 5 contains the conclusion. In this section, the main research findings are summarized, the limitations of the approach are highlighted, and potential future research is discussed. This section emphasizes the significance of the present work and its potential impact on further UAM development.
## 2 Background
Demand forecasting is an essential component in designing the efficiency, sustainability, and profitability of new transportation systems. In particular, when evaluating new modes of transportation, the ability to accurately predict future demand enables various stakeholders to make informed decisions, optimize operations, reduce environmental impacts, and increase customer satisfaction. Forecasting capability is also critical for the evaluation and impact assessment of UAM as a novel urban transportation system. A preliminary estimate of the potential demand for UAM, the associated number of aircraft movements, and the number of vehicles required is fundamental to the further design of UAM. This will help stakeholders to develop appropriate strategies and actions to maximize the benefits of UAM while addressing potential challenges. Thus, a global UAM forecast can help plan vehicle production according to demand, use resources efficiently, coordinate transportation effectively, integrate ground infrastructure according to demand in cities, better support environmental goals, and set appropriate frameworks and safety standards.
However, forecasting the global UAM demand involves a number of challenges. As long as there are no UAM systems, it is not possible to base estimates of future development on historical data. Uncertainties also remain in connection with technological development and market launch, which makes it difficult to make reliable long-term forecasts.
Urban transportation systems are extremely complex, multifaceted and individual, as cities have different characteristics such as population size, built-up area or wealth level, which have a direct impact on the people's mobility behavior in each city and ultimately on the potential demand for UAM. To address all these heterogeneous market conditions in one approach, a method is needed that is as simple and transferable as possible for all cities without the need to create individual, city-specific transport models.
Currently, many international research groups are working on different aspects of UAM in order to find optimal solutions for the implementation of UAM. However, regarding the preliminary estimation of global UAM demand, there is currently little research available. Initial market studies were carried out several years ago by consulting companies such as Roland Berger [10], Horvath and Partner [11], Porsche Consulting [12] and KPMG [13]. These studies primarily provide an overview of the potential opportunities and economic benefits that could result from the introduction of UAM. However, they tend to provide less insight into the underlying methodologies and assumptions, making it difficult to transfer the methodologies and results.
On the other hand, there are a number of publications in the scientific literature that present concrete methodological approaches to determine UAM demand and potential more precisely. These papers usually provide detailed insights into the models, assumptions and data base used. They enable in-depth analysis and a better understanding of the
factors influencing UAM demand. In order to address the complexity of heterogeneous market conditions, these UAM forecasts use an approach that groups cities into clusters and conducts detailed analyses for a representative city in each cluster.
Mayaconda et al. (2020) [14] provide a top-down methodology to estimate the UAM demand which is applied to 31 cities around the world. Based on travelers' willingness to pay for UAM service, the potential UAM traffic volume is estimated. The studies were conducted for the reference year 2035.
Anand et al. (2021) [15] provide a scenario-based evaluation of global UAM demand. The research is based on the previous approach of Mayaconda et al. (2020) and is applied to 542 cities worldwide to determine the global UAM demand. In addition, a scenario-based forecasting approach is used to provide long-term market demand for low and high penetration of UAM services for a 2035-2050 timeframe.
Straubinger et al. (2021) [16] also propose a scenario-based estimation of the global UAM demand. The analysis covers the four dimensions: UAM use cases, city archetypes, market development scenarios, and time horizons from 2020 to 2050, and takes into account different market penetration rates for UAM that vary by the four dimensions, covering uncertainties in prices, travel speeds, network density, access times, and mode choice behavior. By applying the market penetration rates to the conditions of the considered cities, a specific demand is calculated and compared with results from other studies.
Furthermore, there are several studies that have been conducted for smaller geographic areas.
Particularly well-known is the study by Booz Allen Hamilton [17] commissioned by NASA. They examined the air taxi demand for 10 U.S. cities at a very detailed level, taking into account not only potential demand but also possible systemic constraints such as willingness to pay, infrastructure capacity, time of day, and weather constraints.
A second study commissioned by NASA [18] investigated the market potential of last-mile delivery, air metro services and air taxi services for different U.S. cities. Taking into account the target markets, consumers' willingness to pay and the availability of the technology, demand was determined. By multiplying the total number of expected trips in each city by the percentage of trips eligible for UAM, the market size was calculated.
Rihmja et al. (2022) [19] conducted a demand estimation and feasibility assessment for UAM in the Northern California area. A sensitivity analysis was conducted to examine the impact of cost per passenger mile and number of vertiports on UAM demand. The spatial distribution of UAM demand in the region is also analyzed, with the San Francisco Financial District identified as a major attraction for commuter trips. The results show that low UAM fares and comparable reliability with car travel are necessary to achieve sufficient demand for commuter trips and reduce empty flights.
EASA [20] identified suitable UAM cities in Europe which are the most attractive EU urban target markets for UAM OEMs and UAM operators. The study was conducted for the different sub-use-cases of Urban Air Mobility: airport shuttle, sightseeing, fixed metropolitan network, first aid, medical supply delivery and last-mile delivery. The ranking is based on key performance indicators (KPIs), an infrastructure feasibility assessment for the considered use cases, and a timing feasibility assessment.
Ploetner et al. (2020) [21] investigated the market potential of UAM for public transport in the Munich metropolitan area. An existing agent-based transport model was extended by socio-demographic changes until 2030 and the integration of intermodal UAM services. To simulate the demand for UAM, an incremental logit model was developed. The study defines three UAM networks with different numbers of vertiports and performs sensitivity analyses on factors such as fare, vehicle speed, passenger check-in times at transfer stations, and network size.
Pertz et al. (2022) [22] developed an approach for modeling the UAM commuter demand in Hamburg, Germany. The approach is based on a discrete choice model that predicts commuters' mode choice. For this purpose, predefined traffic cells are used to generate and distribute a door-to-door commuter traffic. By combining the modal split and the market volume for commuting, the market share for commuter UAM traffic in Hamburg is determined. The model offers the possibility to evaluate individual passenger routes and catchment areas as well as to analyze characteristics of travelers and routes with high and low demand.
## 3 Methodology
This paper proposes a forecasting methodology to provide initial estimates of the potential global UAM demand for intra-city air taxi services following the ideas of the traditional four-step transportation model. The four-step model [23] is a widely used approach for the determination of total and mode-specific transport demand, among others. It requires a detailed database including information on population, household size and income, activity patterns, on existing and future transportation infrastructure, supply, pricing etc. At a global scale, these data are not available at a sufficient level of detail. Therefore, a city-centric approach (**FIG 1**) was developed that uses a limited number of parameters to estimate the UAM demand for a city or an urban agglomeration. The characteristics of the city that serve as the main input parameters of the city-centric approach are the number of inhabitants, the urban area, and the country-specific GDP per capita as a proxy for the level of wealth. These data are available for the present and for the future, some of them publicly and some of them commercially. Demographia's World Urban Areas database [24] serves as the database; it covers 990 urban agglomerations with more than 500,000 inhabitants and provides the number of inhabitants, the population density and the built-up urban area of each agglomeration in the year 2022. GDP per capita (real, harmonized) is taken from [25]. The number of inhabitants and GDP per capita in the future is determined by applying country-specific growth rates of population and of GDP per capita from [25].
By applying this method to a set of worldwide cities and summing up all city-specific results, an estimate of global UAM demand for intra-city air taxi services is provided. Variation of major characteristics of the UAM transportation system allows different scenarios to be developed and analyzed.
### Schematic city structure
First, each city structure is mapped into a circular structure consisting of square grid cells. Using Hamburg in Germany as an example, **FIG 2** shows how the generic circular city shape is generated from the real extension of the city by transforming it into a grid cell structure. The actual shape of the city is omitted and instead approximated to a circle using the city area and a predefined grid cell size which is the same for all cities. Thus, the individual number of grid cells depends on the city area and the grid cell size.
### Population distribution
Second, the population is distributed among the grid cells with the highest population density in the center grid cell. The population density decreases from the center grid cell with increasing distance to the city outskirts. The distribution of the population is based on two universal patterns which can be observed worldwide: the greater the distance from the city center, the lower the density, and the larger the city, the more distance is needed for the population density to decrease [26]. The population distribution is achieved by using equation (1). It determines a population density factor \(p\) for each grid cell based on its distance \(d\) from the center grid cell in relation to the maximum distance \(d_{\max}\) between the center and the outer grid cell. The factor \(x\) indicates the ratio between the population density in the city center and in the outer grid cell, while the value \(k\) is a reference for the population density in the outer grid cell of the city and is used to shape the progression of the equation.
\[p(d)=e^{\left(\ln(x\cdot k)-\ln(k)\right)\cdot\frac{d_{\max}-d}{d_{\max}}+\ln(k)} \tag{1}\]
To determine the population density for each grid cell, the total population of the city is distributed among the grid cells in proportion to the size of the individual density factor \(p\).
For all cities examined, it is assumed that the population density decreases by a factor of _x=10_ from the center to the edge of the city, with the reference value _k=2_ (**FIG 3**). Very similar developments can be observed in different cities around the world. This assumption is best expressed in North American, East Asian, and Pacific cities with populations larger than 10 million, while the factor varies with population size and world region [26].
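As an illustration, the following sketch (not part of the original model implementation; the grid cell size and the Hamburg-like input values are assumptions chosen only for the example) builds the circular grid and distributes a population following the interpolation in equation (1) with _x=10_ and _k=2_:

```python
import numpy as np

def population_grid(total_population, city_area_km2, cell_size_km=2.0, x=10.0, k=2.0):
    """Distribute a city's population over a circular grid of square cells.

    The city is approximated by a circle of equal area; each cell receives a
    density factor p(d) decaying from x*k in the centre to k at the edge
    (equation (1)), and the total population is shared in proportion to p.
    """
    radius = np.sqrt(city_area_km2 / np.pi)        # radius of the equivalent circle
    n = int(np.ceil(radius / cell_size_km))        # cells per half-axis
    coords = np.arange(-n, n + 1) * cell_size_km   # cell-centre coordinates
    xx, yy = np.meshgrid(coords, coords)
    dist = np.sqrt(xx**2 + yy**2)                  # distance of each cell centre
    inside = dist <= radius                        # keep only cells within the circle
    d, d_max = dist[inside], dist[inside].max()
    # equation (1): log-linear interpolation between ln(x*k) at d=0 and ln(k) at d=d_max
    p = np.exp((np.log(x * k) - np.log(k)) * (d_max - d) / d_max + np.log(k))
    population = total_population * p / p.sum()    # share the population proportionally
    return xx[inside], yy[inside], population

# Example: a Hamburg-like city, ~1.85 million inhabitants on ~750 sq. km
cx, cy, pop = population_grid(1_850_000, 750.0)
print(len(pop), "cells;", int(pop.max()), "people in the densest cell")
```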
### Transport demand
Third, a trip table (OD matrix) is constructed, indicating the number of trips between each pair of grid cells. For this purpose, an average daily trip rate per person is used to determine the number of trips originating from a cell. It is assumed that each person makes on average three trips per day, with no distinction made by trip purpose, household income, sex or age. The assumption is based on the "Mobility in Germany" report [27], although this figure may change in relation to socio-demographic characteristics in different countries and cities [28, 29].
Then the resulting trips are distributed to all other cells by using an empirical trip length distribution which is based on GPS car movement data from the U.S. metropolitan region of Dallas, TX [30], shown in **FIG 4**. Only trips that start and end within the city limits are considered in the examination. Trips that go beyond the city limits are out of scope. The diagram reveals that 99 percent of the total population's trips are made up to a distance of 100 km. This entails the characteristic that trips in cities with smaller areas are not completely covered within the city boundaries, but additionally go beyond them. For example, in a city with a maximum distance of 40 km within the urban area, almost 82% of trips are made inside the city boundaries and 18% of trips go beyond them.
Since there are only discrete distances due to the grid cell structure, a special procedure is developed to distribute the trips. Based on the GPS data, equation (2) is determined, which calculates the proportion of trips from the cities' total number of trips as a function of the discrete distances \(x_{i}\)
\[y(x_{i})=0.2051*\log(x_{i})+0.0592 \tag{2}\]
Figure 1: Concept of the city-centric forecasting approach.
Figure 3: Population density factor versus distance from the city center
Figure 2: From original city shape to generic shape – Example of the City of Hamburg, Germany
Due to the symmetric shape of the generic city and the arrangement of grid cells, single discrete distances occur between multiple pairs of grid cells. Therefore, the number of total trips per discrete distance must be divided among the corresponding pairs of grid cells. Once this procedure is completed, a predefined percentage of outbound trips is determined for each grid cell. Trips remain within a grid cell for a discrete distance equal to half the length of a grid cell edge. This predefined percentage of outbound trips is then adapted according to the population of the destination grid cell. For equal discrete distances between two or more pairs of cells with the same origin cell, the population size in the destination cell affects how high the trip number is on each pair of cells. In this way, the attractiveness of grid cells is highlighted and considered by grid cell specific characteristics, which in this case is expressed by the size of the population. This allocation represents the final step of the trip distribution calculation that leads to the final OD matrix.
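A minimal sketch of this trip generation and distribution step is given below. It assumes cell coordinates and populations like those returned by the grid sketch above, reads equation (2) with the natural logarithm (consistent with roughly 99 % of trips falling below 100 km), and implements the proportional split between equidistant destination cells as described; these implementation details are the author of this sketch's reading of the procedure, not the original code:

```python
import numpy as np

def trip_share_upto(dist_km):
    """Cumulative share of trips up to a given distance, equation (2) with natural log."""
    return np.clip(0.2051 * np.log(dist_km) + 0.0592, 0.0, 1.0)

def od_matrix(cx, cy, pop, trips_per_person=3.0, cell_size_km=2.0):
    """Distribute each cell's generated trips over destination cells.

    The share per discrete distance follows the empirical trip-length curve;
    within one distance it is split in proportion to the destination population.
    Shares beyond the largest intra-city distance are left out (trips crossing
    the city boundary are out of scope).
    """
    n = len(pop)
    xy = np.stack([cx, cy], axis=1)
    dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    dist[np.eye(n, dtype=bool)] = cell_size_km / 2.0   # intra-cell trips: half a cell edge
    od = np.zeros((n, n))
    for i in range(n):
        origin_trips = trips_per_person * pop[i]
        dr = np.round(dist[i], 6)
        levels = np.unique(dr)                          # discrete distances from cell i
        shares = np.diff(np.concatenate([[0.0], trip_share_upto(levels)]))
        for d, s in zip(levels, shares):
            mask = dr == d
            weights = pop[mask] / pop[mask].sum()       # attractiveness = population
            od[i, mask] = origin_trips * s * weights
    return od
```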
### Transport options
The fourth step is to create the transport options. It is assumed that in addition to the air taxi, there is an alternative mode of ground transportation (AMT) that represents the mode that is currently being used (**FIG 5**). For each OD pair, travel times and travel cost are determined for both modes of transportation.
The alternate mode trip consists of a direct connection between origin and destination. The associated travel time is calculated based on the linear distance between origin and destination and assuming a constant average speed of 18 km per hour [31]. The monetary cost of an alternate mode trip is calculated by using a price per km and a detour of 20 percent. The price per km varies from country to country depending on many factors, such as market prices for the vehicle, maintenance and insurance, vehicle age, energy consumption or the structure of charges and taxes [32]. Costs are determined based on results of the EU-funded project COMPETE (**FIG 6**), which analyzed the average operating costs per plm by car in the EU and the USA taking into account key macroeconomic indicators such as information on national fleet structure, average fuel consumption, GDP per capita adjusted for purchasing power, national interest rates and different degrees of liberalization [32]. As the analysis was already carried out in 2014, the operating costs were adjusted to current market conditions [33].
The costs per km of the alternate mode are calculated by:
\[Cost_{samt}=\big{(}6*10^{-6} \tag{3}\]
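A small helper along these lines can then be evaluated per OD pair; the 0.30 €/km rate in the example is a placeholder assumption, since the country-specific rate comes from the fitted relation in equation (3) and **FIG 6**:

```python
def alternate_mode(distance_km, cost_per_km_eur, avg_speed_kmh=18.0, detour_factor=1.2):
    """Door-to-door time (h) and cost (EUR) of the ground alternative.

    Travel time uses the straight-line distance at a constant 18 km/h; the cost
    applies a per-km rate to the distance including a 20 % detour.
    """
    travel_time_h = distance_km / avg_speed_kmh
    cost_eur = cost_per_km_eur * distance_km * detour_factor
    return travel_time_h, cost_eur

# e.g. a 12 km trip at an assumed 0.30 EUR/km
print(alternate_mode(12.0, 0.30))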
The air taxi trip consists of three segments: pre-carriage, air taxi flight, and onward carriage. In order to model air taxi trips, first vertiports are evenly distributed by placing them using the sunflower algorithm (**FIG 7**) [34].
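One common formulation of the sunflower (Vogel spiral) placement is sketched below; the 40 vertiports and the 15 km city radius are example values only, not figures taken from the study:

```python
import numpy as np

def sunflower_vertiports(n_vertiports, city_radius_km):
    """Spread vertiports evenly over the circular city using a sunflower (Vogel) spiral."""
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))
    i = np.arange(1, n_vertiports + 1)
    r = city_radius_km * np.sqrt((i - 0.5) / n_vertiports)  # sqrt radius -> uniform density
    theta = i * golden_angle
    return r * np.cos(theta), r * np.sin(theta)

vx, vy = sunflower_vertiports(40, 15.0)  # e.g. 40 vertiports in a 15 km radius city
```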
The number of vertiports to be placed depends on the city area and a prescribed vertiport density \(vd_{sp}\). Vertiport density varies from city to city, dependent on the city area and GDP per capita, and is determined for each city individually. It is plausible that cities with lower levels of wealth have difficulty building high-quality transportation systems, resulting in lower vertiport density. Additionally,
Figure 4: Empirical trip length distribution of the U.S. metropolitan region of Dallas, TX
Figure 5: Itineraries for air taxi and for alternative mode of transportation
Figure 6: Average costs per plm of the alternate mode versus GDP per capita
Figure 7: Schematic city with 253 grid cells and 40 vertiports
smaller cities typically have shorter distances to be covered, thus lower vertiport density is expected.
For this purpose, a reference vertiport density \(vd_{ref}\) is assumed, which can be understood as a target value and is valid for cities with an area larger than 3000 sq. km and a GDP per capita larger than that of the United States. Cities where the area and GDP per capita are greater or equal to the reference values are assigned the vertiport density of the reference city. For all other cities, the vertiport density is scaled down by using a scaling factor for the area (SF\({}_{\text{area}}\), **FIG 8**) and for the GDP per capita (SF\({}_{\text{GDP}}\), **FIG 9**), considering the different city characteristics. The scaling functions are designed in a way that a small deviation in GDP per capita has a significantly larger impact on the scaling factor than the area of the city, which only has an influence once the city has less than around 1/5 of the area of the reference city. Depending on how the urban area and GDP per capita of the city under consideration differ from the values for the reference city, the reference vertiport density is adjusted, resulting in the city-specific vertiport density \(vd_{sp}\):
\[vd_{sp}=vd_{ref}*SF_{area}*SF_{GDP} \tag{4}\]
Therefore, \(VTT_{\textit{city}}\) is calculated as a function of the GDP per capita:
\[VTT_{\textit{city}}=0.0003*\textit{GDP per capita}_{\textit{country}}-0.3404 \tag{8}\]
Equation (8) is based on an extensive meta-analysis by Wardman et al. (2016) [38] that considered 3109 monetary valuations from 389 European studies conducted between 1963 and 2011, shown in **FIG 10**.
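The mode choice step uses a simplified multinomial logit on travel time and cost (see the conclusion). The sketch below combines equation (8) with a simple binary logit split; interpreting the VTT in € per hour, the generalized-cost utility specification and the logit scale parameter are illustrative assumptions made here, not values taken from the model description:

```python
import math

def value_of_travel_time(gdp_per_capita_eur):
    """Value of travel time from equation (8); interpreted here as EUR per hour (assumption)."""
    return 0.0003 * gdp_per_capita_eur - 0.3404

def air_taxi_share(time_taxi_h, cost_taxi_eur, time_amt_h, cost_amt_eur,
                   vtt_eur_per_h, scale=1.0):
    """Binary logit split between air taxi and the ground alternative.

    Utility = -(cost + VTT * time); `scale` is an assumed logit scale parameter.
    """
    u_taxi = -(cost_taxi_eur + vtt_eur_per_h * time_taxi_h)
    u_amt = -(cost_amt_eur + vtt_eur_per_h * time_amt_h)
    return 1.0 / (1.0 + math.exp(-scale * (u_taxi - u_amt)))

vtt = value_of_travel_time(50_000)            # ~14.7 for a GDP per capita of 50,000
print(vtt, air_taxi_share(0.4, 45.0, 1.2, 8.0, vtt))
```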
Finally, the total air taxi demand is obtained by multiplying the air taxi modal split of each OD pair by its respective trip demand and then summing. In addition to air taxi demand, total air taxi movements and fleet size are derived. To calculate both air taxi movements and fleet size, the number of seats per aircraft, the seat load factor (SLF), and the air taxi utilization per hour are taken into account, assuming that the aircraft have four seats, the SLF is 0.5, and the utilization is 0.33 per hour.
The total number of air taxi movements per city is calculated by:
\[movements=\frac{\sum\textit{air taxi trips}_{\textit{city}}}{\left(\textit{seats per aircraft}\right)*\textit{SLF}} \tag{9}\]
The fleet size per city is calculated by:
\[\textit{fleet size}=\frac{\sum\textit{air taxi flight time p.d.}_{\textit{city}}}{\left(\textit{seats per aircraft}\right)*\textit{SLF}*\textit{utilization p.h.}} \tag{10}\]
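Equations (9) and (10) can be applied per city as in the following sketch; the example inputs are illustrative only, while the seat, load-factor and utilization values are those stated above:

```python
def movements_and_fleet(daily_air_taxi_trips, daily_flight_time_h,
                        seats_per_aircraft=4, slf=0.5, utilization_per_h=0.33):
    """Aircraft movements and fleet size per city, following equations (9) and (10)."""
    movements = daily_air_taxi_trips / (seats_per_aircraft * slf)
    fleet_size = daily_flight_time_h / (seats_per_aircraft * slf * utilization_per_h)
    return movements, fleet_size

# e.g. 50,000 daily passenger trips with 12,500 passenger flight hours
print(movements_and_fleet(50_000, 12_500))
```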
In conclusion, all city-specific results are summed up to estimate the global demand for UAM, the global number of air taxi movements and the global fleet size.
## 4 Results
The methodology described above was applied to 990 cities worldwide with populations greater than 500,000 inhabitants. In the first step, sensitivity analyses were performed to better understand the dependencies and effects of UAM-specific model parameters. In a second step, different market scenarios for global UAM demand, aircraft movements and fleet size are outlined.
### Sensitivity analyses
In the sensitivity analysis, the effects of two crucial factors, the air taxi ticket price per km and the vertiport density per sq. km, are evaluated [39]. The price per air taxi km is directly related to the customers' willingness to pay and, together with travel time, is a key factor influencing the choice of transportation mode. The density of vertiports affects the time needed for access and egress and therefore has a significant impact on the total travel time.
**FIG 11** shows the global demand for UAM as a function of air taxi ticket price at constant vertiport density. At a ticket price of 2.50 € per km, demand is highest for each given vertiport density. An increase in price leads to a decrease in demand. In this case, demand decreases very steeply at first and then more gradually. This curve progression is similar for all vertiport densities. While at the lower vertiport density the demand is practically zero at an air taxi ticket price of about 3.50 € per km, at the higher vertiport density there is still demand up to a price of 4.50 € per km. This is due to the fact that at higher vertiport densities the times for pre-carriage and on-carriage are lower, making the air taxi attractive to a larger share of traffic demand even at higher prices.
### Market Development
The city-centric forecasting approach is used to outline different possible development paths of UAM until the year 2050. Four market development scenarios (S1-S4) are considered with different assumptions regarding the development of vertiport density and air taxi ticket price over time. External market conditions such as population growth and wealth development are identical throughout the scenarios.
The air taxi ticket price affects the affordability and the vertiport density determines the accessibility of UAM services, both important aspects of user acceptance. By defining a high and low vertiport density evolution and an optimistic and conservative price evolution, four market scenarios are elaborated (Table 2).
The assumptions regarding air taxi ticket prices per km are based on studies by Pertz et al. (2023) [40]. Using a cost and revenue model for inner-city air taxi services, they found that under favorable conditions, an air taxi fare of 4.10 €/km is required to operate profitably. Under less favorable conditions, an air taxi fare of 5.70 €/km is required to ensure sound profitability. These values are used as a baseline for the year 2030. For further market development it is assumed that these prices decrease linearly by 1/3 until 2050, shown in **FIG 13**.
As the vertiport density of a city is linked to the reference vertiport density (section 3), development paths are assumed for the reference vertiport density (**FIG 14**). For the scenarios with high vertiport density, it is assumed that the reference density of 0.002 vertiports per sq. km in 2030 will increase to 0.02 vertiports per sq. km in 2050. According to Maykonda, M., et al. (2020) [14], this corresponds to an average access and egress distance of 9 and 3 km, respectively. For the scenarios with low vertiport density, the reference vertiport density increases from 0.001 vertiports per sq. km to 0.01 vertiports per sq. km in the same period. This is equivalent to an average access and egress distance of 12 and 5 km, respectively. Thus, the potential development of vertiport density is significantly lower in the second development path.
The evolution of the daily UAM demand, the number of movements and the corresponding fleet size for the scenarios are shown in **FIG 15**, **FIG 16** and **FIG 17**. They are similar for all four scenarios, but at different levels. Initially, the market grows very slowly in all scenarios, so that there are hardly any significant differences in the results up to 2035. From 2040 onwards, market growth increases, where scenario 1 stands out slightly from the other scenarios. From 2045 onwards, the divergence between the scenarios becomes more pronounced. Market growth is stronger in scenario 1 and 4, which are characterized by optimistic air taxi ticket prices, whereas markets develop only moderately in scenarios 2 and 3 with conservative air taxi ticket prices.
\begin{table}
\begin{tabular}{c c c} \hline \hline & **Vertiport Density** & **Air Taxi Prices** \\ \hline
**Scenario 1** & High & Optimistic \\
**Scenario 2** & Low & Conservative \\
**Scenario 3** & High & Conservative \\
**Scenario 4** & Low & Optimistic \\ \hline \hline \end{tabular}
\end{table}
Table 2: Vertiport density and air taxi ticket prices for the four scenarios
Figure 14: Development of vertiport density over time
Figure 13: Development of air taxi ticket prices over time
\begin{table}
\begin{tabular}{c c c c} \hline \hline & \multicolumn{3}{c}{**Vertiport density / sq. km**} \\ \hline **Air taxi ticket price (€/km)** & **0.01** & **0.02** & **0.04** \\ \hline 2.50 & 34,178,038 & 57,736,491 & 93,253,541 \\ 3.00 & 3,669,134 & 9,020,090 & 21,070,562 \\ 3.50 & 988,826 & 3,193,060 & 9,392,581 \\ 4.00 & 318,304 & 1,327,331 & 4,791,927 \\ 4.50 & 108,973 & 587,926 & 2,585,372 \\ 5.00 & 38,912 & 270,569 & 1,441,318 \\ 5.50 & 14,342 & 127,540 & 820,061 \\ 6.00 & 5,415 & 61,105 & 473,132 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Number of daily UAM trips for different air taxi ticket prices and vertiport densities
In 2050, the daily UAM demand is about 19 million passengers in Scenario 1. This demand is accompanied by about 9.7 million aircraft movements and a fleet size of about 1 million vehicles. In Scenario 4 daily UAM demand is about 9 million passengers in 2050, accompanied by about 4.5 million aircraft movements and a fleet size of about half a million vehicles.
In Scenarios 2 and 3, on the other hand, the market size is comparatively low by 2050. For Scenario 2 the daily UAM demand is about 400,000 passengers, with about 200,000 aircraft movements, and a fleet size of about 10,000 vehicles. By 2050, for Scenario 3 the daily UAM demand is about 1.9 million passengers, with about 900,000 aircraft movements and a fleet size of about 50,000 vehicles.
Furthermore, the results show that there is potential for UAM in only a subset of the 990 cities under consideration, where the demand is high enough and a sufficiently large vertiport network can be set up. In 2050, the number of cities where UAM services are conceivable ranges from 135 in scenario 2 to 222 in scenario 1. Among these cities are international metropolitan regions such as London, Tokyo, or New York but also major German regions like the Rhine-Ruhr region, Berlin, Munich or Hamburg.
It should be noted that the results are highly dependent on the assumptions regarding the development paths for vertiport density and air taxi ticket prices.
## 5 Conclusion
This paper proposes a forecasting methodology that provides initial estimates of potential global UAM demand for intra-city air taxi services. The concept is based on a city-centric approach that uses a limited number of parameters to estimate the total transportation demand for each city. A simplified multinomial logit model is used to determine the probability that travelers will choose air taxi for their individual trips within a city, using travel time and travel costs of each mode as input parameters. Based on the resulting UAM demand, cities with potential for UAM services can be identified. By summing all city-specific results, an estimate of the global demand for UAM is obtained. By varying the main characteristics of the UAM transportation system, sensitivity studies can be conducted as well as market development scenarios can be analyzed.
Sensitivity analyses were conducted to investigate the impact of vertiport density in the range of 0.01 vertiports per sq. km to 0.04 vertiports per sq. km as well as ticket prices between 2.50 € and 6.00 € per km on UAM demand. As expected, UAM demand is highest when air taxi ticket prices are low. However, it is remarkable how strongly demand declines as the air taxi ticket price increases. UAM demand drops to a very low level when the air taxi ticket price is above about 4.00 € per km. On the other hand, demand for UAM rises with an increase in vertiport density. This implies that in order to boost the demand for UAM, either the prices should be lowered or the vertiport density should be increased. However, more vertiports usually mean higher costs and are also problematic in terms of non-user acceptance and land-use. More vertiports only make sense if they generate significantly more UAM demand, so that the costs for the additional vertiports are exceeded by the additional revenues, which needs to be further investigated in the context of a holistic cost and revenue analysis.
Considering different development paths for air taxi ticket prices and vertiport densities, four potential market development scenarios were outlined. The results show that a significant increase in UAM demand is not expected by 2040, regardless of the level of air taxi ticket prices. If air taxi ticket prices are low, demand may increase significantly
Figure 16: Global UAM movements for the various scenarios
Figure 17: Global UAM fleet size for the various scenarios
Figure 15: Global UAM transport demand for the various scenarios
by 2050, creating a mass market. However, if prices remain high, UAM demand in 2050 is likely to remain at the low level of a niche market. The results indicate that a low air taxi ticket price is more important than a high vertiport density for high demand. In the best-case scenario, a low air taxi ticket price and a high density of vertiports could result in a market potential for UAM of 19 million daily trips in over 200 cities worldwide by 2050, with a focus on North America, Europe, and Eastern Asia.
In addition, it can be concluded that the market scenarios outlined could be problematic for market introduction and require "staying power" on the part of manufacturers and operators, as the market development is characterized by low market growth in the initial phase and strong market growth thereafter. It is important to note, however, that the results shown depend on assumptions about the development paths of air taxi ticket prices and vertiport density. Lower air taxi ticket prices at the beginning of market introduction and a rapid decline in prices are conducive to market development.
In summary, the study highlights the critical role of low ticket prices and the importance of high vertiport density for fast access and egress to the UAM system to increase UAM demand. Comparing the results of this study with the findings of Pertz et al. (2023) [40], there is currently a dissonance between the air taxi ticket prices of at least 4.00 Euro per km required for profitable operations and those needed to generate high UAM demand.
This underscores the need to carefully optimize system components to minimize costs and maximize the quality of UAM services. Such an approach would contribute to the economic viability and successful deployment of UAM systems.
In conclusion, it can be stated that as long as UAM is still in the development stage, there are many uncertainties in forecasting global UAM demand. In addition, trying to make a global forecast with limited resources involves a high degree of abstraction. The forecasting approach leaves much room for further research and improvement of the proposed methodology. This includes a critical evaluation of the simplification of real urban structures to circular cities. Furthermore, the number of alternative transportation modes should be extended to distinguish between public and private transport. In this context, the parameters for the mode choice model should also be reviewed and improved. In this study, the same values were often assumed for external factors such as the number of trips per person, the distribution of trip distances, or travel speeds for different cities. Further adjustment of the data to specific city characteristics should be considered. Last but not least, the assumptions for the development of the UAM system components should be improved and integrated into the method.
The flexible design of the forecasting methodology permits the specification of all parameters, with new findings taken into account, in order to enhance long-term demand estimation for UAM and analysis of market potential across global urban areas.
## Competing Interests
Co-Author Henry Pak is also guest editor for the special issue on the HorizonUAM project but has not been involved in the review of this manuscript.
|
2309.13250 | Runs in Random Sequences over Ordered Sets | We determine the distributions of lengths of runs in random sequences of
elements from a totally ordered set (total order) or partially ordered set
(partial order). In particular, we produce novel formulae for the expected
value, variance, and probability generating function (PGF) of such lengths in
the case of an arbitrary total order. Our focus is on the case of distributions
with both atoms and diffuse (absolutely or singularly continuous) mass which
has not been addressed in this generality before. We also provide a method of
calculating the PGF of run lengths for countably series-parallel partial
orders. Additionally, we prove a strong law of large numbers for the
distribution of run lengths in a particular realization of an infinite
sequence. | Tanner Reese | 2023-09-23T04:18:58Z | http://arxiv.org/abs/2309.13250v3 | # Plunges in Sequences of Random Ordered Variables
###### Abstract
We determine the lengths of consecutive descents (plunges) and consecutive ascents (climbs) in sequences of random elements from a partial or total order. In particular, we derive formulas for the expected value, variance, and probability generating function of such lengths in the case of total orders. To do this, we define novel generating functions associated with a measure on a partial order which can be calculated by breaking orders into pieces.
## 1 Introduction
Imagine you are playing a game of darts and each of your throws is closer to the center than the last. Or when rolling a die, each of your rolls is lower than the last. One might wonder how long such a pattern would persist for. We can formalize this as follows. Suppose \(\{X_{i}\}_{i=0}^{\infty}\) is a sequence of independently and identically distributed random variables. When looking at a given index \(i\), there will almost surely be some \(n\) such that \(X_{i}\geq\ldots\geq X_{i+n}\not\geq X_{i+n+1}\). Then we say \(n\) is the _plunge length_ at \(i\). Similarly if \(X_{i}\leq\ldots\leq X_{i+n}\not\leq X_{i+n+1}\) then we say \(n\) is the _climb length_ at \(i\).
While there has been significant investigation into the behavior and number of ascents, descents, and records in random sequences, it does not appear that the case of consecutive ascents has been considered. The asymptotic theory of order statistics has been well studied with classical results by Renyi [8] and Gnedenko [4]. An overview of this theory can be found in Galambos's book [3]. Though the theory of records and order statistics is closely related to the problem of plunges, here we will be considering arbitrary measures on orders (in some cases partial) as opposed to only real random variables. This requires the use of different methods. Interestingly the concept of consecutive ascents and descents has been addressed in the context of permutations. In particular, the question of how many permutations of \(n\) elements contain exactly \(k\) runs (consecutive ascents or descents) has been examined. Chen and Fu use a grammatical calculus to compute generating polynomials for the values of interest [2]. This is based on prior work using derivatives of polynomials to address a similar question by Ma [5]. Of course, the current work differs from these by looking at consecutive ascents and descents in a probabilistic context.
For any random sequence, the plunge length at index \(0\) will be a non-negative integer random variable \(N\) which is almost surely finite. If we assume our sequence is uniformly distributed on \((0,1)\) then \(\mathbb{E}\left[N\right]=e-2\) and if we start at \(x\in(0,1)\) then \(\mathbb{E}\left[N\mid X_{0}=x\right]=e^{x}-1\). Further one may ask what the variance of the plunge length would be. For a uniform distribution, we have \(\operatorname{var}\left(N\right)=e(3-e)\) and \(\operatorname{var}\left(N\mid X_{0}=x\right)=e^{x}-e^{2x}+2xe^{x}\). One might ask how the results change for different distributions or for a partial order. It happens that every diffuse (lacking atoms) distribution will produce the same results as for \(U(0,1)\). For example, the dart board question from above would exhibit the same behavior as \(U(0,1)\). However for a measure \(\mu\) containing atoms, the values become more difficult to calculate.
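The quoted uniform-distribution values are easy to check numerically; the following Monte Carlo sketch (an illustration, not part of the paper's argument) estimates the mean and variance of the plunge length for i.i.d. \(U(0,1)\) draws:

```python
import math
import random

def plunge_length(seq):
    """Non-strict plunge length at index 0: largest n with seq[0] >= ... >= seq[n]."""
    n = 0
    while n + 1 < len(seq) and seq[n] >= seq[n + 1]:
        n += 1
    return n

random.seed(0)
lengths = []
for _ in range(200_000):
    seq = [random.random() for _ in range(50)]   # 50 draws suffice: long plunges are very rare
    lengths.append(plunge_length(seq))

mean = sum(lengths) / len(lengths)
var = sum((x - mean) ** 2 for x in lengths) / len(lengths)
print(mean, math.e - 2)            # both close to 0.718
print(var, math.e * (3 - math.e))  # both close to 0.766
```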
Atoms also require us to be more precise about our definition of plunge length. We may require the sequence to be strictly decreasing
\[X_{i}>X_{i+1}>\ldots>X_{i+n}\ngtr X_{i+n+1}\]
then we say \(n\) is the _strict plunge length_ (or _strict climb length_ when strictly ascending). Alternatively we may simply require the sequence to be (non-strictly) decreasing
\[X_{i}\geq X_{i+1}\geq\ldots\geq X_{i+n}\not\geq X_{i+n+1}\]
then we say \(n\) is the _non-strict plunge length_ (or _non-strict climb length_ when (non-strictly) ascending). Note that it is important that we say non-strictly decreasing instead of non-increasing and use \(\ngtr\) and \(\not\geq\) instead of \(\leq\) and \(<\). We may be working in a partial order where a sequence could fail to increase because elements are incomparable.
We say that \(\vec{N}\) is the non-strict plunge length and \(\breve{N}\) is the strict plunge length at index \(0\). To aid in understanding these variables, we introduce several novel generating functions (primarily the plunge function \(\vec{P}_{\mu}(Z)\) and strict plunge function
\(\breve{P}_{\mu}(Z)\)). In section 3, we prove that the probability generating functions of \(\vec{N}\) and \(\breve{N}\) can be derived from \(\vec{P}_{\mu}(Z)\) and \(\breve{P}_{\mu}(Z)\), respectively. In particular, we find that the expected values and variances will be
\[\mathbb{E}\left[\vec{N}\right]=\vec{P}_{\mu}(1)-2\quad\text{ and }\quad\text{var}\left(\vec{N}\right)=\vec{P}_{\mu}(1)-\vec{P}_{\mu}(1)^{2}+2\vec{P}_{\mu}^{\prime}(1)\quad\text{ as well as}\]
\[\mathbb{E}\left[\breve{N}\right]=\breve{P}_{\mu}(1)-2\quad\text{ and }\quad\text{var}\left(\breve{N}\right)=\breve{P}_{\mu}(1)-\breve{P}_{\mu}(1)^{2}+2\breve{P}_{\mu}^{\prime}(1).\]
We demonstrate some convenient identities for the plunge functions of combinations of order measures in section 4. For example, we say the concatenation of measures \(\lambda\) and \(\mu\) on orders is the measure \(\lambda\|\,\mu\) obtained by "placing the mass" of \(\lambda\) below the mass of \(\mu\). Then \(\vec{P}_{\lambda\|\mu}(Z)=\vec{P}_{\lambda}(Z)\cdot\vec{P}_{\mu}(Z)\) and \(\breve{P}_{\lambda\|\mu}(Z)=\breve{P}_{\lambda}(Z)\cdot\breve{P}_{\mu}(Z)\). In section 5, we use this concatenation rule to prove that the plunge functions for any measure on a total order are
\[\vec{P}_{\mu}(z)=e^{m_{d}z}\prod_{\alpha\in\mathcal{A}}\frac{1}{1-m_{\alpha}z}\quad\text{ and }\quad\breve{P}_{\mu}(z)=e^{m_{d}z}\prod_{\alpha\in\mathcal{A}}(1+m_{\alpha}z)\]
where \(\mathcal{A}\) is the set of atoms of the measure \(\mu\), \(m_{\alpha}\) is the measure of the atom \(\alpha\), and \(m_{d}\) is the measure of the diffuse (non-atomic) portion of \(\mu\). This allows us to immediately calculate the expected values and variances from above for any measure on a total order. We then apply these results in section 6 to some practical examples of random sequences. Finally in section 7, we remark on the unusual property that these plunge functions and the plunge length random variables are insensitive to rearrangement of the order and measure, even though other common values based on the order and measure are sensitive to such rearrangements.
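As an illustration of these formulas, the sketch below evaluates the product expressions above for a measure with one atom of mass \(1/2\) and diffuse mass \(1/2\), and compares the resulting means and variances with a direct simulation; the specific mixture is an arbitrary example, and only the masses enter the formulas, in line with the rearrangement-insensitivity remarked on above:

```python
import math
import random

# Example measure on a total order: one atom of mass 0.5 (placed at the point 0.5),
# plus diffuse (uniform) mass 0.5 on (0, 1).
atom_masses, diffuse_mass = [0.5], 0.5

def plunge_fn(z, strict):
    """Plunge function of a total order measure from the product formulas above."""
    out = math.exp(diffuse_mass * z)
    for m in atom_masses:
        out *= (1.0 + m * z) if strict else 1.0 / (1.0 - m * z)
    return out

def mean_var(strict, h=1e-6):
    p1 = plunge_fn(1.0, strict)
    dp1 = (plunge_fn(1.0 + h, strict) - plunge_fn(1.0 - h, strict)) / (2 * h)  # numerical P'(1)
    return p1 - 2.0, p1 - p1 ** 2 + 2.0 * dp1

def draw():
    return 0.5 if random.random() < 0.5 else random.random()

def plunge_length(seq, strict):
    n = 0
    while n + 1 < len(seq):
        if not (seq[n] > seq[n + 1] if strict else seq[n] >= seq[n + 1]):
            break
        n += 1
    return n

random.seed(1)
runs = [[draw() for _ in range(60)] for _ in range(100_000)]
for strict in (False, True):
    lengths = [plunge_length(r, strict) for r in runs]
    m = sum(lengths) / len(lengths)
    v = sum((x - m) ** 2 for x in lengths) / len(lengths)
    print(strict, mean_var(strict), (m, v))   # formulas and simulation should roughly agree
```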
## 2 Preliminary Definitions and Notation
Throughout the text, we will refer to partial orders as simply **orders**. To avoid confusion with intervals \((a,b)\), we will use \(\langle a,b\rangle\) to denote pairs and tuples. Whenever \(\infty\) or \(-\infty\) are used in the context of intervals (e.g. \((-\infty,x)\) or \([x,\infty)\)), they are treated as formal upper and lower bounds on the order, not as elements of the order. Occasionally we will specify the order in which an interval is taken using a subscript (e.g. \([a,b]_{T}\subseteq T\)). For any order \(T\), we will use \(\tau_{T}\) to denote the order topology on \(T\) and \(\mathcal{B}_{T}\) to denote the Borel \(\sigma\)-algebra generated by \(\tau_{T}\). For a measure \(\mu\) on an order, we will use \(\|\mu\|\) to denote the total variation of \(\mu\). While there seem to be few works that focus on the measurable spaces of orders, there is significant work on their topological spaces (for example by Nachbin [7]).
**Definition 1**.: Suppose \(T\) is an order and \(x_{1},x_{2},\ldots\in T\). Then for each index \(i\geq 0\), we define the **(non-strict) plunge length** at \(i\) to be the unique \(n\in\mathbb{N}\cup\{\infty\}\) such that
\[x_{i}\geq x_{i+1}\geq\ldots\geq x_{i+n}\ngeq x_{i+n+1}.\]
We say \(n=\infty\) if \(x_{i}\geq x_{i+1}\geq\ldots\). We also define the **strict plunge length** at \(i\) to be the unique \(n\) such that
\[x_{i}>x_{i+1}>\ldots>x_{i+n}\ngtr x_{i+n+1}.\]
Similarly we say the **(non-strict) climb length** and **strict climb length** at \(i\) are the \(n\) such that
\[x_{i}\leq\ldots\leq x_{i+n}\nleq x_{i+n+1}\quad\text{ and }\quad x_{i}<\ldots<x_{i+n}\nless x_{i+n+1}\;,\,\text{respectively}.\]
For any order \(T\) and finite positive measure \(\mu\) defined on the Borel \(\sigma\)-algebra of \(T\), we say the pair \((T,\mu)\) is an **order measure**. Often we will use \(\mu\) to refer to the order measure since the domain of \(\mu\) determines \(T\). If \(\mu(T)=1\) then we will say \((T,\mu)\) is a **probability** order measure. Additionally if \(T\) is a total order then we say \((T,\mu)\) is a **total** order measure. Because of their atypical behavior in certain circumstances, we will say \((T,\mu)\) is **degenerate** if there exists \(x\in T\) such that \(\mu(\{x\})=\mu(T)\). Otherwise we say that \((T,\mu)\) is **non-degenerate**.
We say that \(x\in T\) is an **atom** of the order measure \((T,\mu)\) if \(\mu(\{x\})>0\). If \((T,\mu)\) has no atoms then we say it is **diffuse**. For an order measure \((T,\mu)\), we will use \(\mathcal{A}\) to denote its set of atoms. Then, for every \(\alpha\in\mathcal{A}\), we will say that \(m_{\alpha}:=\mu(\{\alpha\})\) is the **mass** of \(\alpha\). Notice that because \(\sum_{\alpha\in\mathcal{A}}m_{\alpha}=\mu(\mathcal{A})\leq\|\mu\|<\infty\), \(\mathcal{A}\) must be countable. Then \(T\setminus\mathcal{A}\) will be measurable and we can define the **diffuse mass** as \(m_{d}:=\mu(T\setminus\mathcal{A})\).
We say an order \(T\) is **well-behaved** when the sub-diagonal \(\{(x,y)\in T\times T:x\leq y\}\) is measurable in \(\mathcal{B}_{T}\otimes\mathcal{B}_{T}\). This condition is crucial for ensuring that the events we are interested in are measurable. It also happens to be the necessary and sufficient condition to define the plunge functions (see definition 3) on an order measure. For the remainder of the text, we will use "order" and "order measure" to refer to well-behaved orders and order measures.
**Definition 2**.: Suppose \((T,\mu)\) is an order measure with well-behaved \(T\). For every \(x\in T\), we recursively define
\[\vec{\ell}_{0}(x):=1\qquad\text{ and }\qquad\vec{\ell}_{n}(x):=\int_{(-\infty,x]}\vec{\ell}_{n-1}(t)\;d\mu(t)\ \text{ for }n\geq 1,\]
\[\breve{\ell}_{0}(x):=1\qquad\text{ and }\qquad\breve{\ell}_{n}(x):=\int_{(-\infty,x)}\breve{\ell}_{n-1}(t)\;d\mu(t)\ \text{ for }n\geq 1.\]
We further define the **plunge coefficients** \(\vec{L}_{0}:=\breve{L}_{0}:=1\) and, for \(n\geq 1\),
\[\vec{L}_{n}:=\int\vec{\ell}_{n-1}(t)\;d\mu(t)\qquad\text{ and }\qquad\breve{L}_{n}:=\int\breve{\ell}_{n-1}(t)\;d\mu(t).\]
The **conditional plunge functions** of \((T,\mu)\) are the generating functions \(\vec{P}_{\mu}(x;Z):=\sum_{n=0}^{\infty}\vec{\ell}_{n}(x)Z^{n}\) and \(\breve{P}_{\mu}(x;Z):=\sum_{n=0}^{\infty}\breve{\ell}_{n}(x)Z^{n}\), and the **plunge function** and **strict plunge function** are \(\vec{P}_{\mu}(Z):=\sum_{n=0}^{\infty}\vec{L}_{n}Z^{n}\) and \(\breve{P}_{\mu}(Z):=\sum_{n=0}^{\infty}\breve{L}_{n}Z^{n}\). Statements made without the accents apply equally to the non-strict and strict versions.
## 3 Predicting Plunge and Climb Lengths using Plunge Functions
Suppose \((T,\mu)\) is a probability order measure. Take \((\Omega,\mathcal{F},\mathbb{P})\) to be a probability space and let \(\{X_{i}\}_{i=0}^{\infty}\) be a random sequence of elements in \(T\) independently and identically distributed according to \(\mu\). That is, for any index \(i\) and any \(E\in\mathcal{B}_{T}\), we have \(\{X_{i}\in E\}\in\mathcal{F}\) and \(\mathbb{P}\left(X_{i}\in E\right)=\mu(E)\). For each \(i\geq 0\), we can define the random variables \(\vec{N}_{i}\) and \(\breve{N}_{i}\) to be respectively the non-strict and strict plunge lengths at \(i\) in the sequence \(\{X_{i}\}\). In this text, we will be primarily interested in the distributions of \(\vec{N}_{i}\) and \(\breve{N}_{i}\), particularly their expected values and variances. In this section, we will use \((S,\lambda)\) to refer to an arbitrary order measure that may or may not be a probability order measure.
For any index \(i\), if we consider the climb length at \(i\) then using symmetry
\[\mathbb{P}\left(\text{The climb length at index $i$ is $n$}\right)=\mathbb{P}\left(X_{i}\leq\ldots\leq X_{i+n}\nleq X_{i+n+1}\right)=\mathbb{P}\left(X_{i}\leq\ldots\leq X_{i+n}\right)-\mathbb{P}\left(X_{i}\leq\ldots\leq X_{i+n}\leq X_{i+n+1}\right)=\mathbb{P}\left(X_{i}\geq\ldots\geq X_{i+n}\right)-\mathbb{P}\left(X_{i}\geq\ldots\geq X_{i+n+1}\right)=\mathbb{P}\left(\vec{N}_{i}=n\right).\]
So the climb length and strict climb length will follow the same distributions as the plunge length and strict plunge length, respectively. Hence it is sufficient to simply examine \(\vec{N}_{i}\) and \(\breve{N}_{i}\).
**Lemma 1**.: _For any order measure \((S,\lambda)\), any \(n\geq 1\), and any \(x\in S\), the following sets are measurable and their measures are_
\[\lambda^{n}(\{\langle a_{1},\ldots,a_{n}\rangle\in S^{n}\,:\,a_{1}\leq\ldots\leq a_{n}\leq x\})=\vec{\ell}_{n}(x)\qquad\lambda^{n}(\{\langle a_{1},\ldots,a_{n}\rangle\in S^{n}\,:\,a_{1}<\ldots<a_{n}<x\})=\breve{\ell}_{n}(x)\]
_and for any \(n\geq 2\),_
\[\lambda^{n}(\{\langle a_{1},\ldots,a_{n}\rangle\in S^{n}\,:\,a_{1}\leq\ldots\leq a_{n}\})=\vec{L}_{n}\qquad\lambda^{n}(\{\langle a_{1},\ldots,a_{n}\rangle\in S^{n}\,:\,a_{1}<\ldots<a_{n}\})=\breve{L}_{n}.\]
Proof.: To simplify notation, we will elide \(\langle a_{1},\ldots,a_{n}\rangle\in S^{n}\). We will prove the first two equalities by induction on \(n\geq 1\). For \(n=1\), we have
\[\lambda^{n}(\{a_{1}\leq x\})=\lambda((-\infty,x])=\int_{(-\infty,x]}1\;d\lambda(t)=\vec{\ell}_{1}(x)\] \[\lambda^{n}(\{a_{1}<x\})=\lambda((-\infty,x))=\int_{(-\infty,x)}1\;d\lambda(t)=\breve{\ell}_{1}(x).\]
Then for \(n\geq 1\), we can write the measure of the set as the integral of its slices
\[\lambda^{n}(\{a_{1}\leq\ldots\leq a_{n}\leq x\})=\int\lambda^{n-1}(\{a_{1}\leq\ldots\leq a_{n}\leq x\})\;d\lambda(a_{n})=\int_{(-\infty,x]}\lambda^{n-1}(\{a_{1}\leq\ldots\leq a_{n-1}\leq t\})\;d\lambda(t)=\int_{(-\infty,x]}\vec{\ell}_{n-1}(t)\,d\lambda(t)=\vec{\ell}_{n}(x)\]
and similarly for \(\breve{\ell}_{n}(x)\). This completes the induction. Next we consider the plunge coefficients. For \(n\geq 2\), using slices again
\[\lambda^{n}(\{a_{1}\leq\ldots\leq a_{n}\})=\int\lambda^{n-1}(\{a_{1}\leq\ldots\leq a_{n}\})\;d\lambda(a_{n})=\int\lambda^{n-1}(\{a_{1}\leq\ldots\leq a_{n-1}\leq t\})\;d\lambda(t)=\int\vec{\ell}_{n-1}(t)\;d\lambda(t)=\vec{L}_{n}\]
with a similar argument applying for \(\breve{L}_{n}\).
**Corollary 2**.: _For any \(i\geq 0\), \(n\geq 1\), and \(x\in T\),_
\[\mathbb{P}\left(x\geq X_{i}\geq\ldots\geq X_{i+n}\right)=\vec{\ell}_{n+1}(x)\qquad\mathbb{P}\left(x>X_{i}>\ldots>X_{i+n}\right)=\breve{\ell}_{n+1}(x)\]
\[\mathbb{P}\left(X_{i}\geq\ldots\geq X_{i+n}\right)=\mathbb{P}\left(X_{i}\leq\ldots\leq X_{i+n}\right)=\vec{L}_{n+1}\qquad\mathbb{P}\left(X_{i}>\ldots>X_{i+n}\right)=\mathbb{P}\left(X_{i}<\ldots<X_{i+n}\right)=\breve{L}_{n+1}.\]
Proof.: For any \(n\geq 1\), we define \(\vec{A}_{n+1},\vec{A}_{n+1}(x),\vec{A}_{n+1},\vec{A}_{n+1}(x)\subseteq T^{n+1}\) as
\[\vec{A}_{n+1}:=\{a_{0}\leq\ldots\leq a_{n}\}\qquad\vec{A}_{n+1}(x):=\{a_{0}\leq \ldots\leq a_{n}\leq x\}\]
\[\breve{A}_{n+1}:=\{a_{0}<\ldots<a_{n}\}\qquad\breve{A}_{n+1}(x):=\{a_{0}< \ldots<a_{n}<x\}.\]
Using lemma 1, for all \(x\in T\),
\[\mathbb{P}\left(x\geq X_{i}\geq\ldots\geq X_{i+n}\right)=\mathbb{P}\left( \left\{\omega\in\Omega\,:\,\langle X_{i+n}(\omega),\ldots,X_{i}(\omega)\rangle \in\vec{A}_{n+1}(x)\right\}\right)=\mu^{n+1}\left(\vec{A}_{n+1}(x)\right)= \vec{\ell}_{n+1}(x)\]
\[\mathbb{P}\left(X_{i}\geq\ldots\geq X_{i+n}\right)=\mathbb{P}\left(\left\{ \omega\in\Omega\,:\,\langle X_{i+n}(\omega),\ldots,X_{i}(\omega)\rangle\in \vec{A}_{n+1}\right\}\right)=\mu^{n+1}\left(\vec{A}_{n+1}\right)=\vec{L}_{n+1}\]
and similarly \(\mathbb{P}\left(x>X_{i}>\ldots>X_{i+n}\right)=\mu^{n+1}\left(\breve{A}_{n+1}(x)\right)=\breve{\ell}_{n+1}(x)\) and \(\mathbb{P}\left(X_{i}>\ldots>X_{i+n}\right)=\mu^{n+1}\left(\breve{A}_{n+1}\right)=\breve{L}_{n+1}\). Then, since the \(\{X_{i}\}\) are identically distributed, we get the identities for the ascending cases.
**Proposition 3**.: _For any \(i\geq 0\), \(n\geq 0\), and \(x\in T\),_
\[\mathbb{P}\left(\vec{N}_{i}=n\right)=\vec{L}_{n+1}-\vec{L}_{n+2}\qquad\mathbb{P}\left(\vec{N}_{i}=n\,\Big{|}\,X_{i}=x\right)=\vec{\ell}_{n}(x)-\vec{\ell}_{n+1}(x)\]
\[\mathbb{P}\left(\breve{N}_{i}=n\right)=\breve{L}_{n+1}-\breve{L}_{n+2}\qquad\mathbb{P}\left(\breve{N}_{i}=n\,\Big{|}\,X_{i}=x\right)=\breve{\ell}_{n}(x)-\breve{\ell}_{n+1}(x).\]
_Additionally if \(\vec{N}_{i}\) and \(\breve{N}_{i}\) are almost surely finite then their probability generating functions and those of the variables conditioned on \(X_{i}=x\) will be_
\[G_{\vec{N}_{i}}(Z)=\frac{1+(Z-1)\vec{P}_{\mu}(Z)}{Z^{2}}\qquad G_{\vec{N}_{i}}\big{|}_{X_{i}=x}(Z)=\frac{1+(Z-1)\vec{P}_{\mu}(x;Z)}{Z}\]
\[G_{\breve{N}_{i}}(Z)=\frac{1+(Z-1)\breve{P}_{\mu}(Z)}{Z^{2}}\qquad G_{\breve{N}_{i}}\big{|}_{X_{i}=x}(Z)=\frac{1+(Z-1)\breve{P}_{\mu}(x;Z)}{Z}.\]
Proof.: Using corollary 2, we observe that
\[\vec{\ell}_{n}(x)=\mathbb{P}\left(x\geq X_{i+1}\geq\ldots\geq X_{i+n}\right)=\mathbb{P}\left(x\geq X_{i+1}\geq\ldots\geq X_{i+n}\text{ and }X_{i+n}\geq X_{i+n+1}\right)+\mathbb{P}\left(x\geq X_{i+1}\geq\ldots\geq X_{i+n}\text{ and }X_{i+n}\ngeq X_{i+n+1}\right)=\vec{\ell}_{n+1}(x)+\mathbb{P}\left(\vec{N}_{i}=n\,\Big{|}\,X_{i}=x\right)\]
and therefore \(\vec{\ell}_{n}(x)-\vec{\ell}_{n+1}(x)=\mathbb{P}\left(\vec{N}_{i}=n\,\Big{|}\,X_{i}=x\right)\). Then using a similar method for the plunge coefficients
\[\vec{L}_{n+1}=\mathbb{P}\left(X_{i}\geq\ldots\geq X_{i+n}\right)=\mathbb{P}\left(X_{i}\geq\ldots\geq X_{i+n}\text{ and }X_{i+n}\geq X_{i+n+1}\right)+\mathbb{P}\left(X_{i}\geq\ldots\geq X_{i+n}\text{ and }X_{i+n}\ngeq X_{i+n+1}\right)=\vec{L}_{n+2}+\mathbb{P}\left(\vec{N}_{i}=n\right)\]
implying \(\vec{L}_{n+1}-\vec{L}_{n+2}=\mathbb{P}\left(\vec{N}_{i}=n\right)\). Applying the same arguments, we get the equalities in the strict cases as well. Now we can manipulate the plunge functions to obtain
\[1+(Z-1)P_{\mu}(Z) =1+\sum_{n=0}^{\infty}L_{n}Z^{n+1}-\sum_{n=0}^{\infty}L_{n}Z^{n}=1+ Z+\sum_{n=0}^{\infty}L_{n+1}Z^{n+2}-1-Z-\sum_{n=0}^{\infty}L_{n+2}Z^{n+2}\] \[=\sum_{n=0}^{\infty}(L_{n+1}-L_{n+2})Z^{n+2}=\sum_{n=0}^{\infty} \mathbb{P}\left(N_{i}=n\right)Z^{n+2}=Z^{2}\cdot G_{N_{i}}(Z)\]
and the conditional plunge functions to obtain
\[1+(Z-1)P_{\mu}(x;Z) =1+\sum_{n=0}^{\infty}\ell_{n}(x)Z^{n+1}-\sum_{n=0}^{\infty}\ell_ {n}(x)Z^{n}=1+\sum_{n=0}^{\infty}\ell_{n}(x)Z^{n+1}-1-\sum_{n=0}^{\infty}\ell_{n+ 1}(x)Z^{n+1}\] \[=\sum_{n=0}^{\infty}(\ell_{n}(x)-\ell_{n+1}(x))Z^{n+1}=\sum_{n=0} ^{\infty}\mathbb{P}\left(N_{i}=n\,\big{|}\,X_{i}=x\right)Z^{n+1}=Z\cdot G_{N_{i} }\big{|}_{X_{i}=x}(Z).\]
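For instance, differentiating the first of these generating functions at \(Z=1\) (assuming \(\vec{P}_{\mu}\) converges there) recovers the expected value quoted in the introduction,
\[\mathbb{E}\left[\vec{N}_{i}\right]=G_{\vec{N}_{i}}^{\prime}(1)=\left[\frac{\vec{P}_{\mu}(Z)+(Z-1)\vec{P}_{\mu}^{\prime}(Z)}{Z^{2}}-2\,\frac{1+(Z-1)\vec{P}_{\mu}(Z)}{Z^{3}}\right]_{Z=1}=\vec{P}_{\mu}(1)-2,\]
and the corresponding variance expression follows in the same way from \(G_{\vec{N}_{i}}^{\prime\prime}(1)\).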
For any \(x\in T\), we will use \(\vec{\mu}_{x}\) and \(\breve{\mu}_{x}\) to refer to \(\mu\) restricted to \((-\infty,x]\) and \((-\infty,x)\), respectively. That is \(\vec{\mu}_{x}:=\mu\big{|}_{(-\infty,x]}\) and \(\breve{\mu}_{x}:=\mu\big{|}_{(-\infty,x)}\) so that
\[\vec{\mu}_{x}(E):=\mu((-\infty,x]\cap E)\quad\text{ and }\quad\breve{\mu}_{x}(E):=\mu((-\infty,x)\cap E)\]
for all \(E\in\mathcal{B}_{T}\).
**Proposition 4**.: _For any order measure \((T,\mu)\) and \(x\in T\),_
\[\vec{P}_{\mu}(x;Z)=\vec{P}_{\vec{\mu}_{x}}(Z)\quad\text{ and }\quad\breve{P}_{\mu}(x;Z)=\breve{P}_{\breve{\mu}_{x}}(Z).\]
Proof.: First we will prove by induction that for any \(x,t\in T\) and any \(n\geq 0\),
\[\vec{\ell}_{n}(\mu;t)=\vec{\ell}_{n}(\vec{\mu}_{x};t)\ \text{ if }t\leq x\quad\text{ and }\quad\breve{\ell}_{n}(\mu;t)=\breve{\ell}_{n}(\breve{\mu}_{x};t)\ \text{ if }t<x.\]
When \(n=0\) we have \(\ell_{0}(\mu;t)=1=\ell_{0}(\mu_{x};t)\) for all \(t\leq x\). Otherwise, \(n\geq 1\) and for any \(t\in(-\infty,x]\) then \((-\infty,t]\subseteq(-\infty,x]\) so
\[\overset{\bullet}{\ell}_{n}(\mu;t)=\int_{(-\infty,t]}\overset{\bullet}{\ell}_{n-1}(\mu;r)\ d\mu(r)=\int_{(-\infty,t]}\overset{\bullet}{\ell}_{n-1}(\overset{\bullet}{\mu}_{x};r)\ d\overset{\bullet}{\mu}_{x}(r)=\overset{\bullet}{\ell}_{n}(\overset{\bullet}{\mu}_{x};t)\]
and by a similar argument \(\overset{\circ}{\ell}_{n}(\mu;t)=\overset{\circ}{\ell}_{n}(\overset{\circ}{\mu}_{x};t)\) completing the induction. Second we will prove that \(\ell_{n}(\mu;x)=L_{n}(\mu_{x})\). When \(n=0\) we have \(\ell_{0}(\mu;x)=1=L_{0}(\mu_{x})\). Otherwise, \(n\geq 1\) and because \((-\infty,x]\) and \((-\infty,x)\) are the supports for \(\overset{\bullet}{\mu}_{x}\) and \(\overset{\circ}{\mu}_{x}\), respectively, we get
\[\overset{\bullet}{\ell}_{n}(\mu;x)=\int_{(-\infty,x]}\overset{\bullet}{\ell}_{n-1}(\mu;t)\ d\mu(t)=\int\overset{\bullet}{\ell}_{n-1}(\overset{\bullet}{\mu}_{x};t)\ d\overset{\bullet}{\mu}_{x}(t)=L_{n}(\overset{\bullet}{\mu}_{x})\] \[\overset{\circ}{\ell}_{n}(\mu;x)=\int_{(-\infty,x)}\overset{\circ}{\ell}_{n-1}(\mu;t)\ d\mu(t)=\int\overset{\circ}{\ell}_{n-1}(\overset{\circ}{\mu}_{x};t)\ d\overset{\circ}{\mu}_{x}(t)=L_{n}(\overset{\circ}{\mu}_{x}).\]
Finally the conditional plunge functions can be written as
\[P_{\mu}(x;Z)=\sum_{n=0}^{\infty}\ell_{n}(\mu;x)Z^{n}=\sum_{n=0}^{\infty}L_{n}( \mu_{x})Z^{n}=P_{\mu_{x}}(Z).\]
Now we will establish bounds on the radius of convergence of \(P_{\mu}(Z)\) so that we can evaluate the probability generating functions for \(\lx@overaccentset{{\bullet}}{N}_{i}\) and \(\lx@overaccentset{{\circ}}{N}_{i}\).
**Lemma 5**.: _If \((S,\lambda)\) is an order measure then for all \(n,a_{1},\ldots,a_{k}\geq 0\) with \(n=a_{1}+\ldots+a_{k}\),_
\[0\leq L_{n}\leq\prod_{i=1}^{k}L_{a_{i}}\quad\text{ and }\quad 0\leq\ell_{n}(x)\leq\ell_{a_{ 1}}(x)\prod_{i=2}^{k}L_{a_{i}}\ \text{ for all }x\in S.\]
Proof.: First notice that because \(\lambda\) is positive, repeated integration of a positive function (beginning with \(\ell_{0}(x)=1\)) will only yield positive functions thus \(\ell_{n}(x),L_{n}\geq 0\) for all \(n\geq 0\). Further for \(n\geq 1\),
\[\overset{\bullet}{\ell}_{n}(x)=\int_{(-\infty,x]}\overset{\bullet}{\ell}_{n-1}(t)\ d\lambda(t)\leq\int\overset{\bullet}{\ell}_{n-1}(t)\ d\lambda(t)=\overset{\bullet}{L}_{n}\]
and similarly for \(\overset{\circ}{\ell}_{n}(x)\leq\overset{\circ}{L}_{n}\). Now we will perform an induction on \(n\geq 0\). When \(n=0\), we must have \(a_{1}=\ldots=a_{k}=0\) so
\[L_{0}=1\leq\prod_{i=1}^{k}1=\prod_{i=1}^{k}L_{0}\quad\text{ and }\quad\ell_{0}(x)=1\leq 1 \cdot\prod_{i=2}^{k}1=\ell_{0}(x)\prod_{i=2}^{k}L_{0}\ \text{ for all }x\in S.\]
When \(n\geq 1\) either \(a_{1}=0\) or \(a_{1}\geq 1\). If \(a_{1}=0\) then there exists some smallest \(j\in[2,k]\) such that \(a_{j}\geq 1\). Then \(n=a_{j}+\ldots+a_{k}\) and \(n-1=(a_{j}-1)+a_{j+1}+\ldots+a_{k}\) so by the induction hypothesis
\[L_{n} =\int\ell_{n-1}(t)\ d\lambda(t)\leq\int\ell_{a_{j}-1}(t)\prod_{i=j+ 1}^{k}L_{a_{i}}\ d\lambda(t)\] \[=\int\ell_{a_{j}-1}(t)\ d\lambda(t)\cdot\prod_{i=j+1}^{k}L_{a_{i}}=L _{a_{j}}\prod_{i=j+1}^{k}L_{a_{i}}=\prod_{i=1}^{k}L_{a_{i}}\]
\[\ell_{n}(x)\leq L_{n}\leq\prod_{i=1}^{k}L_{a_{i}}=L_{0}\prod_{i=2}^{k}L_{a_{i}}= \ell_{0}(x)\prod_{i=2}^{k}L_{a_{i}}.\]
Otherwise, \(a_{1}\geq 1\) and \(n-1=(a_{1}-1)+a_{2}+\ldots+a_{k}\) so for all \(x\in S\),
\[\boldsymbol{\hat{\ell}}_{n}(x)=\int_{(-\infty,x]}\boldsymbol{\hat{\ell}}_{n-1 }(t)\;d\lambda(t)\leq\int_{(-\infty,x]}\boldsymbol{\hat{\ell}}_{a_{1}-1}(t) \prod_{i=2}^{k}\boldsymbol{\hat{L}}_{a_{i}}\;d\lambda(t)=\boldsymbol{\hat{ \ell}}_{a_{1}}(x)\cdot\prod_{i=2}^{k}\boldsymbol{\hat{L}}_{a_{i}}\]
with a similar argument applying to \(\hat{\ell}_{n}(x)\). Then for the plunge coefficients
\[L_{n}=\int\ell_{n-1}(t)\;d\lambda(t)\leq\int\ell_{a_{1}-1}(t)\;d\lambda(t) \cdot\prod_{i=2}^{k}L_{a_{i}}=L_{a_{1}}\cdot\prod_{i=2}^{k}L_{a_{i}}=\prod_{i =1}^{k}L_{a_{i}}\]
completing the induction.
**Proposition 6**.: _If \((S,\lambda)\) is a non-degenerate order measure then the radius of convergence of \(P_{\lambda}(Z)\) will be greater than \(\frac{1}{\|\lambda\|}\)._
Proof.: Because \(S\) is well-behaved, we know \(\{x<y\}\subseteq S^{2}\) is measurable. Since \(x<y\), \(x=y\), and \(x>y\) are disjoint possibilities, we know
\[\lambda^{2}(\{x<y\})+\lambda^{2}(\{x=y\})+\lambda^{2}(\{x>y\})\leq\|\lambda\| ^{2}.\]
By symmetry, \(\lambda^{2}(\{x<y\})=\lambda^{2}(\{x>y\})\) so \(\lambda^{2}(\{x<y\})\leq\frac{\|\lambda\|^{2}-\lambda^{2}(\{x=y\})}{2}\leq \frac{\|\lambda\|^{2}}{2}\). Let \(x\in S\) then by non-degeneracy \(\lambda(\{x\})<\|\lambda\|\) and so using slices
\[\lambda^{2}(\{x=y\})=\int\lambda(\{x=y\})\;d\lambda(x)=\int\lambda(\{x\})\;d \lambda(x)<\int\|\lambda\|\;d\lambda(x)=\|\lambda\|^{2}.\]
Then by lemma 1, \(\overset{\circ}{L}_{2}=\lambda^{2}(\{x<y\})\leq\frac{\|\lambda\|^{2}}{2}<\|\lambda\|^{2}\) and
\[\overset{\bullet}{L}_{2}=\lambda^{2}(\{x\leq y\})=\lambda^{2}(\{x<y\})+\lambda^{2}(\{x=y\})\leq\frac{\|\lambda\|^{2}-\lambda^{2}(\{x=y\})}{2}+\lambda^{2}(\{x=y\})=\frac{\|\lambda\|^{2}+\lambda^{2}(\{x=y\})}{2}<\|\lambda\|^{2}.\]
In both the strict and non-strict cases, we have \(L_{2}<\|\lambda\|^{2}\). Let \(z\in\mathbb{C}\) with \(|z|<\frac{1}{\sqrt{L_{2}}}\). Then using lemma 5 and the fact that \(L_{1}=\|\lambda\|>0\), we know
\[\lim_{k\to\infty}\sqrt[2k]{|L_{2k}z^{2k}|}\leq\lim_{k\to\infty}\sqrt[2k]{|L_{2}^{k}z^{2k}|}=|z|\sqrt{L_{2}}\quad\text{ and }\quad\lim_{k\to\infty}\sqrt[2k+1]{|L_{2k+1}z^{2k+1}|}\leq\lim_{k\to\infty}\sqrt[2k+1]{|L_{1}\cdot L_{2}^{k}z^{2k+1}|}=|z|\sqrt{L_{2}}\]
Because this holds for both evens and odds, we know \(\sqrt[k]{L_{k}|z|^{k}}\to|z|\sqrt{L_{2}}<1\) which by Cauchy's root test means that \(P_{\lambda}(z)\) converges absolutely. Since this is true for all \(|z|<\frac{1}{\sqrt{L_{2}}}\), we conclude that the radius of convergence must be at least \(\frac{1}{\sqrt{L_{2}}}>\frac{1}{\|\lambda\|}\).
**Theorem 7**.: _If \((T,\mu)\) is non-degenerate then \(N_{i}\) is almost surely finite and for any \(i\geq 0\),_
\[\mathbb{E}\left[N_{i}\right]=P_{\mu}(1)-2\quad\text{ and }\quad\operatorname{ var}\left(N_{i}\right)=P_{\mu}(1)-P_{\mu}(1)^{2}+2P_{\mu}^{\prime}(1)\]
_and for any \(x\in T\), the conditional expected value and variance will be_
\[\mathbb{E}\left[N_{i}\,|\,X_{i}=x\right]=P_{\mu_{x}}(1)-1\quad\text{ and }\quad \operatorname{var}\left(N_{i}\,|\,X_{i}=x\right)=P_{\mu_{x}}(1)-P_{\mu_{x}}(1)^{ 2}+2P_{\mu_{x}}^{\prime}(1).\]
Proof.: First from proposition 6, we know that \(P_{\mu}(Z)\) will have a radius of convergence greater than \(\frac{1}{\|\mu\|}=1\) in particular \(P_{\mu}(1)=\sum_{n=0}^{\infty}L_{n}\) exists and we must have \(\lim_{n\to\infty}L_{n}=0\). Thus by corollary 2 as \(k\to\infty\)
\[\mathbb{P}\left(\overset{\bullet}{N}_{i}=\infty\right)=\mathbb{P}\left(X_{i}\geq X_{i+1}\geq\ldots\right)\leq\mathbb{P}\left(X_{i}\geq\ldots\geq X_{i+k}\right)=\overset{\bullet}{L}_{k+1}\to 0\]
\[\mathbb{P}\left(\overset{\circ}{N}_{i}=\infty\right)=\mathbb{P}\left(X_{i}>X_{i+1}>\ldots\right)\leq\mathbb{P}\left(X_{i}>\ldots>X_{i+k}\right)=\overset{\circ}{L}_{k+1}\to 0\]
implying \(\mathbb{P}\left(\overset{\bullet}{N}_{i}=\infty\right)=\mathbb{P}\left(\overset{\circ}{N}_{i}=\infty\right)=0\) so \(N_{i}\) is almost surely finite.
Now to find the expected value, we can take the derivative of the probability generating function at one. Using proposition 3, we know that the probability generating function for \(N_{i}\) will be related to the plunge functions. Because the radius of convergence of \(P_{\mu}(Z)\) is greater than \(1\), we can take infinitely many derivatives of \(P_{\mu}(z)\) at \(z=1\). Thus
\[\mathbb{E}\left[N_{i}\right]=\left.\left(z\frac{d}{dz}\right)G_{N _{i}}(z)\right|_{z=1} =z\frac{z^{2}(P_{\mu}(z)+(z-1)P_{\mu}^{\prime}(z))-2z(1+(z-1)P_{ \mu}(z))}{z^{4}}\right|_{z=1}\] \[=P_{\mu}(1)+(1-1)P_{\mu}^{\prime}(z)-2-2(1-1)P_{\mu}(1)=P_{\mu}(1 )-2.\]
Similarly for the variance, we will use the probability generating function to find
\[\mathbb{E}\left[N_{i}^{2}\right] =\left.\left(z\frac{d}{dz}\right)^{2}G_{N_{i}}(z)\right|_{z=1}= \left.\left(z\frac{d}{dz}\right)z\frac{-2+(2-z)P_{\mu}(z)+z(z-1)P_{\mu}^{ \prime}(z)}{z^{3}}\right|_{z=1}\] \[=z\frac{-P_{\mu}(z)+(2-z)P_{\mu}^{\prime}(z)+(2z-1)P_{\mu}^{ \prime}(z)+z(z-1)P_{\mu}^{\prime\prime}(z)}{z^{2}}-2z\frac{-2+(2-z)P_{\mu}(z)+ z(z-1)P_{\mu}^{\prime}(z)}{z^{3}}\right|_{z=1}\] \[=\left(-P_{\mu}(1)+(2-1)P_{\mu}^{\prime}(1)+(2-1)P_{\mu}^{\prime} (1)+(1-1)P_{\mu}^{\prime\prime}(z)\right)-2\left(-2+(2-1)P_{\mu}(1)+(1-1)P_{ \mu}^{\prime}(1)\right)\] \[=-P_{\mu}(1)+2P_{\mu}^{\prime}(1)+4-2P_{\mu}(1)=4-3P_{\mu}(1)+2P _{\mu}^{\prime}(1).\]
Then the variance will be
\[\operatorname{var}\left(N_{i}\right)=\mathbb{E}\left[N_{i}^{2}\right]- \mathbb{E}\left[N_{i}\right]^{2}=4-3P_{\mu}(1)+2P_{\mu}^{\prime}(1)-P_{\mu}(1) ^{2}+4P_{\mu}(1)-4=P_{\mu}(1)-P_{\mu}(1)^{2}+2P_{\mu}^{\prime}(1)\]
Now we can perform a similar process for the conditional case. Let \(x\in T\) then we recall from proposition 4 that \(P_{\mu}(x;Z)=P_{\mu_{x}}(Z)\). Again by proposition 6, we know \(P_{\mu_{x}}(Z)\) will have a radius of convergence greater than \(\frac{1}{\left\|\mu_{x}\right\|}>\frac{1}{\left\|\mu\right\|}=1\) so we can differentiate \(P_{\mu_{x}}(z)\) infinitely at \(z=1\).
\[\mathbb{E}\left[N_{i}\,|\,X_{i}=x\right]=\left.\left(z\frac{d}{dz} \right)G_{N_{i}}\big{|}_{X_{i}=x}(z)\right|_{z=1} =z\frac{z(P_{\mu_{x}}(z)+(z-1)P_{\mu_{x}}^{\prime}(z))-(1+(z-1)P_ {\mu_{x}}(z))}{z^{2}}\right|_{z=1}\] \[=P_{\mu_{x}}(1)+(1-1)P_{\mu_{x}}(1)-1-(1-1)P_{\mu_{x}}(1)=P_{\mu _{x}}(1)-1.\]
Next for the variance, we have
\[\mathbb{E}\left[N_{i}^{2}\,|\,X_{i}=x\right] =\left.\left(z\frac{d}{dz}\right)^{2}G_{N_{i}}(z)\right|_{z=1}= \left.\left(z\frac{d}{dz}\right)z\frac{-1+P_{\mu_{x}}(z)+z(z-1)P_{\mu_{x}}^{ \prime}(z)}{z^{2}}\right|_{z=1}\] \[=z\frac{z(P_{\mu_{x}}^{\prime}(z)+(2z-1)P_{\mu_{x}}^{\prime}(z)+ z(z-1)P_{\mu_{x}}^{\prime\prime}(z))-(-1+P_{\mu_{x}}(z)+z(z-1)P_{\mu_{x}}^{ \prime}(z))}{z^{2}}\right|_{z=1}\] \[=(P_{\mu_{x}}^{\prime}(1)+(2-1)P_{\mu_{x}}^{\prime}(1)+(1-1)P_{ \mu_{x}}^{\prime\prime}(1))-(-1+P_{\mu_{x}}(1)+(1-1)P_{\mu_{x}}^{\prime}(1))=1 -P_{\mu_{x}}(1)+2P_{\mu_{x}}^{\prime}(1).\]
Then the variance will be
\[\operatorname{var}\left(N_{i}\,|\,X_{i}=x\right) =\mathbb{E}\left[N_{i}^{2}\,|\,X_{i}=x\right]-\mathbb{E}\left[N_{ i}\,|\,X_{i}=x\right]^{2}\] \[=1-P_{\mu_{x}}(1)+2P_{\mu_{x}}^{\prime}(1)-P_{\mu_{x}}(1)^{2}+2P_{ \mu_{x}}(1)-1=P_{\mu_{x}}(1)-P_{\mu_{x}}(1)^{2}+2P_{\mu_{x}}^{\prime}(1).\]
## 4 Combining Plunge Functions
Now that we have shown the value of plunge functions in understanding the plunge length, we consider how to calculate these functions. We will do this by building orders out of simpler orders. Then the plunge functions of the combined order can be calculated from the plunge functions of the simpler ones.
**Definition 4**.: For any orders \(S\) and \(T\), we define their **concatenation**\(S\,\|\,T\) as the disjoint union of their elements with the binary relation
\[s_{1}\leq_{S\|T}s_{2}\iff s_{1}\leq_{S}s_{2}\text{ for all }s_{1},s_{2}\in S\]
\[t_{1}\leq_{S\|T}t_{2}\iff t_{1}\leq_{T}t_{2}\text{ for all }t_{1},t_{2}\in T\]
\[s\leq_{S\|T}t\text{ for all }s\in S,t\in T.\]
Additionally we say their **juxtaposition** \(S\sqcup T\) is the disjoint union of their elements with the binary relation
\[s_{1}\leq_{S\sqcup T}s_{2}\iff s_{1}\leq_{S}s_{2}\text{ for all }s_{1},s_{2}\in S\]
\[t_{1}\leq_{S\sqcup T}t_{2}\iff t_{1}\leq_{T}t_{2}\text{ for all }t_{1},t_{2}\in T\]
\[s\text{ is incomparable to }t\text{ for all }s\in S,t\in T.\]
The above operations have been studied and rediscovered a number of times. As a result, there are a variety of different notations and names for them. The above "concatenation" is sometimes referred to as the "linear sum", "ordinal sum", "lexicographic sum", or "series composition". Furthermore the above "juxtaposition" is sometimes referred to as the "direct sum", "disjoint union", or "parallel composition". Using the terms series and parallel composition, Mohring enumerates several of the main results about them particularly in the case of finite orders [6]. Schroder provides a more abstract treatment of concatenation using the term lexicographic sum [9]. We can extend these operations on orders to operations on order measures which does not appear to have been done before.
**Definition 5**.: For any order measures \((S,\lambda)\) and \((T,\mu)\), we define their **concatenation** as \((S\parallel T,\lambda\parallel\mu)\) where
\[(\lambda\parallel\mu)(E):=\lambda(E\cap S)+\mu(E\cap T)\text{ for all }E\in \mathcal{B}_{S\parallel T}\]
and their **juxtaposition** as \((S\sqcup T,\lambda\sqcup\mu)\) where
\[(\lambda\sqcup\mu)(E):=\lambda(E\cap S)+\mu(E\cap T)\text{ for all }E\in \mathcal{B}_{S\sqcup T}.\]
The effect of these operations on the corresponding plunge functions then follow simple patterns.
**Theorem 8**.: _Suppose \((S,\lambda)\) and \((T,\mu)\) are order measures then \(P_{\lambda\parallel\mu}(Z)=P_{\lambda}(Z)\cdot P_{\mu}(Z)\) as formal power series._
Proof.: First we will prove by induction on \(n\geq 0\) that
\[\ell_{n}(\lambda\parallel\mu;s)=\ell_{n}(\lambda;s)\text{ \ for all }s\in S\text{ \ \ \ and \ \ \ }\ell_{n}(\lambda\parallel\mu;t)=\sum_{k=0}^{n}L_{k}(\lambda)\cdot\ell_{n-k}( \mu;t)\text{ \ for all }t\in T.\]
For \(n=0\), we have
\[\ell_{0}(\lambda\parallel\mu;s)=1=\ell_{0}(\lambda;s)\text{ \ for all }s\in S\text{ and }\]
\[\ell_{0}(\lambda\parallel\mu;t)=1=\ell_{0}(\lambda;t)\cdot L_{0}(\mu)\text{ \ for all }t\in T.\]
Then we consider \(n\geq 1\). For any \(s\in S\), we know \((-\infty,s]_{S\parallel T}=(-\infty,s]_{S}\) so
\[\overset{\bullet}{\ell}_{n}(\lambda\parallel\mu;s)=\int_{(-\infty,s]}\overset{\bullet}{\ell}_{n-1}(\lambda\parallel\mu;r)\ d(\lambda\parallel\mu)(r)=\int_{(-\infty,s]_{S}}\overset{\bullet}{\ell}_{n-1}(\lambda;r)\ d\lambda(r)=\overset{\bullet}{\ell}_{n}(\lambda;s)\]
Figure 1: The concatenation of \(S\) and \(T\) where lower elements are to the left, mass of atoms is indicated by the size of their circle, and the mass of diffuse sections by their length.
and similarly for \(\hat{\ell}_{n}(\lambda\,\parallel\mu;s)\). Next for any \(t\in T\), we know \((-\infty,t]_{S\parallel T}=S\cup(-\infty,t]_{T}\) so
\[\hat{\ell}_{n}(\lambda\,\parallel\mu;t) =\int_{(-\infty,t]}\hat{\ell}_{n-1}(\lambda\,\parallel\mu;r)\ d( \lambda\,\|\,\mu)(r)=\int_{S}\hat{\ell}_{n-1}(\lambda\,\parallel\mu;r)\ d \lambda(r)+\int_{(-\infty,t]_{T}}\hat{\ell}_{n-1}(\lambda\,\parallel\mu;r)\ d\mu(r)\] \[=\int\hat{\ell}_{n-1}(\lambda;r)\ d\lambda(r)+\int_{(-\infty,t]_{ T}}\sum_{k=0}^{n-1}\hat{\mathbf{L}}_{k}(\lambda)\cdot\hat{\ell}_{n-k-1}(\mu;r)\ d\mu(r)\] \[=\hat{\mathbf{L}}_{n}(\lambda)+\sum_{k=0}^{n-1}\hat{\mathbf{L}}_{k}( \lambda)\cdot\int_{(-\infty,t]_{T}}\hat{\ell}_{n-k-1}(\mu;r)\ d\mu(r)\] \[=\hat{\mathbf{L}}_{n}(\lambda)\cdot\hat{\ell}_{0}(\mu;r)+\sum_{k=0}^{ n-1}\hat{\mathbf{L}}_{k}(\lambda)\cdot\hat{\mathbf{\ell}}_{n-k}(\mu;t)=\sum_{k=0}^{n} \hat{\mathbf{L}}_{k}(\lambda)\cdot\hat{\ell}_{n-k}(\mu;t)\]
with a similar argument holding for \(\hat{\ell}_{n}(\lambda\,\|\,\mu;t)\). This completes the induction. Second we show that the plunge coefficients will be
\[L_{n}(\lambda\,\|\,\mu)=\sum_{k=0}^{n}L_{k}(\lambda)\cdot L_{n-k}(\mu).\]
For \(n=0\), we have \(L_{0}(\lambda\,\|\,\mu)=1=1\cdot 1=L_{0}(\lambda)\cdot L_{0}(\mu)\). Next for \(n\geq 1\), we have
\[L_{n}(\lambda\,\|\,\mu) =\int_{S\parallel T}\ell_{n-1}(\lambda\,\|\,\mu;r)\ d(\lambda\, \|\,\mu)(r)=\int_{S}\ell_{n-1}(\lambda\,\|\,\mu;r)\ d\lambda(r)+\int_{T}\ell_ {n-1}(\lambda\,\|\,\mu;r)\ d\mu(r)\] \[=\int_{S}\ell_{n-1}(\lambda;r)\ d\lambda(r)+\sum_{k=0}^{n-1}L_{k} (\lambda)\cdot\int_{T}\ell_{n-k-1}(\mu;r)\ d\mu(r)\] \[=L_{n}(\lambda)\cdot L_{0}(\mu)+\sum_{k=0}^{n-1}L_{k}(\lambda) \cdot L_{n-k}(\mu)=\sum_{k=0}^{n}L_{k}(\lambda)\cdot L_{n-k}(\mu).\]
Third we show the desired product identities for the plunge functions. Using the substitution \(n=i+j\) in the following sums
\[P_{\lambda}(Z)\cdot P_{\mu}(Z) =\left(\sum_{i=0}^{\infty}L_{i}(\lambda)Z^{i}\right)\left(\sum_{j =0}^{\infty}L_{j}(\mu)Z^{j}\right)=\sum_{i=0}^{\infty}\sum_{j=0}^{\infty}(L_{i }(\lambda)\cdot L_{j}(\mu))Z^{i+j}\] \[=\sum_{n=0}^{\infty}Z^{n}\sum_{i=0}^{n}L_{i}(\lambda)\cdot L_{n-i} (\mu)=\sum_{n=0}^{\infty}L_{n}(\lambda\,\|\,\mu)Z^{n}=P_{\lambda\parallel\mu}( Z).\]
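Theorem 8 can be checked directly on a small example. In the Python sketch below (the toy measures and the truncation order are illustrative choices), the uniform measure on \(\{1,2,3,4\}\) is exactly the concatenation of its restrictions to \(\{1,2\}\) and \(\{3,4\}\), so its plunge coefficients should be the Cauchy product (discrete convolution) of the coefficients of the two halves.

```python
import itertools
import numpy as np

def L_coeffs(support, mass, n_max):
    # non-strict plunge coefficients: L_n = sum of prod(mass) over weakly decreasing n-tuples
    out = [1.0]
    for n in range(1, n_max + 1):
        total = 0.0
        for tup in itertools.product(support, repeat=n):
            if all(tup[i] >= tup[i + 1] for i in range(n - 1)):
                total += float(np.prod([mass[a] for a in tup]))
        out.append(total)
    return np.array(out)

# every point of {1,2} lies below every point of {3,4}, so the uniform measure on
# {1,2,3,4} is the concatenation of its restrictions to the two blocks
mass = {a: 0.25 for a in (1, 2, 3, 4)}
n_max = 4
L_lam = L_coeffs([1, 2], mass, n_max)
L_mu = L_coeffs([3, 4], mass, n_max)
L_cat = L_coeffs([1, 2, 3, 4], mass, n_max)

conv = np.convolve(L_lam, L_mu)[: n_max + 1]   # coefficients of P_lambda(Z) * P_mu(Z)
print("direct      :", np.round(L_cat, 6))
print("convolution :", np.round(conv, 6))
```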
**Theorem 9**.: _For any order measures \((S,\lambda)\) and \((T,\mu)\), \(P_{\lambda\sqcup\mu}(Z)=P_{\lambda}(Z)+P_{\mu}(Z)-1\) as formal power series._
Proof.: First we will prove by induction on \(n\geq 0\) that
\[\ell_{n}(\lambda\,\sqcup\mu;s)=\ell_{n}(\lambda;s)\text{ for all }s\in S \qquad\ell_{n}(\lambda\,\sqcup\mu;t)=\ell_{n}(\mu;t)\text{ for all }t\in T.\]
For \(n=0\), we know \(\ell_{n}(\lambda\,\sqcup\,\mu;s)=1=\ell_{n}(\lambda;s)\) for all \(s\in S\) and \(\ell_{n}(\lambda\,\sqcup\,\mu;t)=1=\ell_{n}(\mu;t)\) for all \(t\in T\). Then for \(n\geq 1\), let \(s\in S\) then
\[\hat{\ell}_{n}(\lambda\,\sqcup\,\mu;s)=\int_{(-\infty,s]}\hat{\ell}_{n-1}( \lambda\,\sqcup\,\mu;r)\ d(\lambda\,\sqcup\,\mu)(r)=\int_{(-\infty,s]_{S}}\hat{ \ell}_{n-1}(\lambda;r)\ d\lambda(r)=\hat{\ell}_{n}(\lambda;s)\]
and similarly for \(\hat{\ell}_{n}(\lambda\,\sqcup\,\mu;s)\). By symmetry of \(S\) and \(T\), the above holds for \(T\) as well. Second let \(n\geq 1\) then
\[L_{n}(\lambda\,\sqcup\,\mu)=\int_{S\sqcup T}\ell_{n-1}(\lambda\,\sqcup\,\mu;r)\ d( \lambda\,\sqcup\,\mu)(r)=\int_{S}\ell_{n-1}(\lambda;r)\ d\lambda(r)+\int_{T}\ell_ {n-1}(\mu;r)\ d\mu(r)=L_{n}(\lambda)+L_{n}(\mu).\]
Hence the plunge functions will be
\[P_{\lambda\sqcup\mu}(Z) =\sum_{n=0}^{\infty}L_{n}(\lambda\,\sqcup\,\mu)Z^{n}=1+\sum_{n=1}^ {\infty}(L_{n}(\lambda)+L_{n}(\mu))Z^{n}\] \[=\left(1+\sum_{n=1}^{\infty}L_{n}(\lambda)Z^{n}\right)+\left(1+\sum_ {n=1}^{\infty}L_{n}(\mu)Z^{n}\right)-1=P_{\lambda}(Z)+P_{\mu}(Z)-1.\]
Plunge Functions of Total Orders
We can now prove the formula for the plunge functions of an arbitrary total order measure. To do this in the case of finitely many atoms only requires the concatenation rule from above. However to generalize these results to all total orders, we must take a limit. To wit, we will show that a sequence of plunge functions will converge on an appropriate domain whenever the corresponding order measures converge in total variation.
The following proof and result for lemma 10 are analogous to the first lemma in Renyi's paper [8].
**Lemma 10**.: _If \((T,\mu)\) is a diffuse total order measure then for all \(n\geq 0\), \(L_{n}=\frac{\|\mu\|^{n}}{n!}\) meaning that \(P_{\mu}(Z)=e^{\|\mu\|Z}\)._
Proof.: First for \(n=0\) and \(n=1\), we have \(L_{0}=1=\frac{\|\mu\|^{0}}{0!}\) and \(L_{1}=\|\mu\|=\frac{\|\mu\|^{1}}{1!}\). To simplify the notation, we will elide \(\langle a_{1},\ldots,a_{n}\rangle\in T^{n}\) from set schema. Let \(\mathcal{S}_{n}\) be the set of permutations on \([1,n]\). Then for every \(\sigma\in\mathcal{S}_{n}\), we define the subset \(A_{\sigma}\subseteq T^{n}\) as
\[A_{\sigma}:=\{a_{\sigma(1)}<\ldots<a_{\sigma(n)}\}.\]
Notice that if \(a_{\sigma(1)}<\ldots<a_{\sigma(n)}\) then no other permutation of \(a_{1},\ldots,a_{n}\) will be ascending so \(A_{\sigma}\) is disjoint from all other \(A_{\sigma^{\prime}}\). From lemma 1, we know \(\mu^{n}(A_{\sigma})=\hat{L}_{n}\). Next define \(D:=\{a_{1},\ldots,a_{n}\) are pairwise distinct\(\}\). Because \(T\) is a total order, for any tuple \(a=\langle a_{1},\ldots,a_{n}\rangle\in D\), there is a permutation \(\sigma\) such that \(a_{\sigma(1)}<\ldots<a_{\sigma(n)}\) and therefore \(a\in A_{\sigma}\). Thus \(D=\bigcup_{\sigma}A_{\sigma}\) and by disjointness
\[\mu^{n}(D)=\sum_{\sigma\in\mathcal{S}_{n}}\mu^{n}(A_{\sigma})=\sum_{\sigma\in \mathcal{S}_{n}}\hat{L}_{n}=|\mathcal{S}_{n}|\cdot\hat{L}_{n}=n!\cdot\hat{L}_ {n}.\]
Now for every \(i,j\in[1,n]\), we define \(E_{i,j}:=\{a_{i}=a_{j}\}\). By symmetry, we know \(\mu^{n}(E_{i,j})=\mu^{n}(E_{1,2})\) and by diffuseness
\[\mu^{n}(E_{1,2})=\mu^{n}(\{a_{1}=a_{2}\}\times T^{n-2})=\mu^{2}(\{a_{1}=a_{2} \})\cdot\mu^{n-2}(T^{n-2})=\|\mu\|^{n-2}\cdot\int\mu(\{t\})\;d\mu(t)=0.\]
Next for every \(a=\langle a_{1},\ldots,a_{n}\rangle\in D^{c}\), there exist \(i,j\in n\) with \(a_{i}=a_{j}\) so \(a\in E_{i,j}\). Thus
\[D^{c}\subseteq\bigcup_{i,j\in[1,n]}E_{i,j}\qquad\text{ implying }\qquad\mu^{n}(D^{c})\leq\mu^{n}\left(\bigcup_{i,j}E_{i,j}\right)=0\]
\[\|\mu\|^{n}=\mu^{n}(T^{n})=\mu^{n}(D)+\mu^{n}(D^{c})=n!\cdot\hat{L}_{n}+0\]
showing that \(\hat{L}_{n}=\frac{\|\mu\|^{n}}{n!}\).
Second we define \(A=\{a_{1}<\ldots<a_{n}\}\) and \(B=\{a_{1}\leq\ldots\leq a_{n}\}\). From above, we know \(\mu^{n}(A)=\frac{\|\mu\|^{n}}{n!}\). If \(a=\langle a_{1},\ldots,a_{n}\rangle\in B\setminus A\) then there must exist \(i\in[1,n]\) with \(a_{i}=a_{i+1}\) so \(a\in E_{i,i+1}\). Thus \(B\setminus A\subseteq\bigcup_{i,j}E_{i,j}\) so \(\mu^{n}(B\setminus A)=0\). Using lemma 1 again, we have
\[\hat{L}_{n}=\mu^{n}(B)=\mu^{n}(A)+\mu^{n}(B\setminus A)=\frac{\|\mu\|^{n}}{n! }+0.\]
Finally the plunge functions will be
\[\overset{\bullet}{P}_{\mu}(Z)=\overset{\circ}{P}_{\mu}(Z)=\sum_{n=0}^{\infty}\frac{\|\mu\|^{n}}{n!}Z^{n}=e^{\|\mu\|Z}.\]
**Lemma 11**.: _If \((T,\mu)\) is a total order measure with finitely many atoms then_
\[\overset{\bullet}{P}_{\mu}(Z)=e^{m_{d}Z}\prod_{\alpha\in\mathcal{A}}\frac{1}{1-m_{\alpha}Z}\quad\text{ and }\quad\overset{\circ}{P}_{\mu}(Z)=e^{m_{d}Z}\prod_{\alpha\in\mathcal{A}}(1+m_{\alpha}Z)\]
_where we treat \(e^{m_{d}Z}\) and \(\frac{1}{1-m_{\alpha}Z}\) as shorthand for their formal power series._
Proof.: First we will enumerate the atoms as \(\{\alpha_{1},\ldots,\alpha_{k}\}=\mathcal{A}\). Now because of totalness of \(T\), we can define \(S_{0},\ldots,S_{k}\subseteq T\) to separate \(T\) into
\[S_{0}\cup\{\alpha_{1}\}\cup S_{1}\cup\{\alpha_{2}\}\cup\ldots\cup\{\alpha_{k}\} \cup S_{k}:=(-\infty,\alpha_{1})\cup\{\alpha_{1}\}\cup(\alpha_{1},\alpha_{2}) \cup\{\alpha_{2}\}\cup\ldots\cup\{\alpha_{k}\}\cup(\alpha_{k},\infty)=T.\]
Treating each section as its own total order, we have \(T=S_{0}\parallel\{\alpha_{1}\}\parallel\ldots\parallel\{\alpha_{k}\} \parallel S_{k}\). Next we take \(\lambda_{0},\ldots,\lambda_{k}\) to be the restrictions of \(\mu\) to \(S_{0},\ldots,S_{k}\) and \(\delta_{1},\ldots,\delta_{k}\) to be the restrictions of \(\mu\) to \(\{\alpha_{1}\},\ldots,\{\alpha_{k}\}\). Then \(\mu=\lambda_{0}\parallel\delta_{1}\parallel\ldots\parallel\delta_{k}\parallel \lambda_{k}\) so by theorem 8,
\[P_{\mu}(Z)=P_{\lambda_{0}}(Z)\cdot P_{\delta_{1}}(Z)\cdots P_{\delta_{k}}(Z) \cdot P_{\lambda_{k}}(Z).\]
Because \(S_{0}\cup\ldots\cup S_{k}\subseteq T\setminus\mathcal{A}\), we know \(S_{0}\cup\ldots\cup S_{k}\) will contain no atoms of \(\mu\). Thus every one of \(\lambda_{0},\ldots,\lambda_{k}\) must be diffuse and so lemma 10 tells us that
\[P_{\lambda_{i}}(Z)=e^{\parallel\lambda_{i}\parallel Z}=e^{\mu(S_{i})Z}\text{ for all }0\leq i\leq k.\]
Second let \(1\leq i\leq k\) and we will consider \(\delta_{i}\). For the strict plunge coefficients, we know \(\overset{\circ}{L}_{0}(\delta_{i})=1\) and \(\overset{\circ}{L}_{1}(\delta_{i})=\|\delta_{i}\|=\mu(\{\alpha_{i}\})=m_{ \alpha_{i}}\). Then for all \(n\geq 2\), we know \((-\infty,\alpha_{i})_{\{\alpha_{i}\}}=\emptyset\) so
\[\hat{L}_{n}=\int_{\{\alpha_{i}\}}\hat{\ell}_{n-1}(t)\;d\delta_{i}(t)=\hat{ \ell}_{n-1}(\alpha_{i})\cdot\delta_{i}(\{\alpha_{i}\})=m_{\alpha_{i}}\cdot \int_{\emptyset}\hat{\ell}_{n-2}(t)\;d\delta_{i}(t)=0.\]
Thus \(\hat{P}_{\delta_{i}}(Z)=1+m_{\alpha_{i}}Z\). Next for the non-strict plunge coefficients, we will prove by induction that \(\overset{\bullet}{L}_{n}(\delta_{i})=\overset{\bullet}{\ell}_{n}(\delta_{i} ;\alpha_{i})=m_{\alpha_{i}}^{n}\). When \(n=0\) we get \(\overset{\bullet}{L}_{0}=\overset{\bullet}{\ell}_{0}(\alpha_{i})=1=m_{ \alpha_{i}}^{0}\). Otherwise \(n\geq 1\) and \((-\infty,\alpha_{i}]_{\{\alpha_{i}\}}=\{\alpha_{i}\}\) so
\[\overset{\bullet}{\ell}_{n}(\alpha_{i})=\overset{\bullet}{L}_{n}=\int_{\{ \alpha_{i}\}}\overset{\bullet}{\ell}_{n-1}(t)\;d\delta_{i}(t)=\overset{ \bullet}{\ell}_{n-1}(\alpha_{i})\cdot\delta_{i}(\{\alpha_{i}\})=\overset{ \bullet}{L}_{n-1}\cdot m_{\alpha_{i}}=m_{\alpha_{i}}^{n-1}\cdot m_{\alpha_{i}} =m_{\alpha_{i}}^{n}\]
completing the induction. Thus \(\overset{\bullet}{P}_{\delta_{i}}(Z)=\sum_{n=0}^{\infty}m_{\alpha_{i}}^{n}Z^{n}=\frac{1}{1-m_{\alpha_{i}}Z}\). Finally using these results in the above product, we get
\[\overset{\bullet}{P}_{\mu}(Z)=e^{\mu(S_{0})Z}\cdot\frac{1}{1-m_{\alpha_{1}}Z }\cdots\frac{1}{1-m_{\alpha_{k}}Z}\cdot e^{\mu(S_{k})Z} =e^{\mu(S_{0}\cup\ldots\cup S_{k})Z}\cdot\prod_{i=1}^{k}\frac{1}{1-m _{\alpha_{i}}Z}=e^{m_{d}Z}\prod_{\alpha\in\mathcal{A}}\frac{1}{1-m_{\alpha}Z}\]
\[\overset{\circ}{P}_{\mu}(Z)=e^{\mu(S_{0})Z}\cdot(1+m_{\alpha_{1}}Z)\cdots(1+m _{\alpha_{k}}Z)\cdot e^{\mu(S_{k})Z} =e^{\mu(S_{0}\cup\ldots\cup S_{k})Z}\cdot\prod_{i=1}^{k}(1+m_{ \alpha_{i}}Z)=e^{m_{d}Z}\prod_{\alpha\in\mathcal{A}}(1+m_{\alpha}Z).\]
**Lemma 12**.: _If \(\mu_{1}\) and \(\mu_{2}\) are finite positive measures on \(T\) then for every \(n\geq 1\),_
\[|L_{n}(\mu_{1})-L_{n}(\mu_{2})|\leq\|\mu_{1}-\mu_{2}\|\cdot\sum_{k=0}^{n-1}L_{n -k-1}(\mu_{1})\cdot L_{k}(\mu_{2}).\]
Proof.: First we will prove by induction on \(n\geq 1\) that for all \(x\in T\),
\[|\ell_{n}(\mu_{1};x)-\ell_{n}(\mu_{2};x)|\leq\|\mu_{1}-\mu_{2}\|\cdot\sum_{k=0} ^{n-1}\ell_{n-k-1}(\mu_{1};x)\cdot L_{k}(\mu_{2}).\]
For \(n=1\),
\[\left|\overset{\bullet}{\ell}_{1}(\mu_{1};x)-\overset{\bullet}{ \ell}_{1}(\mu_{2};x)\right| =\left|\int_{(-\infty,x]}1\;d\mu_{1}(t)-\int_{(-\infty,x]}1\;d\mu_{2}( t)\right|=\left|\mu_{1}((-\infty,x])-\mu_{2}((-\infty,x])\right|\] \[\leq\left|\mu_{1}-\mu_{2}\right|((-\infty,x])\leq\|\mu_{1}-\mu_{2} \|=\|\mu_{1}-\mu_{2}\|\cdot\overset{\bullet}{\ell}_{0}(\mu_{1};x)\cdot L_{0}( \mu_{2})\]
and similarly for the strict case. Then for \(n\geq 2\), using lemma 5 we know \(\overleftarrow{\ell}_{n-1}(\mu_{2};t)\leq\hat{L}_{n-1}(\mu_{2})\) so
\[\left|\overleftarrow{\ell}_{n}(\mu_{1};x)-\overleftarrow{\ell}_{n }(\mu_{2};x)\right| =\left|\int_{(-\infty,x]}\overleftarrow{\ell}_{n-1}(\mu_{1};t)\;d \mu_{1}(t)-\int_{(-\infty,x]}\overleftarrow{\ell}_{n-1}(\mu_{2};t)\;d\mu_{2}( t)\right|\] \[\leq\left|\int_{(-\infty,x]}\overleftarrow{\ell}_{n-1}(\mu_{1};t) \;d\mu_{1}(t)-\int_{(-\infty,x]}\overleftarrow{\ell}_{n-1}(\mu_{2};t)\;d\mu_{1 }(t)\right|\] \[\quad+\left|\int_{(-\infty,x]}\overleftarrow{\ell}_{n-1}(\mu_{2};t )\;d\mu_{1}(t)-\int_{(-\infty,x]}\overleftarrow{\ell}_{n-1}(\mu_{2};t)\;d\mu_{ 2}(t)\right|\] \[\leq\int_{(-\infty,x]}\left|\overleftarrow{\ell}_{n-1}(\mu_{1};t )-\overleftarrow{\ell}_{n-1}(\mu_{2};t)\right|\;d\mu_{1}(t)+\left|\int_{(- \infty,x]}\overleftarrow{\ell}_{n-1}(\mu_{2};t)\;d(\mu_{1}-\mu_{2})(t)\right|\]
Now we can bound the left and right terms by
\[\int_{(-\infty,x]}\left|\overleftarrow{\ell}_{n-1}(\mu_{1};t)- \overleftarrow{\ell}_{n-1}(\mu_{2};t)\right|\;d\mu_{1}(t) \leq\int\|\mu_{1}-\mu_{2}\|\cdot\sum_{k=0}^{n-2}\overleftarrow{ \ell}_{n-k-2}(\mu_{1};t)\cdot\overleftarrow{L}_{k}(\mu_{2})\;d\mu_{1}(t)\] \[=\|\mu_{1}-\mu_{2}\|\cdot\left(\sum_{k=0}^{n-2}\overleftarrow{\ell }_{n-k-1}(\mu_{1};x)\cdot\overleftarrow{L}_{k}(\mu_{2})\right)\] \[\left|\int_{(-\infty,x]}\overleftarrow{\ell}_{n-1}(\mu_{2};t)\;d( \mu_{1}-\mu_{2})(t)\right| \leq\int\overleftarrow{L}_{n-1}(\mu_{2})\;d|\mu_{1}-\mu_{2}|(t)=\| \mu_{1}-\mu_{2}\|\cdot\overleftarrow{\ell}_{0}(\mu_{1};x)\cdot\overleftarrow{L }_{n-1}(\mu_{2}).\]
Thus the difference will be bounded by their sum
\[\left|\overleftarrow{\ell}_{n}(\mu_{1};x)-\overleftarrow{\ell}_{n}(\mu_{2};x) \right|\leq\|\mu_{1}-\mu_{2}\|\sum_{k=0}^{n-1}\overleftarrow{\ell}_{n-k-1}(\mu_ {1};x)\cdot\overleftarrow{L}_{k}(\mu_{2})\]
and the same argument applies to the strict case. Second we consider the plunge coefficients. For \(n=1\), we have
\[\left|L_{1}(\mu_{1})-L_{1}(\mu_{2})\right|=\left|\mu_{1}(T)-\mu_{2}(T)\right| \leq\left|\mu_{1}-\mu_{2}\right|(T)=\|\mu_{1}-\mu_{2}\|=\|\mu_{1}-\mu_{2}\| \cdot L_{0}(\mu_{1})\cdot L_{0}(\mu_{2}).\]
For \(n\geq 2\) using a similar argument as above, we have
\[\left|L_{n}(\mu_{1})-L_{n}(\mu_{2})\right| =\left|\int\ell_{n-1}(\mu_{1};t)\;d\mu_{1}(t)-\int\ell_{n-1}(\mu_ {2};t)\;d\mu_{2}(t)\right|\] \[\leq\int\left|\ell_{n-1}(\mu_{1};t)-\ell_{n-1}(\mu_{1};t)\right|d \mu_{1}(t)+\int\ell_{n-1}(\mu_{2};t)\;d|\mu_{1}-\mu_{2}|(t)\] \[\leq\int\|\mu_{1}-\mu_{2}\|\cdot\sum_{k=0}^{n-2}\ell_{n-k-2}(\mu_ {1};t)\cdot L_{k}(\mu_{2})\;d\mu_{1}(t)+\int L_{n-1}(\mu_{2})\;d|\mu_{1}-\mu_ {2}|(t)\] \[=\|\mu_{1}-\mu_{2}\|\cdot\sum_{k=0}^{n-1}L_{n-k-1}(\mu_{1})\cdot L _{k}(\mu_{2}).\]
**Lemma 13**.: _Suppose \(\mu_{1}\) and \(\mu_{2}\) are finite positive measures on \(T\). For each \(N\geq 0\), we define the polynomials_
\[\overleftarrow{p}_{\mu_{1},N}(Z):=\sum_{n=0}^{N}\hat{L}_{n}(\mu_{1})Z^{n} \qquad\hat{p}_{\mu_{1},N}(Z):=\sum_{n=0}^{N}\hat{L}_{n}(\mu_{1})Z^{n}\]
\[\overleftarrow{p}_{\mu_{2},N}(Z):=\sum_{n=0}^{N}\hat{L}_{n}(\mu_{2})Z^{n} \qquad\hat{p}_{\mu_{2},N}(Z):=\sum_{n=0}^{N}\hat{L}_{n}(\mu_{2})Z^{n}.\]
_Then for all \(z\in\mathbb{C}\) and \(N\geq 1\),_
\[\left|p_{\mu_{1},N}(z)-p_{\mu_{2},N}(z)\right|\leq\|\mu_{1}-\mu_{2}\|\cdot|z| \cdot p_{\mu_{1},N-1}(|z|)\cdot p_{\mu_{2},N-1}(|z|).\]
Proof.: Let \(z\in\mathbb{C}\) and let \(N\geq 0\). We recall that \(L_{0}(\mu_{1})=1=L_{0}(\mu_{2})\) so they will cancel in the difference. Using lemma 12, the difference of \(p_{\mu_{1},N}(z)\) and \(p_{\mu_{2},N}(z)\) will be
\[|p_{\mu_{1},N}(z)-p_{\mu_{2},N}(z)| =\left|\sum_{n=1}^{N}(L_{n}(\mu_{1})-L_{n}(\mu_{2}))z^{n}\right| \leq\sum_{n=1}^{N}|L_{n}(\mu_{1})-L_{n}(\mu_{2})|\cdot|z|^{n}\] \[=|z|\sum_{n=1}^{N}|z|^{n-1}\cdot\|\mu_{1}-\mu_{2}\|\sum_{k=0}^{n-1 }L_{n-k-1}(\mu_{1})\cdot L_{k}(\mu_{2})\] \[=\|\mu_{1}-\mu_{2}\|\cdot|z|\sum_{n=0}^{N-1}\sum_{k=0}^{n}(L_{n- k}(\mu_{1})\cdot L_{k}(\mu_{2}))|z|^{n}\]
Then using the substitution \(j=n-k\), we can say
\[\sum_{n=0}^{N-1}\sum_{k=0}^{n}(L_{n-k}(\mu_{1})\cdot L_{k}(\mu_{ 2}))|z|^{n} =\sum_{k=0}^{N-1}\sum_{j=0}^{N-k-1}(L_{j}(\mu_{1})\cdot L_{k}(\mu_ {2}))|z|^{j+k}\leq\sum_{k=0}^{N-1}\sum_{j=0}^{N-1}(L_{j}(\mu_{1})\cdot L_{k}( \mu_{2}))|z|^{j+k}\] \[=\left(\sum_{j=0}^{N-1}L_{j}(\mu_{1})|z|^{j}\right)\left(\sum_{k= 0}^{N-1}L_{k}(\mu_{2})|z|^{k}\right)=p_{\mu_{1},N-1}(|z|)\cdot p_{\mu_{2},N-1} (|z|)\]
giving us the desired result.
**Corollary 14**.: _Suppose \(\mu_{1}\) and \(\mu_{2}\) are finite positive measures on \(T\) then for all \(z\in\mathbb{C}\), if \(P_{\mu_{1}}(|z|)\) and \(P_{\mu_{2}}(|z|)\) exist then_
\[|P_{\mu_{1}}(z)-P_{\mu_{2}}(z)|\leq\|\mu_{1}-\mu_{2}\|\cdot|z|\cdot P_{\mu_{1} }(|z|)\cdot P_{\mu_{2}}(|z|)\quad\text{ and }\quad\left|\frac{1}{P_{\mu_{1}}(|z|)}-\frac{1}{P_{\mu_{2}}(|z|)}\right| \leq\|\mu_{1}-\mu_{2}\|\cdot|z|.\]
Proof.: Let \(z\in\mathbb{C}\) such that \(P_{\mu_{1}}(|z|)\) and \(P_{\mu_{2}}(|z|)\) exist. Then for all \(N\geq 1\), using lemma 13
\[|p_{\mu_{1},N}(z)-p_{\mu_{2},N}(z)|\leq\|\mu_{1}-\mu_{2}\|\cdot|z|\cdot p_{\mu_{1},N-1}(|z|)\cdot p_{\mu_{2},N-1}(|z|)\leq\|\mu_{1}-\mu_{2}\|\cdot|z|\cdot P_{\mu_{1}}(|z|)\cdot P_{\mu_{2}}(|z|).\]
Taking the limit as \(N\to\infty\), we get
\[|P_{\mu_{1}}(z)-P_{\mu_{2}}(z)|=\lim_{N\to\infty}|p_{\mu_{1},N}(z)-p_{\mu_{2}, N}(z)|\leq\|\mu_{1}-\mu_{2}\|\cdot|z|\cdot P_{\mu_{1}}(|z|)\cdot P_{\mu_{2}}(|z|).\]
Then this inequality holds for \(|z|\) as well so
\[\left|\frac{1}{P_{\mu_{1}}(|z|)}-\frac{1}{P_{\mu_{2}}(|z|)}\right|=\frac{|P_{ \mu_{1}}(|z|)-P_{\mu_{2}}(|z|)|}{P_{\mu_{1}}(|z|)\cdot P_{\mu_{2}}(|z|)}\leq \|\mu_{1}-\mu_{2}\|\cdot|z|.\]
**Proposition 15**.: _Suppose that \(\mu,\mu_{1},\mu_{2},\ldots\) are positive finite measures on \(T\) with \(\mu_{k}\to\mu\) in the total variation norm. Then for each \(n\geq 0\), \(L_{n}(\mu_{k})\to L_{n}(\mu)\). Additionally if \(z\in\mathbb{C}\) such that \(P_{\mu_{k}}(|z|)\) exists for all but finitely many \(k\) and \(\lim_{k\to\infty}P_{\mu_{k}}(|z|)\) converges then \(P_{\mu}(|z|)\) and \(P_{\mu}(z)\) exist with \(P_{\mu_{k}}(|z|)\to P_{\mu}(|z|)\) and \(P_{\mu_{k}}(z)\to P_{\mu}(z)\)._
Proof.: Because \(\mu_{k}\to\mu\) in the total variation norm, we know \(\|\mu_{k}-\mu\|\to 0\). For \(n=0\), we immediately have \(L_{0}(\mu_{k})-L_{0}(\mu)=1-1=0\) for all \(k\). Then, let \(n\geq 1\) and by lemma 12
\[0\leq\lim_{k\to\infty}|L_{n}(\mu_{k})-L_{n}(\mu)|\leq\lim_{k\to\infty}\|\mu_{k} -\mu\|\sum_{i=0}^{n-1}L_{n-i-1}(\mu_{1})\cdot L_{i}(\mu_{2})=0\]
and so by squeeze theorem \(L_{n}(\mu_{k})\to L_{n}(\mu)\).
Now let \(z\in\mathbb{C}\) such that \(P_{\mu_{k}}(|z|)\) exists for all but finitely many \(k\) and \(\lim_{k\to\infty}P_{\mu_{k}}(|z|)\) converges. Then we define \(y:=\lim_{k\to\infty}P_{\mu_{k}}(|z|)\). Because \(P_{\mu_{k}}(|z|)\geq L_{0}(\mu_{k})=1\) for all \(k\), we know \(y\geq 1\) and so \(\frac{1}{P_{\mu_{k}}(|z|)}\to\frac{1}{y}\). Thus there exists \(K_{1}\geq 0\) such that for all \(k\geq K_{1}\),
\[\left|\frac{1}{P_{\mu_{k}}(|z|)}-\frac{1}{y}\right|<\frac{1}{4y}.\]
Also because \(\|\mu_{k}-\mu\|\to 0\), there exists \(K_{2}\geq 0\) such that for all \(k\geq K_{2}\), we have \(\|\mu_{k}-\mu\|<\frac{1}{4y\max(|z|,1)}\). Then we define \(K:=\max(K_{1},K_{2})\). Since \(p_{\mu_{K},n}(|z|)\to P_{\mu_{K}}(|z|)\) as \(n\to\infty\) and \(p_{\mu_{K},n}(|z|)\geq 1\) for \(n\geq 1\), we know \(\frac{1}{p_{\mu_{K},n}(|z|)}\to\frac{1}{P_{\mu_{K}}(|z|)}\). Thus there exists \(N\geq 0\) such that for all \(n\geq N\),
\[\left|\frac{1}{p_{\mu_{K},n}(|z|)}-\frac{1}{P_{\mu_{K}}(|z|)}\right|<\frac{1}{ 4y}.\]
Using lemma 13, we know that for all \(n\geq 1\),
\[\left|\frac{1}{p_{\mu,n}(|z|)}-\frac{1}{p_{\mu_{K},n}(|z|)}\right| =\frac{|p_{\mu,n}(|z|)-p_{\mu_{K},n}(|z|)|}{p_{\mu,n}(|z|)\cdot p _{\mu_{K},n}(|z|)}\leq\frac{|p_{\mu,n}(|z|)-p_{\mu_{K},n}(|z|)|}{p_{\mu,n-1}(| z|)\cdot p_{\mu_{K},n-1}(|z|)}\] \[\leq\|\mu-\mu_{K}\|\cdot|z|<\frac{1}{4y\max(|z|,1)}\cdot|z|\leq \frac{1}{4y}.\]
Hence for all \(n\geq N\),
\[\left|\frac{1}{p_{\mu,n}(|z|)}-\frac{1}{y}\right|\leq\left|\frac{1}{p_{\mu,n}(|z|)}-\frac{1}{p_{\mu_{K},n}(|z|)}\right|+\left|\frac{1}{p_{\mu_{K},n}(|z|)}-\frac{1}{P_{\mu_{K}}(|z|)}\right|+\left|\frac{1}{P_{\mu_{K}}(|z|)}-\frac{1}{y}\right|<\frac{1}{4y}+\frac{1}{4y}+\frac{1}{4y}=\frac{3}{4y}\]
implying that \(\frac{1}{p_{\mu,n}(|z|)}>\frac{1}{y}-\frac{3}{4y}=\frac{1}{4y}\) which is \(p_{\mu,n}(|z|)<4y\). Thus \(\{p_{\mu,n}(|z|)\}_{n=N}^{\infty}\) is an increasing sequence bounded above by \(4y\) and so by the monotone convergence theorem, there exists \(P_{\mu}(|z|)=\lim_{n\to\infty}p_{\mu,n}(|z|)\). Because \(P_{\mu}(|z|)\) exists, we know \(P_{\mu}(z)\) converges absolutely so \(P_{\mu}(z)\) also exists. Now using corollary 14, considering as \(k\to\infty\)
\[0\leq\left|P_{\mu}(|z|)-P_{\mu_{k}}(|z|)\right|\leq\|\mu_{k}-\mu\|\cdot|z| \cdot P_{\mu}(|z|)\cdot P_{\mu_{k}}(|z|)\to 0\cdot|z|\cdot P_{\mu}(|z|)^{2}=0\]
\[0\leq\left|P_{\mu}(z)-P_{\mu_{k}}(z)\right|\leq\|\mu_{k}-\mu\|\cdot|z|\cdot P_ {\mu}(|z|)\cdot P_{\mu_{k}}(|z|)\to 0\]
and so by squeeze theorem \(P_{\mu_{k}}(|z|)\to P_{\mu}(|z|)\) and \(P_{\mu_{k}}(z)\to P_{\mu}(z)\).
**Theorem 16**.: _If \((T,\mu)\) is a total order measure then for all \(z\in\mathbb{C}\),_
\[\overset{\circ}{P}_{\mu}(z)=e^{m_{d}z}\prod_{\alpha\in\mathcal{A}}(1+m_{\alpha}z)\quad\text{ and for all }|z|<\left(\sup_{\alpha\in\mathcal{A}}m_{\alpha}\right)^{-1},\quad\overset{\bullet}{P}_{\mu}(z)=e^{m_{d}z}\prod_{\alpha\in\mathcal{A}}\frac{1}{1-m_{\alpha}z}.\]
Proof.: First we know \(\mathcal{A}\) is countable so we define the enumeration \(\{\alpha_{1},\alpha_{2},\ldots\}=\mathcal{A}\). For each \(k\geq 1\), we define the measurable set \(F_{k}=T\setminus\{\alpha_{k},\alpha_{k+1},\ldots\}\). Next we define \(\mu_{k}=\mu\big{|}_{F_{k}}\) that is \(\mu_{k}\) is the restriction of \(\mu\) to \(F_{k}\). Notice that for any \(x\in T\), if \(\mu_{k}(x)>0\) then \(x\in F_{k}\) so \(\mu(\{x\})=\mu_{k}(\{x\})>0\) and \(x\in\mathcal{A}\). Thus \(x\in F_{k}\cap\mathcal{A}=\{\alpha_{1},\ldots,\alpha_{k-1}\}\). We conclude that \(\mu_{k}\) will have exactly the atoms \(\{\alpha_{1},\ldots,\alpha_{k-1}\}\). Then the diffuse mass of \(\mu_{k}\) will be
\[\|\mu_{k}\|-\sum_{i=1}^{k-1}m_{\alpha_{i}}=\|\mu\|-\sum_{i=k}^{\infty}m_{\alpha _{i}}-\sum_{i=1}^{k-1}m_{\alpha_{i}}=\|\mu\|-\sum_{i=1}^{\infty}m_{\alpha_{i}}=m _{d}\]
where \(m_{d}\) is the diffuse mass of \(\mu\). Hence by lemma 11,
\[\boldsymbol{\hat{P}}_{\mu_{k}}(Z)=e^{m_{d}Z}\prod_{i=1}^{k-1}\frac{1}{1-m_{ \alpha_{i}}Z}\quad\text{ and }\quad\hat{\hat{P}}_{\mu_{k}}(Z)=e^{m_{d}Z}\prod_{i=1}^{k-1}(1+m_{\alpha_{i}}Z).\]
Next we observe that as \(k\to\infty\)
\[\|\mu-\mu_{k}\|=\left\|\mu\big{|}_{F_{k}^{c}}+\mu\big{|}_{F_{k}}-\mu\big{|}_{F _{k}}\right\|=\left\|\mu\big{|}_{F_{k}^{c}}\right\|=\mu(F_{k}^{c})=\sum_{i=k}^ {\infty}m_{\alpha_{i}}\to 0\]
so \(\mu_{k}\to\mu\).
Let \(z\in\mathbb{C}\) and we will show that \(\lim_{k\to\infty}\overset{\circ}{P}_{\mu_{k}}(|z|)\) converges. For every \(k\geq 0\), we know \(1+m_{\alpha_{k}}|z|>1\) so the sequence \(\{\overset{\circ}{P}_{\mu_{k}}(|z|)\}\) is increasing. Further using the classical identity \(1+x\leq e^{x}\), we have
\[\log\left(\overset{\circ}{P}_{\mu_{k}}(|z|)\right)=\log\left(e^{m_{d}|z|}\prod_{i=1}^{k-1}(1+m_{\alpha_{i}}|z|)\right)\leq\log\left(e^{m_{d}|z|}\prod_{i=1}^{k-1}e^{m_{\alpha_{i}}|z|}\right)\] \[=\left(m_{d}+\sum_{i=1}^{k-1}m_{\alpha_{i}}\right)|z|\leq\left(m_{d}+\sum_{i=1}^{\infty}m_{\alpha_{i}}\right)|z|=\|\mu\|\cdot|z|\]
so \(\overset{\circ}{P}_{\mu_{k}}(|z|)\leq e^{\|\mu\|\cdot|z|}\). Thus \(\left\{\overset{\circ}{P}_{\mu_{k}}(|z|)\right\}_{k=0}^{\infty}\) is increasing and bounded above so \(\lim_{k\to\infty}\overset{\circ}{P}_{\mu_{k}}(|z|)\) converges. By proposition 15, we know that
\[\overset{\circ}{P}_{\mu}(z)=\lim_{k\to\infty}e^{m_{d}z}\prod_{i=1}^{k-1}(1+m_{\alpha_{i}}z)=e^{m_{d}z}\prod_{i=1}^{\infty}(1+m_{\alpha_{i}}z)=e^{m_{d}z}\prod_{\alpha\in\mathcal{A}}(1+m_{\alpha}z).\]
Next we define \(M:=\sup_{\alpha\in\mathcal{A}}m_{\alpha}\). Let \(z\in\mathbb{C}\) with \(|z|<\frac{1}{M}\) and we will show that \(\lim_{k\to\infty}\overset{\bullet}{P}_{\mu_{k}}(|z|)\) converges. For every \(k\geq 0\), because \(m_{\alpha_{k}}|z|<1\), we know \(\frac{1}{1-m_{\alpha_{k}}|z|}>1\) so the sequence \(\overset{\bullet}{P}_{\mu_{k}}(|z|)\) is increasing. Also because \(M|z|<1\), we know \(\frac{1}{1-M|z|}\) exists and \(\frac{1}{1-m_{\alpha_{k}}|z|}\leq\frac{1}{1-M|z|}\) for all \(k\). Again we can use the classical identity \(1+x\leq e^{x}\) to obtain
\[\log\left(\overset{\bullet}{P}_{\mu_{k}}(|z|)\right)=\log\left(e^{m_{d}|z|}\prod_{i=1}^{k-1}\frac{1}{1-m_{\alpha_{i}}|z|}\right)=\log\left(e^{m_{d}|z|}\prod_{i=1}^{k-1}\left(1+\frac{m_{\alpha_{i}}|z|}{1-m_{\alpha_{i}}|z|}\right)\right)\] \[\leq\log\left(e^{m_{d}|z|}\prod_{i=1}^{k-1}e^{\frac{m_{\alpha_{i}}|z|}{1-m_{\alpha_{i}}|z|}}\right)=m_{d}|z|+\sum_{i=1}^{k-1}\frac{m_{\alpha_{i}}|z|}{1-m_{\alpha_{i}}|z|}\leq\left(m_{d}+\sum_{i=1}^{\infty}\frac{m_{\alpha_{i}}}{1-m_{\alpha_{i}}|z|}\right)|z|\] \[\leq\left(\frac{m_{d}}{1-M|z|}+\sum_{i=1}^{\infty}\frac{m_{\alpha_{i}}}{1-M|z|}\right)|z|=\frac{m_{d}+\sum_{i=1}^{\infty}m_{\alpha_{i}}}{1-M|z|}|z|=\frac{\|\mu\|\cdot|z|}{1-M|z|}\]
so \(\overset{\bullet}{P}_{\mu_{k}}(|z|)\leq e^{\frac{\|\mu\|\cdot|z|}{1-M|z|}}\). Thus \(\left\{\overset{\bullet}{P}_{\mu_{k}}(|z|)\right\}_{k=0}^{\infty}\) is increasing and bounded above so \(\lim_{k\to\infty}\overset{\bullet}{P}_{\mu_{k}}(|z|)\) converges. Again by proposition 15, we know
\[\overset{\bullet}{P}_{\mu}(z)=\lim_{k\to\infty}e^{m_{d}z}\prod_{i=1}^{k-1}\frac{1}{1-m_{\alpha_{i}}z}=e^{m_{d}z}\prod_{i=1}^{\infty}\frac{1}{1-m_{\alpha_{i}}z}=e^{m_{d}z}\prod_{\alpha\in\mathcal{A}}\frac{1}{1-m_{\alpha}z}.\]
## 6 Examples
### N-Sided Die
Suppose that we have an evenly weighted die with \(n\) sides labelled \(1\) through \(n\). How many consecutive rolls do we expect to be less than the previous? What about less than or equal to? As above, we can define \(\overset{\bullet}{N}\) to be the number of consecutive rolls that are less than or equal to the previous and \(\overset{\circ}{N}\) as the number of consecutive rolls that are strictly less than the previous. Note that these will count the number of descents not the number of elements in the descent. The distribution for a die roll will have \(n\) atoms each with probability \(\frac{1}{n}\) and no diffuse mass. Thus by theorem 16, the plunge functions will be
\[\overset{\bullet}{P}(Z)=\frac{1}{\left(1-\frac{Z}{n}\right)^{n}}=\left(\frac{n}{n-Z}\right)^{n}=\left(1+\frac{Z}{n-Z}\right)^{n}\quad\text{ and }\quad\overset{\circ}{P}(Z)=\left(1+\frac{Z}{n}\right)^{n}.\]
Notice that as \(n\to\infty\), we get \(\overset{\bullet}{P}_{\mu}(Z),\overset{\circ}{P}_{\mu}(Z)\to e^{Z}\) which is the plunge function for the diffuse probability order measure. Additionally for \(1\leq k\leq n\), there are \(k-1\) elements less than \(k\) and \(k\) elements less than or equal to \(k\). Thus according to proposition 4 and theorem 16, the conditional plunge functions will be
\[\overset{\bullet}{P}(k;Z)=\frac{1}{\left(1-\frac{Z}{n}\right)^{k}}=\left(1+\frac{Z}{n-Z}\right)^{k}\quad\text{ and for }k>1,\quad\overset{\circ}{P}(k;Z)=\left(1+\frac{Z}{n}\right)^{k-1}.\]
The derivatives of these functions will be
\[\overset{\bullet}{P}^{\prime}(Z)=n\left(\frac{n}{n-Z}\right)^{n-1}\cdot\frac{n}{(n-Z)^{2}}=\left(\frac{n}{n-Z}\right)^{n+1}=\left(1+\frac{Z}{n-Z}\right)^{n+1}\]
\[\overset{\circ}{P}^{\prime}(Z)=n\left(1+\frac{Z}{n}\right)^{n-1}\cdot\frac{1}{n}=\left(1+\frac{Z}{n}\right)^{n-1}\]
\[\overset{\bullet}{P}^{\prime}(k;Z)=k\left(\frac{n}{n-Z}\right)^{k-1}\cdot\frac{n}{(n-Z)^{2}}=\frac{k}{n}\left(\frac{n}{n-Z}\right)^{k+1}=\frac{k}{n}\left(1+\frac{Z}{n-Z}\right)^{k+1}\]
\[\overset{\circ}{P}^{\prime}(k;Z)=(k-1)\left(1+\frac{Z}{n}\right)^{k-2}\cdot\frac{1}{n}=\frac{k-1}{n}\left(1+\frac{Z}{n}\right)^{k-2}.\]
Then by theorem 7, the expected values and variances will be
\[\mathbb{E}\left[\overset{\bullet}{N}\right]=\left(1+\frac{1}{n-1}\right)^{n}-2\qquad\text{var}\left(\overset{\bullet}{N}\right)=\left(1+\frac{1}{n-1}\right)^{n}-\left(1+\frac{1}{n-1}\right)^{2n}+2\left(1+\frac{1}{n-1}\right)^{n+1}\]
\[\mathbb{E}\left[\overset{\circ}{N}\right]=\left(1+\frac{1}{n}\right)^{n}-2\qquad\text{var}\left(\overset{\circ}{N}\right)=\left(1+\frac{1}{n}\right)^{n}-\left(1+\frac{1}{n}\right)^{2n}+2\left(1+\frac{1}{n}\right)^{n-1}.\]
For a standard six-sided die (\(n=6\)), the strict case evaluates to
\[\mathbb{E}\left[\overset{\circ}{N}\right]\approx 0.521626\qquad\text{var}\left(\overset{\circ}{N}\right)\approx 0.485815\qquad\sigma_{\overset{\circ}{N}}\approx 0.697004.\]
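These closed forms are straightforward to verify by simulation. The short Python sketch below (sample size and seed are arbitrary choices) estimates the mean run lengths for a fair six-sided die by Monte Carlo and compares them with the formulas from theorem 7.

```python
import random

n, trials = 6, 500_000        # illustrative choices
rng = random.Random(0)

def run_length(seq, strict):
    # number of consecutive rolls that are (strictly) no larger than the previous one
    k = 0
    while k + 1 < len(seq) and (seq[k] > seq[k + 1] if strict else seq[k] >= seq[k + 1]):
        k += 1
    return k

tot_weak = tot_strict = 0
for _ in range(trials):
    seq = [rng.randint(1, n) for _ in range(60)]   # 60 rolls: a run essentially never reaches the end
    tot_weak += run_length(seq, strict=False)
    tot_strict += run_length(seq, strict=True)

print("weak   :", tot_weak / trials, "  vs ", (1 + 1 / (n - 1))**n - 2)
print("strict :", tot_strict / trials, "  vs ", (1 + 1 / n)**n - 2)
```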
For the six-sided case, we may also modify our question. Instead we ask, how many consecutive rolls do we expect to be less than the previous _and of the same parity_? Then the order will not be total. Instead it will be the juxtaposition of the orders \(\{1,3,5\}\) and \(\{2,4,6\}\) and the measure will still allot a weight of \(\frac{1}{6}\) for each. Hence by theorem 9, the plunge functions and their derivatives will be
\[\hat{P}(Z)=2\left(1+\frac{Z}{6-Z}\right)^{3}-1\quad\text{ and }\quad\hat{P}(Z)=2 \left(1+\frac{Z}{6}\right)^{3}-1\quad\text{ as well as}\]
\[\hat{P}^{\prime}(Z)=\left(1+\frac{Z}{6-Z}\right)^{4}\quad\text{ and }\quad\hat{P}^{\prime}(Z)=\left(1+\frac{Z}{6}\right)^{2}.\]
Thus the expected values and variances will be
\[\mathbb{E}\left[\hat{N}\right]=0.456\qquad\text{var}\left(\hat{N}\right)=0. 571264\qquad\sigma_{\hat{N}}\approx 0.755820\]
\[\mathbb{E}\left[\hat{N}\right]\approx 0.175926\qquad\text{var}\left(\hat{N} \right)\approx 0.163495\qquad\sigma_{\hat{N}}\approx 0.404345.\]
### Dart Board with Bullseye
Suppose we are playing a game of darts on a dart board with a bullseye. For any two throws that land outside the bullseye, we compare their distances from the center to determine which is better. However for any throws landing in the bullseye, we say they are all equally good. How many consecutive throws do we expect to make that are better than the last (or as good as)? We define \(\overset{\bullet}{N}\) as the number of consecutive throws that are as good or better than the previous and \(\overset{\circ}{N}\) as the number of consecutive throws that are strictly better than the previous. If we assume \(p\in(0,1)\) is the probability of hitting the bullseye then the order will be equivalent to \([0,1)\) (where \(0\) is the bullseye). Then the measure \(\mu\) will assign \(\mu(\{0\})=p\) and \(\mu((0,1))=1-p\). We will assume the mass of \((0,1)\) is spread uniformly. Then according to theorem 16, the plunge functions and their derivatives will be
\[\hat{P}_{\mu}(Z)=\frac{e^{(1-p)Z}}{1-pZ}\quad\text{ and }\quad\hat{P}_{\mu}(Z)=e^{(1-p)Z} (1+pZ)\quad\text{ as well as}\]
\[\hat{P}^{\prime}_{\mu}(Z)=\frac{\left((1-p)(1-pZ)+p\right)e^{(1-p)Z}}{(1-pZ)^ {2}}\quad\text{ and }\quad\hat{P}^{\prime}_{\mu}(Z)=\left((1-p)(1+pZ)+p\right)e^{(1-p)Z}.\]
Further for any \(k\in(0,1)\) because \(k\) is not an atom and the diffuse mass is distributed uniformly, \(\mu([0,k])=\mu([0,k])=p+k(1-p)\). Thus using proposition 4 and theorem 16, the conditional plunge functions will be
\[\hat{P}_{\mu}(k;Z)=\frac{e^{k(1-p)Z}}{1-pZ}\quad\text{ and }\quad\hat{P}_{\mu}(k;Z) =e^{k(1-p)Z}(1+pZ)\quad\text{ as well as}\]
\[\hat{P}_{\mu}^{\prime}(k;Z)=\frac{[k(1-p)(1-pZ)+p]e^{k(1-p)Z}}{(1-pZ)^{2}}\quad \text{ and }\quad\hat{P}_{\mu}^{\prime}(k;Z)=\big{[}k(1-p)(1+pZ)+p\big{]}e^{k(1-p)Z}.\]
Next using theorem 7, the expected values and variances will be
\[\mathbb{E}\left[\hat{N}\right]=\frac{e^{1-p}}{1-p}-2\qquad\text{var}\left( \hat{N}\right)=e^{1-p}\frac{3-3p+2p^{2}-e^{1-p}}{(1-p)^{2}}\]
\[\mathbb{E}\left[\hat{N}\right]=e^{1-p}(1+p)-2\qquad\text{var}\left(\hat{N} \right)=e^{1-p}\big{(}3+3p-2p^{2}-(1+p)^{2}e^{1-p}\big{)}.\]
\[\mathbb{E}\left[\hat{N}\,\Big{|}\,X_{0}=k\right]=\frac{e^{k(1-p)}}{1-p}-1 \qquad\text{var}\left(\hat{N}\,\Big{|}\,X_{0}=k\right)=e^{k(1-p)}\frac{(1+2k )+(1-4k)p+2kp^{2}-e^{k(1-p)}}{(1-p)^{2}}\]
\[\mathbb{E}\left[\hat{N}\,\Big{|}\,X_{0}=k\right]=e^{k(1-p)}(1+p)-1\qquad \text{var}\left(\hat{N}\,\Big{|}\,X_{0}=k\right)=e^{k(1-p)}\left[(1+2k)+3p-2kp ^{2}-(1+p)^{2}e^{k(1-p)}\right].\]
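A quick simulation confirms the unconditional expected values. In the Python sketch below the bullseye probability \(p=0.2\) and the number of trials are illustrative choices, and a throw is encoded as \(0.0\) for a bullseye and otherwise as a uniform distance from the center in \((0,1)\).

```python
import math
import random

p, trials = 0.2, 400_000      # illustrative: bullseye probability and sample size
rng = random.Random(7)

def throw():
    # 0.0 encodes the bullseye; otherwise a uniform distance from the center in (0,1)
    return 0.0 if rng.random() < p else rng.random()

def run_length(strict):
    # consecutive throws better than (strict) or at least as good as (weak) the previous one
    prev, k = throw(), 0
    while True:
        cur = throw()
        better = cur < prev if strict else cur <= prev
        if not better:
            return k
        prev, k = cur, k + 1

weak = sum(run_length(False) for _ in range(trials)) / trials
strict = sum(run_length(True) for _ in range(trials)) / trials
print("weak   :", weak, "  vs ", math.exp(1 - p) / (1 - p) - 2)
print("strict :", strict, "  vs ", math.exp(1 - p) * (1 + p) - 2)
```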
### Geometric Distribution
Suppose that \(\{X_{i}\}_{i=0}^{\infty}\) are independently and identically distributed in \(\mathbb{N}\) according to a geometric distribution; that is, there exists \(p\in(0,1)\) such that for all \(n\in\mathbb{N}\), \(\mathbb{P}\left(X_{i}=n\right)=p(1-p)^{n}\). The corresponding probability measure \(\mu\) on \(\mathbb{N}\) will be atomic so \(\mathcal{A}=\mathbb{N}\). Thus using theorem 16, the plunge functions and their derivatives will be
\[\hat{P}_{\mu}(z)=\prod_{n=0}^{\infty}\frac{1}{1-p(1-p)^{n}z}\quad\text{ and }\quad\hat{P}_{\mu}(z)=\prod_{n=0}^{\infty}\left(1+p(1-p)^{n}z\right)\quad\text{ as well as }\]
\[\hat{P}_{\mu}^{\prime}(z)=\hat{P}_{\mu}(z)\sum_{n=0}^{\infty}\frac{p(1-p)^{n}} {1-p(1-p)^{n}z}\quad\text{ and }\quad\hat{P}_{\mu}^{\prime}(z)=\hat{P}_{\mu}(z)\sum_{n=0}^{\infty}\frac{p(1-p )^{n}}{1+p(1-p)^{n}z}\]
for all \(z\in\mathbb{C}\) with \(|z|<\frac{1}{p}\). For any \(N\in\mathbb{N}_{>0}\), we have the intervals \((-\infty,N]=\{0,1,\ldots,N\}\) and \((-\infty,N)=\{0,1,\ldots,N-1\}\) so by proposition 4 and theorem 16, the conditional plunge functions will be
\[\hat{P}_{\mu}(N;Z)=\prod_{n=0}^{N}\frac{1}{1-p(1-p)^{n}Z}\quad\text{ and }\quad\hat{P}_{\mu}(N;Z)=\prod_{n=0}^{N-1}\left(1+p(1-p)^{n}Z\right)\quad\text{ as well as }\]
\[\hat{P}_{\mu}^{\prime}(N;Z)=\hat{P}_{\mu}(N;Z)\sum_{n=0}^{N}\frac{p(1-p)^{n}}{ 1-p(1-p)^{n}Z}\quad\text{ and }\quad\hat{P}_{\mu}^{\prime}(N;Z)=\hat{P}_{\mu}(N;Z)\sum_{n=0}^{N-1}\frac{p(1-p )^{n}}{1+p(1-p)^{n}Z}.\]
Notice that \(P_{\mu}(z)\) and \(P_{\mu}(N;Z)\) from above are related to the well-known \(q\)-Pochhammer symbol \((a;q)_{n}\) which is defined to be
\[(a;q)_{n}=\prod_{j=0}^{n-1}(1-aq^{j})\]
for all \(n>0\). This function plays a significant role in the theory of \(q\)-series (see [1] for more information). Then we can re-write the plunge functions as
\[\hat{P}_{\mu}(z)=\frac{1}{(pz;1-p)_{\infty}}\quad\text{ and }\quad\hat{P}_{\mu}(z)=(-pz;1-p)_{\infty}\quad\text{ as well as }\]
\[\hat{P}_{\mu}(N;Z)=\frac{1}{(pZ;1-p)_{N+1}}\quad\text{ and }\quad\hat{P}_{\mu}(N;Z)=(-pZ;1-p)_{N}.\]
Now using theorem 7, the expected values and variances will be
\[\mathbb{E}\left[\hat{N}\right]=\frac{1}{(p;1-p)_{\infty}}-2\qquad\text{var} \left(\hat{N}\right)=\frac{1}{(p;1-p)_{\infty}}\left(1-\frac{1}{(p;1-p)_{ \infty}}+2\sum_{n=0}^{\infty}\left(\frac{1}{p(1-p)^{n}}-1\right)^{-1}\right)\]
\[\mathbb{E}\left[\hat{N}\right]=(-p;1-p)_{\infty}-2\qquad\text{var}\left(\hat{N }\right)=(-p;1-p)_{\infty}\left(1-(-p;1-p)_{\infty}+2\sum_{n=0}^{\infty}\left( \frac{1}{p(1-p)^{n}}+1\right)^{-1}\right)\]
\[\mathbb{E}\left[\hat{N}\,\middle|\,X_{0}=N\right] =\frac{1}{(p;1-p)_{N+1}}-2\] \[\operatorname{var}\left(\hat{N}\,\middle|\,X_{0}=N\right) =\frac{1}{(p;1-p)_{N+1}}\left(1-\frac{1}{(p;1-p)_{N+1}}+2\sum_{n=0 }^{N}\left(\frac{1}{p(1-p)^{n}}-1\right)^{-1}\right)\] \[\mathbb{E}\left[\hat{N}\,\middle|\,X_{0}=N\right] =(-p;1-p)_{N}-2\] \[\operatorname{var}\left(\hat{N}\,\middle|\,X_{0}=N\right) =(-p;1-p)_{N}\left(1-(-p;1-p)_{N}+2\sum_{n=0}^{N-1}\left(\frac{1} {p(1-p)^{n}}+1\right)^{-1}\right).\]
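The infinite \(q\)-Pochhammer products converge quickly, so the expected values can be evaluated by truncating the product and checked against a direct simulation. The Python sketch below uses the illustrative choices \(p=0.4\), \(200\) product factors, and \(300{,}000\) simulated runs.

```python
import random

p, trials, terms = 0.4, 300_000, 200   # illustrative choices
rng = random.Random(3)

def qpoch(a, q, n):
    # truncated q-Pochhammer (a; q)_n; the factors tend to 1 geometrically fast
    out = 1.0
    for j in range(n):
        out *= 1.0 - a * q**j
    return out

def geom():
    # P(X = k) = p (1 - p)^k for k = 0, 1, 2, ...
    k = 0
    while rng.random() >= p:
        k += 1
    return k

def run_length(strict):
    prev, k = geom(), 0
    while True:
        cur = geom()
        ok = cur < prev if strict else cur <= prev
        if not ok:
            return k
        prev, k = cur, k + 1

weak = sum(run_length(False) for _ in range(trials)) / trials
strict = sum(run_length(True) for _ in range(trials)) / trials
print("weak   :", weak, "  vs ", 1.0 / qpoch(p, 1 - p, terms) - 2)
print("strict :", strict, "  vs ", qpoch(-p, 1 - p, terms) - 2)
```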
## 7 Rearrangement Insensitivity
We will now take a closer look at a property of the above theory which may initially appear innocuous. Notice from the formulae arising in theorem 16 that the plunge function and strict plunge function of a total order measure is only dependent upon the mass of the atoms and diffuse mass, but not the relative positions of these masses. In particular, if we separate the measure into pieces
\[\mu=\lambda_{1}\parallel\ldots\parallel\lambda_{n}\]
and put it back together according to the permutation \(\sigma\),
\[\mu^{\prime}:=\lambda_{\sigma(1)}\parallel\ldots\parallel\lambda_{\sigma(n)}\]
then \(P_{\mu}(Z)=P_{\mu^{\prime}}(Z)\). Additionally all of the values determined by \(\lx@overaccentset{{\raisebox{-1.0pt}{\bullet}}}{P}_{\mu}(Z)\) and \(\lx@overaccentset{{\raisebox{-1.0pt}{\bullet}}}{P}_{\mu}(Z)\) will be preserved by this rearrangement as well. In the case of total orders at least, we can formalize this concept.
**Definition 6**.: Suppose there exists a property \(K_{\mu}\) associated with every total order measure \((T,\mu)\). Then we say this property is **rearrangement insensitive** if for any total order measures \((S,\lambda)\) and \((T,\mu)\), we have \(K_{\lambda\parallel\mu}=K_{\mu\parallel\lambda}\). Otherwise, we say the property is **sensitive to rearrangement**.
**Corollary 17**.: _The plunge function \(\lx@overaccentset{{\raisebox{-1.0pt}{\bullet}}}{P}_{\mu}(Z)\) and strict plunge function \(\lx@overaccentset{{\raisebox{-1.0pt}{\bullet}}}{P}_{\mu}(Z)\) are rearrangement insensitive properties. As a byproduct of this, the distributions of the plunge length \(\lx@overaccentset{{\raisebox{-1.0pt}{\bullet}}}{N}_{i}\) and strict plunge length \(\lx@overaccentset{{\raisebox{-1.0pt}{\bullet}}}{N}_{i}\) are also rearrangement insensitive._
Proof.: From theorem 8, the commutativity of formal power series multiplication gives
\[P_{\lambda\parallel\mu}(Z)=P_{\lambda}(Z)\cdot P_{\mu}(Z)=P_{\mu}(Z)\cdot P_{ \lambda}(Z)=P_{\mu\parallel\lambda}(Z).\]
Now proposition 3 states that the probability generating functions for \(\lx@overaccentset{{\raisebox{-1.0pt}{\bullet}}}{N}_{i}\) and \(\lx@overaccentset{{\raisebox{-1.0pt}{\bullet}}}{N}_{i}\) can be written in terms of the functions \(\lx@overaccentset{{\raisebox{-1.0pt}{\bullet}}}{P}_{\mu}(Z)\) and \(\lx@overaccentset{{\raisebox{-1.0pt}{\bullet}}}{P}_{\mu}(Z)\), respectively. Because the distributions are entirely determined by the probability generating functions, we know the distributions will be unperturbed by rearrangements as well.
It may seem to you that this insensitivity is not terribly remarkable. However it happens that it is not shared by many conditions of interest.
**Proposition 18**.: _Suppose \((T,\mu)\) is a probability total order measure and \(\{X_{i}\}_{i=0}^{\infty}\) is a random sequence in \(T\) independently and identically distributed as \(\mu\). Then for any \(n\geq 2\), the probability_
\[R_{n}(\mu):=\mathbb{P}\left(X_{n}\leq\min(X_{0},X_{1},\ldots,X_{n-1})\right)\]
_is sensitive to rearrangement. (Note the property \(R_{n}(\mu)\) can be readily generalized to total order measures with \(\left\|\mu\right\|\neq 1\).)_
Proof.: It suffices to consider a single counter-example. We will take \(S=\{x\}\) and \(T=\{y\}\) to be singleton orders. Then for \(p\in(0,1)\), we define \(\lambda\) and \(\mu\) as measures on \(S\) and \(T\), respectively, such that \(\lambda(\{x\})=p\) and \(\mu(\{y\})=1-p\). Therefore the total mass of \(\lambda\parallel\mu\) will be \(p+1-p=1\) so \(\lambda\parallel\mu\) will be a probability order measure. Now for any \(n\geq 2\), using the law of total probability and independence of \(X_{0},\ldots,X_{n-1}\)
\[R_{n}(\lambda\,\middle\|\,\mu) =\mathbb{P}\left(X_{n}\leq X_{0},X_{n}\leq X_{1},\ldots,X_{n}\leq X _{n-1}\right)\] \[=p\cdot\mathbb{P}\left(X_{n}\leq X_{0},\ldots,X_{n}\leq X_{n-1} \,\middle|\,X_{n}=x\right)+(1-p)\cdot\mathbb{P}\left(X_{n}\leq X_{0},\ldots,X_{n }\leq X_{n-1}\,\middle|\,X_{n}=y\right)\] \[=p\cdot\mathbb{P}\left(x\leq X_{0}\right)\cdots\mathbb{P}\left(x \leq X_{n-1}\right)+(1-p)\cdot\mathbb{P}\left(y\leq X_{0}\right)\cdots\mathbb{P }\left(y\leq X_{n-1}\right)\] \[=p\cdot 1^{n}+(1-p)\cdot(1-p)^{n}=p+(1-p)^{n+1}.\]
By a similar argument, one concludes that \(R_{n}(\mu\parallel\lambda)=(1-p)+p^{n+1}\). Then setting \(p=\frac{1}{3}\) and taking their difference
\[R_{n}(\lambda\parallel\mu)-R_{n}(\mu\parallel\lambda)=\left(\frac{1}{3}+\frac{2 ^{n+1}}{3^{n+1}}\right)-\left(\frac{2}{3}+\frac{1}{3^{n+1}}\right)=-\frac{1}{ 3}+\frac{2^{n+1}-1}{3^{n+1}}=\frac{2^{n+1}-1-3^{n}}{3^{n+1}}\]
For all \(n\geq 2\), \(3^{n}\geq 2^{n+1}\) so \(2^{n+1}-3^{n}-1<0\) implying \(R_{n}(\lambda\parallel\mu)\neq R_{n}(\mu\parallel\lambda)\).
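The two probabilities computed in this proof can also be checked by simulation. In the Python sketch below the order is encoded as \(\{0<1\}\), `low_mass` is the mass of the lower point, and \(p=\frac{1}{3}\), \(n=2\) match the counter-example above; the trial count is an arbitrary choice.

```python
import random

p, n, trials = 1 / 3, 2, 400_000   # matches the counter-example; trial count is illustrative
rng = random.Random(5)

def sample(low_mass):
    # two-point total order {0 < 1}; low_mass is the probability of the lower point
    return 0 if rng.random() < low_mass else 1

def R(low_mass):
    hits = 0
    for _ in range(trials):
        xs = [sample(low_mass) for _ in range(n + 1)]
        hits += xs[n] <= min(xs[:n])
    return hits / trials

print("lambda || mu :", R(p), "  vs ", p + (1 - p)**(n + 1))
print("mu || lambda :", R(1 - p), "  vs ", (1 - p) + p**(n + 1))
```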
The particular order measure chosen for this proof is not unique. One can check that any order measure that has non-trivial rearrangements will exhibit rearrangement sensitivity in \(R_{n}\). In fact, it is not difficult to find other events whose probability is sensitive to rearrangement. The condition for which events yield insensitive probabilities and which do not appears to be rather subtle. Notice from corollary 17 that \(\mathbb{P}\left(\overset{\star}{N}_{0}=1\right)=\mathbb{P}\left(X_{0}\geq X_{1 }<X_{2}\right)\) is insensitive. However, from proposition 18 above, we know \(R_{2}(\mu)=\mathbb{P}\left(X_{2}\leq X_{0},X_{2}\leq X_{1}\right)=\mathbb{P} \left(X_{0}\geq X_{1}\leq X_{2}\right)\) is sensitive.
|
2309.07225 | Breaking Free with AI: The Deconfinement Transition | Employing supervised machine learning techniques, we investigate the
deconfinement phase transition within $4$-dimensional $SU(2)$ Yang-Mills (YM)
theory, compactified on a small circle and endowed with center-stabilizing
potential. This exploration encompasses scenarios both without and with matter
in either the fundamental or adjoint representations. Central to our study is a
profound duality relationship, intricately mapping the YM theory onto an
XY-spin model with $\mathbb Z_p$-preserving perturbations. The parameter $p$
embodies the essence of the matter representation, with values of $p=1$ and
$p=4$ for fundamental and adjoint representations, respectively, while $p=2$
corresponds to pure YM theory. The logistic regression method struggles to
produce satisfactory results, particularly in predicting the transition
temperature. Contrarily, convolutional neural networks (CNNs) exhibit
remarkable prowess, effectively foreseeing critical temperatures in cases where
$p=2$ and $p=4$. Furthermore, by harnessing CNNs, we compute critical exponents
at the transition, aligning favorably with computations grounded in
conventional order parameters. Taking our investigation a step further, we use
CNNs to lend meaning to phases within YM theory with fundamental matter.
Notably, this theory lacks conventional order parameters. Interestingly, CNNs
manage to predict a transition temperature in this context. However, the
fragility of this prediction under variations in the boundaries of the training
window undermines its utility as a robust order parameter. This outcome
underscores the constraints inherent in employing supervised machine learning
techniques as innovative substitutes for traditional order parameters. | Christian Ermann, Stephen Baker, Mohamed M. Anber | 2023-09-13T18:00:51Z | http://arxiv.org/abs/2309.07225v1 | # Breaking Free with AI: The Deconfinement Transition
###### Abstract
Employing supervised machine learning techniques, we investigate the deconfinement phase transition within 4-dimensional \(SU(2)\) Yang-Mills (YM) theory, compactified on a small circle and endowed with center-stabilizing potential. This exploration encompasses scenarios both without and with matter in either the fundamental or adjoint representations. Central to our study is a profound duality relationship, intricately mapping the YM theory onto an XY-spin model with \(\mathbb{Z}_{p}\)-preserving perturbations. The parameter \(p\) embodies the essence of the matter representation, with values of \(p=1\) and \(p=4\) for fundamental and adjoint representations, respectively, while \(p=2\) corresponds to pure YM theory. The logistic regression method struggles to produce satisfactory results, particularly in predicting the transition temperature. Contrarily, convolutional neural networks (CNNs) exhibit remarkable powers, effectively foreseeing critical temperatures in cases where \(p=2\) and \(p=4\). Furthermore, by harnessing CNNs, we compute critical exponents at the transition, aligning favorably with computations grounded in conventional order parameters. Taking our investigation a step further, we use CNNs to lend meaning to phases within YM theory with fundamental matter. Notably, this theory lacks conventional order parameters. Interestingly, CNNs manage to predict a transition temperature in this context. However, the fragility of this prediction under variations in the boundaries of the training window undermines its utility as a robust order parameter. This outcome underscores the constraints inherent in employing supervised machine learning techniques as innovative substitutes for traditional order parameters.
## I Introduction
Confinement and mass generation in 4-dimensional Yang-Mills (YM) theory constitute an open problem and one of the most challenging in physics [1]. The difficulty of this problem is attributed to the fact that confinement happens at the strong coupling scale, where perturbative analysis is of little use. The technical definition of confinement is that the chromoelectric field lines between two probe quarks are collimated in a flux tube and that the potential between the probe quarks increases linearly with their separation. When YM theory is put in a heat bath, the flux tube melts down at a critical temperature, and the theory exhibits a phase transition (deconfinement) between the confined and deconfined phases. Again, deconfinement occurs in the strongly coupled regime, and no reliable analytical techniques are available to tackle the problem. Both confinement and deconfinement phenomena, though, can be seen in full-scale lattice simulations of YM theory. One of the important tasks of the simulations is to examine the nature of the transition, e.g., second/first order or a smooth crossover, and determine the universality classes of second-order transitions.
Studies of phase transitions are based on the classical Landau-Ginzburg paradigm, which requires two essential ingredients: (1) the invariance of the system under a global symmetry and (2) an order parameter that transforms nontrivially under the symmetry. In the case of pure \(SU(N)\) YM theory, the global symmetry is a \(\mathbb{Z}_{N}^{(1)}\) 1-form center symmetry that acts on non-contractible Wilson's loops [2], but otherwise, it leaves the YM action invariant. Adding dynamical matter to YM theory changes the center of the theory. In general, given matter in a representation of \(N\)-ality \(n\), the center symmetry is \(\mathbb{Z}_{\text{gcd}(N,n)}^{(1)}\). For example, while adjoint matter, \(n=0\), retains the full center symmetry, a theory with fundamental matter, \(n=1\), does not have a center. Thus, there is no order parameter, and hence, there is no meaningful distinction between the confined and deconfined phases.
Recently, machine learning (ML) algorithms have emerged as an alternative technique for studying phase transitions; see [3; 4] for reviews. In this method, one simply takes a large number of lattice images in both the deep-ordered and deep-disordered phases. The algorithm is trained to distinguish between the two phases and can then be used to predict the critical (transition) temperature and the critical exponents. Interestingly, one does not necessarily have to define the critical region with absolute precision. The presence of an actual transition implies that its characteristics remain unaffected by a reasonable adjustment of the boundaries of the critical window.
ML techniques have also been used to study phase transitions in YM theory; see, e.g., [5; 6; 7; 8; 9; 10; 11; 12; 13; 14]. In this paper, we apply these techniques to study the deconfinement transition in 4-dimensional \(SU(2)\) YM theory (with or without matter) by mapping the original theory to simple 2-dimensional XY-spin models with \(\mathbb{Z}_{p}\)-preserving perturbations. This is a true mapping (or, if you wish, a duality) between the original YM theory and the XY-spin models via well-under-control effective field theory techniques. This mapping is rather lengthy and technical, and the methods used for the mapping span more than a decade of several works. See, e.g., [15; 16; 17; 18; 19; 20] for the technical details and [21; 22] for reviews. We refrain from discussing the details in this article at great length, giving only a synopsis of the methods used and referring the interested reader to the literature. It is important to emphasize that the XY-spin models exhibit all the symmetries of the original YM theory (with or without matter) and genuinely inherit the important effective degrees of freedom relevant to the deconfinement transition. With all these important features, the XY-spin models are void of the complexities associated with YM theories, e.g., the topological sectors, the difficulty of simulating fermions, etc. Thus, the mapping provides a playground that enables us to examine the ML techniques in studying the deconfinement transition without dealing with all the complexities associated with simulating the full-fledged YM theories. Different values of \(p\) correspond to YM theories with distinct matter content: \(p=2,4,1\) correspond to pure YM theory, YM theory with adjoint fermions, and YM theory with fundamental fermions, respectively. As a reference example, we may remove the \(\mathbb{Z}_{p}\)-preserving perturbations to recover the pure XY-spin model. This model has a continuous \(U(1)\) symmetry, algebraic long-range order, and exhibits the famous Kosterlitz-Thouless (KT) phase transition [23].
We apply supervised ML techniques to evaluate their suitability, compared to conventional methods (i.e., the Landau paradigm and order parameters), in detecting the deconfinement transition. In this endeavor, we examine two techniques: logistic regression and convolutional neural networks (CNN). The first method yields excellent results for detecting the KT transition in the pure XY-spin model but fails to detect transitions in XY-spin models with \(\mathbb{Z}_{p}\)-preserving perturbations. The logistic regression method also fails in detecting the Ising model phase transition, as has been reported in the literature. Thus, one sees from all these examples the failure of this method in detecting transitions in systems with discrete symmetries. On the other hand, we show that CNN is reliable in detecting the phase transitions in systems with discrete symmetries. Simply, we train CNN to distinguish between spin images taken from the ordered and disordered phases, excluding the critical region where the transition might happen. Then, by feeding the CNN with images from the critical region, the transition temperature is determined as the temperature at which an image has a fifty-fifty chance to belong to either the ordered or disordered phases. Thus, we can define a predictive function of temperature, \(f(T)\), which serves as an alternative to the traditional order parameter, such that the transition temperature \(T_{c}\) is defined via the condition \(f(T_{c})\)=0.5. We may also use the derivative \(df(T)/dT\) as the susceptibility associated with \(f(T)\), and we show that this quantity peaks near the transition temperature precisely the same way the susceptibility of the magnetization (the traditional order parameter) peaks. Furthermore, we use finite-size scaling along with the susceptibility of the predictive function to calculate the critical exponents of our systems. For example, we examined that this method, when applied to the XY-spin model with \(\mathbb{Z}_{4}\)-preserving perturbation (which is dual to YM theory with adjoint fermions), yields results consistent (within the accuracy we were able to attain) with the critical exponents calculated from more traditional methods.
It has long been recognized in the literature (e.g., [24]) that YM theory coupled with fermions in the defining (fundamental) representation lacks a clearly defined order parameter. Consequently, the notion of a phase transition in this theory is ill-defined. Our investigation addresses this proposition by harnessing machine learning techniques. Notably, we examine the XY-spin model with \(p=1\), drawing an equivalence to YM theory with fundamental fermions. Intriguingly, employing a CNN trained on images resembling ordered and disordered phases yields a transition temperature for the system. However, our findings reveal that this temperature's stability is limited, contingent upon how we define the boundaries of the training data window. This crucially undermines the ascription of genuine transitional behavior to this theory. This contrasts with the situation in XY-spin models with \(p=2,4\), wherein the critical temperature remains relatively insensitive to the boundaries of the training data window.
This paper is organized as follows. In Section II, we review the duality relation that enables us to map the original YM theory into XY-spin models with perturbations. Given the extended literature and the long calculations needed to verify the duality, we skip the derivation but only provide the essential ingredients/physics behind the duality. In section III, we apply two methods, namely, the logistic regression and convolution neural networks, to study phase transitions in the XY-spin models. We conclude by giving an outlook and possible extensions of our study in Section IV.
## II Theory and Formulation
We consider a 4-dimensional \(SU(2)\) Yang-Mills (YM) theory on \(\mathbb{R}^{3}\times\mathbb{S}^{1}_{L}\), where \(\mathbb{S}^{1}_{L}\) is a spatial circle1 of circumference \(L\). We take \(L\) to be much smaller than the inverse strong-coupling scale of the theory \(\Lambda\), i.e., \(L\Lambda\ll 1\). As we argue below, in this limit, the theory becomes weakly coupled and amenable to semi-classical analysis. Throughout this work, we shall consider three cases: pure YM theory, YM theory with adjoint fermions, and YM theory with fundamental fermions. We also impose periodic boundary conditions on \(\mathbb{S}^{1}_{L}\) for both gauge fields and fermions2. The three different theories will help us draw conclusions about the usefulness of the machine learning paradigm in understanding phase transitions in general and deconfinement phase transitions in particular.
Footnote 1: It is important to distinguish between the spatial and temporal circles. The distinction will be clear as we spell out the details.
Footnote 2: Imposing periodic boundary conditions on gauge fields is the standard choice, corresponding to thermal gauge theory. Notice, at this level, that it really makes no difference whether \(\mathbb{S}^{1}_{L}\) is temporal or spatial in pure YM theory. The distinction, however, is manifest once we endow the theory with fermions that satisfy periodic boundary conditions. The latter case corresponds to the twisted partition function: \(\mathcal{Z}=\mathrm{tr}\left[e^{-HL}(-1)^{F}\right]\), where \(F\) is the fermion number.
Pure YM theory on a large compact manifold (large compared to \(\Lambda^{-1}\)) has a 1-form center symmetry \(\mathbb{Z}^{(1)}_{2}\) that acts on non-contractible Wilson's loops3. The latter are the order parameter for the confinement/deconfinement phase transition. Upon compactifying one of the large directions over a small circle, the 1-form symmetry bifurcates into a 1-form \(\mathbb{Z}^{(1)}_{2}\) and a 0-form \(\mathbb{Z}^{(0)}_{2}\) symmetry. The 1-form symmetry acts on spatial Wilson's loops in \(\mathbb{R}^{2}\) (we imagine compactifying \(\mathbb{R}^{3}\) on a large torus, see Footnote 3), while the 0-form symmetry \(\mathbb{Z}^{(0)}_{2}\) acts on the dimensionally-reduced Polyakov's loop4 that wraps \(\mathbb{S}^{1}_{L}\).
Footnote 3: The prototype example is to think of the 4-manifold as a 4-torus. Since the 4-torus has non-contractible directions (cycles), one can rigorously define non-contractible Wilson’s loops and 1-form symmetries. When the length of the cycles is much larger than \(\Lambda^{-1}\), practically speaking, the manifold approaches \(\mathbb{R}^{4}\). Therefore, throughout this work, when we speak about \(\mathbb{R}^{n}\) space, the reader should think of a large torus with a cycle length approaching infinity.
Footnote 4: The Polyakov’s loop wrapping \(\mathbb{S}^{1}_{L}\) is given by \(\mathrm{tr}_{\square}\!\left[e^{i\oint_{\mathbb{L}^{1}_{L}}A^{(1)}}\right]\). When \(\mathbb{S}^{1}_{L}\) is small, and the fluctuations of \(A^{(1)}\) along \(\mathbb{S}^{1}_{L}\) are not sizable, then we may choose \(A^{(1)}\) to be in the \(\tau^{3}\) (\(\tau^{3}\) being the third Pauli matrix) color direction and write \(\Phi=\oint_{\mathbb{L}^{1}_{L}}A^{(1)}\). Thus, the 1-form field \(A^{(1)}\) along \(\mathbb{S}^{1}_{L}\) is dimensionally reduced to a 0-form field, and the component of the 1-form symmetry along the same direction becomes a 0-form symmetry.
Like pure YM theory, a theory with adjoint fermions will also admit 1-form center symmetry \(\mathbb{Z}^{(1)}_{2}\). However, a theory with fundamentals does not admit a center symmetry; one cannot rigorously define the notion of confinement in a theory with fundamentals. One of the tasks of this paper is to examine whether machine learning can provide an alternative or generalized notion of confinement that may be used in this class of theories.
Since the circle is much smaller than all other length scales, we may try to write down a 3-dimensional effective field theory by integrating out a tower of heavy Kaluza-Klein excitations along \(\mathbb{S}^{1}_{L}\). In the case of pure YM, this results in a thermal field theory with destabilized 0-form center symmetry; this is the celebrated deconfined phase5 of pure YM. It is needless to say that we are not interested in this theory since it is far from being weakly coupled. To restore the center symmetry and force the theory into the weak coupling regime, we need to add by hand a double-trace deformation. Let \(\Omega=\mathrm{tr}_{\square}\left[e^{i\Phi}\right]\equiv\mathrm{tr}_{\square} \!\left[e^{i\oint_{\mathbb{L}^{1}_{L}}A^{(1)}}\right]\) be the Polyakov loop wrapping \(\mathbb{S}^{1}_{L}\), then adding the term \(\sum_{n=1}a_{n}|\Omega^{n}|^{2}\), for positive and large enough values of \(a_{n}\), will restore the 0-form center symmetry. Effectively, adding the double-trace deformation ensures that the total effective potential \(V(\Phi)\) is minimized at the center of the Weyl chamber. Therefore, all the W-bosons are massive6. We call this class of theories deformed YM (dYM)
Footnote 5: The deconfinement of pure YM happens at a temperature \(T=L^{-1}\sim\Lambda\). Above this temperature, the temporal Wilson’s loops obey the perimeter rather than the area law.
Footnote 6: Without the double-trace deformation, the potential is minimized at the boundary of the Weyl chamber. This has the effect of keeping some gauge modes massless, and thus, the theory stays in its strongly-coupled regime.
The above arguments apply even if we endow the theory with fundamental fermions; we still need to add a double-trace deformation to stabilize the center. We denote the theory with fundamentals and deformation by dYM(F). The situation, however, is different for adjoint fermions. If the latter obey periodic boundary conditions on \(\mathbb{S}^{1}_{L}\), integrating them out will generate a center-stabilizing potential7. This class of theories is known as QCD(adj).
Footnote 7: Here we assume that the number of the left-handed Weyl adjoint flavors \(n_{f}>1\). The case \(n_{f}=1\) is the pure supersymmetric YM theory, and we do not consider it here.
Whether we add adjoints or double-trace deformations, in both cases, the theory abelianizes. This happens because the adjoint field \(\Phi\) breaks \(SU(2)\) down to \(U(1)\). The 3-dimensional \(U(1)\) theory can be dualized and described by a compact scalar (dual photon) \(\sigma\) via the duality relation \(F^{3}_{\mu\nu}=\frac{g^{2}}{4\pi L}\epsilon_{\mu\nu\alpha}\partial_{\alpha}\sigma\). The superscript 3 denotes the color direction, \(g\) is the 4-dimensional coupling constant, and the Greek letters run over \(0,1,2\), keeping in mind that we are using a Euclidean description. The photon kinetic energy term is
\[\mathcal{L}_{U(1)}=\frac{g^{2}}{16\pi^{2}L}\left(\partial_{\mu}\sigma\right)^{2}\,, \tag{1}\]
and the compact scalar has a period8\(\frac{2\pi}{\sqrt{2}}\).
Footnote 8: This period corresponds to the normalization \(\mathrm{tr}_{\square}\left[e^{i\Phi}\right]=\delta_{ab}\), where \(\delta_{ab}\) is the Kronecker delta.
The story continues, thanks to the existence of monopole and/or composite instantons. These are non-perturbative objects that extremize the Euclidean path integral, and their existence is guaranteed because of the nontrivial second homotopy group \(\Pi_{2}(SU(2)/U(1))=\mathbb{Z}\). The dominating objects in both pure YM theory and YM theory with fundamentals are monopole instantons. They carry magnetic charge \(\pm\sqrt{2}\) under \(U(1)\) and have action \(S_{m}=\frac{4\pi^{2}}{g^{2}}\). The 't Hooft vertex of the monopole operators, modulo a prefactor, is \(e^{\pm i\sqrt{2}\sigma}e^{-S_{m}}\), where \(e^{-S_{m}}\) is the monopole fugacity. The dominating objects in YM theory with adjoints are the bions. These are molecules composed of two monopole instantons; they carry twice the magnetic charge, \(Q_{b}=\pm 2\sqrt{2}\), and twice the action, \(S_{b}=\frac{8\pi^{2}}{g^{2}}\), i.e., their 't Hooft vertex is \(e^{\pm i2\sqrt{2}\sigma}e^{-S_{b}}\). The proliferation of monopoles or bions can be incorporated in the path integral, which results in the 3-dimensional effective actions:
\[\mathcal{L}_{\text{dYM, dYM(F)}} = \frac{g^{2}}{16\pi^{2}L}\left[(\partial_{\mu}\sigma)^{2}+e^{-\frac{4\pi^{2}}{g^{2}}}\cos\left(\sqrt{2}\sigma\right)\right]\,,\] \[\mathcal{L}_{\text{QCD(adj)}} = \frac{g^{2}}{16\pi^{2}L}\left[(\partial_{\mu}\sigma)^{2}+e^{-\frac{8\pi^{2}}{g^{2}}}\cos\left(2\sqrt{2}\sigma\right)\right]\,. \tag{2}\]
These Lagrangians show that the proliferation of the magnetic charge generates a mass gap. Further studies of these models show in a remarkable way how confinement happens in YM theories on \(\mathbb{R}^{3}\times\mathbb{S}_{L}^{1}\).
It is important to emphasize, again, that the Lagrangians in (2) describe an effective field theory at zero temperature; \(\mathbb{S}_{L}^{1}\) is by no means a thermal circle. Yet, one wonders about the behavior of this theory as we put it in a heat bath at temperature \(T\). This amounts to identifying the effective degrees of freedom as we compactify the temporal direction over a circle of circumference \(\beta=\frac{1}{T}\). Thus, now the theory lives on \(\mathbb{R}^{2}\times\mathbb{S}_{L}^{1}\times\mathbb{S}_{\beta}^{1}\). It was realized in a series of works [15; 16; 17; 18; 19; 20] that the W-bosons9 play an important role at temperatures near the deconfinement transition. These particles are electrically charged, with electric charge \(Q_{W}=\pm\sqrt{2}\) under \(U(1)\), and very heavy, of mass \(M_{W}=\frac{\pi}{L}\), and do not participate in the 3-dimensional Lagrangian 2 but become important near the transition. The idea is that these bosons come with a Boltzmann suppression factor \(e^{-\frac{M_{W}}{T}}\), which, near the transition, is comparable to the monopole or bion fugacities. Therefore, the problem reduces to studying a 2-dimensional electric-magnetic Coulomb gas of W-bosons and magnetic charges (either monopoles or bions). The vacuum is dominated by magnetic charges at low temperatures (magnetically disordered phase) and by electric charges at high temperatures (electrically disordered phase). The competition between the electric and magnetic charges is ultimately responsible for the phase transition. In addition to the W-bosons, the first excited Kaluza-Klein fundamental and adjoint fermions contribute to the Coulomb gas. The mass and charges of the fundamentals are \(M_{F}=\frac{\pi}{2L}\), \(Q_{F}=\pm\frac{1}{\sqrt{2}}\), while those of the adjoint fermions are equal to the corresponding values of the W-boson: \(M_{adj}=\frac{\pi}{L}\), \(Q_{adj}=\pm\sqrt{2}\).
Footnote 9: The W-bosons appear due to the Higgsing of \(SU(2)\) at the center of the Weyl chamber, see Footnote 6.
The electric-magnetic Coulomb gas can be mapped to a 2-dimensional XY-spin model with \(\mathbb{Z}_{p}\)-preserving perturbations. The partition function of this system is
\[\mathcal{Z}\left[y,p,n=\pm 1\right]=\left[\prod_{I}\int_{0}^{2 \pi}d\theta_{I}\right]e^{-H\tilde{T}}\,,\] \[H=-\sum_{\langle I,J\rangle}\cos\left(\theta_{I}-\theta_{J} \right)-y\sum_{I}\cos(p\,\theta_{I})\,, \tag{3}\]
and \(\tilde{T}\) is a dimensionless temperature. We may also define a new dimensionless temperature \(T\equiv 1/\tilde{T}\), warning the reader that the newly defined \(T\) is not the physical temperature of the system, but both can be related. Moving forward, the symbol \(T\) will denote the dimensionless temperature, and we will explicitly indicate instances when the physical temperature is being employed. The first term in the Hamiltonian \(H\) is the kinetic energy. The angular variable \(\theta_{I}\) is the spin, which is localized at site \(I\) and takes values \(0\leq\theta_{I}<2\pi\). The sum in the first term is restricted to the nearest-neighbor spins. Notice that the kinetic energy is invariant under the continuous shift symmetry (related to the U(1) symmetry) \(\theta_{I}\rightarrow\theta_{I}+C\), \(C\in[0,2\pi)\). The compactness of the angular variable allows the system to have vortices \(\oint d\theta=2\pi n\), \(n\in\mathbb{Z}\). The fugacity of the vortices is implicit and practically controlled by the lattice spacing, i.e., the UV cutoff. Yet, only the lower-order vortices with winding number \(n=\pm 1\) will dominate. The second term in \(H\) is the perturbation, which breaks the continuous shift symmetry down to \(\mathbb{Z}_{p}\). The coefficient \(y\) controls the strength of perturbations.
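To make the model of Eq. (3) concrete, the following minimal Python sketch evaluates the Hamiltonian for a given spin configuration. Periodic boundary conditions and the specific array layout are our assumptions; the text does not fix these details.

```python
import numpy as np

def xy_energy(theta, y, p):
    """Hamiltonian of Eq. (3) for a configuration theta of shape (N, N):
    nearest-neighbour XY coupling plus a Z_p-preserving perturbation of strength y.
    Periodic boundary conditions are assumed; each bond is counted once."""
    kinetic = -np.sum(np.cos(theta - np.roll(theta, 1, axis=0))) \
              - np.sum(np.cos(theta - np.roll(theta, 1, axis=1)))
    perturbation = -y * np.sum(np.cos(p * theta))
    return kinetic + perturbation

# Example: a random 32 x 32 configuration for the p = 2 (deformed Yang-Mills) case.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=(32, 32))
print(xy_energy(theta, y=1.0, p=2))
```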
The reader will notice that the Hamiltonian in Eq. (3) resembles the system in Eq. (2). A dimensional reduction of the latter from 3 to 2 dimensions reproduces an almost identical form of Eq. (3) (after discretization and rescaling of variables). Although this is superficially true, the exact connection between Eqs. (3) and (2) is more involved. For example, the naive dimensional reduction of QCD(adj) Lagrangian allows for lowest-order vortices to dominate the system; the vortices, in this case, are the fundamental fermions. Obviously, these excitations do not exist in QCD(adj), and thus, the naive dimensional reduction of Eq. (2) does not lead to the desired physical properties. One overcomes this problem by using a T-duality, which maps the electric and magnetic charges to each other. The interested reader is referred to [20] for the details of this duality. Here, it suffices to say that Eq. (3) is the T-dual of Eq. (2); this is true for
dYM, dYM(F), and QCD(adj). Now, we mention how the different values of \(p\) capture the physics of the 3 distinct theories we have.
1. \(p=1\). This case corresponds to dYM(F). The perturbation term \(y\sum_{I}\cos\theta_{I}\) accounts for the fundamental quarks with fugacity \(y\tilde{T}\). In principle, one should also add the term \(\sum_{I}\cos 2\theta_{I}\) to account for the W-bosons. The fugacity of the latter, however, is exponentially suppressed compared to the fundamental fermions10 and can be ignored. The unit winding vortices \(n=\pm 1\) are the magnetic monopoles. This model does not exhibit any symmetry since the shift symmetry \(\theta_{I}\rightarrow\theta_{I}+C\) is broken by the perturbations to nothing. This is exactly what we expect in a theory with fundamental charges since the theory does not admit a center symmetry. Footnote 10: The fugacity of the fundamentals is \(\sim e^{-\frac{M_{F}}{T}}=e^{-\frac{\pi}{2LT}}\), while the W-bosons fugacity is \(\sim e^{-\frac{M_{W}}{T}}=e^{-\frac{\pi}{LT}}\) (here, \(T\) is the physical temperature). We easily see that the latter is exponentially suppressed compared to the former.
2. \(p=2\). This corresponds to dYM, where the perturbation term corresponds to the W-bosons. Again, The unit winding vortices are the magnetic monopoles. The theory exhibits the \(\mathbb{Z}_{2}\) symmetry: \(\theta_{I}\rightarrow\theta_{I}+\pi\). This is the dimensionally-reduced 0-form part of the \(\mathbb{Z}_{2}^{(1)}\) symmetry, the order parameter of the confinement/deconfinement transition. The latter acts on Polyakov's loop that wraps \(\mathbb{S}_{\beta}^{1}\).
3. \(p=4\). This corresponds to QCD(adj). The perturbation term \(y\sum_{I}\cos 4\theta_{I}\) accounts for both the W-bosons and the first excited Kaluza-Klein mode of adjoint fermions (both have the same mass and, hence, the same fugacity). The unit winding vortices \(n=\pm 1\) are the magnetic bions (one needs to study the T-duality that acts on the electric-magnetic charges to see why this is the case compared to dYM and dYM(F)). QCD(adj) enjoys two symmetries: the 0-form discrete chiral symmetry \(\mathbb{Z}_{2}^{d\chi}\) that acts on the adjoint fermions and the 1-form center symmetry \(\mathbb{Z}_{2}^{(1)}\) that acts on the Polyakov's loop that wraps \(\mathbb{S}_{\beta}^{1}\). As we map the theory into the XY-spin model, \(\mathbb{Z}_{2}^{d\chi}\) chiral and \(\mathbb{Z}_{2}\) center are enhanced to \(\mathbb{Z}_{4}\).
In all cases, exciting a vortex costs energy \(\mathcal{O}(\tilde{T})\) (in dimensionless units), which means that vortices (the magnetic charges) are suppressed at high temperatures. On the other hand, the fugacity of the electric charges \(\sim y\tilde{T}\) increases with temperature \(\tilde{T}\), and therefore, they dominate at high temperatures \(\tilde{T}\). This is exactly the expected behavior in all theories discussed in this work.
In addition to the above cases, it will also be instructive to include the case of the pure XY-spin model, setting \(y=0\). This choice does not correspond to YM theory since it does not account for the electric charges. The pure XY-spin model, however, enjoys an exact continuous shift symmetry, and according to the Mermin-Wagner-Coleman theorem, it does not have true long-range order. Yet, it exhibits the celebrated Kosterlitz-Thouless (KT) phase transition.
The traditional method in studying phase transitions is to examine the expectation value of the order parameter, which in this case is given by \(\langle\sum_{I}e^{i\theta_{I}}\rangle\), and its higher moments, e.g., the susceptibility. This method will serve as the basis of comparison with the machine learning techniques that we study in the next section.
## III Deconfinement via Supervised Learning
In this section, we apply the methods of supervised machine learning to the XY-spin models with \(\mathbb{Z}_{p}\) perturbations, taking \(p=1,2,4\). We are also interested in studying the pure XY-spin model11. Our investigation aims to answer the following questions.
Footnote 11: The proliferation of vortices in this model was used in [25] to train a neural network to distinguish between the ordered and disordered phases.
1. We test the suitability of two supervised machine learning techniques to identify phase transitions in our systems: logistic regression and convolution neural network (CNN). In particular, we examine the accuracy of both techniques in determining the critical temperature and its dependence on the optimization parameters.
2. We use the predictability function, which predicts the probability of whether we are in the ordered/disorder phase, as an alternative to the conventional order parameter to calculate the critical exponents of the \(p=4\) system.
3. Of particular importance is the \(p=1\) case, which corresponds to dYM(F). It is well known that this theory does not have an order parameter, and thus, it can only exhibit a smooth cross-over instead of a sharp transition. It is interesting to examine whether machine learning techniques can identify a hidden "order parameter" beyond the classical Landau-Ginzburg paradigm that can distinguish between different phases.
To train and test our machine-learning algorithms, we generate a set of data points using Monte Carlo techniques and the Metropolis algorithm. We consider our models on an \(N\times N\) lattice12 and generate \(10^{4}\) states at every temperature \(T\). The states are recorded in every Monte Carlo sweep and randomized to avoid fitting spurious correlations.
Footnote 12: The reader should not confuse the lattice size \(N\) with \(N\) of the gauge group \(SU(N)\).
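For illustration, a single-site Metropolis sweep consistent with the partition function in Eq. (3) could look as follows. The uniform proposal, the periodic boundary conditions, and the update schedule are our assumptions; only the Hamiltonian and the use of the Metropolis algorithm are specified in the text. With the conventions above, the accept/reject weight \(e^{-\Delta H/T}\) with \(T=1/\tilde{T}\) reproduces the Boltzmann factor \(e^{-H\tilde{T}}\).

```python
import numpy as np

def metropolis_sweep(theta, T, y, p, rng):
    """One sweep (N*N proposed single-site updates) for the Hamiltonian of Eq. (3)
    at dimensionless temperature T = 1/T_tilde, with periodic boundary conditions."""
    N = theta.shape[0]
    for _ in range(N * N):
        i, j = rng.integers(N), rng.integers(N)
        nbrs = np.array([theta[(i + 1) % N, j], theta[(i - 1) % N, j],
                         theta[i, (j + 1) % N], theta[i, (j - 1) % N]])
        old, new = theta[i, j], rng.uniform(0.0, 2.0 * np.pi)
        e_old = -np.sum(np.cos(old - nbrs)) - y * np.cos(p * old)
        e_new = -np.sum(np.cos(new - nbrs)) - y * np.cos(p * new)
        if e_new <= e_old or rng.random() < np.exp(-(e_new - e_old) / T):
            theta[i, j] = new
    return theta

# Thermalize a 32 x 32 lattice at T = 1.0 for the p = 4 (QCD(adj)) case.
rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 2.0 * np.pi, size=(32, 32))
for _ in range(100):
    theta = metropolis_sweep(theta, T=1.0, y=1.0, p=4, rng=rng)
```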
Both logistic regression and CNN were used to predict the critical temperatures of the simulated systems. In both cases, the classifiers were trained on data from low and high-temperature regions, far away from the critical temperature. After being trained, the models were then used to predict the critical temperature and critical exponents, where the system crosses from the low-temperature phase to the high-temperature phase.
### Logistic Regression
Logistic regression is a technique used in classification tasks; for a given state of the XY-spin model, with or without perturbations, we would like to identify the phase, i.e., ordered/disordered. Consider the data set \(\{y_{i},\mathbf{x}_{i}\}\) with a binary label \(y_{i}\in\{0,1\}\) for the disordered/ordered phase, respectively, where \(\mathbf{x}_{i}\) is the 2-dimensional \(N\times N\) array of the spin state flattened into a 1-dimensional array. We soften the classifier by considering the sigmoid13 function \(\sigma(x)=1/(1+\exp(-x))\), which yields a continuous range \(0\leq\sigma\leq 1\) instead of the binary \(\{0,1\}\). Values of \(\sigma\in(0,\frac{1}{2})\) are considered to be in the disordered phase, and values of \(\sigma\in(\frac{1}{2},1)\) are in the ordered phase.
Footnote 13: We warn the reader not to confuse the sigmoid function with the dual photon.
According to the Bayesian methods, the likelihood of observing the data set \(\{y_{i},\mathbf{x}_{i}\}\) is given by the probability function:
\[P(\{y_{i},\mathbf{x}_{i}\}|\mathbf{w})=\prod_{i=1}^{n}\left[\sigma\left(\mathbf{x}_{i} \cdot\mathbf{w}\right)\right]^{y_{i}}\left[1-\sigma\left(\mathbf{x}_{i}\cdot\mathbf{w} \right)\right]^{1-y_{i}}\,, \tag{4}\]
for some weights \(\mathbf{w}\). Then, one readily defines the cost (error) function (also known as cross-entropy):
\[C(\mathbf{w})=\sum_{i=1}^{n}\left\{-y_{i}\log\sigma\left(\mathbf{x}_{i}\cdot\mathbf{w}\right)-(1-y_{i})\log\left[1-\sigma\left(\mathbf{x}_{i}\cdot\mathbf{w}\right)\right]\right\}\,. \tag{5}\]
The weights are learned (determined) by minimizing the cost function14. However, for higher-dimensional data, i.e., \(n\gtrsim N^{2}\), the model might not learn well or overfit15. To overcome this difficulty, we use the \(L^{2}\) regularization:
Footnote 14: The cross-entropy is a convex function of the weights \(\mathbf{w}\), and thus, a local minimizer is a global one.
Footnote 15: This can be inferred from calculating the model’s accuracy, as we shall see soon.
\[\hat{\mathbf{w}}(\lambda)=\underset{\mathbf{w}\in\mathbb{R}^{N^{2}}}{\arg\min}\left(C (\mathbf{w})+\lambda||\mathbf{w}||_{2}^{2}\right)\,, \tag{6}\]
with a regularization parameter \(\lambda\geq 0\). The computations are carried out using the Scikit-learn library for machine learning in Python. The minimization procedure is performed with the liblinear routine16.
Footnote 16: This routine is based on the coordinate descent method, an optimization algorithm that successively minimizes along one coordinate direction at a time until it finds the minimum of a function.
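For concreteness, the scikit-learn setup described above can be reproduced with a few lines; the arrays below are random placeholders standing in for the flattened Monte Carlo configurations, and the mapping \(C=1/\lambda\) between Eq. (6) and scikit-learn's regularization parameter is the only translation needed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder data: in practice the rows of X are flattened N*N spin configurations
# from the low-T window (label 1, ordered) and the high-T window (label 0, disordered).
rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 2.0 * np.pi, size=(800, 32 * 32))
y_train = rng.integers(0, 2, size=800)
X_test = rng.uniform(0.0, 2.0 * np.pi, size=(200, 32 * 32))
y_test = rng.integers(0, 2, size=200)

lam = 1e-5  # the L2 regularization strength of Eq. (6); scikit-learn uses C = 1/lambda
clf = LogisticRegression(penalty="l2", C=1.0 / lam, solver="liblinear", max_iter=1000)
clf.fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))
print("test accuracy:", clf.score(X_test, y_test))

# Mean probability of the ordered phase, i.e., the predictivity discussed in the text.
p_ordered = clf.predict_proba(X_test)[:, list(clf.classes_).index(1)].mean()
print("mean P(ordered):", p_ordered)
```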
Figure 1: Top panel: The accuracy and predictivity of the logistic regression algorithm applied to the pure XY-spin model. We train the algorithm on \(n=8\times 10^{4}\) Monte Carlo configurations in the subcritical and supercritical regions, and then we use the learned weights to predict the transition temperature. Bottom panel: We plot the probabilities versus the dimensionless temperature \(T\) that the system is either in the ordered or disordered phase. The intersection between the two probabilities (the 50% chance that the system is in either phase) gives the transition temperature \(T_{c}\cong 0.9\), consistent with the literature value of \(T_{KT}=0.89\).
We use a lattice size \(32\times 32\) and take the temperature \(T\) to range from \(0.25\) to \(4.0\) with a step size of \(0.25\), for a total of \(16\) temperatures and \(160,000\) configurations. The low-\(T\) region is taken in the interval \([0.25,0.75]\), the high-\(T\) region is \([3.25,4.00]\), while the critical region is taken in the interval \((0.75,3.25)\). We divide the data outside the critical region into training and test data. The training and test data sets were formed as a randomly-shuffled combination of the low-T and high-T data sets. \(80\%\) of this shuffled combination (\(n=8\times 10^{4}\) configurations) was used for training, and \(20\%\) was used for testing.
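A minimal sketch of this labeling and splitting step is given below; the container `configs` (a dictionary mapping each temperature to its stack of configurations) and the helper name are our assumptions, while the temperature windows and the 80/20 split follow the text.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def build_dataset(configs, low_window=(0.25, 0.75), high_window=(3.25, 4.00)):
    """Label configurations from the low-T window as ordered (1) and from the
    high-T window as disordered (0); the critical window is excluded entirely."""
    X, y = [], []
    for T, states in configs.items():
        if low_window[0] <= T <= low_window[1]:
            label = 1
        elif high_window[0] <= T <= high_window[1]:
            label = 0
        else:
            continue  # critical region: never used for training or testing
        X.append(states.reshape(len(states), -1))
        y.append(np.full(len(states), label))
    X, y = np.concatenate(X), np.concatenate(y)
    return train_test_split(X, y, test_size=0.2, shuffle=True, random_state=0)

# Toy usage with random stand-in configurations on a 32 x 32 lattice.
rng = np.random.default_rng(0)
configs = {T: rng.uniform(0, 2 * np.pi, size=(100, 32, 32))
           for T in np.arange(0.25, 4.01, 0.25)}
X_train, X_test, y_train, y_test = build_dataset(configs)
```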
We train the logistic regression model on training data of pure XY-spin model and XY-spin model with \(\mathbb{Z}_{p}\)-preserving perturbations, \(p=2,4\), to learn the values of \(\mathbf{w}\) of each model. For a given data point (drawn from the test data) \(\mathbf{x}_{i}\), the classifier \(\sigma(\mathbf{x}_{i}\cdot\mathbf{w})\) returns the probability, given from Eq. (4), of being in the ordered or disordered phase. Then, we define the accuracy of the classifier as the percentage of the correctly identified data.
In FIG. 1, we show the accuracy of the classifier for both the test and training data for the pure XY-spin model. Extremely good accuracy is achieved over a wide range of \(\lambda\). Then, we fix \(\lambda=10^{-5}\) and calculate the predictivity, i.e., the average probability that a given configuration in the critical region is in the ordered or disordered phase, as a function of temperature. Next, we plot the probabilities that the system is either in the ordered or disordered phase. A \(50\%\) chance that the system is in either phase predicts the transition temperature. The predicted critical temperature is consistent with the literature value of \(T_{KT}=0.89\).
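The 50% criterion can be implemented as a simple interpolation of the predictivity curve; the helper below is a sketch of ours and assumes the curve starts in the ordered phase and decreases monotonically across the critical window.

```python
import numpy as np

def critical_temperature(temps, p_ordered):
    """T_c is the temperature at which the mean predicted probability of the ordered
    phase crosses 0.5, obtained by linear interpolation between the two bracketing
    temperatures (assumes the first temperature lies in the ordered phase)."""
    temps, p_ordered = np.asarray(temps), np.asarray(p_ordered)
    below = np.where(p_ordered < 0.5)[0][0]   # first point on the disordered side
    above = below - 1                          # last point on the ordered side
    t0, t1 = temps[above], temps[below]
    p0, p1 = p_ordered[above], p_ordered[below]
    return t0 + (0.5 - p0) * (t1 - t0) / (p1 - p0)

# Toy illustration with a smooth, monotonically decreasing predictivity curve.
temps = np.arange(0.25, 4.01, 0.25)
p_ord = 1.0 / (1.0 + np.exp(4.0 * (temps - 0.9)))
print(critical_temperature(temps, p_ord))   # close to 0.9
```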
We also use the same classifier to calculate the accuracy for the XY-spin model with \(\mathbb{Z}_{p=2,4}\)-preserving perturbations. The results are shown in FIG. 2. Here, unlike the pure XY-spin model, we find that a very poor accuracy is attained across a range of \(\lambda\) that spans many orders of magnitude. Indeed, we completely fail to use the algorithm to see any transition, a result that is depicted in FIG. 3. Similar poor results of the logistic regression model were reported in the literature when applied to the \(\mathbb{Z}_{2}\) Ising model; again, the model fails to predict the transition temperature accurately. It is interesting to see that this method fails whenever the model exhibits a discrete symmetry.
### Convolutional Neural Networks (CNN)
A neural network is a specific machine learning model that attempts to mimic how a brain works. The network is composed of a series of nodes arranged into layers connected by synapses, with each having a specific weight that determines how strongly it affects the output of the network. Each node in the network has an activation function, which remaps the node's inputs non-linearly. The network is trained using the stochastic gradient descent (SGD) method, which tries to minimize a loss function that describes the error in the network's output by adjusting the weights of the synapses. Convolutional neural networks (CNNs) are a special type of neural network commonly used to solve problems where spatial relationships within the data are important, such as classifying images into categories. They are translationally invariant networks that respect the locality of the input data. Due to the interactions between the nearest lattice sites in the XY-spin models, it makes intuitive sense that a CNN may be more effective at classifying lattices into ordered and disordered phases. A CNN can be seen as taking a 2-dimensional input and then applying a series of filters to this input to extract features. The presence (or lack) of these features can then be used to classify the input into a specific category.
Figure 2: The accuracy of the logistic regression algorithm applied to the XY-spin model with \(\mathbb{Z}_{2}\)- (top panel) and \(\mathbb{Z}_{4}\)- (bottom panel) preserving perturbations. We train the algorithm on \(n=8\times 10^{4}\) Monte Carlo configurations in the subcritical and supercritical regions. The maximum attained accuracy is below \(70\%\) at \(\lambda\cong 10^{5}\), much lower than the accuracy we found for the pure XY-spin model. Such low accuracy hinders the ability of the algorithm to find the transition point.
There are two basic layers in a CNN: a convolution layer that computes the convolution of the input with a stack of filters, and pooling layers that coarse-grain the input while maintaining locality. We use a CNN composed of a single convolutional layer with five \(2\times 2\) filters followed by a max pooling layer of kernel size \(2\times 2\) and a stride of \(2\times 2\), which yields near-perfect training and validation accuracy for all configurations of the XY-spin models.
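A possible realization of this architecture is sketched below in PyTorch; the framework choice, the two-channel \((\cos\theta,\sin\theta)\) input encoding, and the final dense layer are our assumptions, since the text only specifies the convolutional and pooling layers.

```python
import math
import torch
import torch.nn as nn

class PhaseCNN(nn.Module):
    """One convolutional layer with five 2x2 filters, 2x2 max pooling with stride 2,
    and a sigmoid read-out giving the probability of the ordered phase."""
    def __init__(self, N=40):
        super().__init__()
        self.conv = nn.Conv2d(in_channels=2, out_channels=5, kernel_size=2)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.fc = nn.Linear(5 * ((N - 1) // 2) ** 2, 1)

    def forward(self, theta):
        # Encode each angle as (cos, sin) so the 2*pi periodicity is respected.
        x = torch.stack([torch.cos(theta), torch.sin(theta)], dim=1)
        x = torch.relu(self.conv(x))
        x = self.pool(x).flatten(start_dim=1)
        return torch.sigmoid(self.fc(x)).squeeze(-1)

model = PhaseCNN(N=40)
theta = torch.rand(8, 40, 40) * 2 * math.pi   # a batch of 8 random configurations
print(model(theta).shape)                     # torch.Size([8])
```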
To judge the accuracy of the CNN in predicting the transition temperature, we first review the results of the magnetic susceptibility for the XY-spin model with \(y=1\) and \(p=2,4\), which can be used to predict the transition temperatures using the finite-size scaling technique. The magnetization and magnetic susceptibility of a system are given by
\[|M|\equiv\left\langle|\sum_{J=1}^{N^{2}}e^{i\theta_{J}}|\right\rangle\,,\quad \chi_{M}\equiv\frac{d|M|}{dT}\,. \tag{7}\]
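Eq. (7) translates directly into code; the sketch below (function names are ours) estimates the derivative with a finite-difference gradient over the temperature grid, whose extremum in magnitude locates the transition.

```python
import numpy as np

def magnetization(theta):
    """|sum_J exp(i theta_J)| for a single configuration of shape (N, N), as in Eq. (7)."""
    return np.abs(np.exp(1j * theta).sum())

def mean_magnetization(configs):
    """Ensemble average of |M| over a stack of configurations of shape (n_states, N, N)."""
    return float(np.mean([magnetization(c) for c in configs]))

def susceptibility(temps, M_of_T):
    """chi_M = d|M|/dT of Eq. (7), estimated by a finite-difference gradient;
    the extremum in magnitude marks the transition temperature."""
    return np.gradient(np.asarray(M_of_T, dtype=float), np.asarray(temps, dtype=float))
```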
Simulations of the XY-spin models with \(\mathbb{Z}_{2}\) and \(\mathbb{Z}_{4}\) were performed in [20] by one of the authors by varying the lattice sizes between \(N=8\) and \(N=56\). The simulations indicated critical temperatures \(T_{c}\cong 1.53,1.0\) for \(y=1\) and \(p=2,4\), respectively. As a reminder to the reader, \(\mathbb{Z}_{2}\) and \(\mathbb{Z}_{4}\) correspond to pure YM theory and YM theory with adjoint fermions, respectively. The state-of-the-art simulations for the XY-spin model with \(\mathbb{Z}_{4}\)-preserving perturbations at \(y=1\) were performed in [26] and yielded \(T_{c}\cong 1.008\pm 0.002\). In FIG. 4, we plot the magnetization and the magnetic susceptibility for the \(p=4,y=1\) case for lattice sizes \(N=8,16,32\). Although the accuracy of our simulations is relatively low compared to those existing in the literature, the peak of the magnetic susceptibility is at \(T\cong 1\), still consistent with more accurate simulations. Mainly, we show these data as a benchmark for the predictions of the CNN, which is trained on the same Monte Carlo data used to produce FIG. 4.
Next, we discuss the results we obtained from training the CNN. We use a lattice of size \(40\times 40\) and, as before, we take the temperature \(T\) to range from \(0.25\) to \(4.0\) with a step size of \(0.25\), for a total of \(16\) temperatures and \(160,000\) configurations. The low-\(T\) region is taken in the interval \([0.25,0.75]\), the high-\(T\) region is \([3.25,4.00]\), while the critical region is taken in the interval \((0.75,3.25)\). On the left panels of FIGs. 5 and 6, we plot the accuracy of the CNN, for both the test and training data, for the XY-spin model with \(y=1\) and \(\mathbb{Z}_{2}\) and \(\mathbb{Z}_{4}\), respectively. We plot the accuracy as a function of the number of epochs, where an epoch is a full iteration over the minibatches (collection of data points) used in the stochastic gradient descent minimization technique. Perfect accuracy is attained after training the CNN on the training data, and almost a \(100\%\) accuracy is achieved on the test data. On the right panels of FIGs. 5 and 6, we plot the probabilities that the system is either in the ordered or disordered phase. A \(50\%\) chance that the system is in either phase predicts the transition temperature. The predicted critical temperatures are \(T_{c}\cong 1.4\) and \(T_{c}\cong 1\) for \(\mathbb{Z}_{2}\) and \(\mathbb{Z}_{4}\), respectively, which agree with the critical temperatures found from the traditional methods.
To further quantify the predictions of the CNN versus the traditional method, we use the CNN to calculate the ratio of critical exponents \(\gamma/\nu\) of the XY-spin model with \(\mathbb{Z}_{4}\)-preserving perturbations. Let \(f(T)\) be the predictive function of a CNN at temperature \(T\), which measures the probability that the system is in the ordered (low-temperature) phase. We define the susceptibility \(\chi_{f}\) of the predictive function via the derivative:
\[\chi_{f}\equiv\frac{df(T)}{dT}\,. \tag{8}\]
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \(N\) & 8 & 16 & 32 \\ \hline \hline \(\mathrm{Max}\log\chi_{M}\) & 0.15 & 0.72 & 1.31 \\ \hline \hline \(\mathrm{Max}\log\chi_{f}\) & 0.89 & 1.51 & 2.21 \\ \hline \end{tabular}
\end{table}
Table 1: The logarithm of the maxima of the magnetic and predictive function susceptibilities versus the logarithm of the lattice sizes \(N=8,16,32\).
Figure 3: The prediction of the logistic regression algorithm applied to the XY-spin model with \(\mathbb{Z}_{2}\)- (top panel) and \(\mathbb{Z}_{4}\)- (bottom panel) preserving perturbations. We show the probabilities that the system is either in the ordered or disordered phase as a function of the dimensionless temperature \(T\). We take \(\lambda\cong 10^{5}\), which yields the maximum accuracy of \(\cong 60\%\). For example, we see from the bottom panel that the prediction of the logistic regression method is extremely poor.
The magnetic and predictive function susceptibilities behave with the system size \(N\) as \(\chi\sim N^{\gamma/\nu}\); see, e.g., [27]. Table 1 displays the logarithm of the maxima of the magnetic and predictive function susceptibilities versus the logarithm of the lattice sizes \(N=8,16,32\). Using a least-square fit with a straight line, we obtain the following values of the ratio \(\gamma/\nu\):
\[\frac{\gamma}{\nu}=\left\{\begin{array}{ll}1.93&\quad\text{using}\,\chi_{M} \\ 2.2&\quad\text{using}\,\chi_{f}\,.\end{array}\right. \tag{9}\]
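The least-squares fit behind Eq. (9) can be reproduced from the numbers in Table 1; the short sketch below fits \(\log\chi_{\max}\) against \(\log N\) (base-10 logarithms are assumed here, consistent with the quoted slopes).

```python
import numpy as np

# Finite-size scaling chi_max ~ N^(gamma/nu): fit log10(chi_max) vs log10(N)
# using the maxima listed in Table 1.
logN = np.log10([8.0, 16.0, 32.0])
log_chi_M = [0.15, 0.72, 1.31]   # magnetic susceptibility maxima
log_chi_f = [0.89, 1.51, 2.21]   # predictive-function susceptibility maxima

slope_M = np.polyfit(logN, log_chi_M, 1)[0]
slope_f = np.polyfit(logN, log_chi_f, 1)[0]
print(f"gamma/nu from chi_M: {slope_M:.2f}")   # ~1.93
print(f"gamma/nu from chi_f: {slope_f:.2f}")   # ~2.2
```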
We find that the CNN prediction of \(\gamma/\nu\) is only \(14\%\) off the value obtained from the magnetization. As a reference point, we also mention that the state-of-the-art value of \(\gamma/\nu=1.76\pm 0.009\) was obtained in [26]. It is remarkable that CNNs give values of the critical exponents that are consistent with the exponents obtained from the physical order parameters. The fact that CNNs are never trained on data inside the critical region, yet can provide a good estimate of the thermodynamic properties of the systems, is quite surprising. It will be interesting to check in future simulations whether one can achieve high accuracy in computing the critical exponents using ML techniques.
Figure 4: The magnetization (left panel) and magnetic susceptibility (right panel) versus the dimensionless temperature \(T\) of the XY-spin model with \(y=1\) and \(\mathbb{Z}_{4}\)-preserving perturbations for lattice sizes \(N=8,16,32\).
Figure 5: The accuracy (right panel) and prediction (left panel) of the CNN applied to the XY-spin model with \(\mathbb{Z}_{2}\)-preserving perturbations. We take \(y=1\) and perform our simulations on a lattice size \(N=40\).
Let us try to harness the CNN further, checking whether it can provide a robust distinction of the phases of YM theory with fundamentals. It is well known that this theory does not possess order parameters, thanks to the fundamental fermions. Simply, no topological Wilson lines can be defined in this theory as they can end on the fundamental charges, hence the absence of a well-defined order parameter. As mentioned above, the XY-spin model with \(p=1\) maps to YM theory with fundamentals. Thus, we repeat the above analysis, setting \(y=1\) and \(p=1\) in the model. As before, the low-\(T\) region is taken in the interval \([0.25,0.75]\), the high-\(T\) region is \([3.25,4.00]\), while the critical region is taken in the interval \((0.75,3.25)\). We divide the data outside the critical region into training and test data and always avoid training the CNN inside the critical region. The results are depicted in FIG. 8. As can be seen, the CNN attains high accuracy on the training and test data and predicts a transition temperature \(T_{c}\cong 1.7\). Yet, one needs to examine the robustness of such a transition temperature. If the system exhibits a true phase transition, the critical temperature should not strongly depend on the training data.
To examine such dependence, we repeat our analysis for \(p=1,2,4\) (setting \(y=1\)) while changing the boundaries of the training data. In TABLE 2, we show how \(T_{c}\) changes by changing the training windows on lattices of size \(N=8,16,32,64\). We consider two low-T windows \([0.1,0.7]\), \([0.1,0.5]\) and two high-T windows \([3.3,4.0]\), \([3.5,\,4.0]\). Then, we form 4 distinct intervals built out of the low-T and high-T windows. For each lattice size, we observe a significantly greater variation in the transition temperature for the case with \(p=1\) compared to the variations seen in the transition temperatures of the \(p=2\) and \(p=4\) cases as we manipulate the training windows. For example, for \(N=64\), the critical temperature varies between \(1.46\) and \(1.53\) in the XY-spin model with \(p=2\) (dYM), while it varies between \(1.01\) and \(1.03\) in the XY-spin model with \(p=4\) (QCD(adj)). This variation is strikingly low, less than \(0.02\) in the latter case, and still very weak in the \(p=2\) case, at most \(0.07\) in dimensionless temperature units (recall that the Monte Carlo data are generated with step size \(\Delta T=0.25\)). On the other hand, the variation in the critical temperature in the case \(p=1\) (fundamental fermions) is \(0.61\) in dimensionless units, much bigger than the \(\Delta T=0.25\) step size. We conclude that the transition temperature in YM theory with fundamentals is not physically significant.
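The robustness test summarized in TABLE 2 amounts to repeating the full train-and-predict pipeline for each choice of training window and comparing the spread of the resulting transition temperatures; a schematic sketch is given below, where `train_and_predict_tc` is a stand-in (ours) for the pipeline of generating labels, training the CNN, and locating the 50% crossing.

```python
import numpy as np

# The four training windows of TABLE 2: (low-T window, high-T window).
windows = [((0.1, 0.7), (3.3, 4.0)), ((0.1, 0.5), (3.3, 4.0)),
           ((0.1, 0.7), (3.5, 4.0)), ((0.1, 0.5), (3.5, 4.0))]

def window_scan(train_and_predict_tc):
    """Return the predicted T_c for each window and their standard deviation.
    A transition is taken seriously only if the spread is small compared with the
    temperature step of the Monte Carlo scan (Delta T = 0.25)."""
    tcs = np.array([train_and_predict_tc(low, high) for low, high in windows])
    return tcs, tcs.std()

# Trivial usage with a dummy pipeline that always returns the same temperature.
tcs, spread = window_scan(lambda low, high: 1.0)
print(tcs, spread)
```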
## IV Outlook
Figure 6: The accuracy (right panel) and prediction (left panel) of the CNN applied to the XY-spin model with \(\mathbb{Z}_{4}\)-preserving perturbations. We take \(y=1\) and perform our simulations on a lattice size \(N=40\).
Figure 7: The predictive function \(f\) (left panel) and its derivative (right panel) versus the dimensionless temperature \(T\) of the XY-spin model with \(\mathbb{Z}_{4}\)-preserving perturbations for lattice sizes \(N=8,16,32\). We take \(y=1\).
In this paper, we have applied supervised ML techniques to phase transitions in YM theories defined on a small circle and endowed with center-stabilizing potential, without and with matter in the adjoint and fundamental representations. We have not tried to simulate the original theories, but instead used a duality that maps the original theories to XY-spin models with \(\mathbb{Z}_{p}\)-preserving perturbations. Distinct values of \(p\) map to different YM theories: \(p=2\) corresponds to pure YM, \(p=4\) corresponds to YM theory with adjoint matter, and \(p=1\) corresponds to YM with fundamentals. This map is an exact duality between the original YM theories and the XY-spin models, found using reliable semi-classical and effective field theory techniques, and not a mere modeling of the original theories. Simulating the XY-spin models is much simpler than simulating the YM theories, especially if one wants to examine the suitability of the ML techniques in studying phase transitions.
While the logistic regression method proved to be unreliable in detecting the phase transition in theories with \(p=2,4\), CNNs were found to be robust in obtaining the critical temperatures and critical exponents. We have also tried to check whether CNNs can provide new insight into systems without global symmetries, the \(p=1\) case. We concluded that the critical behavior observed using CNNs is questionable since the critical temperature depends strongly on the training data. Any critical behavior should be robust against changing the boundaries of the windows of the training data.
Recently, there has been an avalanche in the use of ML techniques, and there are various avenues to extend our studies. For example, one can apply the persistent homology method, used in [28] to study variants of the XY-spin model, to study XY-spin models with \(\mathbb{Z}_{p}\) perturbations. Another avenue is the transfer learning technique, the breakthrough that gave rise to the celebrated ChatGPT [29]. Here, the features learned in one given model can be used to predict the structure of symmetry-breaking phase transitions in other models, irrespective of the universality class. This method was applied to \(q\)-state Potts models in [27], and then the learned features were used to calculate the critical exponents in scalar field theory. Applying this method to the XY-spin models with \(\mathbb{Z}_{p}\)-preserving perturbations can reveal the scope of applicability of this method to classify a wide range of phases universally.
###### Acknowledgements.
We would like to thank Erich Poppitz for comments on the manuscript. This work was supported in part by STFC through grant ST/T000708/1 and in part by NSF grant PHY-2013827. The simulations were performed on the BLT cluster at Lewis & Clark College, Portland, USA.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|} \hline & \([0.1,0.7]\cup[3.3,4.0]\) & \([0.1,0.5]\cup[3.3,4.0]\) & \([0.1,0.7]\cup[3.5,4.0]\) & \([0.1,0.5]\cup[3.5,4.0]\) & SD \\ \hline \hline \(p=1,N=8\) & 1.65 & 1.47 & 1.86 & 1.51 & 0.18 \\ \hline \(p=2,N=8\) & 1.341 & 1.336 & 1.343 & 1.336 & 0.004 \\ \hline \(p=4,N=8\) & 0.849 & 0.847 & 0.851 & 0.849 & 0.002 \\ \hline \hline \(p=1,N=16\) & 1.76 & 1.70 & 1.83 & 1.79 & 0.05 \\ \hline \(p=2,N=16\) & 1.456 & 1.451 & 1.454 & 1.453 & 0.002 \\ \hline \(p=4,N=16\) & 0.971 & 0.961 & 0.973 & 0.963 & 0.006 \\ \hline \hline \(p=1,N=32\) & 1.87 & 1.75 & 1.90 & 1.78 & 0.07 \\ \hline \(p=2,N=32\) & 1.543 & 1.54 & 1.545 & 1.541 & 0.002 \\ \hline \(p=4,N=32\) & 0.959 & 0.957 & 0.959 & 0.957 & 0.001 \\ \hline \hline \(p=1,N=64\) & 1.48 & 1.24 & 1.87 & 1.75 & 0.28 \\ \hline \(p=2,N=64\) & 1.50 & 1.53 & 1.51 & 1.46 & 0.028 \\ \hline \(p=4,N=64\) & 1.022 & 1.02 & 1.032 & 1.01 & 0.008 \\ \hline \end{tabular}
\end{table}
Table 2: The transition temperatures and their standard deviation (SD) for the XY-spin models for \(p=1,2,4\) on \(N=8,16,32,64\) lattices, setting \(y=1\), as we vary the training windows.
Figure 8: The accuracy (right panel) and prediction (left panel) of the CNN applied to the XY-spin model with \(y=1\) and \(p=1\), which maps to YM theory with the fundamental matter. We perform our simulations on a lattice size \(N=40\). |
2309.10759 | A Blueprint for Precise and Fault-Tolerant Analog Neural Networks | Analog computing has reemerged as a promising avenue for accelerating deep
neural networks (DNNs) due to its potential to overcome the energy efficiency
and scalability challenges posed by traditional digital architectures. However,
achieving high precision and DNN accuracy using such technologies is
challenging, as high-precision data converters are costly and impractical. In
this paper, we address this challenge by using the residue number system (RNS).
RNS allows composing high-precision operations from multiple low-precision
operations, thereby eliminating the information loss caused by the limited
precision of the data converters. Our study demonstrates that analog
accelerators utilizing the RNS-based approach can achieve ${\geq}99\%$ of FP32
accuracy for state-of-the-art DNN inference using data converters with only
$6$-bit precision whereas a conventional analog core requires more than $8$-bit
precision to achieve the same accuracy in the same DNNs. The reduced precision
requirements imply that using RNS can reduce the energy consumption of analog
accelerators by several orders of magnitude while maintaining the same
throughput and precision. Our study extends this approach to DNN training,
where we can efficiently train DNNs using $7$-bit integer arithmetic while
achieving accuracy comparable to FP32 precision. Lastly, we present a
fault-tolerant dataflow using redundant RNS error-correcting codes to protect
the computation against noise and errors inherent within an analog accelerator. | Cansu Demirkiran, Lakshmi Nair, Darius Bunandar, Ajay Joshi | 2023-09-19T17:00:34Z | http://arxiv.org/abs/2309.10759v1 | # A blueprint for precise and fault-tolerant analog neural networks
###### Abstract
Analog computing has reemerged as a promising avenue for accelerating deep neural networks (DNNs) due to its potential to overcome the energy efficiency and scalability challenges posed by traditional digital architectures. However, achieving high precision and DNN accuracy using such technologies is challenging, as high-precision data converters are costly and impractical. In this paper, we address this challenge by using the residue number system (RNS). RNS allows composing high-precision operations from multiple low-precision operations, thereby eliminating the information loss caused by the limited precision of the data converters. Our study demonstrates that analog accelerators utilizing the RNS-based approach can achieve \(\geq\)99% of FP32 accuracy for state-of-the-art DNN inference using data converters with only 6-bit precision whereas a conventional analog core requires more than 8-bit precision to achieve the same accuracy in the same DNNs. The reduced precision requirements imply that using RNS can reduce the energy consumption of analog accelerators by several orders of magnitude while maintaining the same throughput and precision. Our study extends this approach to DNN training, where we can efficiently train DNNs using 7-bit integer arithmetic while achieving accuracy comparable to FP32 precision. Lastly, we present a fault-tolerant dataflow using redundant RNS error-correcting codes to protect the computation against noise and errors inherent within an analog accelerator.
## Introduction
Deep Neural Networks (DNNs) are widely employed across various applications today. Unfortunately, their compute, memory, and communication demands are continuously on the rise. The slow-down in CMOS technology scaling, along with these increasing demands has led analog DNN accelerators to gain significant research interest. Recent research has been focused on using various analog technologies such as photonic cores [1; 2; 3; 4; 5; 6; 7], resistive arrays [8; 9; 10; 11; 12], switched capacitor arrays [13; 14], Phase Change Materials (PCM) [15], SpinTransfer Torque (STT)-RAM [16; 17], etc., to enable highly parallel, fast, and efficient matrix-vector multiplications (MVMs) in the analog domain. These MVMs are fundamental components used to build larger general matrix-matrix multiplication (GEMM) operations, which make up more than 90% of the operations in DNN inference and training [18].
The success of this approach, however, is constrained by the limited precision of the digital-to-analog and analog-to-digital data converters (i.e., DACs and ADCs). In an analog accelerator, the data is converted between analog and digital domains using data converters before and after every analog operation. Typically, a complete GEMM operation cannot be performed at once in the analog domain due to the fixed size of the analog core. Instead, the GEMM operation is tiled into smaller MVM operations. As a result, each MVM operation produces a partial output that must be accumulated with other partial outputs to obtain the final GEMM result. Concretely, an MVM operation consists of parallel dot products between \(b_{w}\)-bit signed weight vectors and \(b_{in}\)-bit signed input vectors--each with \(h\) elements--resulting in a partial output containing \(b_{\text{out}}\) bits of information, where \(b_{\text{out}}=b_{\text{in}}+b_{w}+\log_{2}(h)-1\). An ADC with a precision greater than \(b_{\text{out}}\) (i.e., \(b_{\text{ADC}}\geq b_{\text{out}}\)) is required to ensure no loss of information when capturing these partial outputs. Unfortunately, the energy consumption of ADCs increases exponentially with bit precision (often referred to as effective number of bits (ENOB)). This increase is roughly \(4\times\) for each additional output bit [19].
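The precision requirement quoted above is easy to evaluate; the small sketch below (helper names are ours) computes the lossless output bit width for given input, weight, and vector sizes, together with the rough \(4\times\)-per-bit ADC energy scaling mentioned in the text.

```python
import math

def output_bits(b_in, b_w, h):
    """Bits of information in one partial output: b_in-bit signed inputs, b_w-bit
    signed weights, and a dot product over h elements (b_out = b_in + b_w + log2(h) - 1).
    We use ceil(log2(h)) so that h need not be a power of two."""
    return b_in + b_w + math.ceil(math.log2(h)) - 1

def relative_adc_energy(b_adc):
    """Rough relative ADC energy, using the ~4x-per-additional-bit scaling quoted above."""
    return 4.0 ** b_adc

# Example: 8-bit inputs and weights with h = 128 already call for a 22-bit ADC
# if no information is to be lost in the partial outputs.
b = output_bits(8, 8, 128)
print(b, relative_adc_energy(b) / relative_adc_energy(8))
```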
As a result, energy-efficient analog accelerator designs typically employ ADCs with lower precision than \(b_{\text{out}}\) and only capture the \(b_{\text{ADC}}\) most significant bits (MSBs) from the \(b_{\text{out}}\) bits of each partial output [20]. Reading only MSBs causes information loss in each partial output leading to accuracy degradation in DNNs, as pointed out by Rekhi et al. [20]. This degradation is most pronounced in large DNNs and large datasets. Fig. 1 shows the impact of this approach on DNN accuracy in two tasks: (1) a two-layer convolutional neural network (CNN) for classifying the MNIST dataset [21]: a simple task with only 10 classes, and (2) the ResNet50 CNN [22] for classifying the ImageNet dataset [23]: a more challenging task with 1000 classes. As the vector size \(h\) increases, higher precision is needed at the output to maintain the accuracy in both DNNs. Moreover, ResNet50 experiences accuracy degradation at smaller values of \(h\) compared to the two-layer CNN. While using a higher precision ADC can help recover from this accuracy degradation, it significantly reduces the energy efficiency of the analog hardware. Essentially, to efficiently execute large DNNs using analog accelerators, it is crucial to find a better way to achieve high accuracy than simply increasing the bit precision of
the data converters.
In this work, we present a universal residue number system (RNS)-based framework to overcome the above-mentioned challenge in analog DNN inference as well as DNN training. RNS represents high-precision values using multiple low-precision integer residues for a selected set of moduli. As such, RNS enables high-precision arithmetic without any information loss on the partial products, even when using low-precision DACs and ADCs. Utilization of RNS leads to a significant reduction in the data converter energy consumption, which is the primary contributor to energy usage in analog accelerators. This reduction can reach up to six orders of magnitude compared to a conventional fixed-point analog core with the same output bit precision.
Our study shows that the RNS-based approach enables \(\geq 99\%\) FP-32 inference accuracy by using only 6-bit data converters for state-of-the-art MLPerf (Inference: Datacenters) benchmarks [24] and Large Language Models (LLMs). We also demonstrate the applicability of this approach in training and fine-tuning state-of-the-art DNNs using low-precision analog hardware. The RNS approach, however, is susceptible to noise as small errors in the residues scale up during output reconstruction, leading to larger errors in the standard representation. To address this issue, we incorporate the Redundant RNS (RRNS) error-correcting code [25; 26; 27] to introduce fault-tolerance capabilities into the dataflow.
As RNS is closed under multiplication and addition, no significant changes are required in the design of the analog core or in how GEMM operations are performed. Unlike a conventional analog core design, performing RNS operations necessitates an analog modulo operation. This operation can be implemented by using ring oscillators [28] in an analog electrical core or by using optical phase shifters in an analog optical core. Our proposed framework, however, remains agnostic to the underlying technology. Importantly, arbitrary fixed-point precision can be achieved by combining the positional number system (PNS) and RNS in analog hardware. Overall, our presented RNS-based methodology offers a solution combining high accuracy, high energy efficiency, and fault tolerance in analog DNN inference and training.
## Results
### DNN Inference and Training Using RNS
The RNS represents an integer as a set of smaller (integer) residues. These residues are calculated by performing a modulo operation on the said integer using a selected set of \(n\)_co-prime_ moduli. Let \(A\) be an integer. \(A\) can be represented in the RNS with \(n\) residues as \(\{a_{1},\ldots,a_{n}\}\) for a set of co-prime moduli \(\mathcal{M}=\{m_{1},\ldots,m_{n}\}\) where \(a_{i}=|A|_{m_{i}}\equiv A\mod m_{i}\) for \(i\in\{1\ldots n\}\). \(A\) can be uniquely reconstructed using the Chinese Remainder Theorem (CRT):
\[A=\bigg{|}\sum_{i=1}^{n}a_{i}M_{i}T_{i}\bigg{|}_{M}, \tag{1}\]
if \(A\) is within the range \([0,M)\) where \(M=\prod_{i}m_{i}\). Here, \(M_{i}=M/m_{i}\) and \(T_{i}\) is the multiplicative inverse of \(M_{i}\), i.e., \(|M_{i}T_{i}|_{m_{i}}\equiv 1\). Hereinafter, we refer to the integer \(A\) as the _standard representation_, while we refer to the set of integers \(\{a_{1},\ldots,a_{n}\}\) simply as the residues.
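As a concrete illustration (not part of any accelerator implementation; the helper names are ours), the short Python sketch below performs the forward conversion to residues and the CRT reconstruction of Eq. (1) for four co-prime 6-bit moduli.

```python
from math import prod

def to_residues(A, moduli):
    # Forward conversion: a_i = A mod m_i for each modulus.
    return [A % m for m in moduli]

def crt_reconstruct(residues, moduli):
    # Eq. (1): A = | sum_i a_i * M_i * T_i |_M, with M_i = M / m_i and
    # T_i the multiplicative inverse of M_i modulo m_i.
    M = prod(moduli)
    total = 0
    for a_i, m_i in zip(residues, moduli):
        M_i = M // m_i
        T_i = pow(M_i, -1, m_i)      # modular inverse (Python 3.8+)
        total += a_i * M_i * T_i
    return total % M

moduli = [63, 62, 61, 59]            # example co-prime 6-bit moduli
A = 1_234_567                        # any value in [0, M), M = 63 * 62 * 61 * 59
assert crt_reconstruct(to_residues(A, moduli), moduli) == A
```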
A DNN consists of a sequence of \(L\) layers. During inference, where the DNN is previously trained and its parameters are fixed, only a forward pass is performed. Generically, the input \(X\) to \((\ell+1)\)-th layer of a DNN during the forward pass is the output generated by the previous \(\ell\)-th layer:
\[X^{(\ell+1)}=f^{(\ell)}\big{(}W^{(\ell)}X^{(\ell)}\big{)}, \tag{2}\]
where \(W^{(\ell)}X^{(\ell)}=O^{(\ell)}\) is a GEMM operation and \(f(\cdot)\) is an element-wise nonlinear function.
DNN training requires both forward and backward passes as well as weight updates. The forward pass in the training is performed the same way as in Eq. (2). After the forward pass, a loss value \(\mathcal{L}\) is calculated using the output of the last layer and the ground truth. The gradients of the DNN activations and parameters with respect to \(\mathcal{L}\) for each layer are calculated by performing a backward pass after each forward pass:
\[\frac{\partial\mathcal{L}}{\partial X^{(\ell)}}={W^{(\ell)}}^{T}\frac{\partial \mathcal{L}}{\partial O^{(\ell)}}, \tag{3}\]
\[\frac{\partial\mathcal{L}}{\partial W^{(\ell)}}=\frac{\partial\mathcal{L}}{ \partial O^{(\ell)}}{X^{(\ell)}}^{T}. \tag{4}\]
Figure 1: **Inference accuracy versus vector size for varying data bit-width in a conventional analog core.** **a** Inference accuracy for a two-layer CNN classifying handwritten digits from the MNIST dataset. **b** Inference accuracy for ResNet50 classifying images from the ImageNet dataset evaluated in an analog core with varying precision \(b\) and vector sizes \(h\). For both **a** and **b**, \(b\)-bit precision means \(b=b_{\text{DAC}}=b_{\text{ADC}}<b_{\text{out}}\) where \(b\) varies between 2 and 8.
Using these gradients \(\Delta W^{(\ell)}=\frac{\partial\mathcal{L}}{\partial W^{(\ell)}}\), the DNN parameters are updated in each iteration \(i\):
\[W^{(\ell)}_{i+1}=W^{(\ell)}_{i}-\eta\Delta W^{(\ell)}_{i} \tag{5}\]
with a step size \(\eta\) for a simple stochastic gradient descent (SGD) optimization algorithm.
Essentially, for each layer, one GEMM operation is performed in the forward pass and two GEMM operations are performed in the backward pass. Because RNS is closed under addition and multiplication operations, GEMM operations can be performed in the RNS space. Using the RNS, Eq. (2) can be rewritten as:
\[X^{(\ell+1)}=f^{(\ell)}\Bigg{(}\text{CRT}\bigg{(}\Big{|}\big{|}W^{(\ell)}\big{|} _{\mathcal{M}}\big{|}X^{(\ell)}\big{|}_{\mathcal{M}}\Big{|}_{\mathcal{M}} \bigg{)}\Bigg{)}. \tag{6}\]
The same approach applies for Eqs. (3) and (4) in the backward pass.
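To make Eq. (6) concrete, the following NumPy sketch emulates one layer's matrix-vector product in the RNS: the operands are reduced to residues, the product is evaluated independently modulo each \(m_{i}\), and the result is recovered with the CRT. The unsigned 6-bit operands and the moduli set are our illustrative choices, not the accelerator code.

```python
import numpy as np
from math import prod

moduli = [63, 62, 61, 59]                      # co-prime 6-bit moduli, M ~ 2^23.7

def crt_vector(residue_vectors, moduli):
    # Reconstruct each output element from its per-modulus residues (Eq. (1)).
    M = prod(moduli)
    out = np.zeros(residue_vectors[0].shape, dtype=object)
    for r, m in zip(residue_vectors, moduli):
        M_i = M // m
        out = (out + r.astype(object) * (M_i * pow(M_i, -1, m))) % M
    return out

rng = np.random.default_rng(0)
h = 128
W = rng.integers(0, 64, size=(h, h))           # unsigned 6-bit weights (illustrative)
x = rng.integers(0, 64, size=h)                # unsigned 6-bit inputs

# One low-precision MVM per modulus; the residues captured after the modulo
# stay within [0, m_i), so low-precision converters suffice.
residue_outputs = [((W % m) @ (x % m)) % m for m in moduli]

recovered = crt_vector(residue_outputs, moduli)
exact = W.astype(object) @ x.astype(object)    # reference full-precision result
assert np.all(recovered == exact)              # lossless, since every output < M
```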
The moduli set \(\mathcal{M}\) must be chosen to ensure that the outputs of the RNS operations are smaller than \(M\), which means that
\[\log_{2}M\geq b_{\text{out}}=b_{\text{in}}+b_{w}+\log_{2}(h)-1, \tag{7}\]
should be guaranteed for a dot product between \(b_{\text{in}}\)-bit input and \(b_{w}\)-bit weight vectors with \(h\)-elements. This constraint prevents overflow in the computation.
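A small helper can verify this constraint for a candidate moduli set; the sketch below simply evaluates Eq. (7) for example sets and bit-widths used in this work.

```python
from math import prod, log2, ceil

def b_out(b_in, b_w, h):
    # Eq. (7): worst-case signed dot-product width for h-element vectors.
    return b_in + b_w + ceil(log2(h)) - 1

def moduli_cover(moduli, b_in, b_w, h):
    # Valid if log2(M) >= b_out, so no RNS dot-product output can overflow.
    return log2(prod(moduli)) >= b_out(b_in, b_w, h)

print(moduli_cover([63, 62, 61, 59], 6, 6, 128))   # True: log2(M) ~ 23.7 >= 18
print(moduli_cover([15, 14, 13, 11], 4, 4, 128))   # True: log2(M) ~ 14.9 >= 14
```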
### Precision and Energy Efficiency in the RNS-based Analog Core
The selection of moduli set \(\mathcal{M}\), which is constrained by Eq. (7), impacts the achievable precision at the output as well as the energy efficiency of the RNS-based analog core. Table 1 compares RNS-based analog GEMM cores with example moduli sets and regular fixed-point analog GEMM cores. Here, we show two cases for the regular fixed-point representation: (1) the low-precision (LP) case where \(b_{\text{out}}>b_{\text{ADC}}=b_{\text{DAC}}\), and (2) the high-precision (HP) case where \(b_{\text{out}}=b_{\text{ADC}}>b_{\text{DAC}}\). It should be noted that all three analog cores represent data as fixed-point numbers. We use the term 'regular fixed-point core' to refer to a typical analog core that performs computations in the standard representation (without RNS). 'RNS-based core' refers to an analog core that performs computations on the fixed-point residues.
While the LP approach introduces \(b_{\text{out}}-b_{\text{ADC}}\) bits of information loss in every dot product, the HP approach uses high-precision ADCs to prevent this loss. For the RNS-based core, we picked \(b_{\text{in}}=b_{w}=b_{\text{ADC}}=b_{\text{DAC}}=\lceil\log_{2}m_{i}\rceil \equiv b\) for ease of comparison against the fixed-point cores. Table 1 shows example moduli sets that are chosen to guarantee Eq. (7) for \(h=128\) while keeping the moduli under the chosen bit-width \(b\). In this case, for \(n\) moduli with bit-width of \(b\), \(M\) covers \(\approx n\cdot b\) bits of range at the output. \(h\) is chosen to be 128 as an example considering the common layer sizes in the evaluated MLPerf (Inference: Datacenter) benchmarks. The chosen \(h\) provides high throughput with high utilization of the GEMM core.
Fig. 2a compares the error (with respect to FP32 results) observed when performing dot products with the RNS-based core and the LP fixed-point core with the same bit precision. Both cores use the configurations described in Table 1 for the example vector size \(h=128\). The larger absolute error observed in the LP fixed-point case illustrates the effect of the information loss mentioned above. HP fixed-point case is not shown as it is equivalent to the RNS case.
Fig. 2b shows the energy consumption of DACs and ADCs per dot product for the three aforementioned analog hardware configurations. To achieve the same MVM throughput as the (LP/HP) fixed-point cores, the RNS-based core with \(n\) moduli must use \(n\) distinct MVM units and \(n\) sets of DACs and ADCs. This makes the energy consumption of the RNS-based core \(n\times\) larger compared to the LP fixed-point approach. However, the LP fixed-point approach with low-precision ADCs experiences information loss in the partial outputs and hence has lower accuracy.
The RNS-based approach and the HP fixed-point approach provide the same bit precision (i.e., the same DNN accuracy). Yet, using the RNS-based approach is orders of magnitude more energy-efficient than the HP fixed-point approach. This is mainly because of the high cost of high-precision ADCs required to capture the full output in the HP fixed-point approach. ADCs dominate the energy consumption with approximately three orders of magnitude higher energy usage than DACs with the same bit precision. In addition, energy consumption in ADCs increases exponentially with increasing bit precision[19]. This favors using multiple DACs and ADCs with lower precision in the RNS-based approach over using a single high-precision ADC. Briefly, the RNS-based approach provides a sweet spot between LP and HP fixed-point approaches without compromising on both high accuracy and high energy efficiency.
### Accuracy in the RNS-based Analog Core
Fig. 3a compares the inference accuracy of MLPerf (Inference: Datacenters) benchmarks[24] and OPT[29] (a transformer-based LLM) when run on an RNS-based analog core and a fixed-point (LP) analog core. The HP fixed-point analog core results are not shown as they are equivalent to the RNS-based results. The evaluated DNNs, their corresponding tasks, and the datasets are listed in Table 2. Fig. 3a shows that the RNS-based approach significantly ameliorates the accuracy drop caused by the low-precision ADCs used in the LP fixed-point approach for all the networks. By using the RNS-based approach, it is possible to achieve \(\geq\)99% of FP32 accuracy (this cut-off is defined in the MLPerf benchmarks[24]) for all evaluated benchmarks when using residues with as
low as 6 bits. This number can be lowered to 5 bits for BERT-large and RNN-T and to 4 bits for DLRM.
Besides inference, the RNS-based approach opens the door for analog computing to be used in tasks that require higher precision than inference such as DNN training. Figure 3b-d shows the loss during DNN training/fine-tuning. Table 3 reports the validation accuracies after FP32 and RNS-based low-precision training. Here, the GEMM operations during forward and backward passes of training follow the same methodology as inference, with weight updates carried out in FP32. Our experiments show that \(\geq\)99% FP32 validation accuracy is achievable after training ResNet50 from scratch using the RNS-based approach with only 6-bit moduli. Similarly, fine-tuning BERT-large and OPT-125M by using 5-bit and 7-bit moduli, respectively, can reach \(\geq\)99% FP32 validation accuracy. The results are noticeably promising as the previous efforts on analog DNN hardware that adopted the LP fixed-point approach had never successfully demonstrated the training of state-of-the-art DNNs due to the limited precision of this approach.
Fig. 4 illustrates the dataflow of the RNS-based analog core when performing MVM as part of the DNN inference/training. An input vector \(X\) and a weight matrix \(W\) to be multiplied in the MVM unit are first mapped to signed integers. To mitigate the quantization effects, \(X\) and each row in \(W\) are scaled by an FP32 scaling factor that is unique to the vector (See Methods). The signed integers are then converted into RNS residues through modulo operation (i.e., forward conversion). By construction, each residue is within the range of \([0,m_{i})\). To achieve the same throughput as a fixed-point analog core, the RNS-based analog core with \(n\) moduli requires using \(n\) analog MVM units--one for each modulus--and running them in parallel. Each analog MVM unit requires a set of DACs for converting the associated input and weight residues into the analog domain. The MVM operations are followed by an analog modulo operation on each output residue vector. Thanks to the modulo operation, the output residues--to be captured by ADCs--are reduced back to the \([0,m_{i})\) range. Therefore, a bit precision of \(\lceil\log_{2}m_{i}\rceil\) is adequate for both DACs and ADCs to perform input and output conversions without any information loss. The output residues are then converted back to the standard representation in the digital domain using Eq. (1) to generate the signed-integer output vector, which is then mapped back to an FP32 final output \(Y\). The non-linear function \(f\) (e.g., ReLU, sigmoid, etc.) is then performed digitally in FP32.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**DNN** & **Task** & **Dataset** \\ \hline ResNet50 & Image classification & ImageNet[23] \\ SSD-ResNet34 & Object detection & MS COCO[35] \\ BERT-Large & Question answering & SQuADv1.[13] \\ RNN-T & Speech recognition & Librispeech[37] \\ DLRM & Recommendation & 1TB Click Logs[38] \\ OPT-125M & Language Modeling & Wikitext[39] \\ OPT-350M & Language Modeling & Wikitext[39] \\ \hline \hline \end{tabular}
\end{table}
Table 2: MLPerf (Inference: Datacenters) benchmarks.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline \hline
 & \multicolumn{4}{c}{**RNS-based Core (This work)**} & \multicolumn{4}{c}{**LP Fixed-Point Core**} & \multicolumn{3}{c}{**HP Fixed-Point Core**} \\ \cline{2-5} \cline{6-9} \cline{10-12}
\(b_{\text{in}}\), \(b_{w}\) & \(b_{\text{DAC}}\) & \(b_{\text{ADC}}\) & Moduli Set (\(\mathcal{M}\)) & RNS Range (\(M\)) & \(b_{\text{DAC}}\) & \(b_{\text{out}}\) & \(b_{\text{ADC}}\) & Lost Bits & \(b_{\text{DAC}}\) & \(b_{\text{out}}\) & \(b_{\text{ADC}}\) \\ \hline
4 & 4 & 4 & \(\{15,14,13,11\}\) & \(\simeq 2^{15}-1\) & 4 & 14 & 4 & 10 & 4 & 14 & 14 \\
5 & 5 & 5 & \(\{31,29,28,27\}\) & \(\simeq 2^{19}-1\) & 5 & 16 & 5 & 11 & 5 & 16 & 16 \\
6 & 6 & 6 & \(\{63,62,61,59\}\) & \(\simeq 2^{24}-1\) & 6 & 18 & 6 & 12 & 6 & 18 & 18 \\
7 & 7 & 7 & \(\{127,126,125\}\) & \(\simeq 2^{21}-1\) & 7 & 20 & 7 & 13 & 7 & 20 & 20 \\
8 & 8 & 8 & \(\{255,254,253\}\) & \(\simeq 2^{24}-1\) & 8 & 22 & 8 & 14 & 8 & 22 & 22 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Data and data converter precision in RNS-based, LP fixed-point, and HP fixed-point analog cores.
Figure 2: **Comparison of the RNS-based and regular fixed-point analog approaches.** **a** The distribution of average error observed at the output of a dot product performed with the RNS-based analog approach (pink) and the LP regular fixed-point analog approach (cyan). Error is defined as the distance from the result calculated in FP32. The experiments are repeated for 10,000 randomly generated vector pairs with vector size \(h=128\). **b** Energy consumption of data converters (i.e., DACs and ADCs) per dot product for the RNS-based analog approach (pink) and the LP (cyan) and HP (dark blue) regular fixed-point analog approaches. See Methods for the energy estimation methodology.
### Redundant RNS for Fault Tolerance
Analog compute cores are sensitive to noise. In the case of RNS, even small errors in the residues can result in a large error in the corresponding integer they represent. The Redundant Residue Number System (RRNS) [25; 26; 27] can detect and correct errors--making the RNS-based analog core fault tolerant. RRNS uses a total of \(n+k\) moduli: \(n\) non-redundant and \(k\) redundant. An RRNS(\(n+k,n\)) code can detect up to \(k\) errors and can correct up to \(\lfloor\frac{k}{2}\rfloor\) errors. In particular, the error in the codeword (i.e., the \(n+k\) residues representing an integer in the RRNS space) can be one of the following cases:
* **Case 1:** Fewer than \(\lfloor\frac{k}{2}\rfloor\) residues have errors--thereby they are correctable,
* **Case 2:** Between \(\lfloor\frac{k}{2}\rfloor\) and \(k\) residues have errors or the codeword with more than \(k\) errors does not overlap with another codeword in the RRNS space--thereby the error is detectable,
* **Case 3:** More than \(k\) residues have errors and the erroneous codeword overlaps with another codeword in the RRNS space--thereby the error goes undetected.
Errors are detected by using majority logic decoding wherein we divide the total \(n+k\) output residues into
Figure 4: **RNS-based analog GEMM dataflow. The operation is shown for a moduli set \(\mathcal{M}=\{m_{1},\dots,m_{n}\}\). The \(n\times h\) analog MVM units are represented as generic blocks. The dataflow is agnostic of the technology.**
Figure 3: **Accuracy performance of the RNS-based analog core.** **a** Inference accuracy of regular fixed-point (LP) and RNS-based cores (See Table 1) on MLPerf (Inference: Datacenters) benchmarks. The accuracy numbers are normalized to the FP32 accuracy. **b-d** Loss during training for FP32 and RNS-based approaches with varying moduli bit-width. ResNet50 (**b**) is trained from scratch for 90 epochs using SGD optimizer with a momentum. BERT-Large (**c**) and OPT-125M (**d**) are fine-tuned from pre-trained models. Both models are fine-tuned using the Adam optimizer with a linear learning rate scheduler for 2 and 3 epochs for BERT-Large and OPT-125M, respectively. All inference and training experiments use FP32 for all non-GEMM operations.
\(\binom{n+k}{n}\) groups with \(n\) residues per group. One simple way of majority logic decoding in this context is to convert the residues in each group back to the standard representation via CRT to generate an output value for each group and compare the results of the \(\binom{n+k}{n}\) groups. If more than 50% of the groups have the same result in the standard representation, then the generated codeword is correct. This corresponds to **Case 1**. In contrast, not having a majority indicates that the generated codeword is erroneous and cannot be corrected. This corresponds to **Case 2**. In this case, the detected errors can be eliminated by repeating the entire calculation. In **Case 3**, the erroneous codeword generated by the majority of the groups overlaps with another codeword. As a result, more than 50% of the groups have the same incorrect result and the error goes undetected. To optimize the hardware performance of this process, more efficient base-extension-based algorithms [30] instead of CRT can be used for error detection.
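For illustration, the sketch below implements a brute-force software stand-in for this decoding: each \(n\)-residue group is reconstructed with the CRT, and a candidate is accepted when it lies in the legitimate range and disagrees with at most \(\lfloor k/2\rfloor\) of the received residues. The RRNS(6,4) moduli set and the helper names are our own choices; a hardware decoder would instead use the base-extension method mentioned above.

```python
from itertools import combinations
from math import prod

def crt(residues, moduli):
    M = prod(moduli)
    return sum(a * (M // m) * pow(M // m, -1, m)
               for a, m in zip(residues, moduli)) % M

def rrns_decode(codeword, moduli, n, k):
    # Try every n-residue group; accept a CRT candidate that is in the legitimate
    # range [0, M) and disagrees with at most floor(k/2) received residues.
    M = prod(moduli[:n])
    for idx in combinations(range(n + k), n):
        cand = crt([codeword[i] for i in idx], [moduli[i] for i in idx])
        if cand >= M:
            continue
        mismatches = sum(cand % m != r for r, m in zip(codeword, moduli))
        if mismatches <= k // 2:
            return cand                  # corrected (Case 1)
    return None                          # detected but not correctable (Case 2)

moduli = [63, 62, 61, 59, 67, 65]        # n = 4 non-redundant + k = 2 redundant moduli
n, k, A = 4, 2, 1_234_567
codeword = [A % m for m in moduli]
codeword[1] = (codeword[1] + 5) % moduli[1]   # inject a single residue error
assert rrns_decode(codeword, moduli, n, k) == A
```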
The final error probability in an RRNS code depends on the percentage of the _non-correctable_ errors observed in the residues. This probability is influenced by the chosen moduli set and the number of error correction iterations (See Methods). Let \(p_{c},p_{d}\), and \(p_{u}\) be the probabilities that Cases 1, 2, and 3 occur respectively when computing a single output. Overall, \(p_{c}+p_{d}+p_{u}=1\). For a single attempt (i.e., \(R=1\)), the probability of producing the incorrect output integer is \(p_{\text{err}}(R=1)=1-p_{c}=p_{u}+p_{d}\). Generally, it is possible to repeat the calculations \(R\)-times until no detectable error is found at the expense of increasing compute latency. In this case, the probability of having an incorrect output after \(R\) attempts of error correction is
\[p_{\text{err}}(R)=1-p_{c}\sum_{r=0}^{R-1}(p_{d})^{r}. \tag{8}\]
As the number of attempts increases, the output error probability decreases and converges to \(\lim_{R\rightarrow\infty}p_{\text{err}}(R)=p_{u}/(p_{u}+p_{c})\).
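Eq. (8) is straightforward to evaluate numerically; the probabilities used below are purely illustrative.

```python
def p_err(p_c, p_d, attempts):
    # Eq. (8): probability of an incorrect output after `attempts` rounds, where a
    # round is repeated only when a detectable-but-uncorrectable error occurred.
    return 1.0 - p_c * sum(p_d**r for r in range(attempts))

# Illustrative values only: p_c = 0.95, p_d = 0.04 (so p_u = 0.01).
print(p_err(0.95, 0.04, 1), p_err(0.95, 0.04, 2), p_err(0.95, 0.04, 10))
```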
Fig. 5 shows \(p_{\text{err}}\) for different numbers of redundant moduli (\(k\)), attempts (\(R\)), and moduli sets with different bit-widths. Broadly, as the probability of a single residue error \(p\) increases, the output error probability tends to 1. For a given number of attempts, increasing bit precision and the number of redundant moduli decreases \(p_{\text{err}}\). For a fixed number of redundant moduli and a fixed number of bits per moduli, \(p_{\text{err}}\) decreases as the number of attempts increases.
Fig. 6 investigates the impact of noise on the accuracy of two large and important MLPerf benchmarks--ResNet50 and BERT-Large--when using RRNS. The two networks show similar behavior: adding extra moduli and increasing the number of attempts decrease \(p_{\text{err}}\) at the same value of \(p\). ResNet50 requires \(\sim\)3.9 GigaMAC operations (GOp) for one inference on a single input image. For a \(128\times 128\) MVM unit, inferring an ImageNet image through the entire network involves computing \(\sim\)29.4M partial output elements. Therefore, we expect the transition point from an accurate network to an inaccurate network to occur at \(p_{\text{err}}\leq 1/29.4\text{M}=3.4\times 10^{-8}\). This \(p_{\text{err}}\) transition point is \(\leq 1/358.6\text{M}=2.8\times 10^{-9}\) for BERT-Large. Fig. 6, however, shows that the evaluated DNNs are more resilient to noise than expected: they are able to tolerate higher \(p_{\text{err}}\) while maintaining good accuracy. The accuracy of ResNet50 only starts degrading (below 99% FP32) when \(p_{\text{err}}\approx 4.5\times 10^{-5}\) (1000\(\times\) higher than the estimated value) on average amongst the experiments shown in Figure 6. This transition probability is \(p_{\text{err}}\approx 4\times 10^{-4}\) for BERT-Large (on average \(100,000\times\) higher than the estimated value).
## Discussion
The RNS (and the fault-tolerant RRNS) framework are agnostic to the analog technology employed. Generally, the RNS GEMM operations can be performed as a regular GEMM operation followed by a modulo operation in the analog domain. Analog GEMM is well-explored in the literature. Previous works leveraged photonics [1; 2; 3; 4; 5; 6; 7], crossbar arrays consisting of resistive RAM [8; 9; 10; 11; 12], switched capacitors [13; 14], PCM cells [15], STT-RAM [16; 17], etc.
The analog modulo operation can be performed electrically or optically. In the electronic domain, one can use ring oscillators: a circuit that generates a continuous waveform by cycling through a series of inverters [28] to perform modulo operations. By carefully designing the parameters of the ring oscillator, it is possible to create an output frequency that corresponds to the desired modulus value. Alternatively, the phase of a signal can be used for performing modulo due to the periodicity of phases in optical systems. Optical phase is inherently modular against \(2\pi\). By modulating the phase of an optical signal, one can achieve modulo operations in the analog domain. Using RNS requires forward and reverse conversion circuits to switch between the RNS and the standard number system. The forward conversion is a modulo operation while the reverse conversion can be done using the CRT, mixed-radix conversion, or look-up tables. The (digital) hardware costs of these circuits can be reduced by choosing special moduli sets [31; 32].
The RNS framework can be extended with the PNS to
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Precision** & **ResNet50** & **BERT-Large** & **OPT-125M** \\
 & **Acc. (\%)** & **F1 Score (\%)** & **Acc. (\%)/PPL** \\ \hline
FP32 & 75.80 & 91.03 & 43.95/19.72 \\
8-bit & 75.77 & 90.98 & 43.86/20.00 \\
7-bit & 75.68 & 90.97 & 43.59/20.71 \\
6-bit & 75.13 & 90.85 & 42.79/22.62 \\
5-bit & 59.72 & 90.81 & 41.45/26.17 \\
4-bit & 42.15 & 89.66 & 38.64/35.65 \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Validation accuracy results after training/fine-tuning.
work with arbitrary precision, despite having DACs and ADCs with limited precision. For applications requiring higher-precision arithmetic than the example cases in this study (e.g., some high-performance computing applications, homomorphic encryption, etc.), a higher \(M\) value and therefore moduli with higher bit-width might be necessary, which will be bound by the same limitations discussed in this paper. Instead, one can represent an integer value as \(D\) separate digits where each digit is represented as a set of residues in the RNS domain and has an RNS range of \(M\). This hybrid scheme can achieve \(D\log_{2}M\) bit precision where \(D\) can liberally be increased without increasing the bit precision of the data converters. Different from the RNS-only scheme, the hybrid scheme requires overflow detection and carry propagation from lower digits to higher digits. The overflow detection can be achieved using two sets of residues: primary and secondary. While the operations are performed with both sets of residues, base extension between two sets helps detect any overflow and propagate the carry to the higher digits if required (See Methods).
In conclusion, our work provides a methodology for precise, energy-efficient, and fault-tolerant analog DNN acceleration. Overall, we believe that RNS is a crucial numeral system for the development of next-generation analog hardware capable of both inference and training state-of-the-art neural networks for advanced applications, such as generative artificial intelligence.
## Methods
### Handling Negative Numbers with RNS
An RNS with dynamic range \(M\) allows representing values within the range of \([0,M)\). This range can be shifted to \([-\psi,\psi]\), where \(\psi=\lfloor(M-1)/2\rfloor\), to represent negative values. This is achieved by reassigning the values in between \((0,\psi]\) to be positive, \(0\) to be zero, and the numbers in between \((\psi,2\psi]\) to be negative (i.e., mapped to \([-\psi,-1]\)). Then, the values can
Figure 5: **Calculated output error probability (\(\mathbf{p}_{err}\)) versus single residue error probability (\(\mathbf{p}\)). a-c \(p_{err}\) for one (a), two (b), and infinite (c) error correction attempts and a varying number of redundant moduli (\(k\)).**
Figure 6: **Inference accuracy versus single residue error probability (\(\mathbf{p}\)). a-f The plots show ResNet-50 (a-c) and BERT-Large (d-f) inference accuracy results under varying \(p\) for one (a and d), two (b and e), and infinite (c and f) error correction attempts and a varying number of redundant moduli (\(k\)).**
be recovered uniquely by using CRT with a slight modification:
\[A=\begin{cases}\Big{|}\sum\limits_{i=1}^{n}a_{i}M_{i}T_{i}\Big{|}_{M},&\text{if }\Big{|}\sum\limits_{i=1}^{n}a_{i}M_{i}T_{i}\Big{|}_{M}\leq\psi\\ \Big{|}\sum\limits_{i=1}^{n}a_{i}M_{i}T_{i}\Big{|}_{M}-M,&\text{otherwise}.\end{cases} \tag{9}\]
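A minimal sketch of this signed recovery, reusing the example 6-bit moduli from Table 1, is given below; Python's modulo already returns non-negative residues for negative inputs.

```python
from math import prod

def crt_signed(residues, moduli):
    # Eq. (9): map the CRT result from [0, M) to the signed range [-psi, psi].
    M = prod(moduli)
    psi = (M - 1) // 2
    value = sum(a * (M // m) * pow(M // m, -1, m)
                for a, m in zip(residues, moduli)) % M
    return value if value <= psi else value - M

moduli = [63, 62, 61, 59]
for A in (-1000, -1, 0, 1, 1000):
    assert crt_signed([A % m for m in moduli], moduli) == A
```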
### Data Converter Energy Estimation
The DAC and ADC energy numbers in Fig. 2**(b)** are estimated by using equations formulated by Murmann [19; 33]. The energy consumption of a DAC per conversion is
\[E_{\text{DAC}}=\text{ENOB}^{2}C_{u}V_{\text{DD}}^{2}, \tag{10}\]
where \(C_{u}=0.5\) fF is a typical unit capacitance and \(V_{\text{DD}}=\) 1V is the supply voltage [19]. The energy consumption of an ADC per conversion is estimated as
\[E_{\text{ADC}}=k_{1}\text{ENOB}+k_{2}4^{\text{ENOB}}, \tag{11}\]
where \(k_{1}\)\(\approx\)100 fJ and \(k_{2}\)\(\approx\)1 aJ. \(E_{\text{ADC}}\) is dominated by the exponential term (i.e., \(k_{2}4^{\text{ENOB}}\)) at large ENOB ( \(\geq\) 10-bits).
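For reference, evaluating Eqs. (10) and (11) directly with the constants quoted above shows how quickly the ADC term grows with ENOB; this only restates the fitted model, it is not a measurement.

```python
C_U, V_DD = 0.5e-15, 1.0          # 0.5 fF unit capacitance, 1 V supply
K1, K2 = 100e-15, 1e-18           # ~100 fJ and ~1 aJ fit constants

def e_dac(enob):
    return enob**2 * C_U * V_DD**2            # Eq. (10): energy per conversion (J)

def e_adc(enob):
    return K1 * enob + K2 * 4**enob           # Eq. (11): 4**ENOB dominates above ~10 b

for b in (6, 14, 22):                         # residue ADC vs. LP/HP output widths
    print(b, e_dac(b), e_adc(b))
```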
### Accuracy Modeling
Both RNS-based and regular fixed-point analog cores are modeled using PyTorch for estimating inference and training accuracy. Convolution, linear, and batched matrix multiplication (BMM) layers are performed as tiled-GEMM operations which are computed tile-by-tile as a set of tiled-MVM operations. Each input, weight, and output of the tiled MVM are quantized with a desired bit precision.
Pre-quantization, the input vectors and weight tiles are first dynamically scaled, i.e., scaled at runtime, to mitigate the quantization effects as follows: For an \(h\times h\) weight tile \(\mathcal{W}_{t}\), we denote each row vector as \(\mathcal{W}_{rt}\) where the subscript \(r\) stands for the row and \(t\) for the tile. Similarly, an input vector of length \(h\) is denoted as \(\mathcal{X}_{t}\) where \(t\) indicates the tile. Each weight row \(\mathcal{W}_{rt}\) shares a single FP32 scale \(s^{w}_{rt}=\max(|\mathcal{W}_{rt}|)\) and each input vector \(\mathcal{X}_{t}\) shares a single FP32 scale \(s^{x}_{t}=\max(|\mathcal{X}_{t}|)\). \(h\) scales per \(h\times h\) weight tile and 1 scale per input vector, in total \(h+1\) scales, are stored for each tiled-MVM operation. The tiled MVM is performed between the scaled weight and input vectors, \(\widetilde{\mathcal{W}}_{rt}=\mathcal{W}_{rt}/s^{w}_{rt}\) and \(\widetilde{\mathcal{X}}_{t}=\mathcal{X}_{t}/s^{x}_{t}\), to produce \(\widetilde{\mathcal{Y}}_{t}=\widetilde{\mathcal{W}}_{t}\widetilde{\mathcal{X}}_{t}\). The output elements \(\widetilde{\mathcal{Y}}_{rt}\) are then quantized (if required) to resemble the output ADCs and multiplied back with the appropriate scales so that the actual output elements \(Y_{rt}=\widetilde{\mathcal{Y}}_{rt}\cdot s^{w}_{rt}\cdot s^{x}_{t}\) are obtained.
Here, the methodology is the same for RNS-based and regular fixed-point cores. For the RNS-based case, in addition to the description above, the quantized input and weight integers are converted into the RNS space before the tiled-MVM operations. MVMs are performed separately for each set of residues and are followed by a modulo operation before the quantization step. The output residues for each tiled MVM are converted back to the standard representation using the CRT.
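A compact NumPy sketch of this dynamic-scaling and quantization flow is shown below; the exact integer MVM stands in for the analog core (RNS-based or high-precision fixed-point), and the bit-width and rounding choices are our own simplifications.

```python
import numpy as np

def quantize(x, bits):
    # Symmetric uniform quantizer to signed `bits`-bit integers; expects |x| <= 1.
    qmax = 2**(bits - 1) - 1
    return np.clip(np.round(x * qmax), -qmax, qmax).astype(np.int64), qmax

def tiled_mvm(W_tile, x_tile, bits=6):
    # One FP32 scale per weight row and one per input vector (h + 1 scales per tile).
    s_w = np.abs(W_tile).max(axis=1, keepdims=True)
    s_x = np.abs(x_tile).max()
    Wq, qmax = quantize(W_tile / s_w, bits)
    xq, _ = quantize(x_tile / s_x, bits)
    y_int = Wq @ xq                       # exact integer MVM (e.g., the RNS-based core)
    # Undo the quantization step sizes and both scales to return to FP32.
    return y_int * (s_w.ravel() * s_x) / (qmax * qmax)

rng = np.random.default_rng(1)
W, x = rng.standard_normal((128, 128)), rng.standard_normal(128)
print(np.max(np.abs(tiled_mvm(W, x) - W @ x)))   # small residual quantization error
```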
The GEMM operations (i.e., convolution, linear, and BMM layers) are sandwiched between an input operation \(O_{\text{in}}\) and an output operation \(O_{\text{out}}\). This makes the operation order \(O_{\text{in}}\)-GEMM-\(O_{\text{out}}\) during the forward pass, and \(O_{\text{out}}\)-GEMM-\(O_{\text{in}}\) in the backward pass. \(O_{\text{in}}\) quantizes the input and weight tensors in the forward pass and is a null operation in the backward pass. In contrast, \(O_{\text{out}}\) is a null operation in the forward pass and quantizes the activation gradients in the backward pass. In this way, the quantization is always performed before the GEMM operation. The optimizer (i.e., SGD or Adam) is modified to keep a copy of the FP32 weights to use during the weight updates. Before each forward pass, the FP32 weights are copied and stored. After the forward pass, the quantized model weights are replaced by the previously stored FP32 weights before the step function so that the weight updates are performed in FP32. After the weight update, the model parameters are quantized again for the next forward pass. This high-precision weight update step is crucial for achieving high accuracy in training.
In Fig. 3b-d, all the convolution, linear, and BMM layers in the models were replaced by the quantized versions. We trained ResNet50 from scratch by using SGD optimizer for 90 epochs with a momentum of 0.9 and a learning rate starting from 0.1. The learning rate was scaled down by 10 at epochs 30, 60, and 80. We fine-tuned BERT-large and OPT-125M from the implementations available in the Huggingface transformers repository [34]. We used the Adam optimizer for both models with the default settings. The script uses a linear learning rate scheduler. The learning rate starts at 3e-05 and 5e-05 and the models are trained for 2 and 3 epochs, respectively, for BERT-Large and OPT-125M.
### Error distribution in the RRNS code space
For an \(\text{RRNS}(n+k,n)\) with \(n\) non-redundant moduli \((m_{1},m_{2},...,m_{n})\) and \(k\) redundant moduli \((m_{n+1},m_{n+2},...,m_{n+k})\), the probability distributions \((p_{c},\ p_{d},\) and \(p_{u})\) of different types of errors (Case 1, Case 2, and Case 3 that were mentioned in Redundant RNS for Fault Tolerance) are related to the Hamming distance distribution of the RRNS code space. In an \(\text{RRNS}(n+k,n)\), every integer is represented as \(n+k\) residues (\(r_{i}\) where \(i\in\{1,...,n+k\}\)) and this vector of \(n+k\) residues is considered as an RRNS codeword. A Hamming distance of \(\eta\in\{0,1,...,n+k\}\) between the original codeword and the erroneous codeword indicates that \(\eta\) out of \(n+k\) residues are erroneous. The erroneous codewords create a new vector space of \(n+k\)-long vectors where at least one \(r_{i}\) is replaced with \(r^{\prime}_{i}\neq r_{i}\) with \(i\in\{1,...,n+k\}\) and \(r^{\prime}_{i}<m_{i}\). This vector space includes all the \(\text{RRNS}(n+k,n)\) codewords as well as other possible \(n+k\)-long vectors that do not overlap with any codeword in the RRNS code space. A vector represents a codeword and is in the RRNS code space if and only if it can be converted into a value within the legitimate range \([0,M)\) of the \(\text{RRNS}(n+k,n)\) by using the CRT. The number of all vectors that have a Hamming distance \(\eta\) from a codeword in \(\text{RRNS}(n+k,n)\) can be expressed as
\[V_{\eta}=\sum\limits_{Q\binom{n+k}{\eta}}\prod\limits_{i=1}^{\eta}(m_{i}-1), \tag{12}\]
where \(Q\binom{n+k}{\eta}\) represents one selection of \(\eta\) moduli from \(n+k\) moduli while \(\sum_{Q\binom{n+k}{\eta}}\) represents the summation over all distinct \(\binom{n+k}{\eta}\) selections. The number of codewords that are in the RRNS code space with a Hamming distance of \(\eta\in\{0,1,...,n+k\}\) can be expressed as
\[\mathcal{D}_{\eta}=\sum\limits_{h=0}^{\eta-1-k}(-1)^{h}\binom{n+k-\eta+h}{n+k-\eta}\zeta(n+k,\eta-h), \tag{13}\]
for \(k+1\leq\eta\leq n+k\). For \(1\leq\eta\leq k\), \(D_{\eta}=0\) and \(D_{0}=1\). \(\zeta(n+k,\eta)\) represents the total number of non-zero multiples, within the range \([0,M)\), of the product of any \(n+k-\eta\) moduli out of the \(n+k\) moduli of the \(\text{RRNS}(n+k,n)\) code and can be denoted as
\[\zeta(n+k,\eta)=\sum\limits_{Q\binom{n+k}{n+k-\eta}}\left\lfloor\frac{M-1}{m_{i_{1}}m_{i_{2}}...m_{i_{n+k-\eta}}}\right\rfloor, \tag{14}\]
where \((m_{i_{1}},m_{i_{2}},...,m_{i_{\lambda}})\) with \(1\leq\lambda\leq n+k\) is a subset of the \(\text{RRNS}(n+k,n)\) moduli set.
An undetectable error occurs only if a codeword with errors overlaps with another codeword in the same RRNS space. Given the distance distributions for the vector space \(V\) and the codespace \(D\) (Eq. (12), (13), respectively), the probability of observing an undetectable error \((p_{u})\) for \(\text{RRNS}(n+k,n)\) can be computed as
\[p_{u}=\sum\limits_{\eta=k+1}^{n+k}\frac{D_{\eta}}{V_{\eta}}p_{E}(\eta), \tag{15}\]
where \(p_{E}(\eta)\) is the probability of having \(\eta\) erroneous residues in a codeword which can be calculated as
\[p_{E}(\eta)=\sum\limits_{Q\binom{n+k}{\eta}}p^{\eta}(1-p)^{(n+k-\eta)}, \tag{16}\]
for an error probability of \(p\) in a single residue.
Eq. (13) indicates that for up to \(\eta=k\) erroneous residues \(D_{\eta}=0\), and so it is not possible for an erroneous codeword to overlap with
another codeword in the RRNS code space. This guarantees the successful detection of the observed error. If the Hamming distance of the erroneous codeword is \(\eta\leq\lfloor\frac{k}{2}\rfloor\), the error can be corrected by the majority logic decoding mechanism. In other words, the probability of observing a correctable error is equal to observing less or equal to \(\lfloor\frac{k}{2}\rfloor\) errors in the residues and can be calculated as
\[p_{c}=\sum_{\eta=0}^{\lfloor\frac{k}{2}\rfloor}p_{E}(\eta)=\sum_{\eta=0}^{ \lfloor\frac{k}{2}\rfloor}\Big{(}\sum_{Q\binom{(n+k)}{\eta}}p^{\eta}(1-p)^{(n+k -\eta)}\Big{)}. \tag{17}\]
All the errors that do not fall under the undetectable or correctable categories are referred to as detectable but not correctable errors with a probability \(p_{d}\) where \(p_{d}=1-(p_{c}+p_{u})\). The equations in this section were obtained from the work conducted by Yang [27].
To model the error in the RNS-based core for the analysis shown in Fig. 6, \(p_{c}\), \(p_{d}\), and \(p_{u}\) are computed for a given RRNS\((n+k,n)\) using Eqs. (15) and (17). Given the number of error correction attempts, the output error probability (\(p_{err}\)) is calculated according to Eq. (8). Random noise is injected at the output of every tiled-MVM operation using a Bernoulli distribution with the probability of \(p_{err}\).
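The sketch below transcribes Eqs. (12)-(17) directly. It takes \(M\) in Eq. (14) to be the legitimate range, i.e., the product of the \(n\) non-redundant moduli, and uses an illustrative RRNS(6,4) moduli set of our own choosing; the resulting \(p_{c}\), \(p_{d}\), and \(p_{u}\) are what enter Eq. (8).

```python
from itertools import combinations
from math import comb, prod

def zeta(moduli, n, eta):
    # Eq. (14), with M taken as the legitimate range (product of the n
    # non-redundant moduli) and selections of (n+k-eta) moduli.
    M = prod(moduli[:n])
    return sum((M - 1) // prod(sel)
               for sel in combinations(moduli, len(moduli) - eta))

def D(moduli, n, eta):
    # Eq. (13): codewords at Hamming distance eta from a given codeword.
    nk, k = len(moduli), len(moduli) - n
    if eta == 0:
        return 1
    if eta <= k:
        return 0
    return sum((-1)**h * comb(nk - eta + h, nk - eta) * zeta(moduli, n, eta - h)
               for h in range(eta - k))

def V(moduli, eta):
    # Eq. (12): all vectors at Hamming distance eta from a codeword.
    return sum(prod(m - 1 for m in sel) for sel in combinations(moduli, eta))

def p_E(nk, eta, p):
    # Eq. (16): probability of exactly eta erroneous residues.
    return comb(nk, eta) * p**eta * (1 - p)**(nk - eta)

def error_probabilities(moduli, n, p):
    nk, k = len(moduli), len(moduli) - n
    p_u = sum(D(moduli, n, eta) / V(moduli, eta) * p_E(nk, eta, p)
              for eta in range(k + 1, nk + 1))                      # Eq. (15)
    p_c = sum(p_E(nk, eta, p) for eta in range(k // 2 + 1))         # Eq. (17)
    return p_c, 1 - p_c - p_u, p_u

moduli = [63, 62, 61, 59, 67, 65]        # illustrative RRNS(6, 4)
print(error_probabilities(moduli, n=4, p=1e-3))   # (p_c, p_d, p_u)
```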
## RNS Operations
The proposed analog RNS-based approach requires modular arithmetic. In this section, we discuss two ways of performing modular arithmetic in the analog domain, one electrical and one optical.
### Modular Arithmetic with Ring Oscillators
In a ring oscillator, where each inverter has a propagation delay \(t_{\rm prop}>0\), there is always only one inverter that has the same input and output--either 1-1 or 0-0--at any given time when the ring oscillator is on. The location of this inverter with the same input and output propagates along with the signal every \(t_{\rm prop}\) time and rotates due to the ring structure. This rotation forms a modulator behavior in the ring when the location of this inverter is tracked.
Let \(S_{\rm RO}(t)\) be the state of a ring oscillator with \(N\) inverters. Here, \(S_{\rm RO}(t)\in\{0,...,N-1\}\) and \(S_{\rm RO}(t)=k\) means that the \(k+1\)-th inverter's input and output have the same value at time \(t\). \(S_{\rm RO}(t)\) keeps rotating between 0 and \(N-1\) as long as the oscillator is on. Figure 7a shows a simple example where \(N=3\). In the first \(t_{\rm prop}\) time interval, the input and output of the first inverter are both 0, therefore, the state \(S_{\rm RO}(t<t_{\rm prop})=0\). Similarly, when \(t_{\rm prop}<t<2t_{\rm prop}\), the input and output of the second inverter are 1, so \(S_{\rm RO}(t_{\rm prop}<t<2t_{\rm prop})=1\). Here, the time between two states following one another (i.e., \(t_{\rm prop}\)) is fixed as \(S_{\rm RO}(t)\) rotates, \((0,1,2,0,1,...)\). Assume the state of the ring oscillator is sampled periodically with a sampling period of \(T_{a}=A\cdot t_{\rm prop}\). Then, the observed change in the state of the ring oscillator between two samples (\(S_{\rm RO}(t=T_{a})-S_{\rm RO}(t=0)\)) is equivalent to \(|A|_{N}\) where \(A\) is a positive integer value. Therefore, to perform a modulo against a modulus value \(m\), the number of inverters \(N\) should be equal to \(m\). The dividend number \(A\) and the sampling period can be adjusted by changing the analog input voltage to a voltage-to-time converter (VTC).
In a modular dot product or an MVM operation, the dividend \(A\) is replaced by the output of the dot product. Analog dot products can be performed using traditional methods with no change with any desired analog technology where output can be represented as an analog electrical signal (e.g., current or voltage) before the analog modulo.
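As a toy illustration of this counting argument (it mimics only the state rotation, not the analog circuit, and assumes the sampling period \(T_{a}=A\cdot t_{\rm prop}\) discussed above):

```python
def ring_state(t, t_prop, N):
    # Index of the inverter whose input and output currently match; it advances by
    # one position every propagation delay and wraps around the N-inverter ring.
    return int(t // t_prop) % N

def modulo_via_ring(A, N, t_prop=1.0):
    # Sample the state after a period T_a = A * t_prop: the state change equals |A|_N.
    return (ring_state(A * t_prop, t_prop, N) - ring_state(0.0, t_prop, N)) % N

assert all(modulo_via_ring(A, 61) == A % 61 for A in range(0, 500, 7))
```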
### Modular Arithmetic with Phase Shifters
The amount of phase shift introduced by a single dual-rail phase shifter when \(v\) and \(-v\) voltages are applied on the upper and the bottom arms, respectively, is
\[\Delta\Phi=\frac{\pi vL}{V_{\rm{\pi-cm}}}, \tag{18}\]
where \(V_{\rm{\pi-cm}}\) is the modulation efficiency of the phase shifter and is a constant value. \(\Delta\Phi\) is proportional to both the length of the shifter \(L\) and the amount of applied voltage \(v\). Figure 7b shows an example modular dot product operation between two vectors, \(x\) and \(w\), using cascaded dual-rail phase shifters. \(w\) is encoded digit-by-digit using phase shifters with lengths proportional to \(2^{j}\) where \(j\) represents the digit number. In the example, each element (i.e., \(w_{0}\) and \(w_{1}\)) of the 2-element vector \(w\) consists of 3 digits and uses 3 phase shifters, each with lengths \(L,2L\), and \(4L\). If the \(j\)-th digit of the \(i\)-th element of \(w\), \(w_{i}^{j}=1\), a voltage \(v_{i}\) is applied to the phase shifter pair (top and bottom) with the length \(2^{j}L\). If the digit \(w_{i}^{j}=0\), then no voltage is applied and therefore no phase shift is introduced to the input signal. To encode the second operand \(x\), a voltage \(v_{i}\) that is proportional to \(x_{i}\) is applied to all non-zero digits of \(w_{i}\). To take modulo with a modulus \(m\) instead of 2\(\pi\), the input \(x\) and therefore the applied voltage \(v\) should be multiplied by the constant \(2\pi/m\). For encoding \(x_{i}\),
\[v_{i}=x_{i}\cdot\frac{V_{\rm{\pi-cm}}}{\pi L}\cdot\frac{2\pi}{m}, \tag{19}\]
should be applied so that the total phase shift at the end of the optical path is
\[\Delta\Phi_{\rm{total}}=\Big{|}\frac{2\pi}{m}\sum_{i}\big{(}\sum_{j}(2^{j}w_{ i}^{j})x_{i}\big{)}\big{|}_{2\pi}=\frac{2\pi}{m}\big{|}\sum_{i}(w_{i}x_{i}) \big{|}_{m}. \tag{20}\]
The resulting output values are collected at the end of the optical path and are in the form of the phase difference between input and output. These outputs are then re-multiplied by \(m/2\pi\) to obtain the outputs of the modular dot products for each residue.
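The digit-wise encoding of Eqs. (18)-(20) can be checked numerically; the sketch below assumes unit \(L\) and \(V_{\rm{\pi-cm}}\) and 3-digit weights, which are our simplifications for illustration.

```python
import numpy as np

def modular_dot_phase(w_digits, x, m, L=1.0, V_pi_cm=1.0):
    # w_digits[i][j] is the j-th binary digit of weight w_i; a set digit drives a
    # phase shifter of length 2**j * L, contributing pi * v_i * 2**j * L / V_pi_cm.
    total_phase = 0.0
    for digits, x_i in zip(w_digits, x):
        v_i = x_i * V_pi_cm / (np.pi * L) * (2 * np.pi / m)        # Eq. (19)
        for j, bit in enumerate(digits):
            if bit:
                total_phase += np.pi * v_i * (2**j * L) / V_pi_cm  # Eq. (18)
    total_phase %= 2 * np.pi                    # the optical phase wraps modulo 2*pi
    return total_phase * m / (2 * np.pi)        # rescale back, Eq. (20)

w = [5, 3]                                      # 3-digit weights: 5 -> 101, 3 -> 011
w_digits = [[(wi >> j) & 1 for j in range(3)] for wi in w]
x, m = [40, 60], 61
print(modular_dot_phase(w_digits, x, m), (5 * 40 + 3 * 60) % m)    # both ~ 14
```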
## Extended RNS
By combining RNS and PNS, an integer value \(Z\) can be represented as \(D\) separate digits, \(z_{d}\) where \(d\in\{0,1,...,D-1\}\) and \(0\leq z_{d}<M\) where \(M\) is the RNS range:
\[Z=\sum_{d=0}^{D-1}z_{d}M^{d}, \tag{21}\]
and can provide up to \(D\log_{2}M\) bit precision. This hybrid scheme requires carry propagation from lower digits to higher digits, unlike the RNS-only scheme. For this purpose, one can use two sets of moduli, primary and secondary, where every operation is performed for both sets of residues. After every operation, overflow is detected for each digit and carried over to the higher-order digits.
Let us define and pick \(n_{p}\) primary moduli \(m_{i}\), where \(i\in\{1,...,n_{p}\}\), and \(n_{s}\) secondary moduli \(m_{j}\), where \(j\in\{1,...,n_{s}\}\) and \(m_{i}\neq m_{j}\ \forall\{i,j\}\). Here \(M=M_{p}\cdot M_{s}=\prod_{i=1}^{n_{p}}m_{i}\cdot\prod_{j=1}^{n_{s}}m_{j}\) is large enough to represent the largest possible output of the operations performed in this numerical representation, and \(M_{p}\) and \(M_{s}\) are co-prime.
To execute an operation in this hybrid number system, the operation is performed separately for each digit of the output. These operations for each digit are independent of one another and can be parallelized except for the overflow detection and carry propagation. Assume \(z_{d}=z_{d}|_{p,s}\) consists of primary and secondary residues and is a calculated output digit of an operation before overflow detection.
\(z_{d}\) can be decomposed as \(z_{d}|_{p}=Q_{d}|_{p}M_{p}+R_{d}|_{p}\) where \(Q_{d}|_{p}\) and \(R_{d}|_{p}\) are the quotient and the remainder of the digit, with respect to the primary RNS. To detect a potential overflow in the digit \(z_{d}\), a base extension from primary to secondary RNS is performed on \(z_{d}|_{p}\) and the base extended residues are compared with the original secondary residues of the digit, \(z_{d}|_{s}\). If the residues are the same, this indicates that there is no overflow, i.e., \(Q_{d}|_{p,s}=0\), and both primary and secondary residues are kept without any carry propagated to the next higher digit. In contrast, if the base-extended secondary residues and the original secondary residues are not the same, it means that there exists an overflow (i.e., \(Q_{d}|_{p,s}\neq 0\)). In the case of overflow, the remainder of the secondary RNS, \(R_{d}|_{s}\), is calculated through a base extension from primary to secondary RNS on \(R_{d}|_{p}\), where \(R_{d}|_{p}=z_{d}|_{p}\). \(Q_{d}|_{s}\) can then be computed as \(Q_{d}|_{s}=(z_{d}|_{s}-R_{d}|_{s})M_{p}^{-1}\) where \(|M_{p}\cdot M_{p}^{-1}|_{M_{s}}=1\). \(Q_{d}|_{p}\) is calculated through base extension from the secondary to primary RNS on the computed \(Q_{d}|_{s}\). The full quotient \(Q_{d}|_{p,s}\) is then propagated to the higher-order digit. Algorithm 1 shows the pseudo-code for handling an operation \(\square\) using the extended RNS representation. The operation can be replaced by any operation that is closed under RNS.
It should be noted that \(z_{d}|_{p,s}\) is not always computed as \(x_{d}|_{p,s}\,\square\,y_{d}|_{p,s}\). For operations such as addition, each digit before carry propagation is computed by simply adding the same digits of the operands, i.e., \(z_{d}|_{p,s}=x_{d}|_{p,s}+y_{d}|_{p,s}\). However, for multiplication, each digit of \(z_{d}|_{p,s}\) should be constructed as in long multiplication. The multiplication of two numbers in the hybrid number system with \(D_{x}\) and \(D_{y}\) digits requires \(D_{x}D_{y}\) digit-wise multiplications and the output will result in \(D_{z}=D_{x}+D_{y}\) digits in total. Similarly, a dot product is a combination of multiply and add operations. If two vectors with \(h\) elements, where each element has \(D_{x}\) and \(D_{y}\) digits, are multiplied, the output will require \(D_{z}=D_{x}+D_{y}+\log_{2}h\) digits.
**Algorithm 1** **Pseudocode for performing the operation \(\square\) using the hybrid number system.** Here, \(x\) and \(y\) are the inputs for operation \(\square\) and \(z\) is the output with \(D\) digits. \(z_{d}\) represents the digits of the output, where \(z_{d}|_{p}\) are the primary residues and \(z_{d}|_{s}\) are the secondary residues. Primary and secondary residues together are referred to as \(z_{d}|_{p,s}\). \(Q\) is the quotient and \(R\) is the remainder where \(z_{d}=Q_{d}M_{p}+R_{d}\). \(\mathrm{p2s}()\) and \(\mathrm{s2p}()\) refer to base extension algorithms from primary to secondary residues and from secondary to primary residues, respectively.
```
1: \(Q_{-1}|_{p,s}=0\)
2: for \(d\gets 0\) to \(D_{z}\) do
3:   \(z_{d}^{\prime}|_{p,s}=(x|_{p,s}\,\square\,y|_{p,s})_{d}\)
4: end for
5: for \(d\gets 0\) to \(D_{z}\) do
6:   \(z_{d}|_{p,s}=z_{d}^{\prime}|_{p,s}+Q_{d-1}|_{p,s}\)
7:   \(R_{d}|_{p}=z_{d}|_{p}\)
8:   \(R_{d}|_{s}=\mathrm{p2s}(R_{d}|_{p})\)
9:   if \(R_{d}|_{s}=z_{d}|_{s}\) then
10:    \(Q_{d}|_{p,s}=0\)
11:  else
12:    \(Q_{d}|_{s}=(z_{d}|_{s}-R_{d}|_{s})M_{p}^{-1}\)
13:    \(Q_{d}|_{p}=\mathrm{s2p}(Q_{d}|_{s})\)
14:  end if
15: end for
```
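A self-contained Python sketch of Algorithm 1, specialised to addition, is given below. It takes the digit weight to be \(M_{p}\), as implied by the decomposition \(z_{d}=Q_{d}M_{p}+R_{d}\), and performs the base extensions by full CRT reconstruction, which is a software shortcut rather than the hardware-friendly method; the moduli and helper names are our own choices.

```python
from math import prod

P = [63, 62, 61, 59]           # primary moduli; M_p = prod(P) is the digit weight
S = [67, 65]                   # secondary moduli, co-prime to P; M_s = prod(S)
M_p, M_s = prod(P), prod(S)

def to_res(v, moduli):
    return [v % m for m in moduli]

def crt(res, moduli):
    M = prod(moduli)
    return sum(a * (M // m) * pow(M // m, -1, m) for a, m in zip(res, moduli)) % M

def base_extend(res, src, dst):
    # Software shortcut: reconstruct the value from `src` residues, re-reduce by `dst`.
    return to_res(crt(res, src), dst)

def encode(v):
    # Split v into base-M_p digits, each held as primary + secondary residues.
    digits = []
    while True:
        digits.append((to_res(v % M_p, P), to_res(v % M_p, S)))
        v //= M_p
        if v == 0:
            return digits

def decode(digits):
    return sum(crt(dp, P) * M_p**d for d, (dp, _) in enumerate(digits))

def add_digits(x_digits, y_digits):
    # Algorithm 1 specialised to addition: digit-wise add, detect overflow by
    # comparing the p->s base extension of the primary remainder with the
    # secondary residues, and propagate the quotient Q as the carry.
    D = max(len(x_digits), len(y_digits))
    pad = (to_res(0, P), to_res(0, S))
    x_digits = x_digits + [pad] * (D - len(x_digits))
    y_digits = y_digits + [pad] * (D - len(y_digits))
    out, carry_p, carry_s = [], to_res(0, P), to_res(0, S)
    for (xp, xs), (yp, ys) in zip(x_digits, y_digits):
        zp = [(a + b + c) % m for a, b, c, m in zip(xp, yp, carry_p, P)]
        zs = [(a + b + c) % m for a, b, c, m in zip(xs, ys, carry_s, S)]
        Rp = zp                               # remainder w.r.t. M_p, by construction
        Rs = base_extend(Rp, P, S)            # p2s
        if Rs == zs:                          # no overflow in this digit
            carry_p, carry_s = to_res(0, P), to_res(0, S)
        else:                                 # overflow: recover quotient Q < M_s
            Qs = [((a - b) * pow(M_p, -1, m)) % m for a, b, m in zip(zs, Rs, S)]
            carry_p, carry_s = base_extend(Qs, S, P), Qs     # s2p
        out.append((Rp, Rs))
    if any(r != 0 for r in carry_p + carry_s):
        out.append((carry_p, carry_s))        # final carry becomes a new top digit
    return out

a, b = 123_456_789_012_345, 987_654_321_098_765
assert decode(add_digits(encode(a), encode(b))) == a + b
```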
## Acknowledgements
We thank Dr. Rashmi Agrawal and Prof. Vijay Janapa Reddi for their insightful discussions.
## Author contributions
D.B. conceived the project idea. C.D. and D.B. developed the theory. C.D. implemented the accuracy modeling and the analytical error models with feedback from D.B. and A.J. C.D. and L.N. conducted the experiments. D.B. and A.J. supervised the project. C.D. wrote the manuscript with input from all authors.
## Competing interests
The authors declare the following patent application: U.S. Patent Application No.: 17 / 543,676. L.N. and D.B. declare individual ownership of shares in Lightmatter, a startup company developing photonic hardware for AI.
Figure 7: **Analog modulo implementations.****a** Modulo operation performed using a ring oscillator. A ring oscillator with \(N=3\) inverters is shown to perform modulo against a modulus \(m=3\). This operation is performed after every analog dot product to perform a modular dot product. **b** Modular dot product performed using phase shifters. A modular dot product operation between two \(2\)-element vectors \(x\) and \(w\), each with \(3\) digits, is shown by using a dual-rail set of cascaded phase shifters. The transistor switch turns on and supplies voltage to the phase shifter when the corresponding digit of \(w\) is \(1\). |
2301.13472 | The Aharonov Casher phase of a bipartite entanglement pair traversing a
quantum square ring | We propose in this article a quantum square ring that conveniently generates,
annihilates and distills the Aharonov Casher phase with the aid of
entanglement. The non-Abelian phase is carried by a pair of spin-entangled
particles traversing the square ring. At maximal entanglement, dynamic phases
are eliminated from the ring and geometric phases are generated in discrete
values. By contrast, at partial to no entanglement, both geometric and dynamic
phases take on discrete or locally continuous values depending only on the
wavelength and the ring size. We have shown that entanglement in a non-Abelian
system could greatly simplify future experimental efforts revolving around the
studies of geometric phases. | Che-Chun Huang, Seng Ghee Tan, Ching-Ray Chang | 2023-01-31T08:40:40Z | http://arxiv.org/abs/2301.13472v1 | # The Aharonov Casher phase of a bipartite entanglement pair traversing a quantum square ring
###### Abstract
We propose in this article a quantum square ring that conveniently generates, annihilates and distills the Aharonov Casher phase with the aid of entanglement. The non-Abelian phase is carried by a pair of spin-entangled particles traversing the square ring. At maximal entanglement, dynamic phases are eliminated from the ring and geometric phases are generated in discrete values. By contrast, at partial to no entanglement, both geometric and dynamic phases take on discrete or locally continuous values depending only on the wavelength and the ring size. We have shown that entanglement in a non-Abelian system could greatly simplify future experimental efforts revolving around the studies of geometric phases.
## I Introduction
The quantum ring is a useful apparatus to study the physics of electron phase accumulating and interfering over the confined trajectories as prescribed by the design of the ring. Following the successful measurement of the Aharonov Bohm [1] phase, hot on the heels were a slew of experiments that had demonstrated the Aharonov Casher [2] and the Berry-Pancharatnam [3; 4] phases.In modern context, the Aharonov Casher phase associates primarily with the spin orbit coupling, particularly in 2D condensed matter systems. Ring structure carved out of a 2D spin-orbital semiconductor to enclose a magnetic field at the center [5; 6; 7] was proposed to study the simulatie of the Aharonov Casher and the Aharonov Bohm effects on interference. Efforts have also been made to study the parametric effects [8; 9; 10] of e.g. the Rashba constant, and the time-dependent magnetic field. At around the same time, the Aharonov Casher effect was experimentally measured in a number of ring structures [11; 12]. On a separate study, the Aharonov Casher phase is also associated with the non-Abelian gauge field for its spin phase [13; 14; 15] and spin force effects [16; 17; 18; 19], categorically reviewed in Ref [20]. While the spin phase which comprises the geometric and the dynamic parts has largely been determined in ring structures, the exact nature of the accumulated phases in these devices remain ambiguous. The dynamic phase remains an elusive component in most cases, and the process to extract the geometric phase continues to be complicated. For example, in Ref [12], the system is a Rashba 2D that comprises a hedgehog orientation of the effective magnetic fields turned crown-like by a vertical magnetic field. While the strength of the BP phase is proportional to the solid angle subtended in the rest frame of the electron, a dynamic phase proportional to \(\sin\theta\) is also formed in concomitance. By applying an in-plane B field, which modifies the geometric Berry-Pancharatnam, and keeps the dynamic unaffected to the first order, a distinction can be made about the two phases. Therefore, isolating the geometric phase is a complicated effort, the Aharonov Casher remains largely a total phase for most applications.
In this article, we propose a quantum square ring (QSR) that conveniently generates, annihilates or distills the Aharonov Casher phase with the aid of entanglement as shown in FIG. 1. The Aharonov Casher phase generated in this manner comprises the dynamic and the geometric components that can be further separated by tuning the entanglement strength and the device size measured by the wavelength multiple of a traversing particle pair. For example, at maximal entanglement, dynamic phases are eliminated from the device and geometric phases are generated in discrete values. Discrete geometric phases would in turn switch their values on different ring locations depending on the device size. At partial to no entanglement, the Aharonov Casher as well as its dynamic and geometric components can be tuned according to the quantum ring size to take on discrete values or vary continuously across the device. The device is made out of semiconductor or metallic materials that exhibit 2D spin-orbit effects, e.g., the Rashba-Vasko, Dresselhaus, or Dresselhaus-Perel effects [21; 22; 23; 24; 25; 26; 27]. The spin-orbit effects will be the source of both the geometric and the dynamic phases in our system. As the external magnetic field is not needed to generate the geometric phase, nor is it needed to help to eliminate the dynamic phase, a leaner QSR concept that rules out the Aharonov Bohm
and the Altshuler-Aronov-Spivak (AAS) effect, and co-opts only the electrically-controllable Aharonov Casher is employed in our design. In the absence of strong B or M field, the adiabatic Berry-Pancharatnam phases in the QSR [28; 29; 30; 31] are also ruled out. Novel to the functioning of our device though is the entanglement physics [32; 33]. On the bottom left of the device is an emitter electrode (FIG. 1) through which an entangled bipartite spin-pair is injected into the QSR. The top right is the collector electrode where the injected spin-pair meets again and carries with it a total phase moderated by the physics of entanglement and device geometry. Our QSR device is therefore, in essence, a non-adiabatic and a non-Abelian Aharonov Casher system [34]. The spin-pair's total phase is accumulated via spin precession about the spin-orbit field but under the constant purview of bi-partite entanglement, which provides in this paper a viable method to generate geometric and dynamic phases in a controllable manner. As an aside, we note that the quantum ring device has previously been studied for the practical purpose of producing spin entanglement in a controllable manner [35; 36]. There is, however, no discussion on its applicability in the context of geometric phases, let alone any specific discussion on its moderation of the dynamic phases or its distillation of the geometric phases.
When a pure quantum state \(|\psi(t)\rangle\) evolves on the Hilbert space trajectory in time range \(\Gamma:t\in[0,\tau]\), the total phase it accumulates is given by \(arg(\langle\psi(0)|\psi(t)\rangle)\). The dynamic phase can be derived from \(D=-i\int_{0}^{\tau}\mathrm{d}t\langle\psi(t)|\dot{\psi}(t)\rangle\). One can then define the geometric phase as the result of a total phase minus the dynamic phase as follows
\[\gamma=arg(\langle\psi(0)|\psi(\tau)\rangle)+i\int_{0}^{\tau}\mathrm{d}t\langle\psi(t)|\dot{\psi}(t)\rangle \tag{1}\]
Consider a QSR of size \(\eta\times\eta\) as shown in FIG. 1. The device comprises two paths, each consisting of a horizontal and a vertical arm. From the material point of view, the device exhibits the Rashba spin-orbit effect as follows
\[H=\sigma^{x}k_{y}-\sigma^{y}k_{x} \tag{2}\]
The QSR geometry conspires with the Rashba effect to generate phase factors for any particle traveling along path 1 and path 2 as follows
\[U_{I}=e^{\frac{i\delta\sigma_{x}}{2}}e^{\frac{-i\sigma_{y}\pi}{2}},U_{II}=e^{ \frac{-i\delta\sigma_{y}}{2}}e^{\frac{i\eta\sigma_{x}}{2}} \tag{3}\]
Note that \(\delta=\omega t\) for both paths as a result of \(\omega_{I}=\omega_{II}=\omega\). Therefore, a spin particle traversing the horizontal arm of Path 1 or the vertical arm of Path 2 accumulates a total phase denoted by the dimensionless \(\eta\). The phase accumulated over time can be translated to a phase at an actual location in space through the particle velocity \(\nu\) in the actual system, via \(\omega=k\nu\), where \(k\) is the wave-vector. While \(\eta\) is hence the phase parameter for the first half of either Path 1 or 2, \(\delta\in[0,\eta]\) represents the phase at any point on the second half of either path. For ease of illustration, we will refer to the \(\eta\) parts of Paths 1 and 2 as \(\eta_{1}\) and \(\eta_{2}\), respectively, and likewise \(\delta_{1}\) and \(\delta_{2}\) for the \(\delta\) parts. The spin-orbit effect, when viewed in the rest frame of the carrier, acts as an effective magnetic field, which sets up a perfect environment for spin precession. As the entangled spin-pair traverses both paths, its phase evolves under the unitary operation
\[\mathcal{U}\equiv U_{I}\bigotimes U_{II} \tag{4}\]
The geometric phase would thus be
\[\gamma=arg(\langle\psi(0)|\mathcal{U}|\psi(0)\rangle)+i\int_{0}^{\tau}\mathrm{ d}t\langle\psi(0)|\mathcal{U}^{\dagger}\dot{\mathcal{U}}|\psi(0)\rangle \tag{5}\]
Explicitly, the dynamic phase is given by
\[\begin{split} i\mathcal{U}^{\dagger}\dot{\mathcal{U}}&=\left(I\bigotimes\left(e^{-\frac{i\sigma_{x}}{2}\eta}\left(\frac{\sigma_{y}}{2}\right)e^{\frac{i\sigma_{x}}{2}\eta}\right)\right)\\ &\quad+\left(\left(e^{\frac{i\sigma_{y}}{2}\eta}\left(-\frac{\sigma_{x}}{2}\right)e^{-\frac{i\sigma_{y}}{2}\eta}\right)\bigotimes I\right)\\ =&\frac{1}{2}\left(\begin{array}{cccc}0&-i\cos\eta&-\cos\eta&0\\ i\cos\eta&-2\sin\eta&0&-\cos\eta\\ -\cos\eta&0&2\sin\eta&-i\cos\eta\\ 0&-\cos\eta&i\cos\eta&0\end{array}\right)\end{split} \tag{6}\]
The dynamic-phase operator is expressed in terms of the Pauli matrices so that the intuitive picture of effective magnetic fields is not lost. For calculations, however, its 4-by-4 matrix representation is used.
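As a cross-check of Eqs. (1)-(6), the following minimal numerical sketch builds \(\mathcal{U}=U_{I}\bigotimes U_{II}\) and accumulates the total, dynamic and geometric phases by brute force. It is not part of the original analysis: the ordering of the arms, the sign conventions of the spin-orbit rotations, and the example numbers are our assumptions.

```python
import numpy as np

# Pauli matrices and the 2x2 identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def rot(pauli, angle):
    """exp(i*angle*pauli/2), using pauli^2 = identity."""
    return np.cos(angle / 2) * I2 + 1j * np.sin(angle / 2) * pauli

def evolution(delta, eta):
    """U_I (x) U_II of Eqs. (3)-(4) for phases (delta, eta); conventions assumed."""
    U1 = rot(sx, delta) @ rot(sy, -eta)    # path 1: eta-arm (sigma_y) then delta-arm (sigma_x)
    U2 = rot(sy, -delta) @ rot(sx, eta)    # path 2: eta-arm (sigma_x) then delta-arm (sigma_y)
    return np.kron(U1, U2)

def phases(psi0, delta, eta, steps=4000):
    """Return (total, dynamic, geometric) phases, cf. Eqs. (1) and (5)."""
    total = np.angle(psi0.conj() @ evolution(delta, eta) @ psi0)
    # D = -i * integral <psi|dpsi/dt>, accumulated along the eta-arms, then the delta-arms.
    D = 0.0
    for arm_len, U_of in ((eta, lambda s: evolution(0.0, s)),
                          (delta, lambda s: evolution(s, eta))):
        grid = np.linspace(0.0, arm_len, steps)
        for s0, s1 in zip(grid[:-1], grid[1:]):
            pa, pb = U_of(s0) @ psi0, U_of(s1) @ psi0
            D += np.real(-1j * pa.conj() @ (pb - pa))
    return total, D, total - D

# phi(0) = sqrt(p0)|00> + sqrt(p1)|11>, Eq. (7); basis order |00>, |01>, |10>, |11>
p0 = 0.3
phi0 = np.array([np.sqrt(p0), 0, 0, np.sqrt(1 - p0)], dtype=complex)
print(phases(phi0, delta=1.0, eta=1.5))   # dynamic phase ~ 0 for this family of states
```

For the \(|\phi(0)\rangle\) family, the printed dynamic phase is numerically zero, consistent with the discussion below.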
## III Bipartite entangled states
The initial states of the entangled spin-pair at the emitter are then prepared in the Bell basis of \(|\phi(0)\rangle\) or
Figure 1: A square quantum ring device that takes a bipartite spin-pair at the emitter and generates an AC total phase as well as its separable geometric and dynamic components.
\(\left|\psi(0)\right\rangle\) as follows:
\[\begin{array}{c}\left|\phi(0)\right\rangle=\sqrt{p_{0}}\left|00\right\rangle \pm\sqrt{p_{1}}\left|11\right\rangle\\ \left|\psi(0)\right\rangle=\pm\sqrt{p_{0}}\left|10\right\rangle+\sqrt{p_{1}} \left|01\right\rangle\\ \end{array}\bigg{\}} \tag{7}\]
where \(p_{0}\) and \(p_{1}\) determine the strength of the entanglement, and \(p_{0},p_{1}\geq 0\), \(p_{0}+p_{1}=1\).
We will now consider the initial states \(\left|\phi(0)\right\rangle=\sqrt{p_{0}}\left|00\right\rangle\pm\sqrt{p_{1}}\left|11\right\rangle\) to be injected into the QSR through the emitter. As shown in FIG. 1, spin particles 1 and 2 travel along the paths bearing their respective labels. The geometric phase is the total Aharonov-Casher phase of the system minus the dynamic phase, as shown below
\[\gamma=arg(\cos^{2}(\frac{\delta}{2})\cos^{2}(\frac{\eta}{2})+\sin^{2}(\frac{ \delta}{2})\sin^{2}(\frac{\eta}{2}))-D \tag{8}\]
The initial states \(\left|\phi(0)\right\rangle\) simply cannot generate any dynamic phase anywhere on the QSR, i.e. \(D=0\). Moreover, the argument of the total Aharonov Casher phase factor is real and non-negative. The geometric phase, by virtue of \(\gamma\equiv arg(a+ib)-D\), therefore vanishes:
\[\gamma=\tan^{-1}\frac{0}{(a>0)}-D\rightarrow\gamma=0 \tag{9}\]
It is clear that the strength of entanglement has no bearing on the geometric and dynamic phases, as \(p_{0},p_{1}\) could equally take on the values of un-entangled states. The result of vanishing phases is not as trivial as it might seem, though; it is a testament to the non-Abelian feature of the QSR device. In fact, the above shows that a bipartite state composed of \(\left|00\right\rangle\) and \(\left|11\right\rangle\) is ideal for eliminating both geometric and dynamic phases from the propagating particles. In terms of applications, this could be a handy device to remove phases from all particles where they are not desired. In the following, we provide insight into how dynamic phases are removed from the bipartite state by inspecting the constituent particles of the entangled pair. Spin 1 travels on arm-\(\eta_{1}\) as though it is in a superposition of \(\left|0\right\rangle\) and \(\left|1\right\rangle\) as far as the dynamic phase is concerned. In either state, its expectation energy is zero, as can be deduced from the circular fashion of its spin rotation about the effective magnetic field \(-B_{y}\). Consider the spin to precess about the effective magnetic field in an anti-clockwise manner, and suppose spin 1 has rotated by an angle \(\theta<\pi\) by the end of its journey on arm-\(\eta_{1}\). Spin 1 then continues on arm-\(\delta_{1}\) about \(+B_{x}\), now inscribing a conical spin rotation with negative energy for the \(p_{0}\) component and positive energy for the \(p_{1}\) component. Likewise for spin 2, a corresponding process happens over arm-\(\eta_{2}\) about \(+B_{x}\) with zero energy for both components \(p_{0},p_{1}\). Spin 2 then continues its journey on arm-\(\delta_{2}\) about \(-B_{y}\), inscribing a conical rotation with positive energy for \(p_{0}\) and negative energy for \(p_{1}\). The cone energies on arms-\(\delta\) cancel one another identically, independent of the strengths of \(p_{0},p_{1}\). The effect is thus a complete negation and a net zero dynamic phase at all times.
Note that the energy cones are drawn in different sizes to reflect the energies they carry. This runs against the reality that the spin vector has constant length. Therefore, the energy cones are crude illustrations meant only to provide an intuitive description of the dynamic phases. The precise description of the cone energies is given by the expressions below. Equations (10) and (11) describe the expectation energy for spin particle 1 on arm-\(\delta_{1}\).
\[p_{0}\langle 00|\left((e^{\frac{i\sigma_{y}}{2}\eta}(-\frac{\sigma_{x}}{2})e^{-\frac{i\sigma_{y}}{2}\eta})\bigotimes I\right)|00\rangle=-p_{0}\frac{\sin\eta}{2} \tag{10}\]
\[p_{1}\langle 11|\left((e^{\frac{i\sigma_{y}}{2}\eta}(-\frac{\sigma_{x}}{2})e^{-\frac{i\sigma_{y}}{2}\eta})\bigotimes I\right)|11\rangle=p_{1}\frac{\sin\eta}{2} \tag{11}\]
Equations (12) and (13) describe the expectation energy for spin particle 2 travelling on arm-\(\delta_{2}\).
\[p_{0}\langle 00|\left(I\bigotimes(e^{-\frac{i\sigma_{x}}{2}\eta}(\frac{\sigma_{y}}{2})e^{\frac{i\sigma_{x}}{2}\eta})\right)|00\rangle=p_{0}\frac{\sin\eta}{2} \tag{12}\]
\[p_{1}\langle 11|\left(I\bigotimes(e^{-\frac{i\sigma_{x}}{2}\eta}(\frac{\sigma_{y}}{2})e^{\frac{i\sigma_{x}}{2}\eta})\right)|11\rangle=-p_{1}\frac{\sin\eta}{2} \tag{13}\]
The equations above lend clarity and mathematical credence to our qualitative accounts that the cone energies on arms-\(\delta\) cancel one another identically independent of the strength of \(p_{0},p_{1}\), resulting in a net zero dynamic phase at all times.
We will now consider the initial states \(\left|\psi(0)\right\rangle=\pm\sqrt{p_{0}}\left|10\right\rangle+\sqrt{p_{1}}\left|01\right\rangle\) to be injected into the QSR through the emitter, once again with the spin particles traveling along the paths bearing their respective labels. The geometric phase is given by
\[\begin{array}{c}\gamma=arg(\frac{\cos\eta+\cos\delta}{2}\mp\sqrt{p_{0}p_{1}} (\frac{\sin\delta\sin\eta}{2})\\ +i(p_{0}-p_{1})(\frac{\sin\delta\sin\eta}{2}))\\ +2(\sin\eta)(p_{0}-p_{1})\delta\end{array} \tag{14}\]
Figure 2: Schematic illustration of the dynamic phases of the bipartite spin pair \(\left|\phi(0)\right\rangle=\sqrt{p_{0}}\left|00\right\rangle\pm\sqrt{p_{1}}\left|11\right\rangle\) traversing the QSR. PE, NE, and ZE stand for positive energy, negative energy, and zero energy, respectively.
where the dynamic phase is now \(-2(\sin\eta)(p_{0}-p_{1})\delta\) and the total phase is deduced accordingly. It is clear from the above that the physics of entanglement has entered the geometric phase. We study the dynamic phase first. At maximum entanglement, where \(p_{0}=p_{1}=\frac{1}{2}\), the dynamic phase vanishes with the equality of \(p_{0}\) and \(p_{1}\), leading to a total Aharonov Casher phase that is purely geometric. Intuitively, at maximal entanglement the expected energy of the spin-pair is constantly zero. On arm-\(\eta_{1}\), the initial spin \(|1\rangle\) or \(|0\rangle\) precesses about an effective \(-B_{y}\) field in a circular fashion, both with zero expectation energy. Likewise on arm-\(\eta_{2}\), due to entanglement, the corresponding spin \(|0\rangle\) or \(|1\rangle\) precesses about an effective \(+B_{x}\) field in a circular fashion, once again both with zero energy. In short, circular rotation translates to zero expectation of the Zeeman energy on both arms. Therefore, regardless of entanglement strength, the dynamic phase is zero anywhere on arms-\(\eta\). The \(\delta\) sections of the QSR would, however, generate a dynamic phase at any strength of entanglement other than the maximum, i.e. when \(p_{0}\neq p_{1}\). This is because the initial states on arms-\(\delta\) are determined by the duration of precession on arms-\(\eta\). On path 1, let the spin rotate by an angle \(\theta<\pi\) about \(-B_{y}\) by the end of arm-\(\eta_{1}\). Spin 1 then continues on arm-\(\delta_{1}\) about \(+B_{x}\), inscribing a conical spin rotation with positive energy for the \(p_{0}\) component and negative energy for the \(p_{1}\) component. Likewise for path 2, a corresponding process that happens over \(\eta_{2}\) about \(+B_{x}\) continues on arm-\(\delta_{2}\) about \(-B_{y}\), inscribing once again a conical rotation with positive energy for \(p_{0}\) and negative energy for \(p_{1}\), as shown in FIG. 3. It is clear that, on arms-\(\delta\), the \(p_{1}\) component presents a counter effect, of proportion \(p_{1}\), to the energy due to \(p_{0}\). The effect is thus a complete negation and a net zero dynamic phase on the equality \(p_{0}=p_{1}\). The energy cones are drawn in different sizes to reflect the energies they carry, which clearly runs against the quantum reality that the spin vector has constant length; they are therefore crude illustrations meant only to provide an intuitive description of the dynamic phases. The precise description of the cone energies is given by the expressions below. Equations (15) and (16) describe the expectation energy for spin particle 1 on arm-\(\delta_{1}\).
\[p_{0}\langle 10|\left((e^{\frac{i\sigma_{y}}{2}\eta}(-\frac{\sigma_{x}}{2})e^{-\frac{i\sigma_{y}}{2}\eta})\bigotimes I\right)|10\rangle=p_{0}\frac{\sin\eta}{2} \tag{15}\]
\[p_{1}\langle 01|\left((e^{\frac{i\sigma_{y}}{2}\eta}(-\frac{\sigma_{x}}{2})e^{-\frac{i\sigma_{y}}{2}\eta})\bigotimes I\right)|01\rangle=-p_{1}\frac{\sin\eta}{2} \tag{16}\]
Equations (17) and (18) describe the expectation energy for spin particle 2 on arm-\(\delta_{2}\).
\[p_{0}\langle 10|\left(I\bigotimes(e^{-\frac{i\sigma_{x}}{2}\eta}(\frac{\sigma_{y}}{2})e^{\frac{i\sigma_{x}}{2}\eta})\right)|10\rangle=p_{0}\frac{\sin\eta}{2} \tag{17}\]
\[p_{1}\langle 01|\left(I\bigotimes(e^{-\frac{i\sigma_{x}}{2}\eta}(\frac{\sigma_{y}}{2})e^{\frac{i\sigma_{x}}{2}\eta})\right)|01\rangle=-p_{1}\frac{\sin\eta}{2} \tag{18}\]
Spin particles on arms \(\delta_{1}\) and \(\delta_{2}\) reinforce one another: the \(p_{0}\) and \(p_{1}\) components become more positive and more negative, respectively. As the equality of the entanglement strengths is crucial for suppressing the dynamic phase on arms-\(\delta\) but not on arms-\(\eta\), a net dynamic phase accumulates on arms-\(\delta\) whenever \(p_{0}\neq p_{1}\). There is, however, an exception. If the length of arms-\(\eta\) translates to a spin rotation of \(\eta=n\pi\), the subsequent conical precession on arms-\(\delta\) does not occur; the spin simply continues with circular precession and a zero dynamic phase throughout. The physics above lends further credence to the applicability of the QSR design as a phase purifier. Tuning the entanglement strength to \(p_{0}=p_{1}\) at the source, a maximally-entangled spin-pair injected at the emitter propagates with the dynamic phase suppressed throughout. In the event of \(p_{0}\neq p_{1}\), though, the dynamic phase can be suppressed by choosing the arm length such that \(\eta=n\pi\). Having completed our study of the dynamic phase, we now examine the geometric phase, \(\gamma\). At maximal entanglement, i.e. \(p_{0}=p_{1}=\frac{1}{2}\), the imaginary part of the argument in Equation (19) below, denoted by \(b\), vanishes. The geometric phase is either 0 or \(\pi\) depending on the real part \(a(\delta,\eta)\), as seen from \(\tan\gamma=\frac{b}{a(\delta,\eta)}\): a positive denominator corresponds to \(\gamma=0\), while a negative denominator corresponds to \(\gamma=\pi\).
\[\gamma=arg(\frac{1}{2}(\cos\delta+\cos\eta)\mp\frac{1}{4}\sin\delta\sin\eta+i0)\equiv arg(a+ib) \tag{19}\]
Let us now study in slightly more detail the geometric phase of the spin-pair traversing the \(\delta\) arms. The relevant range here is \(0<\delta\leq\eta\). In the case of no entanglement, \((p_{0},p_{1})=(0,1)\) or \((1,0)\),
\[\gamma=\tan^{-1}\frac{\mp(\sin\delta\sin\eta)}{\cos\delta+\cos\eta}\mp 2(\sin \eta)\delta \tag{20}\]
Figure 3: Schematic illustration of the dynamic phases of the bipartite spin pair \(|\psi(0)\rangle=\pm\sqrt{p_{0}}\,|10\rangle+\sqrt{p_{1}}\,|01\rangle\) traversing the QSR. PE, NE, ZE, stand for positive energy, negative energy, zero energy, respectively.
The dynamic phase is eliminated at arm lengths corresponding to \(\eta=n\pi\). Note that when \(\delta>0\), the phase \(\eta\) corresponds to the end location of arms-\(\eta\). The spin is always oriented along the \(z\) axis by the time it reaches this end location. Therefore, advancing on arms-\(\delta\), the spin precesses in a circular fashion with a net zero expectation energy and is thus precluded from generating a dynamic phase. At the values \(\eta=n\pi\), the total phase alternates between \(0\) and \(\pi\) on arms-\(\delta\): for \(\eta=2n\pi\) the denominator is always positive and the device generates a total phase of \(0\), while for \(\eta=(2n+1)\pi\) the denominator is always negative and the total phase is \(\pi\). Since the dynamic phase is always \(0\), the total phase at \(\eta=n\pi\) is also the geometric phase. For other values of \(\eta\), the total phase takes on continuous values as a function of \(\eta\) and \(\delta\). The analysis above concerns only the phases on arms-\(\delta\); the phases on arms-\(\eta\) for different \(\eta\) values can, on the other hand, be found by prescribing \(\delta=0\), details of which are discussed later. For illustration, we refer to FIG. 4 for \(\eta=\pi,2\pi\) and observe the geometric phases on arms-\(\delta\).
In the case of partial entanglement, i.e. \(p_{0}\neq p_{1}\)
\[\begin{split}\gamma=\tan^{-1}&\frac{(p_{0}-p_{1})( \sin\delta\sin\eta)}{(\cos\delta+\cos\eta)\mp\sqrt{p_{0}p_{1}}\sin\eta\sin \delta}\\ &+2(\sin\eta)(p_{0}-p_{1})\delta\end{split} \tag{21}\]
As above, the dynamic phase can be eliminated by choosing \(\eta=n\pi\). Once again, at these values the total phase is discrete and alternates between \(0\) (for \(\eta=2n\pi\)) and \(\pi\) (for \(\eta=(2n+1)\pi\)). As before, the total phase at \(\eta=n\pi\) is also the geometric phase. For other values of \(\eta\), the total phase takes on continuous values as a function of \(\eta\) and \(\delta\). Note again that the analysis here concerns only the phases on arms-\(\delta\).
We now revert to the case of maximum entanglement. We have seen that at maximal entanglement the dynamic phase vanishes and the geometric phase takes on the discrete values \(0\) and \(\pi\). We now study the exact locations on the QSR where the geometric phase switches its value. As a matter of fact, the switching from \(0\) to \(\pi\) happens on arms-\(\delta\). The exact location can be pinpointed by checking whether the denominator of the geometric phase factor satisfies
\[2(\cos\delta+\cos\eta)\mp(\sin\delta\sin\eta)>0 \tag{22}\]
The equation above shows that the answer depends on the length of arms-\(\eta\), i.e. the length of the arms before advancing into arms-\(\delta\). For illustration, we choose arm lengths that correspond to \(\eta=\frac{\pi}{2},\pi,\frac{3\pi}{2},2\pi\), as shown in FIG. 5. The device generates \(\gamma=0\) on arms-\(\eta\) at all times, as indicated in blue. As the bipartite spin pair advances into arms-\(\delta\), the geometric phase switches to \(\pi\) at the locations indicated by the red segments. At \(\eta=2\pi\), though, no switching is possible and the geometric phase remains \(0\) at all times. In the event of a vanishing denominator, the total phase \(arg(\langle\psi(0)|\mathcal{U}|\psi(0)\rangle)=arg(a+ib)=\tan^{-1}\frac{0}{0}\) is undefined; the bipartite state at that juncture must either vanish or turn out orthogonal to the initial Bell states.
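A short numerical sketch of Eq. (22) (ours, not from the original text) locates these switching points on arms-\(\delta\); the `branch` argument selects the \(\mp\) sign and the \(\eta\) values follow FIG. 5.

```python
import numpy as np

def gamma_max_entanglement(delta, eta, branch=+1):
    """Sign test of Eq. (22) at p0 = p1 = 1/2: positive denominator -> gamma = 0, negative -> pi."""
    den = 2.0 * (np.cos(delta) + np.cos(eta)) - branch * np.sin(delta) * np.sin(eta)
    return np.where(den > 0, 0.0, np.pi)

for eta in (np.pi / 2, np.pi, 3 * np.pi / 2, 2 * np.pi):
    deltas = np.linspace(1e-3, eta, 400)
    g = gamma_max_entanglement(deltas, eta)
    flips = deltas[np.nonzero(np.diff(g))[0]]          # locations where gamma switches value
    print(f"eta = {eta:.3f}: switch points on arm-delta at {np.round(flips, 3)}")
```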
Last is the particular situation \(\delta=0\), which corresponds to the point where the spin-pair takes a right-angle bend into arms-\(\delta\). As long as \(\delta=0\), the spin-pair is considered to reside in the \(\eta\) regions of the arms only. A quick inspection shows that \(a(0,\eta)\) is positive throughout, which leads to the conclusion that the geometric phase on arms-\(\eta\) is \(0\) throughout, regardless of the entanglement strength. This is in fact indicated in FIG. 4 and FIG. 5, where arms-\(\eta\) are painted blue to indicate a zero geometric phase throughout. This is indeed the case, barring the singular points corresponding to \(\cos\eta=-1\), which yield \(\gamma=\tan^{-1}\frac{0}{0}\). At these points, the geometric phase is undefined. In terms of spin precession, the singular points correspond to the spin making a rotation of \((2n+1)\pi\); the odd-\(\pi\) quantum states of the spin-pair at this point are then orthogonal to the initial Bell states. In terms of the dynamic
Figure 4: Schematic illustration of the effect of \(\eta\) on the geometric phase \(\gamma\) on arms \(\delta\). The red and blue lines represent the paths with \(\pi\) and \(0\) geometric phases, respectively.
Figure 5: Quantum square rings (QSR) of different sizes are superimposed for ease of inspection. The red and blue lines represent the paths with \(\pi\) and \(0\) geometric phases, respectively.
phase, \(\delta=0\) suppresses the dynamic phase regardless of the entanglement strength. Table 1 summarizes the analysis carried out for the geometric and dynamic phases of all Bell-state spin-pairs traversing the non-Abelian QSR device.
## IV Conclusion
We have explained in detail how a non-Abelian system in the form of a QSR can be designed to generate the Aharonov Casher phase and purify it into its geometric and dynamic components without elaborate experimental setups. The device requires only an entangled-particle source coupled to a passive square ring. The Aharonov Casher phase is generated or annihilated as determined by the choice of the entanglement configuration. In the appropriate Bell states, the dynamic phase is eliminated outright at maximal entanglement. In the case of partial to no entanglement, dynamic phases are eliminated at \(\eta=n\pi\). In all manners of elimination, the Aharonov Casher phase becomes discrete and fully geometric. This device could thus be useful for future experimental efforts to study the physics of discrete geometric phases. The continuous spectrum of the Aharonov Casher phase remains accessible at partial to no entanglement, in which case the continuous phases are non-geometric. At maximum entanglement, there is no possibility of accessing any continuous form of the geometric phase. In terms of discrete phases, the manner in which the phase switches from one discrete value to another varies according to the entanglement strength. At partial to no entanglement, switching occurs only at \((\delta,\eta)=(0,(2n+1)\pi)\). By contrast, at maximal entanglement, switching can take place anywhere on arms-\(\delta\), i.e. at any value of \((\delta,\eta)\). In summary, the device has been shown to generate continuous Aharonov Casher phases, annihilate dynamic phases, distill discrete geometric phases, and enable discrete phase switching at various locations, all within the simple construct of a square ring.
## V Acknowledgement
We would like to thank the Ministry of Science and Technology of Taiwan for supporting this work under Grant No. 110-2112-M-034-001-MY3.
|
2309.17036 | UniQuadric: A SLAM Backend for Unknown Rigid Object 3D Tracking and
Light-Weight Modeling | Tracking and modeling unknown rigid objects in the environment play a crucial
role in autonomous unmanned systems and virtual-real interactive applications.
However, many existing Simultaneous Localization, Mapping and Moving Object
Tracking (SLAMMOT) methods focus solely on estimating specific object poses and
lack estimation of object scales and are unable to effectively track unknown
objects. In this paper, we propose a novel SLAM backend that unifies ego-motion
tracking, rigid object motion tracking, and modeling within a joint
optimization framework. In the perception part, we designed a pixel-level
asynchronous object tracker (AOT) based on the Segment Anything Model (SAM) and
DeAOT, enabling the tracker to effectively track target unknown objects guided
by various predefined tasks and prompts. In the modeling part, we present a
novel object-centric quadric parameterization to unify both static and dynamic
object initialization and optimization. Subsequently, in the part of object
state estimation, we propose a tightly coupled optimization model for object
pose and scale estimation, incorporating hybrids constraints into a novel dual
sliding window optimization framework for joint estimation. To our knowledge,
we are the first to tightly couple object pose tracking with light-weight
modeling of dynamic and static objects using quadric. We conduct qualitative
and quantitative experiments on simulation datasets and real-world datasets,
demonstrating the state-of-the-art robustness and accuracy in motion estimation
and modeling. Our system showcases the potential application of object
perception in complex dynamic scenes. | Linghao Yang, Yanmin Wu, Yu Deng, Rui Tian, Xinggang Hu, Tiefeng Ma | 2023-09-29T07:50:09Z | http://arxiv.org/abs/2309.17036v2 | # UniQuadric: A SLAM Backend for Unknown Rigid Object 3D Tracking and Light-Weight Modeling
###### Abstract
Tracking and modeling unknown rigid objects in the environment play a crucial role in autonomous unmanned systems and virtual-real interactive applications. However, many existing Simultaneous Localization, Mapping and Moving Object Tracking (SLAMMOT) methods focus solely on estimating specific object poses, lack estimation of object scales, and are unable to effectively track unknown objects. In this paper, we propose a novel SLAM backend that unifies ego-motion tracking, rigid object motion tracking, and modeling within a joint optimization framework. In the perception part, we design a pixel-level asynchronous object tracker (AOT) based on the Segment Anything Model (SAM) and DeAOT, enabling the tracker to effectively track target unknown objects guided by various predefined tasks and prompts. In the modeling part, we present a novel object-centric quadric parameterization to unify both static and dynamic object initialization and optimization. Subsequently, in the object state estimation part, we propose a tightly coupled optimization model for object pose and scale estimation, incorporating hybrid constraints into a novel dual sliding window optimization framework for joint estimation. To our knowledge, we are the first to tightly couple object pose tracking with light-weight modeling of dynamic and static objects using quadrics. We conduct qualitative and quantitative experiments on simulation datasets and real-world datasets, demonstrating state-of-the-art robustness and accuracy in motion estimation and modeling. Our system showcases the potential application of object perception in complex dynamic scenes.
SLAMMOT, Dynamic Quadric, Light-Weight Modeling.
## I **Introduction**
Accurate and robust perception of ego-motion and the motion and scale of other objects in the surrounding environment is a key technology for autonomous unmanned system navigation, obstacle avoidance, and virtual/augmented reality interactions.
Traditional dynamic SLAM focuses on achieving robust ego-localization in dynamic scenes and on the modeling of static environments; it treats dynamic objects as outliers and removes them directly. Compared to these methods, SLAMMOT technology incorporates dynamic objects in the scene into the SLAM system through geometric observations, enabling simultaneous perception of ego-motion and the motion of surrounding objects. **However, existing methods [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12] focus solely on perceiving object poses and motion states, neglecting the estimation of object scale.** Some approaches [13, 14, 15, 16] directly utilize 3D object detection techniques and deep learning networks to recover object scale, but such methods struggle with scale estimation for unknown objects in general scenes. Other approaches [9, 17, 18] employ simple bounding box fitting methods to estimate object scale, but they are susceptible to noise in the point cloud data and struggle to achieve robust and accurate scale estimation in complex scenes.
Compared to SLAMMOT, Object-Level SLAM focuses on higher-level environmental modeling and representation. It is capable of achieving hierarchical map representation of scenes beyond primitive features, making it suitable for more advanced downstream applications. Object-Level SLAM technology utilizes semantic and geometric observations to estimate the poses and scales of static objects in the scene. Some works [19, 20] employ prior models to represent
Figure 1: Our proposed method can estimate 6-DoF motion of ego-camera and 9-DoF motion of other unknown rigid objects simultaneously that have potential requirements in intelligent following or dynamic AR.
objects and register and update the modeling results into the global map. Others [4, 5, 21, 22, 23, 24] combine semantic observations with multi-view approaches to achieve a more generalizable and light-weight representation of unknown objects from 2D observations. Some of them use cubes [4, 5, 22] to characterize objects and optimize object states through sampled projections. Compared to cubes, approaches based on quadrics [21, 23, 24, 25, 26, 27] possess more compact mathematical models and a more complete formulation of projective geometry, enabling nonlinear optimization of object states. **However, existing works can only model static objects and are unsuitable for dynamic objects. Moreover, they suffer from inaccurate and non-robust initialization under limited viewing angles, leading to erroneous optimization results.**
To address the identified challenges, we propose a unified framework that concurrently handles ego-motion tracking, 3D tracking of unknown rigid objects, and light-weight modeling using quadric representations for both static and dynamic objects. Table.I highlights the distinctions from existing solutions.
**The main contributions of this paper are the following:**
* SAM has been integrated into AOT and is capable of accomplishing near real-time detection and tracking of unknown objects, guided by various predefined tasks and prompts.
* A novel object-centric quadric parameterization is proposed to unify the modeling of static and dynamic objects in the scene. Additionally, we propose a tightly coupled dual-sliding window optimization framework that leverages both semantic and geometric information, enabling us to achieve precise 9 degrees of freedom (9-DoF) estimations for rigid objects.
* We propose UniQuadric, which extends the SLAMMOT system to 3D tracking and light-weight modeling of unknown rigid objects while simultaneously providing ego-localization. Additionally, our system supports both visual and visual-LiDAR fusion configurations, making it suitable for indoor and outdoor scenes.
## II **Related Work**
It is crucial to accurately perceive the motion of surrounding objects while achieving ego-positioning in augmented reality, autonomous driving and other applications. In contrast to traditional dynamic SLAM, Wang et al. [2] presented a system named SLAMMOT, which incorporates dynamic object state estimation into the SLAM framework, enabling the simultaneous estimation of ego-motion and the motion of surrounding rigid moving objects. Apart from perceiving object motion, accurately estimating the scale of common objects holds significant importance in these applications. Object-Level SLAM has arisen as a pertinent technique for representing the static environment at an object-specific level. In this regard, we provide a concise overview of the investigations carried out in the realms of SLAMMOT and Object-Level SLAM.
### **SLAMMOT**
Recently, the effectiveness of combining temporal and spatial information in improving the accuracy of object tracking and localization has been verified by Li et al. [4], who tightly couple semantic and geometric measurements into an optimization framework to estimate the object's state. To improve the quality of object feature data association, VDO-SLAM [7] combines instance segmentation and dense scene flow to achieve feature association, enabling accurate estimation of object pose and velocity. However, the use of multiple networks makes it too heavy to meet real-time requirements. Qiu et al. [28] introduced an affordable SLAMMOT solution that fuses monocular camera and IMU data. This approach addresses scale ambiguity across different motion scenarios, leading to precise monocular range determination and object pose estimation. Nevertheless, it does not tackle the challenge of object scale estimation. ClusterSLAM [8] serves as a backend optimization module that implements a scene-prior-free algorithm for rigid object detection and motion estimation based on motion consistency constraints. Furthermore, ClusterVO [9] proposes a more comprehensive system that combines object detection and heterogeneous conditional random fields to achieve robust and precise association of object and feature data. In a recent study, DymSLAM [29] tackles broader scenarios by employing motion segmentation algorithms to extract masks of unknown objects, enabling pose tracking and point cloud reconstruction for them. Considering the prior constraints of the scenario, TwistSLAM [30] uses mechanical joint constraints to restrict the degrees of freedom of an object's pose estimation for specific scenarios, demonstrating the effectiveness of their novel formulation.
### **Object-Level SLAM**
The aforementioned works focus on achieving more precise object pose tracking rather than scale estimation. In contrast, the objective of Object-Level SLAM is to estimate the poses and scales of objects simultaneously, aiming to create a precise and complete static Object-Level map. SLAM++ [19] represents objects in the environment using prior CAD models and adjusts their poses based on multi-frame observations, pioneering the use of object representations in mapping. NodeSLAM [20] continuously refines the objects in the scene using prior CAD models and applies the idea of Object-Level modeling to the task of grasping. DSP-SLAM [31] weakens the reliance on prior CAD models by incorporating a Signed Distance Function (SDF) and geometric observations, thereby proposing a progressive object SDF model
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline & **EMT** & **MOT** & **SOM** & **DOM** \\ \hline SLAMMOT & ✓ & ✓ & \(\times\) & \(\times\) \\ Object-Level SLAM & ✓ & \(\times\) & ✓ & \(\times\) \\ Ours & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table I: Comparison with SLAMMOT and Object-Level framework. **EMT**: Ego Motion Tracking, **MOT**: Moving Object 3D Tracking, **SOM**: Static Object Modeling, **DOM**: Dynamic Object Modeling.
reconstruction system for static scenes, from coarse to fine. To achieve a more lightweight representation of the environment, CubeSLAM [5] represents objects as cubes and recovers object scales using vanishing points combined with multi-view observations. However, the cube-based representation is not compact in its parameterization, making it dependent on sampling adjustments during backend optimization. QuadricSLAM [23] introduces quadrics as a representation for objects, using 2D bounding boxes as semantic constraints. Compared to the cube representation, the compact mathematical model of quadrics allows gradient-based nonlinear optimization in the backend to achieve object modeling. However, its initialization process requires a significant number of frame observations and lacks robustness. OA-SLAM [27] utilizes monocular images as input and reparameterizes the detection box constraints using a more robust probability distribution; in addition, a quadric initialization method based on semantic observations and a triangulation-based depth recovery strategy is developed. Cao et al. [25] utilize an RGB-D camera and construct convex hull constraints, achieving robust single-frame initialization for most objects. Rui et al. [26] use a stereo camera and incorporate prior axis lengths, enabling fast and robust initialization for specific objects with known axis lengths, and extend the quadric representation of static landmarks to outdoor scenes. In our previous work [32], we employed both cube and quadric representations to model the scene, applying Object-Level SLAM to applications such as grasping, visual relocalization, and augmented reality in static scenes. However, all of these methods are only applicable to static scene reconstruction and cannot be applied to dynamic object modeling.
### **Unknown Scene Perception**
In the field of research on object detection and perception for unknown objects, the introduction of Grounding-DINO [33] allows us to perceive objects based on text guidance in an open world scenario. Additionally, the introduction of SAM [34] sheds new light on unknown object segmentation, enabling us to input various prompts to determine the objects to be tracked. The recent implementation of MobileSAM [35] further enhances its practical deployment capabilities, making it more useful in real-world scenarios.
While significant progress has been made in existing methods for SLAMMOT and Object-Level SLAM, certain limitations still endure. No existing system is capable of simultaneously tracking and modeling the poses of both static and dynamic objects in dynamic scenarios. Current quadric modeling methods are effective only for static objects and cannot be applied to dynamic object modeling. Moreover, current solutions are restricted to modeling specific objects and cannot be extended to unknown objects.
For the aforementioned challenges, we introduce a unified framework for dynamic and static object motion estimation and modeling. By integrating SAM, we couple high-level semantic information with low-level geometric information within a dual sliding window framework to simultaneously perform 3D motion tracking and lightweight quadric modeling for unknown rigid objects in the environment.
## III **System Overview**
### **System Architecture**
Fig.2 depicts our system framework, which can handle either RGB-D data for indoor scenes or a combination of monocular images and solid-state LiDAR data for outdoor scenes. After defining the task objectives, our system employs the _AOT_ module, guided by various prompts, to perform unknown object segmentation and tracking. The resulting tracking masks and associated visual feature points are then input into the _Ego-Pose Tracking_ module for ego motion tracking. This module conducts ego-pose tracking for indoor and outdoor environments based on different configurations and provides object masks, ego-pose, and object points to the _Object 9-DoF Tracking_ module. Within the Object 9-DoF Tracking module, we initiate object-centric quadric estimation for the tracked object and refine it by integrating geometric and semantic information using a dual sliding window. Finally, we export the optimized object to create a motion-aware object map.
### **Notation**
In this section, we describe the proposed system in detail. The notations used in this paper are as follows, and they are visualized in Fig. 3:
* \({}_{i}^{w}T_{c}\) - The camera pose of the \(i^{th}\) frame in the world frame, which is composed of a camera rotation \({}_{i}^{w}R_{c}\in\mathbb{SO}(3)\) and a translation \({}_{i}^{w}t_{c}\in\mathbb{R}^{3\times 1}\).
* \({}_{j}^{w}f_{b}\) - The \(j^{th}\) background feature position in the world frame.
* \({}_{j}^{i}z_{b}\in\mathbb{R}^{2\times 1}\) - The \(j^{th}\) background feature observation in the \(i^{th}\) pixel frame.
Figure 2: The architecture of proposed system.
* \({}_{n}^{o}f^{k}\) - The \(n^{th}\) object feature position in the \(k^{th}\) object frame.
* \({}_{n}^{i}z^{k}\in\mathbb{R}^{2\times 1}\) - The \(n^{th}\) object feature observation of the \(k^{th}\) object in the \(i^{th}\) pixel frame.
* The relative pose and velocity of the \(k^{th}\) object between frames \(i-1\) and \(i\).
* The 2D object rotation bounding box (RotBbox) of the \(k^{th}\) object at the \(i^{th}\) frame, including axis lengths, RotBbox center and rotation angle.
* The quadric parameters of the \(k^{th}\) quadric \({}^{w}_{i}\mathbf{Q}_{k}\) at the \(i^{th}\) frame in the world frame, including semi-axis lengths, translation, and rotation. The dual quadric is denoted by \({}^{w}_{i}\mathbf{Q}_{k}^{*}\in\mathbb{R}^{4\times 4}\).
* The projection dual conic of the dual quadric \({}^{w}_{i}\mathbf{Q}_{k}^{*}\) of the \(k^{th}\) object at the \(i^{th}\) frame.
## IV **Asynchronous Object Tracker**
Accurate detection and stable tracking of the target object are fundamental for implementing backend batch optimization. Grounding-DINO, SAM, and DeAOT [36] have demonstrated impressive capabilities in unknown object tracking and segmentation. However, the detection process remains time-consuming. To meet practical application demands, we introduce the AOT (shown in Fig. 4), designed for unknown object detection and tracking. This tracker operates with two threads: the detection thread detects and segments target objects in keyframes using various prompts, where \(\Delta t_{segment}\) is the time required for each segmentation. After that, the tracking thread conducts pixel-level tracking for \(SegMask_{i}\). After receiving the segmentation results, the tracking thread first performs a _Backward Association_ operation to initialize a new object or update the tracked objects' masks, ensuring synchronization of timestamps between the detection and tracking threads. Then, it employs a _Jump Track_ operation to propagate the updated masks to the current timestamp to ensure continuity in subsequent tracking. Here, \(\Delta t_{track}\) represents the time for each tracking step.
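The timing logic of this hand-off can be sketched as follows. This is an illustrative toy sketch, not the actual implementation: the `propagate` callable stands in for the pixel-level tracker (e.g. DeAOT), and the mask and frame objects are placeholders.

```python
from collections import deque
from typing import Any, Callable, Deque, Tuple

def jump_track(seg_mask: Any,
               t_mask: float,
               frame_buffer: Deque[Tuple[float, Any]],
               propagate: Callable[[Any, Any], Any]) -> Any:
    """Propagate a keyframe segmentation finished at time t_mask through every
    frame buffered after it, so the mask catches up with the live tracking thread."""
    mask = seg_mask
    for t, frame in frame_buffer:
        if t > t_mask:
            mask = propagate(mask, frame)
    return mask

# Toy usage: "masks" are strings and propagation just records the frames it crossed
buffer: Deque[Tuple[float, str]] = deque([(0.1, "f1"), (0.2, "f2"), (0.3, "f3")])
print(jump_track("mask@0.1", 0.1, buffer, lambda m, f: m + "->" + f))
```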
Moreover, for each incoming frame, the following object feature association strategies are applied:
1) For each 2D RotBbox, we extract FAST corners within the corresponding object's bounding box area, identified by its unique ID. We then establish inter-frame correlation by applying the Kanade-Lucas-Tomasi (KLT) optical flow algorithm. Additionally, for objects with pre-estimated poses, we leverage prior knowledge of their motion to enhance feature association. This approach helps alleviate the problem of erroneous feature associations caused by the coupling between object motion and ego-motion.
2) Benefiting from the envelope of the quadric, we can roughly separate foreground from background features by applying a distance threshold with respect to the initialized ellipsoid, as sketched below. Feature points located outside the axis lengths of the quadric are classified as background features, while those falling within the envelope are selected for subsequent object tracking and motion estimation.
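A minimal sketch of this foreground/background test is given below; the function name, the `margin` scale factor and the toy inputs are our own illustration rather than the system's actual code.

```python
import numpy as np

def object_point_mask(points_w, T_wo, semi_axes, margin=1.0):
    """Label 3D world points as object (True) or background (False) using the
    ellipsoid envelope of the initialized quadric (object-to-world pose T_wo)."""
    T_ow = np.linalg.inv(T_wo)                                # world -> object frame
    p_obj = points_w @ T_ow[:3, :3].T + T_ow[:3, 3]
    # Normalized ellipsoid coordinates; <= 1 means inside the (scaled) envelope
    d = np.sum((p_obj / (margin * np.asarray(semi_axes))) ** 2, axis=1)
    return d <= 1.0

pts = np.array([[0.1, 0.0, 0.2], [2.0, 2.0, 2.0]])
print(object_point_mask(pts, np.eye(4), semi_axes=[0.5, 0.4, 0.6]))   # -> [ True False]
```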
## V **State Estimation**
The proposed system aims to estimate the states of the ego-camera and surrounding objects simultaneously. The object state estimation factor graph is shown in Fig. 6; it integrates hybrid observations in a dual sliding window to achieve joint multi-state estimation.
### **Ego-motion Estimation**
#### V-A1 **Indoor ego state estimation**
In indoor scene applications, we utilize RGB-D cameras as sensor input and perform camera pose tracking based on ORB-SLAM3 [37]. In contrast to the original system, we incorporate semantic information to assist in decoupling the state estimation, enabling a more comprehensive description of the various landmarks in the scene. The maximum a posteriori (MAP) problem constructed from the visual observations used for pose estimation can be represented as:
\[{}^{w}T_{c},{}^{w}T_{o},{}^{w}f_{label}=\operatorname*{arg\,max}_{{}^{w}T_{c},\,{}^{w}T_{o},\,{}^{w}f_{label}}\prod\limits_{i=0}^{N}\prod\limits_{j=0}^{M_{label}}p\big({}_{j}^{i}z_{label}\mid{}_{i}^{w}T_{c},{}_{i}^{w}T_{o}^{label},{}_{j}^{w}f_{label}\big) \tag{1}\]
where \((\cdot)_{label}\) represents the semantic label of the landmark, and \(N\) and \(M_{label}\) represent the number of frames and landmarks, respectively. When estimating its own pose, the system utilizes landmarks with the semantic label _background_ as observations, with \({}_{i}^{w}T_{o}^{b}={}^{w}T_{o}^{b}\) for all frames \(i\); specifically, we make the assumption that all stationary landmarks are situated on a fixed, immobile rigid body. As a result, the pose of this rigid body remains constant across all frames.
Fig. 4: Asynchronous Object Tracker.
Fig. 3: Notation visualization.
#### Iii-A2 **Outdoor ego state estimation**
To overcome the constraints of RGB-D cameras, which may not provide precise depth measurements in outdoor environments, we adopt a solid-state LiDAR sensor in conjunction with a monocular camera as input sensors for the subsequent state estimation in outdoor scenarios.
In the process of ego-pose estimation, when dealing with each incoming LiDAR scan, we account for motion distortions induced by ego-motion within the frame and dynamic objects by applying a motion model for compensation. Subsequently, we employ an error state iterated Kalman filter (ESIKF) that focuses on minimizing point-to-plane residuals to estimate the system's state, as detailed in [38]. In our sensor fusion configuration for object state estimation, we leverage LiDAR point clouds to provide depth information for visual features, while RGB images serve for inter-frame feature association and semantic extraction. To recover feature depth, we employ spherical projection in conjunction with K-Nearest Neighbor search (KNNs) to establish data association between LiDAR point clouds and visual features. We further refine feature depth through line-of-sight interpolation.
To address the synchronization issue between LiDAR and camera frame rates, we employ a soft synchronization strategy for timestamp alignment.
Additionally, when estimating the state of moving objects, the distortion present in the LiDAR point clouds on these objects results from the coupling between the ego-motion and the objects' own motion. Existing distortion correction strategies designed for static scenes are ineffective in eliminating this distortion. To address this challenge, we integrate the estimated object motion and introduce a motion-aware compensation strategy; the algorithm diagram is illustrated in Fig. 5. First, the \(n^{th}\) raw LiDAR point \({}^{t}_{n}\tilde{p}\) with timestamp \(t\) in the LiDAR frame can be undistorted to the current world frame by using the ego-motion through:
\[{}^{t}_{n}p=R_{t}\,{}^{t}_{n}\tilde{p}+t_{t}=R_{t_{s}}\left(R_{t_{s}t}\,{}^{t}_{n}\tilde{p}+t_{t_{s}t}\right)+t_{t_{s}}, \tag{2}\]
where \(R_{t_{s}t},t_{t_{s}t}\) can be obtained through quaternion spherical interpolation and linear interpolation methods as:
\[\left\{\begin{array}{l}q_{t_{s}t}=\dfrac{\sin\!\left(\left(1-\frac{t-t_{s}}{t_{e}-t_{s}}\right)\theta\right)q_{t_{s}}+\sin\!\left(\frac{t-t_{s}}{t_{e}-t_{s}}\,\theta\right)q_{t_{e}}}{\sin\theta}\\[2mm] t_{t_{s}t}=\dfrac{t-t_{s}}{t_{e}-t_{s}}\,t_{t_{s}t_{e}}\\ \end{array}\right. \tag{3}\]
where \(q_{t_{s}t}\) is the unit quaternion corresponding to \(R_{t_{s}t}\), \(t_{e}\) denotes the scan end time, and \(\theta\) is the angle between \(q_{t_{s}}\) and \(q_{t_{e}}\).
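As an illustration of the interpolation in Eqs. (2)-(3), the following sketch uses SciPy's `Slerp` for the rotation and a linear model for the translation; the function signature and the toy motion are our assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_motion(t, t_s, t_e, q_start, q_end, t_start_end):
    """Rotation/translation at time t in [t_s, t_e], via quaternion slerp and
    linear interpolation of the translation (cf. Eqs. (2)-(3))."""
    alpha = (t - t_s) / (t_e - t_s)
    slerp = Slerp([0.0, 1.0], Rotation.from_quat([q_start, q_end]))
    R_t = slerp([alpha]).as_matrix()[0]
    t_t = alpha * np.asarray(t_start_end)
    return R_t, t_t

# Toy scan: 90-degree yaw and 0.5 m forward motion between scan start t_s and end t_e
q0 = [0.0, 0.0, 0.0, 1.0]                                   # identity quaternion (x, y, z, w)
q1 = Rotation.from_euler("z", np.pi / 2).as_quat()
R_mid, t_mid = interpolate_motion(0.05, 0.0, 0.1, q0, q1, [0.5, 0.0, 0.0])
print(R_mid, t_mid)
```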
Furthermore, we utilize the estimated object motion to obtain undistorted point clouds using Eq.4, which will be utilized in the subsequent estimation of the object's state.
### _Object-state Estimation_
To estimate the pose and scale of rigid objects within the environment, we introduce a dual sliding window optimization framework which combines long-term and short-term parts to fuse multiple sources of information. The novel object state estimation factor graph is shown in Fig. 6 and can be formulated as:
\[\begin{split}{}^{w}T^{k}_{o},S^{k}_{o},{}^{o}f^{k}=\operatorname*{arg\,min}_{\begin{subarray}{c}{}^{w}T_{c},\,{}^{w}T^{k}_{o},\,S^{k}_{o},\,{}^{o}f^{k}\end{subarray}}\Big\{e_{\text{prior}}+\\ \sum_{i=0}^{N_{k}}\sum_{n=0}^{M_{e}^{k}}\left\|e_{z}\left({}^{i}_{n}z^{k}\mid{}^{w}_{i}T_{c},{}^{w}_{i-1}T_{c},{}^{w}_{i}T^{k}_{o},{}^{w}_{i-1}T^{k}_{o},{}^{o}_{n}f^{k}\right)\right\|^{2}_{\Sigma_{n}}\\ +\sum_{i=0}^{N_{k}}\left\|e_{\mathcal{A}}\left({}^{i}A^{k},\bar{A}^{k}\right)\right\|^{2}_{\Sigma^{i}_{A}}+\sum_{i=0}^{N_{k}}\left\|e_{\mathcal{C}}\left({}^{i}C^{k},\bar{C}_{i}\right)\right\|^{2}_{\Sigma^{k}_{o}}\Big\}\end{split} \tag{5}\]
In our implementation, we manage a short-term sliding window containing 8-10 keyframes. This window serves to incorporate the geometric constraint and motion model constraint for object pose estimation. In the long-term sliding window which contains 20-25 keyframes, we incorporate semantic observations of objects to constrain their states. Differing from conventional geometric features, semantic information can offer more enduring observations, thereby providing more consistent constraints. Additionally, to prevent convergence issues resulting from inaccurate initial pose estimation when integrating semantic constraints, we exclusively include the keyframes that have been refined within the short-term sliding window into the long-term sliding window. This selection is based on the observation that keyframes optimized within the short window exhibit a more reliable initial state, which in turn guarantees the convergence of the long-term window. Additionally, as we mainly focus on object state estimation, when estimating the object's state, we keep the ego-pose fixed and use it solely for coordinate transformations.
#### Iii-B1 **Short-term object state estimation**
**Geometric reprojection factor:** According to the AOT designed in Sec.4, we obtain masks for each tracked object, which help determine the feature points belonging to that object within the masked regions. In the process of modeling the visual geometric constraints, we use the standard
Figure 5: Motion Compensation.
Figure 6: Object State Estimation Factor Graph.
geometric reprojection factor [7, 9]. Firstly, we transform the landmark \({}^{c}_{n}f^{k}_{i-1}\), observed in camera frame \(i-1\), to the coordinate frame of the \(k^{th}\) object using \({}^{w}_{i-1}T_{c}\) and \({}^{w}_{i-1}T^{k}_{o}\). We rely on the rigid assumption that geometric landmarks belonging to the same object maintain consistent positions relative to the object's coordinates across different frames. With this assumption, we can further transform the landmark to the world coordinate of the \(i^{th}\) frame using \({}^{w}_{i}T^{k}_{o}\). Subsequently, by applying the reprojection operation \(\Omega(\cdot)\) with the camera pose \({}^{w}_{i}T_{c}\), we can project the landmark to the current pixel coordinates. Finally, we establish the object pose constraints by comparing the projected landmark with the associated feature point \({}^{i}_{n}z_{k}\) as follows:
\[\begin{array}{l}e_{z}\left({}^{i}_{n}z^{k}\mid{}^{w}_{i}T_{c},{}^{w}_{i-1}T_{c},{}^{w}_{i}T^{k}_{o},{}^{w}_{i-1}T^{k}_{o},{}^{c}_{n}f^{k}_{i-1}\right)=\\ \Omega\left(\left({}^{w}_{i}T_{c}\right)^{-1}{}^{w}_{i}T^{k}_{o}\left({}^{w}_{i-1}T^{k}_{o}\right)^{-1}{}^{w}_{i-1}T_{c}\,{}^{c}_{n}f^{k}_{i-1}\right)-{}^{i}_{n}z^{k}\end{array} \tag{6}\]
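The transformation chain of Eq. (6) can be sketched as follows; all names are illustrative and the poses are assumed to be homogeneous 4x4 camera-to-world and object-to-world transforms.

```python
import numpy as np

def reprojection_residual(z_i, f_cam_prev, K, T_wc_i, T_wc_prev, T_wo_i, T_wo_prev):
    """Warp a landmark observed in camera i-1 through the rigid object motion and
    reproject it into image i (cf. Eq. (6)); returns the 2D pixel residual."""
    chain = np.linalg.inv(T_wc_i) @ T_wo_i @ np.linalg.inv(T_wo_prev) @ T_wc_prev
    p_i = chain @ np.append(f_cam_prev, 1.0)        # homogeneous 3D point in camera i
    uv = K @ p_i[:3]
    return uv[:2] / uv[2] - np.asarray(z_i)
```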
**Motion model factor:** To address the problem of trajectory jumps caused by visual observation noise, we incorporate a motion model factor \(e_{M}\) into the optimization process to smooth the 3D trajectory. The motion model factor can be formulated as:
\[e_{M}({}^{w}_{i}T^{k}_{o},{}^{w}_{i-1}T^{k}_{o})={}^{w}_{i}T^{k}_{o}-{}^{w}_{i }\hat{T}^{k}_{o}, \tag{7}\]
where \({}^{w}_{i}T^{k}_{o}\) is the estimated pose of the \(k^{th}\) object in frame \(i\), and \({}^{w}_{i}\hat{T}^{k}_{o}\) is its pose in frame \(i\) predicted from frame \(i-1\) using the motion model.
#### Iii-B2 **Long-term object state estimation**
**Object Centric Quadric Initialization:** In this section, we present a mathematical analysis of the exist dual quadric formulation to illustrate the limitations of it for representing dynamic objects. The quadratic represented in the world frame can be given by:
\[{}^{w}_{i}\mathbf{Q}^{*}=\begin{bmatrix}{}^{w}_{i}R&{}^{w}_{i}t\\ 0^{T}&1\end{bmatrix}\begin{bmatrix}D&0\\ 0^{T}&-1\end{bmatrix}\begin{bmatrix}{}^{w}_{i}R^{T}&0\\ {}^{w}_{i}t^{T}&1\end{bmatrix}, \tag{8}\]
where the dual quadric is denoted by \({}^{w}_{i}\mathbf{Q}^{*}\in\mathbb{R}^{4\times 4}\), \(D\in\mathbb{R}^{3\times 3}\) is the diagonal matrix composed of the squares of the quadric axis lengths, and \({}^{w}_{i}R\in\mathbb{R}^{3\times 3}\) and \({}^{w}_{i}t\in\mathbb{R}^{3\times 1}\) are the quadric rotation and centroid translation in the world frame, respectively. The projection dual conic, denoted by \({}^{w}_{i}\mathbf{C}^{*}\), is given by
\[{}^{w}_{i}\mathbf{C}^{*}=K^{c}_{i}T_{w}{}^{w}_{i}\mathbf{Q}^{*}(K^{c}_{i}T_{w} )^{T} \tag{9}\]
The above formulation is based on the static environment assumption, which means that the pose of an object in the world frame is fixed and independent of time. This limitation prevents its applicability to dynamic object representation. To address this problem, we build on the rigid-body assumption and propose an _"object-centric quadric formulation"_ for both dynamic and static objects. We assume that the pose of the quadric \(\mathbf{Q}^{*}\) in the object frame is constant, and the expression for the object-centric quadric \({}^{o}\mathbf{Q}^{*}\) can be reformulated as:
\[{}^{o}\mathbf{Q}^{*}=\begin{bmatrix}\mathbf{I}&\mathbf{0}\\ \mathbf{0}^{T}&1\end{bmatrix}\begin{bmatrix}\mathbf{D}&\mathbf{0}\\ \mathbf{0}^{T}&-1\end{bmatrix}\begin{bmatrix}\mathbf{I}^{T}&\mathbf{0}^{T}\\ \mathbf{0}&1\end{bmatrix}, \tag{10}\]
where we initialize the rotation matrix \(R\) of the object in the object coordinate as the identity matrix, and the center of the quadric is located at the origin of the object coordinate. Furthermore, the new projection dual conic \({}^{o}_{i}\mathbf{C}^{*}\) of the quadric \({}^{o}\mathbf{Q}^{*}\) at the \(i^{th}\) frame can be described as follows:
\[{}^{o}_{i}\mathbf{C}^{*}=K^{c}_{i}T_{w}{}^{w}_{i}T_{o}\,{}^{o}\mathbf{Q}^{*}(K^{c}_{i}T_{w}{}^{w}_{i}T_{o})^{T} \tag{11}\]
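A minimal sketch of Eqs. (10)-(11) is given below: it builds the object-centric dual quadric and projects it to a dual conic. The helper names, and the toy intrinsics and poses, are our own illustration.

```python
import numpy as np

def object_centric_dual_quadric(semi_axes):
    """Eq. (10): dual quadric in the object frame, centred at the origin, axis-aligned."""
    a, b, c = semi_axes
    return np.diag([a**2, b**2, c**2, -1.0])

def project_dual_quadric(Q_obj, K, T_cw, T_wo):
    """Eq. (11): dual conic of the object-centric quadric in image i.
    T_cw: 4x4 world-to-camera transform, T_wo: 4x4 object-to-world pose at frame i."""
    P = K @ (T_cw @ T_wo)[:3, :]           # 3x4 projection of the object frame
    C = P @ Q_obj @ P.T                    # 3x3 dual conic
    return C / C[2, 2]                     # fix the projective scale ambiguity

# Toy example (illustrative numbers only)
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
T_cw = np.eye(4)                           # camera at the world origin
T_wo = np.eye(4)
T_wo[:3, 3] = [0., 0., 4.]                 # object 4 m in front of the camera
C = project_dual_quadric(object_centric_dual_quadric([0.3, 0.2, 0.5]), K, T_cw, T_wo)
print(C)
```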
```
0:\({}^{i}\mathbb{P}^{k}\) - The point cloud of \(k^{th}\) object at \(i^{th}\) frame, \(Num\) - The maximum iterations, \(\epsilon\) - Inliers threshold.
0:\({}^{i}\)\(obb^{k}_{best}\) - Best oriented bounding box, \({}^{i}\delta^{k}_{axis}\) - Axis length uncertainty.
1:Initialize:\(Inlier_{best}\) - Inlier set, \(\mathbb{A}\) - Axis length set.
2:for\(t=1;t\leq Num;t++\)do
3:\(Sample\leftarrow\) RandomSample(\({}^{i}\mathbb{P}^{k}\))
4:\(obb\leftarrow\) FitOrientedBoundingBox(\(Sample\))
5:\(Inliers\leftarrow\) ComputeInliers(\({}^{i}\mathbb{P}^{k},obb,\epsilon\)) \(\triangleright\) Eq.(12)
6:if\(len(Inliers)>len(Inlier_{best})\)then
7:\({}^{i}obb^{k}_{best}=obb\)
8:\(Inlier_{best}=Inliers\)
9:\(\mathbb{A}.append(obb.axis)\)
10:endif
11:endfor
12:\({}^{i}\mu^{k}_{axis}=mean(\mathbb{A}),\;{}^{i}\delta^{k}_{axis}=\frac{\sum_{n=0}^{len(\mathbb{A})}(\mathbb{A}^{n}-{}^{i}\mu^{k}_{axis})^{2}}{len(\mathbb{A})}\)
13:\({}^{i}obb^{k}_{best}.axis=(1-\omega)\cdot{}^{i}\mu^{k}_{axis}+\omega\cdot{}^{i} \delta^{k}_{axis}\)
```
**Algorithm 1** Oriented Bounding Box Fit based on RANSAC
To robustly initialize the quadric, we propose the scale-constrained quadric initialization strategy (SQI), in which the quadric is first initialized as a sphere and then refined in the form of a scale-constrained quadric as more 2D detections are observed. The quadric's axis lengths and the object orientation can be initialized from the oriented bounding box (OBB) constructed from the object point cloud. The procedure for fitting the OBB in our method is illustrated in Alg. 1. In order to enhance the fitting robustness, we implement the algorithm within a RANSAC framework.
\[\|(p-obb.center)\cdot obb.axis-obb.axis\|_{2} \tag{12}\]
where Eq. (12) is used to evaluate the inliers of the model fitting at each iteration. Additionally, during the fitting process, we assess the uncertainty of the fitting results based on the stability observed throughout the iterations, which provides a prior for subsequent multi-frame fusion.
**RotBbox observation factor:** The parameters of a quadric can be estimated through observations from multiple views. In our method, we obtain each object's RotBbox by leveraging the pixel-level semantic segmentation results provided by SAM. The semantic observation provided by the RotBbox is illustrated in Fig. 7, where the RotBbox is parameterized as \(RotBbox_{2D}=\{x_{c},y_{c},a,b,\theta\}\). Based on the RotBbox, we can obtain the dual conic observation for the \(k^{th}\) object at time \(i\):
\[C^{*}_{obs}=\begin{bmatrix}\cos\theta&-\sin\theta&x_{c}\\ \sin\theta&\cos\theta&y_{c}\\ 0&0&1\end{bmatrix}\begin{bmatrix}a^{2}&0&0\\ 0&b^{2}&0\\ 0&0&-1\end{bmatrix}\begin{bmatrix}\cos\theta&-\sin\theta&x_{c}\\ \sin\theta&\cos\theta&y_{c}\\ 0&0&1\end{bmatrix}^{T}\]
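Assuming the standard dual-conic parameterization of the ellipse inscribed in the RotBbox, the observation above can be constructed as in the following sketch (names and numbers are illustrative):

```python
import numpy as np

def rotbbox_to_dual_conic(xc, yc, a, b, theta):
    """Dual conic C*_obs of the ellipse defined by a RotBbox (xc, yc, a, b, theta)."""
    H = np.array([[np.cos(theta), -np.sin(theta), xc],
                  [np.sin(theta),  np.cos(theta), yc],
                  [0.0,            0.0,           1.0]])
    return H @ np.diag([a**2, b**2, -1.0]) @ H.T

# Toy observation: a 2:1 ellipse centred at (320, 240), rotated by 30 degrees
print(rotbbox_to_dual_conic(320.0, 240.0, 80.0, 40.0, np.deg2rad(30.0)))
```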
### **Evaluation details**
**Dataset and parameter settings**: For indoor scenes, we evaluate our method using the Oxford Multimotion dataset (OMD) [41], which is specifically designed for indoor simultaneous camera localization and rigid body motion estimation. The dataset provides ground-truth trajectories obtained through a motion capture system. Furthermore, we have developed synthesized simulation datasets, encompassing both indoor and outdoor scenes, for comprehensive evaluation purposes.
**Evaluation metrics**: The accuracy of object pose estimation can be evaluated using two different types of metrics: (1) the absolute pose error (APE) and the relative pose error (RPE) measure the quality of the object trajectory; (2) The 3D/2D Intersection over Union (IoU) metric is used to evaluate the accuracy of object scale estimation.
### **Oxford Multimotion dataset Evaluation**
The proposed method is compared with similar SLAMMOT systems, including MVO, ClusterVO, and DymSLAM, using the OMD. We follow the evaluation protocol described in [40], which involves computing the maximum drift in translation and rotation for both the camera ego-motion and all moving objects. Our evaluations and comparisons are conducted on two sequences: swinging 4 unconstrained (SW4) and occlusion 2 unconstrained (O2). The swinging 4 sequence
\begin{table}
\begin{tabular}{l|c c c|c c|c c|c c|c c|c c|c c|c} \hline \multirow{2}{*}{Sequence} & \multicolumn{4}{c|}{ClusterVO [9]} & \multicolumn{4}{c|}{DymisSLAM [29]} & \multicolumn{4}{c|}{MVO [40]} & \multicolumn{4}{c}{Proposed Approach} \\ & Trans(m) & Roll(Y) & Year(O) & Prok(Y) & Trans(m) & Roll(Y) & Year(O) & Patch(Y) & Trans(m) & Roll(Y) & Year(O) & Patch(Y) & Trans(m) & Roll(Y) & Year(Y) & Patch(Y) \\ \hline SW4-Epo & 0.62 & -4.97 & 2.53 & 0.448 & **0.01** & **0.78** & **1.6** & -0.7 & 0.93 & -6.82 & 3.13 & **0.16** & 0.29 & **0.31** & 1.9 & 0.51 \\ SW4-C1 & 0.24 & **0.05** & **0.05** & **0.145** & **0.11** & -5.87 & -9.09 & -2.77 & 0.36 & 3.09 & 6.51 & **0.16** & 0.23 & 0.75 & **1.42** & 0.16 \\ SW4-C2 & 0.448 & 23.19 & -62.53 & **0.09** & 0.13 & 7.13 & 2.25 & -6.58 & 0.64 & 25.26 & -55.83 & 1.46 & **0.10** & **6.05** & 3.22 & 1.08 \\ SW4-C3 & 0.243 & -13.96 & **0.08** & 5.54 & 0.16 & 3.66 & -4.65 & **3.56** & 0.45 & 11.35 & 0.53 & **4.08** & 0.11 & 2.5 & **0.05** & 5.13 \\ SW4-C4 & 4.69 & 24.35 & 23.65 & -101.05 & **0.05** & **3.29** & **2.40** & 1.8 & 5.94 & 93.56 & 5.77 & -5.75 & 0.51 & 6.53 & 2.8 & **1.12** \\ O2-Epo & **0.24** & **1.00** & 0.238 & **0.48** & - & - & - & - & 0.31 & 3.45 & -3.21 & 1.73 & 0.19 & **0.65** & **1.11** & 0.51 \\ O2-Epoi & 0.19 & 14.2 & 8.45 & **0.44** & - & - & - & - & 0.51 & 1.94 & -22.83 & 1.75 & 0.15 & 0.95 & 6.27 & 0.80 \\ O2-Cube & **0.88** & **19.35** & -7.23 & **12.03** & - & - & - & - & 1.41 & 37.22 & -27.83 & 19.72 & **0.80** & 10.35 & **5.13** & **9.56** \\ \hline \end{tabular}
\end{table}
Table II: Performance comparison on SW4 and O2 sequence in Oxford Multimotion dataset. Best results are highlighted as first, second, and third.
Figure 8: Qualitative results in OMD Sequence SW4. The subfigures demonstrate the proposed method can estimate multi-states simultaneously.
Figure 9: Qualitative results in OMD Sequence O2. The subfigures demonstrate the system’s ability to maintain continuous and stable object pose and scale estimations even when the tracked object experiences temporary occlusions. Figures a to d depict scenarios with double occlusions.
consists of 500 frames with four moving bodies (SW4-C1, SW4-C2, SW4-C3, SW4-C4), while the occlusion 2 sequence comprises 300 frames with two moving bodies (O2-Cuboid and O2-Cube).
Since we cannot directly obtain the absolute coordinates of the moving objects in the Vicon coordinate system, we multiply our recovered poses by a rigid transformation matrix \(T_{align}\) to align them with the ground-truth poses. This is a reasonable step in the evaluation, because two coordinate representations of the same trajectory differ only by a rigid transformation.
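Such an alignment can be obtained in closed form. The following sketch uses the standard SVD-based (Kabsch) solution on corresponding trajectory positions; it is only meant to illustrate how \(T_{align}\) might be estimated, and the names and exact procedure used in our evaluation may differ.

```python
# Sketch of estimating a rigid alignment T_align between an estimated trajectory and
# the ground-truth (motion-capture) frame, via the closed-form Kabsch/SVD solution.
# Names are illustrative.
import numpy as np

def estimate_rigid_alignment(p_est, p_gt):
    """p_est, p_gt: (N, 3) arrays of corresponding positions. Returns a 4x4 T_align
    such that T_align applied to p_est best matches p_gt in least squares (no scale)."""
    mu_est, mu_gt = p_est.mean(axis=0), p_gt.mean(axis=0)
    H = (p_est - mu_est).T @ (p_gt - mu_gt)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_gt - R @ mu_est
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Usage: align the recovered poses before computing APE/RPE.
# T_align = estimate_rigid_alignment(positions_est, positions_gt)
# aligned_trajectory = [T_align @ T for T in trajectory_est]
```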
In the semantic extraction part, we use point prompts at the keyframes for the AOT. The use of SAM allows us to segment arbitrary objects in the images.
The experimental results are shown in Table II. The proposed method achieves more accurate localization than existing systems on more than 50\(\%\) of the trajectories. Furthermore, our approach not only achieves object 3D pose tracking but also facilitates quadric modeling; the modeling results are shown in Fig. 8 and Fig. 9. Two main advantages of our method over the existing advanced methods contribute to this improvement. First, our system optimizes the object states based on the dual sliding window framework, which fully utilizes temporal and spatial continuity constraints; additionally, compared to existing geometry-based approaches, the robust utilization of semantic information in our system provides better consistency constraints. Second, the powerful segmentation capability of SAM and the pixel-level tracking performance of DeAOT allow our system to effectively suppress the influence of background points in object feature association, thereby avoiding the misclassification of dynamic landmarks and the degradation of pose estimation results that can occur in geometry-based methods.
Furthermore, the experimental results obtained from the O2 sequence provide further validation of the tracking performance of our carefully designed AOT, which enables continuous object tracking even in the presence of temporary occlusions. As illustrated in subfigures a to d of Fig. 9, the swinging box undergoes a brief occlusion by the cuboid; however, its motion can still be accurately estimated using the historical observations recorded prior to the occlusion, showcasing the stability of our 3D tracking and scale estimation capabilities. Moreover, even under extensive occlusion, as depicted in subfigure d, where the accuracy of the 2D RotBbox observations diminishes, the estimated scale of the cube remains stable. This confirms the advantages of employing quadrics to represent object scales and underscores the robust maintenance of object state consistency achieved through the long-term sliding window approach employed in our system. More experimental details are shown in the attached video [https://youtu.be/_10VeaRSWQ0](https://youtu.be/_10VeaRSWQ0).
### _Synthesized Multimotion and Modeling dataset Evaluation_
Because the OMD lacks ground truth for object scale, it is not suitable for evaluating object modeling accuracy. To overcome this gap, we developed the SMMD (Synthesized Multimotion and Modeling Dataset) to extensively evaluate the capability of our method in the 3D tracking and modeling of rigid objects with diverse shapes and complex motions. The synthesized dataset includes RGB images, depth images, and simulated LiDAR data, along with ground truth for the camera pose, object poses, and object scales, which can be used to evaluate SLAM ego-localization, object 3D tracking, and scale estimation performance. The simulator is released at [https://github.com/Linghao-Yang/Synthesized-Multimotion-and-Modeling-Dataset](https://github.com/Linghao-Yang/Synthesized-Multimotion-and-Modeling-Dataset).
The process of generating simulated LiDAR data is shown in Fig. 10. First, OpenGL is used to render a depth image that matches the field of view (FOV) of the LiDAR. Next, an octree map is employed to store the point cloud in the camera frame. Finally, ray casting along the predefined LiDAR scan pattern converts the depth image into simulated LiDAR data.
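The sketch below illustrates a simplified version of this pipeline: the rendered depth image is back-projected into a camera-frame point cloud, and each ray of the scan pattern keeps the nearest back-projected point along its direction. A real implementation would ray-cast into the octree map; the names and the scan-pattern format here are illustrative assumptions.

```python
# Simplified sketch of converting a rendered depth image into simulated LiDAR returns.
# Each LiDAR ray keeps the nearest back-projected depth point within an angular
# tolerance (a stand-in for octree ray casting). Names are illustrative assumptions.
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """depth: (H, W) metric depth image -> (N, 3) points in the camera frame."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth.ravel()
    valid = z > 0
    x = (u.ravel() - cx) / fx * z
    y = (v.ravel() - cy) / fy * z
    return np.stack([x, y, z], axis=1)[valid]

def simulate_lidar(points, ray_dirs, angular_tol_deg=0.2):
    """points: (N, 3) cloud; ray_dirs: (M, 3) unit vectors of the scan pattern.
    Returns at most one return per ray: the closest point near that ray direction."""
    pts_dir = points / np.linalg.norm(points, axis=1, keepdims=True)
    cos_tol = np.cos(np.radians(angular_tol_deg))
    returns = []
    for d in ray_dirs:
        mask = pts_dir @ d > cos_tol           # points lying close to this ray
        if mask.any():
            cand = points[mask]
            returns.append(cand[np.argmin(np.linalg.norm(cand, axis=1))])
    return np.array(returns)
```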
#### Iv-C1 **Object Pose Tracking Evaluation**
Table III quantitatively presents the comparison results of object pose estimation on the synthetic dataset. In the first two sequences (Subway \(\&\) car), simulated LiDAR is utilized for recovering depth, while in the subsequent sequences, depth images are employed. Regarding depth noise, we assume that the uncertainty of a depth value is directly proportional to its distance from the camera's optical center: the farther from the optical center, the larger the depth noise.
We conducted a comparative analysis between our proposed method and the OBB fitting of Alg. 1. The experimental results clearly demonstrate that our approach achieves a substantial enhancement in object localization accuracy, as indicated by the improvements in APE and RPE compared to solely fitting the OBB. One of the key factors contributing to this improvement is the incorporation of a multi-source observation factor graph model within a dual sliding window framework. The utilization of accurate and continuous observations makes our system more robust than algorithms (e.g., ClusterVO and DynaSLAM2) that rely solely on single-frame, single-source observations. Furthermore, the integration of continuous and stable long-term semantic information for tracking enables better local consistency,
\begin{table}
\begin{tabular}{|l|c c c c|c c c c|} \hline \hline \multirow{2}{*}{**Sequence**} & \multicolumn{4}{c|}{Single Frame OBB} & \multicolumn{4}{c|}{Proposed Approach} \\ & APE\({}_{t}\) & APE\({}_{R}\) & RPE\({}_{t}\) & RPE\({}_{R}\) & APE\({}_{t}\) & APE\({}_{R}\) & RPE\({}_{t}\) & RPE\({}_{R}\) \\ \hline Subway & 4.819 & 2.363 & 6.215 & 1.021 & **0.006** & **0.397** & **0.055** & **0.003** \\ car & 2.414 & 2.451 & 3.381 & 0.933 & **0.244** & **0.066** & **0.261** & **0.012** \\ Cuboid & 0.651 & 2.387 & 0.317 & 1.014 & **0.052** & **0.144** & **0.034** & **0.018** \\ Cube & 0.44 & 2.26 & 0.411 & 1.51 & **0.13** & **0.247** & **0.103** & **0.016** \\ SkateBoard & 1.111 & 1.218 & 1.173 & 0.299 & **0.118** & **0.117** & **0.118** & **0.006** \\ Ball & 0.288 & 2.370 & 1.572 & 1.692 & **0.186** & **0.182** & **0.223** & **0.017** \\ \hline \hline \end{tabular}
\end{table}
Table III: Performance comparison in synthesized dataset.
Figure 10: LiDAR data Generation. The first column shows the raw depth image, the second displays ray casting correlation based on the LiDAR pattern, and the third reveals the generated LiDAR point cloud.
which is reflected in the significant improvement of our system in the RPE.
#### Iv-B2 **Modeling Accuracy Evaluation**
Since our method models objects as quadrics, the conventional 3D bounding box IoU evaluation strategy is not appropriate for our method. Consequently, we introduced new evaluation metrics based on 3D IoU and 2D IoU.
Due to the irregular geometry formed by the intersection of two ellipsoids with different coordinate origins, it is challenging to calculate the volume analytically. Therefore, we employ the Monte Carlo algorithm to compute the 3D IoU. In Fig.11, the blue ellipsoid represents the estimated ellipsoid, the red ellipsoid represents the reference ellipsoid, and the green point cloud represents the points within the intersection region. For the 2D IoU, we project the quadric onto a rotated rectangle and utilize the Vatti clipping algorithm to calculate the intersection area.
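A minimal sketch of the Monte Carlo 3D IoU computation is given below, assuming each ellipsoid is described by a center, a rotation matrix, and three semi-axis lengths; the names and the sampling budget are illustrative and not part of our implementation.

```python
# Minimal Monte Carlo sketch of the 3D IoU between two ellipsoids. Each ellipsoid is
# given by a center c, a rotation R (3x3, columns = ellipsoid axes) and semi-axes a (3,).
# A point x lies inside iff || diag(1/a) R^T (x - c) || <= 1. Names are illustrative.
import numpy as np

def inside(x, c, R, a):
    local = (x - c) @ R                        # express samples in the ellipsoid frame
    return np.sum((local / a) ** 2, axis=1) <= 1.0

def ellipsoid_iou_3d(c1, R1, a1, c2, R2, a2, n_samples=200_000, rng=None):
    rng = rng or np.random.default_rng(0)
    # Axis-aligned box enclosing both ellipsoids (each fits in a ball of its max semi-axis).
    lo = np.minimum(c1 - a1.max(), c2 - a2.max())
    hi = np.maximum(c1 + a1.max(), c2 + a2.max())
    x = rng.uniform(lo, hi, size=(n_samples, 3))
    in1, in2 = inside(x, c1, R1, a1), inside(x, c2, R2, a2)
    inter = np.count_nonzero(in1 & in2)
    union = np.count_nonzero(in1 | in2)
    return inter / union if union else 0.0
```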
Fig. 12 showcases the qualitative IoU comparison results of our system in object pose and scale estimation. It is evident that our algorithm achieves consistent and reliable pose estimation with minimal inter-frame jitter. Furthermore, the fusion of multi-frame observation constraints and the compact quadric representation enable robust scale estimation, even in scenarios where tracked objects are partially occluded. Further experimental details are shown in the attached video [https://youtu.be/DvanqHV9KNc](https://youtu.be/DvanqHV9KNc).
Additionally, Fig. 13 presents a quantitative comparison
Figure 11: Evaluation Metrics Visualization. The first column represents 3D ellipsoid IoU, the second column represents 2D ellipse IoU.
Figure 12: The qualitative results demonstrate the comparison of pose estimation accuracy and modeling accuracy for objects under different algorithms. The rows highlighted with colored boxes in the figure represent the qualitative results of our algorithm.
Figure 13: The quantitative results illustrate the comparison of modeling accuracy for objects under different constraints or methods.
of the object modeling accuracy on the generated synthetic dataset. We also performed ablation experiments to assess the influence of the constraints employed in the scale estimation component. The experimental results demonstrate that incorporating observations based on rotated rectangles, along with the constraints of axis-length priors and rigid priors, leads to more precise scale estimation of the objects: the average 3D IoU and 2D IoU over all sequences exceed 80\(\%\). The effect of the various observation constraints on the scale is analyzed thoroughly in the upcoming experiments.
#### Iv-B3 **Hybrid quadric constrains ablation study**
The results of the ablation study on hybrid quadric constraints are presented in Fig.14. From the findings, it is evident that when utilizing only 2D bounding box information as a constraint, RotBbox observations demonstrate superior constraint effectiveness compared to Bbox, particularly when the ego platform and the observed object are not situated on the same plane and limited viewing angles are available. This superiority can be attributed to the fact that, from the perspective of the limited observation angle, RotBbox observations more accurately represent the actual position of the object on the image plane. Conversely, under conditions of limited viewing angles, Bbox cannot provide sufficient constraints. This limitation arises due to the fundamental principles underlying the multi-view reconstruction process based on quadric, as 2D constraints from restricted views result in under-constrained 3D recovery. The right column of the figure displays the results of 3D quadric recovery. The blue and red ellipsoids represent the reference ground truth, while the cyan and orange ellipsoids represent the estimation results. It is apparent that although the RotBbox observations appear to align well with the 2D projected quadric curves, the shape becomes under-constrained in 3D space. The recovered ellipsoids significantly deviate from the ground truth in terms of axis lengths, and due to insufficient constraints arising from the limited viewing angles, there is a shift in the object's coordinate origin. This issue arises because, in the current object-centric modeling approach, the axis lengths and poses are interconnected. When the axis length estimation is inaccurate, the object's position is adjusted to achieve the best 2D projection, leading to an incorrect overall result. Just as discussed in Sec.V-B2, we introduce the rigid prior assumption to constrain the 3D recovery process under limited viewing angles. From the experimental results, it is evident that this approach maintains accurate pose estimation results and improves the estimation of axis lengths. However, it does not entirely solve the under-constrained problem caused by limited viewing angles with only 2D observations. Therefore, we introduce the 3D prior axis length manifold constraint to tackle this issue. As described in Alg.2, this constraint integrates multiple frame observations and provides precise prior constraints for quadric
Figure 16: Ablation Study for object-centric parameterization. The first column utilizes the proposed object-centric quadratic parameterization method, while the second column follows global parameterization like other existing approaches.
Figure 14: Ablation Study for Scale Constraints. The top row displays results from left to right: combining the rect bbox, rigid prior constraint, and prior axis manifold constraint for scale estimation of the moving object. The bottom row shows results using the rot rect bbox as the observation.
Figure 15: The passive objects used for indoor and outdoor tracking.
axis length estimation. The results after incorporating the prior axis length manifold constraint are displayed in the third column of Fig.14. It can be observed that with the inclusion of 3D information, the under-constrained problem caused by limited viewing angles is effectively alleviated, resulting in more accurate quadric reconstruction.
### **Real-world dataset Evaluation**
Furthermore, we conducted verification and evaluation of our method in real-world scenarios. The experimental platform is shown in Fig. 15: the indoor scenes are captured by a RealSense D455 camera, while the outdoor scenes are captured by a Livox Avia LiDAR together with a monocular camera. The objects depicted in Fig. 15 are used as passive objects for tracking and modeling.
#### Vi-D1 **Object-centric and Global Parameter Comparison**
Fig. 16 presents a qualitative comparison between our proposed object-centric quadric parameterization and the original global parameterization approach utilized in existing methods (e.g., [27]) for modeling moving objects. The global parameteri
Figure 17: Qualitative evaluation on real-world indoor dataset. The rows highlighted with colored boxes in the figure represent the qualitative results of our algorithm.
zation fixes the global coordinate system at the initialization of the quadric, which limits its ability to model moving objects. In contrast, our method allows the quadric to adapt to the motion of objects by representing each object's pose individually, decoupled from the global coordinate system, so that we can effectively handle both static and moving objects within a unified framework.
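To make the object-centric idea concrete, the following sketch keeps only the semi-axes of the ellipsoid in a canonical object frame and maps its dual quadric into the world with the current object pose via the standard rule \(Q^{*}_{w}=T\,Q^{*}_{o}\,T^{T}\); a moving object therefore only requires its pose to be updated. The code is an illustrative sketch under these assumptions, not our implementation.

```python
# Sketch of the object-centric dual-quadric parameterization: the ellipsoid is stored
# in a canonical object frame (only its semi-axes) and mapped into the world with the
# current object pose. Names are illustrative.
import numpy as np

def dual_quadric_object_frame(a, b, c):
    """Axis-aligned ellipsoid with semi-axes (a, b, c), centered at the object origin."""
    return np.diag([a * a, b * b, c * c, -1.0])

def dual_quadric_world(T_wo, axes):
    """T_wo: 4x4 object-to-world pose; axes: (a, b, c). Dual quadrics transform as T Q* T^T."""
    Q_obj = dual_quadric_object_frame(*axes)
    return T_wo @ Q_obj @ T_wo.T

def project_to_dual_conic(P, Q_world):
    """Project a world dual quadric with a 3x4 camera projection matrix P to a dual conic."""
    return P @ Q_world @ P.T
```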
#### V-A2 **Indoor Evaluation**
Fig. 17 qualitatively demonstrates the modeling and pose tracking performance of our system using an RGB-D camera in indoor scenes. The unified quadric parameterization enables seamless switching between static and dynamic objects, leading to strong consistency in pose estimation with minimal jitter.
In terms of pose estimation, our method exhibits robust performance with accurate tracking and low jitter. Indeed, relying solely on single-frame observations for OBB fitting is susceptible to mask observation noise, leading to significant inter-frame jumps in the estimated results. Additionally, the formulation of unified pose and scale estimation enables us to address the challenge of estimating the state of both static and dynamic objects simultaneously. Combining the ablation study shown in Fig. 14 with the above results, we conclude that the hybrid constraints contribute to our method's fast scale convergence and accurate object modeling. Moreover, benefiting from the dual sliding window approach for maintaining object states, our system can leverage more observations to enhance the consistency of state estimation and effectively handle occlusions.
#### V-A3 **Object Motion-Aided Undistortion Comparison**
As described in Sec.V-A2, to enhance the precision of feature depth estimation in outdoor environments, we have adopted the sensor configuration of Solid LiDAR combined with a monocular camera. The experiments in this subsection are intended to validate the Object Motion-Aided Undistortion algorithm proposed in this paper, and the experimental results are shown in Fig.18. The first two rows display the point cloud correction results using both the original undistortion method and our proposed method for three consecutive frames, while the third row depicts the corresponding real-world scene. The experimental results reveal that slower object motion and a higher resemblance between ego motion and object motion lead to less impact on the undistortion process. However, when these conditions are not met, it significantly impairs undistortion performance, causing inaccurate feature depth estimation and, in turn, affecting subsequent state estimation. By introducing the feedback of the object perception results into the point cloud undistortion process, we are able to
Figure 19: Qualitative evaluation on real-world outdoor dataset.
Figure 18: Ablation Study for Object Motion-Aided Undistortion. The original undistortion method’s results are shown at the top and the proposed method are displayed at the second row. The bottom row illustrates the corresponding scenarios.
decouple the motion and obtain a cleaner and more accurate point cloud.
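A minimal sketch of the per-point compensation is given below, assuming a constant-velocity model over the scan and a relative timestamp \(s\in[0,1]\) for each point; points on a tracked moving object are additionally compensated with the interpolated object motion. All interfaces and names are illustrative assumptions, not our exact implementation.

```python
# Minimal sketch of per-point undistortion with a constant-velocity model. Each point
# has a relative timestamp s in [0, 1] within the scan; it is mapped to the scan-end
# time using the interpolated ego motion, and points on a tracked moving object are
# further compensated with the object's interpolated motion. Names are illustrative.
import numpy as np
from scipy.spatial.transform import Rotation

def interp_pose(T0, T1, s):
    """Interpolate between 4x4 poses T0 (scan start) and T1 (scan end) at fraction s."""
    R_rel = T0[:3, :3].T @ T1[:3, :3]                    # relative rotation T0 -> T1
    rotvec = Rotation.from_matrix(R_rel).as_rotvec()     # axis-angle of the relative rotation
    R = T0[:3, :3] @ Rotation.from_rotvec(s * rotvec).as_matrix()
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = (1.0 - s) * T0[:3, 3] + s * T1[:3, 3]
    return T

def undistort_point(p, s, T_ego0, T_ego1, T_obj0=None, T_obj1=None):
    """p: 3D point measured at scan fraction s in the sensor frame at that time.
    Returns the point expressed in the sensor frame at scan end."""
    ph = np.append(p, 1.0)
    world = interp_pose(T_ego0, T_ego1, s) @ ph          # sensor(t_s) -> world
    if T_obj0 is not None:                               # point lies on a tracked moving object
        obj = np.linalg.inv(interp_pose(T_obj0, T_obj1, s)) @ world  # world -> object frame
        world = T_obj1 @ obj                             # object frame at scan end -> world
    return (np.linalg.inv(T_ego1) @ world)[:3]           # world -> sensor at scan end
```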
#### Vi-B4 **Outdoor Evaluation**
Fig. 19 depicts the object perception results obtained with our method in a real-world outdoor scene. The orange point cloud represents the object point cloud generated by our system. Notably, by employing the soft timestamp synchronization and motion compensation algorithm detailed in Sec. V-A2, we significantly reduce distortions in the acquired point clouds, which enhances the reliability of the obtained depth information. Moreover, we leverage the robustness of image information for texture feature association, enabling our system to achieve smooth and accurate 3D object tracking while effectively handling complex real-world scenes. More experimental details are shown in the attached video [https://youtu.be/b7f3Wzr7nE](https://youtu.be/b7f3Wzr7nE).
## VII **Conclusion**
In this paper, we present a novel optimization framework that integrates the 3D tracking and modeling of both static and dynamic rigid objects. Through a comprehensive analysis, we identify the limitations of the original global parameterization method for quadric in the dynamic object modeling problem. To address these limitations, we introduce an object-centric dual quadric parameterization, which allows for the estimation of both static and dynamic quadric within a unified model. Our framework outperforms existing methods on diverse datasets by leveraging the SQI algorithm and a 9 DoF object state estimation algorithm based on the dual sliding window framework with hybrid constraints. Moreover, we extend the applicability of our framework to indoor and outdoor environments by proposing solutions that combine pure vision and visual-LiDAR fusion. In future research, our main objectives are to improve the accuracy and precision of pose estimation for objects with low-texture surfaces. Additionally, we aim to integrate the estimated object state into planning and decision-making algorithms, enabling the development of an accurate and robust intelligent tracking and control system.
|
2309.11447 | Equality of different definitions of conformal dimension for
quasiself-similar and CLP spaces | We prove that for a quasiself-similar and arcwise connected compact metric
space all three known versions of the conformal dimension coincide: the
conformal Hausdorff dimension, conformal Assouad dimension and Ahlfors regular
conformal dimension. This answers a question posed by Mathav Murugan.
Quasisimilar spaces include all approximately self-similar spaces. As an
example, the standard Sierpi\'nski carpet is quasiself-similar and thus the
three notions of conformal dimension coincide for it.
We also give the equality of the three dimensions for combinatorially
$p$-Loewner (CLP) spaces. Both proofs involve using a new notion of
combinatorial modulus, which lies between two notions of modulus that have
appeared in the literature. The first of these is the modulus studied by Pansu
and Tyson, which uses a Carath\'eodory construction. The second is the one used
by Keith and Laakso (and later modified and used by Bourdon, Kleiner,
Carrasco-Piaggio, Murugan and Shanmugalingam). By combining these approaches,
we gain the flexibility of giving upper bounds for the new modulus from the
Pansu-Tyson approach, and the ability of getting lower bounds using the
Keith-Laakso approach. Additionally the new modulus can be iterated in
self-similar spaces, which is a crucial, and novel, step in our argument. | Sylvester Eriksson-Bique | 2023-09-20T16:24:42Z | http://arxiv.org/abs/2309.11447v2 | # Equality of different definitions of conformal dimension for quasiself-similar and CLP spaces
###### Abstract.
We prove that for a quasiself-similar and arcwise connected compact metric space all three known versions of the conformal dimension coincide: the conformal Hausdorff dimension, conformal Assouad dimension and Ahlfors regular conformal dimension. This answers a question posed by Mathav Murugan. Quasisimilar spaces include all approximately self-similar spaces. As an example, the standard Sierpinski carpet is quasiself-similar and thus the three notions of conformal dimension coincide for it.
We also give the equality of the three dimensions for combinatorially \(p\)-Loewner (CLP) spaces. Both proofs involve using a new notion of combinatorial modulus, which lies between two notions of modulus that have appeared in the literature. The first of these is the modulus studied by Pansu and Tyson, which uses a Caratheodory construction. The second is the one used by Keith and Laakso (and later modified and used by Bourdon, Kleiner, Carrasco-Piaggio, Murugan and Shanmugalingam). By combining these approaches, we gain the flexibility of giving upper bounds for the new modulus from the Pansu-Tyson approach, and the ability of getting lower bounds using the Keith-Laakso approach. Additionally, the new modulus can be iterated in self-similar spaces, which is a crucial, and novel, step in our argument.
The author was partially supported by Finnish Academy Grants n. 345005 and n. 356861. We thank Mathav Murugan for posing the question to us, for discussing the problem at the Okinawa Institute of Science and Technology in June 2023 and for giving helpful comments on a preprint version of this paper. The work was started at the workshop "Random walks and analysis on metric spaces". We thank the institute for its hospitality and care - especially given the weak typhoon that overlapped the event.
###### Contents
* 1 Introduction
* 2 Preliminaries
has so far not been studied in detail, beyond giving simple examples such as the following, when they are not equal.
**Example 1.5**.: Let \(X=\mathbb{Z}\times\mathbb{R}\). The conformal Assouad dimension can only drop under blowing the space down, and thus \(\dim_{CA}(X)\geq\dim_{CA}(\mathbb{R}^{2})=2\). The latter follows since the topological dimension of the plane is \(2\), and the Hausdorff dimension is always greater than the topological dimension. However, \(\dim_{CH}(X)=\dim_{H}(X)=1\).
If we set \(X=\mathbb{Z}\times\mathbb{R}\cup\mathbb{R}\times\mathbb{Z}\), we can even make \(X\) connected without altering the previous argument. It is possible to make the space compact and connected as well: Let \(X=(\{\frac{1}{n}:n\in\mathbb{N}\}\cup\{0\})\times[0,1]\cup[0,1]\times(\{\frac{1} {n}:n\in\mathbb{N}\}\cup\{0\})\). In this case, a blow-up of the space is \(\mathbb{R}^{2}\).
Assouad dimension involves a scale-invariant quantitative condition, while Hausdorff dimension is merely a qualitative statement on the dimension of the space. Further, as the previous example indicates, \(\dim_{CA}(X)\) has stability properties under limits, while \(\dim_{CH}(X)\) does not. This means that one may only hope for their equality in the case where one assumes some form of self-similarity. Consequently, Mathav Murugan asked whether the different definitions of conformal dimension agree for self-similar spaces [18]. Our main theorem answers this question in the affirmative. The notion of quasiself-similarity is given in Definition 2.4, and (to our knowledge) was introduced in [6].
**Theorem 1.6**.: _Let \(X\) be a compact quasiself-similar metric space, which is connected and locally connected. Then,_
\[\dim_{CH}(X)=\dim_{CA}(X)=\dim_{CAR}(X).\]
As stated, the equality \(\dim_{CA}(X)=\dim_{CAR}(X)\) for uniformly perfect spaces was already known, and follows directly from [17, Proposition 2.2.6]. Our contribution is to prove \(\dim_{CH}(X)=\dim_{CA}(X)\). Indeed, this equality has many further consequences. One may define a zoo of other conformal dimensions, such as: conformal upper and lower Minkowski dimension, conformal packing dimension... Since these dimensions lie between the Hausdorff dimension and the Assouad dimension, one gets equality for the corresponding notions of conformal dimension as well.
The only other result, which states equality of \(\dim_{CH}(X)\) with \(\dim_{CAR}(X)=\dim_{CA}(X)\) is that of [22, Theorem 3.4] and [19, Proposition 2.9.], which apply when \(X\) is \(Q\)-Ahlfors regular and possesses a curve family with positive continuous \(Q\)-modulus. We will discuss this further below. We are not aware of any other instances, where equality of all notions has been shown.
A concrete corollary of Theorem 1.6 is the following new result. The \(n\)-dimensional Sierpinski sponge \(M_{n}\) is obtained by iteratively subdividing the side of an \(n\)-dimensional cube by three, and removing the central cube.
**Corollary 1.7**.: _Let \(n\geq 2\). If \(M_{n}\) is an \(n\)-dimensional Sierpinski sponge, then_
\[\dim_{CH}(M_{n})=\dim_{CA}(M_{n})=\dim_{CAR}(M_{n}).\]
We will also give a result for non-self-similar spaces, where self-similarity is replaced with the combinatorial Loewner property (CLP) from [4, 8]; see Section 4 for a definition. It is worth noting that this assumption is usually verified in the self-similar setting, and thus is not so much more general than Theorem 1.6. We present it here, since the argument for it is a bit simpler than for the general self-similar case. Further, it is worth recording a proof of this result here, since the developed tools may be useful in tackling the question of Bruce Kleiner, which asks whether self-similar combinatorially Loewner spaces are quasisymmetric to Loewner spaces; see [15] for further background and the question.
**Theorem 1.8**.: _Let \(p\in(1,\infty)\). Let \(X\) be a compact, doubling and LLC space which is a \(p\)-combinatorially Loewner metric space. We have_
\[\dim_{CH}(X)=\dim_{CA}(X)=\dim_{CAR}(X)=p.\]
We note that Corollary 1.7 would also follow from this result, since Sierpinski sponges are \(p\)-combinatorially Loewner spaces; see the proofs in [4]. In the course of the proof of Theorem 1.8 we will in fact present some stronger results for CLP spaces in Section 4. Indeed, while the statement \(\dim_{CH}(X)=p\) is qualitative, we will give a quantitative statement, Proposition 4.9, which gives a lower bound for the Hausdorff measure of the images of balls under quasisymmetries. This inequality may be useful in other settings as well, and is a generalization of an inequality which appeared in the work of Heinonen and Koskela [12, Theorem 3.6].
Our results here clarify a central point of ambiguity in much of the literature on conformal dimension, where the equality of the conformal Hausdorff and conformal Assouad dimensions is not addressed, but rather avoided and bypassed. We next describe the main idea of the proof.
The key tool in a majority of the research on conformal dimension is a notion of modulus - in particular discretized versions of moduli of path families. These generalize the notion of continuous modulus (later, often, modulus), see e.g. [10, 11, 13] for background. Our proof is also based on defining a new type of discrete modulus - or rather, discrete admissibility - and relating it to conformal Hausdorff dimension. At this point, there are several variants of discrete modulus, each with its own setting and application see e.g. [14, 16, 18, 19, 21, 7, 22, 14]. (There are also other notions, such as trans-boundary modulus, see e.g. [3, 20], but these are not relevant for our discussion here.) We will not discuss all these moduli here, but will focus on those which motivate our approach.
The motivation for our argument and notion of modulus comes from a result of Pansu [19, Proposition 2.9.], whose dual formulation1 was given by Tyson in [22, Theorem 3.4]. Tyson shows that if \(X\) is a \(Q\)-Ahlfors regular metric measure space, and if it possesses a family of curves \(\Gamma\) with positive continuous modulus, then \(\dim_{CH}(X)=Q\). The proof of Tyson uses the discrete \(Q\)-modulus as introduced by Pansu in [19]. To be very brief, this modulus is defined using a Caratheodory construction and involves discrete sums. One shows, both in [19] and [22], that the discrete \(Q\)-modulus is bounded from below by the continuous \(Q\)-modulus. Further, the discrete modulus, up to a variation of parameters, is invariant under quasisymmetries. The final nail in the coffin of the proof is that if \(\dim_{H}(Y)<Q\), then the discrete \(Q\)-modulus vanishes on \(Y\). Consequently, a family of curves with positive continuous \(Q\)-modulus obstructs lowering the dimension of \(X\) by a quasisymmetry below \(Q\).
Footnote 1: As a side note, we remark that Pansu considers measures on families of curves, while Tyson uses the notion of curve modulus in [22]. These two notions are roughly dual to each other, see e.g. [2, 9] for more precise statements.
The previous proof relies heavily on the fact that we can use the notion of continuous modulus to give a lower bound for the discrete modulus. In many settings, such as the Sierpinski sponges mentioned above, the continuous moduli of all curve families vanish. Thus, we lack this lower bound, and we need to find a way around this by giving a lower bound using a different quantity. In the quasiself-similar setting, and in the combinatorially Loewner setting, we can obtain this lower bound by slightly different mechanisms - and by employing a different modulus.
In the work [14, 7, 18], the inability to lower the dimension can be converted to a lower bound on some moduli - see Theorem 3.5 for a precise statement. Thus, one can use their result to obtain a lower bound for a different discrete modulus, which we call the Keith-Laakso modulus and which is defined in subsection 3.1. In the case of combinatorially Loewner spaces, the setting is a bit simpler and the lower bound is obtained directly by the assumption that the space is combinatorially Loewner [4, 8]: see Definition 4.1.
At this juncture, we have two moduli: the Keith-Laakso modulus and that of Pansu and Tyson. For the first we can obtain lower bounds. For the second, one can show upper bounds. Indeed, for Pansu and Tyson, the notion of discrete modulus is such that it is very easy to prove that if \(\dim_{H}(Y)<Q\), the discrete modulus vanishes. In the absence of \(Q\)-Ahlfors regularity, it is harder to give lower bounds for the modulus of Pansu and Tyson. On the other hand, for the Keith-Laakso modulus, one lacks the ability to give good upper bounds and thus to directly say that the discrete modulus vanishes if one has Hausdorff dimension lower than \(Q\).
The reason for this inability is the following technical, but crucial, point. The definition of the Keith-Laakso modulus can be summarized as assigning a value \(\operatorname{Mod}_{p}^{KL}(\Gamma,\mathcal{U})\) for a specific curve family \(\Gamma\) and a cover \(\mathcal{U}\) of \(X\), which [4] calls a \(\kappa\)-approximation _at some level \(r\)_. The key feature of their \(\kappa\)-approximations is that all sets in \(\mathcal{U}\) have roughly the same size. (See Subsection 2.4 for details.) This is also a key difference with the work in [19, 22], since there the Caratheodory construction involves _arbitrary covers_.
Similarly, Assouad dimension involves covering the space by balls of the same size, whereas Hausdorff dimension involves coverings by sets of various sizes. To give estimates for Hausdorff dimension, we need to allow arbitrary covers in the definition of discrete modulus. We bridge this gap, by introducing a new notion of modulus \(\overline{\operatorname{Mod}}_{p}(\Gamma,\mathcal{U})\) which lies between those of Pansu and Tyson in [19, 22] and Keith, Laakso and others in [14, 7, 18]. First, we get more flexibility by allowing arbitrary covers \(\mathcal{U}\) that consist of balls (or, in general, sufficiently round sets). This forces us to introduce a new admissibility condition, to address several key technical issues. Similar to Pansu's discrete modulus, we can show that if \(\dim_{H}(Y)<Q\), then
this modulus is very small for a given cover. Further, in the self-similar and CLP space settings, we can relate the Keith-Laakso modulus and the new modulus to each other.
For combinatorially Loewner spaces, the story is easier to finish. One can bound \(\overline{\mathrm{Mod}}_{p}(\Gamma,\mathcal{U})\) from below using the Keith-Laakso modulus, which in term has a lower bound from the combinatorial Loewner assumption. This estimate is given in Proposition 4.4. This gives a contradiction to the previous paragraph's conclusion of \(\overline{\mathrm{Mod}}_{p}(\Gamma,\mathcal{U})\) being small. In fact, this argument is somewhat easier to discover, and it served as a starting point for this paper and project. For this reason we also include the argument in this paper. Quickly, however, the author realized that a more technical version of the argument could be applied for general quasiself-similar spaces.
For _quasiself-similar spaces_ the argument is a bit different. Instead of directly using a lower bound, we use the fact that the ability to lower dimension gives an upper bound. Indeed, if there is a quasisymmetric map \(f:X\to Y\) and if \(Y\) has small Hausdorff measure, then we obtain a quantitative statement on moduli of annuli, see Lemma 5.2 and Proposition 5.23. Our quantitative statement can be converted algorithmically by using iteration to a statement on the smallness of the Keith-Laakso modulus. This allows us to prove the equality \(\dim_{CA}(X)=\dim_{CH}(X)\) for quasiself-similar spaces by using the result of Carrasco-Piaggio, which we state below in Theorem 3.5. The iteration is algorithmic, but quite technical. The basic step of the iteration involves ideas from the proof of the result for CLP spaces. We will describe it in more detail in Subsection 5.2.
### Outline
We will present some general terminology in Section 2. Then, in Section 3 we introduce the different notions of discrete modulus needed in this paper, and present some known results on their relationships with the conformal dimension. For technical reasons, we will mostly use a variant of this modulus, the Bourdon-Kleiner modulus defined in [4], instead of the Keith-Laakso modulus. However, we will relate the two moduli to each other. In Subsection 3.4, we give the new modulus that is key to the approach of this paper. In Section 4 we focus on CLP spaces. There, we prove Theorem 1.8, which is the equality of the definitions of conformal dimension for CLP spaces. In the process, we give some useful stronger results on discrete moduli, and precise quantitative estimates, which hold for CLP spaces. In Section 5 we focus on quasiself-similar spaces. There, we study moduli of annuli, and give a push-down algorithm to adjust the scale of covers. This is then used to give a relationship between the two moduli used. Finally, in Subsection 5.4 we collect the pieces and complete the proof of Theorem 1.6.
## 2. Notation and Basic properties
### Basic terminology
A compact metric space will be denoted \(X\), its metric \(d\), and open balls within it \(B(z,r):=\{w\in X:d(z,w)<r\}\) for \(z\in X,r>0\). An inflation of a ball \(B=B(z,r)\) is denoted \(CB:=B(z,Cr)\) for \(C>0\). Note that we consider each ball as having an associated center and radius - and it may happen that a different center and radius defines the same set. The radius of a ball is denoted \(\mathrm{rad}(B)\). Diameters of sets \(A\subset X\) will be denoted \(\mathrm{diam}(A)=\sup_{a,b\in A}d(a,b)\). A curve is a continuous map \(\gamma:I\to X\), where \(I\) is a non-empty compact interval in \(\mathbb{R}\). We often conflate \(\gamma\) and its image set \(\mathrm{Image}(\gamma)\).
Recall the definition of \(N(A,r)\) from (1.4). We say that a metric space \(X\) is metrically doubling, if there exists a constant \(D\geq 1\), so that \(N(B(z,r),r/2)\leq D\) for every \(z\in X\) and \(r>0\).
We will need some connectivity properties. A space \(X\) is called locally connected, if every point has a neighborhood basis consisting of connected open sets. A metric space is LLC, if there exists a constant \(C\geq 1\) so that for every \(x,y\in X\) there exists a curve \(\gamma\) with \(x,y\in\gamma\) and \(\mathrm{diam}(\gamma)\leq Cd(x,y)\).
We will consider collections of balls, which are often denoted by a script letter \(\mathcal{B}\). For these, we define unions by setting \(\bigcup\mathcal{B}:=\bigcup_{B\in\mathcal{B}}B\), inflations by setting \(C\mathcal{B}:=\{CB:B\in\mathcal{B}\}\) and radii \(\mathrm{rad}(\mathcal{B})=\sup_{B\in\mathcal{B}}\mathrm{rad}(B)\). If \(A\) is any finite set, we denote by \(|A|\) its cardinality.
### Relative distance and quasisymmetries
We need some standard results on quasisymmetries.
**Lemma 2.1**.: _If \(f:X\to Y\) is an \(\eta\)-quasisymmetry, then \(f^{-1}\) is a \(\tilde{\eta}\)-quasisymmetry with \(\tilde{\eta}(t)=\left(\eta^{-1}(t^{-1})\right)^{-1}\)._
We note the convention that the value of \(\tilde{\eta}\) at zero is given by \(\tilde{\eta}(0)=0\).
Proof of Lemma 2.1.: Let \(x,y,z\in Y\) and let \(x^{\prime},y^{\prime},z^{\prime}\in X\) be such that \(f(x^{\prime})=x,f(y^{\prime})=y,f(z^{\prime})=z\). Since \(f\) is an \(\eta\)-quasisymmetry, we have
\[\frac{d(f(x^{\prime}),f(z^{\prime}))}{d(f(x^{\prime}),f(y^{\prime}))}\leq\eta \left(\frac{d(x^{\prime},z^{\prime})}{d(x^{\prime},y^{\prime})}\right).\]
Taking reciprocals and applying the inverse function, we get
\[\frac{d(x,y)}{d(x,z)}\leq\left(\eta^{-1}\left(\left(\frac{d(f(x^{\prime}),f(y^ {\prime}))}{d(f(x^{\prime}),f(z^{\prime}))}\right)^{-1}\right)\right)^{-1}.\]
Replacing \(f(x^{\prime}),f(y^{\prime}),f(z^{\prime})\) with \(x,y,z\) and \(x^{\prime},y^{\prime},z^{\prime}\) with \(f^{-1}(x),f^{-1}(y),f^{-1}(z)\) yields that \(f^{-1}\) is an \(\tilde{\eta}\)-quasisymmetry.
Let \(X\) be a complete metric space. A continuum \(E\subset X\) is a compact connected set. A continuum is non-degenerate, if it contains more than one point. We define the relative distance between two non-degenerate continua \(E,F\) as
\[\Delta(E,F):=\frac{d(E,F)}{\min\{\operatorname{diam}(E),\operatorname{diam}( F)\}}.\]
**Lemma 2.2**.: _Let \(f:X\to Y\) be an \(\eta\)-quasisymmetry and let \(E,F\) be two non-degenerate disjoint continua in \(X\). Then,_
\[\frac{1}{2\eta(\Delta(E,F)^{-1})}\leq\Delta(f(E),f(F))\leq\eta(2\Delta(E,F)).\]
Proof.: Assume by symmetry that \(\operatorname{diam}(E)\leq\operatorname{diam}(F)\). Let \(x\in E\) and \(y\in F\) be such that \(d(E,F)=d(x,y)\). Choose \(u\in E,v\in F\) so that \(d(x,u),d(y,v)\geq\operatorname{diam}(E)/2\). This is possible by connectivity. Then, we have
\[\frac{d(x,y)}{d(x,u)}\leq 2\Delta(E,F)\quad\text{ and }\frac{d(y,x)}{d(y,v)}\leq 2\Delta(E,F)\]
Let \(x^{\prime}:=f(x),y^{\prime}=f(y),u^{\prime}=f(u),v^{\prime}=f(v)\) be the image points in \(Y\). We have, since \(\eta\) is increasing and since \(f\) is an \(\eta\)-quasisymmetry:
\[d(f(E),f(F)) \leq d(x^{\prime},y^{\prime})\] \[\leq\eta\left(\frac{d(x,y)}{d(x,u)}\right)d(x^{\prime},u^{\prime})\] \[\leq\eta\left(2\Delta(E,F)\right)\operatorname{diam}(f(E)).\]
Similarly,
\[d(f(E),f(F)) \leq d(y^{\prime},x^{\prime})\] \[\leq\eta\left(\frac{d(y,x)}{d(y,v)}\right)d(y^{\prime},v^{\prime})\] \[\leq\eta\left(2\Delta(E,F)\right)\operatorname{diam}(f(F)).\]
The previous two inequalities combine to gives the inequality:
\[\Delta(f(E),f(F))=\frac{d(f(E),f(F))}{\min\{\operatorname{diam}(f(E)), \operatorname{diam}(f(F))\}}\leq\eta\left(2\Delta(E,F)\right).\]
Applying this to the inverse \(f^{-1}\), which by Lemma 2.1 is an \(\tilde{\eta}\)-quasisymmetric map, yields the other inequality of the claim.
The following lemma will also prove useful on a few occasions. Note that the additional assumption on the existence of \(y\in B(x,r)\) is automatically satisfied if \(X\) is connected and \(r<\operatorname{diam}(X)\).
**Lemma 2.3**.: _Let \(f:X\to Y\) be a quasisymmetric map and let \(B(x,r)\) be a ball in \(X\) for which there exists a \(y\in B(x,r)\) with \(d(x,y)\geq r/2\). Then, for every \(L\geq 1\), we have_
\[f(B(x,Lr))\subset B(f(x),\eta(2L)d(f(x),f(y))).\]
Proof.: Let \(z\in B(x,Lr)\), and apply the \(\eta\)-quasisymmetry to the triple of points \(x,y,z\). This gives
\[\frac{d(f(x),f(z))}{d(f(x),f(y))}\leq\eta\left(\frac{d(x,z)}{d(x,y)}\right)\leq \eta(2L).\]
Consequently, we get the claim from
\[d(f(x),f(z))\leq\eta(2L)d(f(x),f(y)).\]
### Quasiself-similarity
We define a notion of quasiself-similarity. This is motivated by the notion of approximate self-similarity discussed in [4].
**Definition 2.4**.: We say that a compact space \(X\) is quasiself-similar, if there exists a homeomorphism \(\eta:[0,\infty)\to[0,\infty)\) and a constant \(\delta>0\) so that for every ball \(B(x,r)\subset X\) there is an \(\eta\)-quasisymmetry \(f:B(x,r)\to U_{x,r}\), where \(U_{x,r}\subset X\) is an open set with \(\operatorname{diam}(U_{x,r})\geq\delta\operatorname{diam}(X)\). We also say that \(X\) is \(\eta\)-quasiself-similar, if this property holds for a given function \(\eta\).
The principal advantage of defining quasiself-similar spaces is that they are more general than approximately self-similar spaces. Further, quasiself-similarity is an invariant under quasisymmetries: if \(X\) is quasiself-similar and \(Y\sim_{q.s.}X\), then \(Y\) is also quasiself-similar. The same fails for approximate self-similarity.
We recall the following result [6, Chapter 2].
**Lemma 2.5**.: _If \(X\) is a compact quasiself-similar space, which is connected and locally connected, then \(X\) is LLC._
### \(\kappa\)-approximations
We introduce some terminology on approximations. Throughout this paper, \(\mathcal{U}\) and \(\mathcal{V}\) will denote finite collections of open sets.
**Definition 2.6**.: Let \(\kappa\geq 1\). A finite collection of open sets \(\mathcal{U}\) of a metric space \(X\) is called a \(\kappa\)-round collection, if for every \(U\in\mathcal{U}\) there exists a \(z_{U}\) so that
\[B(z_{U},\kappa^{-1}r_{U})\subset U\subset B(z_{U},r_{U}),\]
where \(r_{U}=\sup\{d(z_{U},x):x\in U\}\). If further there is some \(r>0\), so that \(r_{U}=r\) for every \(U\in\mathcal{U}\), we call \(\mathcal{U}\) a \(\kappa\)-round collection at level \(r\).
From here on out, if \(U\) is any open set and \(z_{U}\in U\) has been fixed, we define \(r_{U}:=\sup\{d(z_{U},x):x\in U\}\).
**Definition 2.7**.: Let \(\kappa\geq 1\). A \(\kappa\)-round collection of open sets \(\mathcal{U}\) of a metric space \(X\) is called a \(\kappa\)-locally bounded collection, if there exist \(z_{U}\in U\) for every \(U\in\mathcal{U}\) for which Definition 2.6 holds and for which moreover the following two properties hold.
1. The balls \(\{B(z_{U},\kappa^{-1}r_{U}),U\in\mathcal{U}\}\) are pairwise disjoint.
2. For every \(L\geq 1\), there exists a constant \(\kappa_{L}\) so that if \(B(z_{U},Lr_{U})\cap B(z_{V},Lr_{V})\neq\emptyset\), then \(r_{U}\leq\kappa_{L}r_{V}\).
If \(\mathcal{U}\) also covers \(X\), then we call it a \(\kappa\)-approximation. If further there is some \(r>0\), so that \(r_{U}=r\) for every \(U\in\mathcal{U}\), we call \(\mathcal{U}\) a \(\kappa\)-approximation at level \(r\).
Let \(\operatorname{rad}(\mathcal{U})=\sup\{r_{U}:U\in\mathcal{U}\}\). A standard way to obtain a \(\kappa\)-approximation is the following. A set \(N\subset X\) is called \(r\)-separated if for all distinct \(x,y\in N\) we have \(d(x,y)\geq r\). A maximal \(r\)-separated set is called an \(r\)-net. Given any \(r\)-net \(N\) in a connected space \(X\), with \(r\in(0,\operatorname{diam}(X)/2)\), it is straightforward to show that the collection \(\mathcal{U}=\{B(x,2r):x\in N\}\) is a \(\kappa\)-approximation at level \(r\) with \(r_{U}=2r\) and \(z_{U}=x\) for every \(U=B(x,2r)\in\mathcal{U}\), and \(\kappa=1,\kappa_{L}=4\) for all \(L\geq 1\).
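For a finite sample of the space, this construction can be carried out greedily. The following sketch (with illustrative names, not part of the paper) builds a maximal \(r\)-separated set and the associated cover by balls \(B(x,2r)\).

```python
# Illustrative sketch: greedy construction of an r-net in a finite point sample of a
# metric space, and the associated cover by balls B(x, 2r) used as a single-level
# kappa-approximation. Names are ours, not from the paper.
import numpy as np

def greedy_r_net(points, r, dist=None):
    """points: (N, d) array (or any indexable set); returns indices of a maximal
    r-separated subset, built greedily, hence an r-net of the sample."""
    if dist is None:
        dist = lambda a, b: np.linalg.norm(a - b)
    net = []
    for i, p in enumerate(points):
        if all(dist(p, points[j]) >= r for j in net):
            net.append(i)
    return net

def ball_cover(points, net_indices, r):
    """Cover U = {B(x, 2r) : x in net}; each U is represented by the indices of the
    sample points it contains (a discrete stand-in for the open ball)."""
    return {i: [j for j, q in enumerate(points)
                if np.linalg.norm(points[i] - q) < 2 * r]
            for i in net_indices}
```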
We note that we have made some adjustments in the notation and terminology to bridge small differences in the literature, and in order to connect more directly to our work. The following remark explains some of these choices and how the other definitions/concepts can be expressed in our framework.
**Remark 2.8**.: We briefly explain the relationships between the different definitions used in [7, 14, 18, 21] and [4]. In the first four of these, one takes \(\alpha\geq 2\), considers a sequence \(N_{k}\) of \(\alpha^{-k}\)-nets and a parameter \(\lambda>1\), and defines graphs \(G_{k}\) whose vertex set is \(N_{k}\), with an edge between \(v,w\in N_{k}\) whenever \(B(v,\lambda\alpha^{-k})\cap B(w,\lambda\alpha^{-k})\neq\emptyset\). In our case, this corresponds to the \(\kappa\)-approximation \(\mathcal{U}=\{B(v,\lambda\alpha^{-k}):v\in N_{k}\}\) with \(\kappa=2\lambda\). With this choice, the incidence graph associated to \(\mathcal{U}\) is isomorphic to \(G_{k}\). This isomorphism is relevant in Section 3, since we will define discrete moduli using incidences of sets in \(\mathcal{U}\), while in [7, 14, 18, 21] the moduli are defined on the graphs \(G_{k}\). Since these two graphs are isomorphic, the relevant notions of moduli coincide.
On the other hand, compared to [4] we use a slightly more general framework of arbitrary \(\kappa\)-approximations in order to ensure the quasisymmetry invariance of our definitions. In their work, one only uses \(\kappa\)-approximations at a given level \(r\).
Let \(\mathcal{V}\) be a \(\kappa\)-round collection in \(X\), and let \(f:X\to Y\) be a quasisymmetry. Then define the image collection \(f(\mathcal{V}):=\{f(V):V\in\mathcal{V}\}\). We then have the following.
**Lemma 2.9**.: _Let \(\mathcal{V}\) be a \(\kappa\)-round collection in a space \(X\) and let \(f:X\to Y\) be an \(\eta\)-quasisymmetric map. Then \(f(\mathcal{V})\) is a \(\kappa^{\prime}\)-round collection with \(\kappa^{\prime}=2\eta(\kappa)\)._
_Moreover, if \(\mathcal{V}\) is a \(\kappa\)-approximation, then \(f(\mathcal{V})\) is a \(\kappa^{\prime}\)-approximation with \(\kappa^{\prime}=2\eta(\kappa)\)._
Proof.: For every \(V\in\mathcal{V}\) let \(z_{V}\in V,r_{V}>0\) be the center and radius specified in Definition 2.6. Define \(z_{f(V)}=f(z_{V})\) and \(r_{f(V)}=\sup\{d(y,f(z_{V})):y\in f(V)\}\).
Suppose first that \(\mathcal{V}\) is \(\kappa\)-round and let \(\kappa^{\prime}=2\eta(\kappa)\). We will show that \(V\) is \(\kappa^{\prime}\)-round, that is, we prove
\[B(z_{f(V)},\kappa^{\prime-1}r_{f(V)})\subset f(V)\subset B(z_{f(V)},r_{f(V)}). \tag{2.10}\]
The second of these inclusions follows from the definition of \(r_{f(V)}\). Now, let \(y\in B(z_{f(V)},\kappa^{\prime-1}r_{f(V)})\), and let \(b\in X\) be such that \(f(b)=y\). Choose a point \(w\in f(V)\) so that \(d(w,z_{f(V)})\geq 2^{-1}r_{f(V)}\) and let \(c\in V\) be such that \(f(c)=w\). Since \(f\) is a quasisymmetry, we get
\[\kappa^{\prime}/2=\frac{2^{-1}r_{f(V)}}{\kappa^{\prime-1}r_{f(V)}}\leq\frac{d (z_{f(V)},w)}{d(z_{f(V)},y)}\leq\eta\left(\frac{d(z_{V},c)}{d(z_{V},b)}\right).\]
Thus,
\[d(z_{V},b)\leq d(z_{V},c)\eta^{-1}(\kappa^{\prime}/2)^{-1}\leq r_{V}\kappa^{-1}.\]
Therefore \(b\in B(z_{V},r_{V}\kappa^{-1})\subset V\) and \(y\in f(V)\). This yields the first of the inclusions in (2.10). Thus, \(f(\mathcal{V})\) is \(\kappa^{\prime}\)-round.
Let us now assume further that \(\mathcal{V}\) is a \(\kappa\)-approximation. Indeed, it is \(\kappa\)-locally bounded and covers \(X\). Clearly \(f(\mathcal{V})\) covers \(Y\). Thus, it suffices to prove that \(f(\mathcal{V})\) is \(\kappa^{\prime}\)-locally bounded.
The proof above showed in fact that
\[B(z_{f(V)},\kappa^{\prime-1}r_{f(V)})\subset f(B(z_{V},\kappa^{-1}r_{V})). \tag{2.11}\]
Thus, the balls \(\{B(z_{f(V)},\kappa^{\prime-1}r_{f(V)}):V\in\mathcal{V}\}\) are pairwise disjoint. Therefore, we are left to show that for every \(L\geq 1\) there exists a \(\kappa^{\prime}_{L}\) so that if
\[B(z_{f(V)},Lr_{f(V)})\cap B(z_{f(U)},Lr_{f(U)})\neq\emptyset\]
for some \(U,V\in\mathcal{V}\), then \(r_{f(U)}\leq\kappa^{\prime}_{L}r_{f(V)}\). This is obtained by first finding an \(L^{\prime}\geq 1\) so that \(B(z_{U},L^{\prime}r_{U})\cap B(z_{V},L^{\prime}r_{V})\neq\emptyset\), which yields an estimate for \(d(z_{U},z_{V})\) in terms of \(r_{V}\), and then using the quasisymmetry to translate this into a bound for \(r_{f(U)}\) in terms of \(r_{f(V)}\).
Let \(w\in B(z_{f(V)},Lr_{f(V)})\cap B(z_{f(U)},Lr_{f(U)})\), and let \(u\in f(U)\) and \(v\in f(V)\) be points with \(d(u,z_{f(U)})\geq r_{f(U)}/2\) and \(d(v,z_{f(V)})\geq r_{f(V)}/2\). Let \(a\in X,b_{U}\in U,b_{V}\in V\) be points so that \(f(a)=w,f(b_{U})=u,f(b_{V})=v\).
By Lemma 2.1, the map \(f^{-1}\) is a \(\tilde{\eta}\)-quasisymmetry, with \(\tilde{\eta}(t)=\left(\eta^{-1}(t^{-1})\right)^{-1}\). By the quasisymmetry condition applied to the three points \(b_{U},a,z_{U}\), we have
\[d(a,z_{U})\leq\tilde{\eta}\left(\frac{d(w,z_{f(U)})}{d(u,z_{f(U)})}\right)d(z_{U},b_{U})\leq\tilde{\eta}(2L)r_{U}.\]
Thus, \(a\in B(z_{U},\tilde{\eta}(2L)r_{U})\). Similarly, we get \(a\in B(z_{V},\tilde{\eta}(2L)r_{V}).\) Consequently
\[a\in B(z_{U},\tilde{\eta}(2L)r_{U})\cap B(z_{V},\tilde{\eta}(2L)r_{V}). \tag{2.12}\]
Therefore, since \(\mathcal{U}\) is locally bounded, there exists a constant \(\kappa_{\tilde{\eta}(2L)}\) for which
\[\kappa_{\tilde{\eta}(2L)}^{-1}r_{V}\leq r_{U}\leq\kappa_{\tilde{\eta}(2L)}r_{V}. \tag{2.13}\]
From (2.12) and (2.13) we get
\[d(z_{U},z_{V})\leq d(z_{U},a)+d(z_{V},a)\leq\tilde{\eta}(2L)(1+\kappa_{\tilde{ \eta}(2L)})r_{V}.\]
We have \(z_{U}\in B(z_{V},\tilde{\eta}(2L)(1+\kappa_{\tilde{\eta}(2L)})r_{V})\). Again, by Lemma 2.3, we get that
\[z_{f(U)}=f(z_{U})\in B(z_{f(V)},\eta(2\tilde{\eta}(2L)(1+\kappa_{\tilde{\eta}(2L )}))r_{f(V)}).\]
In particular,
\[d(z_{f(U)},z_{f(V)})\leq\eta(2\tilde{\eta}(2L)(1+\kappa_{\tilde{\eta}(2L)}))r_{ f(V)}. \tag{2.14}\]
We also have \(z_{V}\not\in B(z_{U},\kappa^{-1}r_{U})\), and thus
\[d(z_{U},z_{V})\geq\kappa^{-1}r_{U}. \tag{2.15}\]
Finally, apply the \(\eta\)-quasisymmetry to the points \(z_{U},z_{V}\) and \(b_{U}\) and use (2.15) to give
\[\frac{r_{f(U)}}{2d(z_{f(V)},z_{f(U)})}\leq\frac{d(z_{f(U)},u)}{d(z_{f(V)},z_{f( U)})}\leq\eta\left(\frac{d(z_{U},b_{U})}{d(z_{U},z_{V})}\right)\leq\eta\left( \frac{\kappa r_{U}}{r_{U}}\right)\leq\eta(\kappa). \tag{2.16}\]
Thus, by applying (2.14) we get
\[r_{f(U)}\leq 2\eta(\kappa)\eta(2\tilde{\eta}(2L)(1+\kappa_{\tilde{\eta}(2L)}) )r_{f(V)}.\]
This is the desired estimate with \(\kappa^{\prime}_{L}=2\eta(\kappa)\eta(2\tilde{\eta}(2L)(1+\kappa_{\tilde{\eta} (2L)}))\) and yields the local boundedness.
## 3. Discrete moduli
### Discrete modulus of a collection
We will define all the relevant discrete moduli in this section. First, we define a discrete modulus of a collection of discrete subsets. Let \(\mathcal{U}\) be a \(\kappa\)-round collection and let \(\mathcal{P}\) be a collection of subsets of \(\mathcal{U}\). (Indeed, in general \(\mathcal{U}\) could be any finite collection of objects, but in our application, we will restrict to such collections.) We say that \(\rho:\mathcal{U}\to[0,\infty)\) is discretely admissible for \(\mathcal{P}\), and write \(\rho\wedge_{\mathcal{U}}\mathcal{P}\), if
\[\sum_{U\in P}\rho(U)\geq 1,\text{ for all }P\in\mathcal{P}.\]
Define the discrete modulus by
\[\operatorname{Mod}_{p}^{D}(\mathcal{P},\mathcal{U}):=\inf_{\rho\wedge_{ \mathcal{U}}\mathcal{P}}\sum_{U\in\mathcal{U}}\rho(U)^{p}.\]
The sum on the right will often also be called the _\(p\)-energy_ of \(\rho\).
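For intuition, computing \(\operatorname{Mod}_{p}^{D}(\mathcal{P},\mathcal{U})\) is a finite-dimensional convex minimization: one weight per set in \(\mathcal{U}\), one linear constraint per \(P\in\mathcal{P}\), and the \(p\)-energy as objective. The following is a small numerical sketch (Python with scipy is an assumption of this illustration, and the toy data are hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

def discrete_modulus(n_sets, families, p=2.0):
    """Compute Mod_p^D(P, U) for U = {0, ..., n_sets - 1} and P given as a
    list of index sets; each P imposes the constraint sum_{U in P} rho(U) >= 1."""
    energy = lambda rho: float(np.sum(rho ** p))
    constraints = [{"type": "ineq",
                    "fun": (lambda rho, P=tuple(P): sum(rho[i] for i in P) - 1.0)}
                   for P in families]
    res = minimize(energy, x0=np.full(n_sets, 0.5),
                   bounds=[(0.0, None)] * n_sets,
                   constraints=constraints, method="SLSQP")
    return res.fun, res.x

# toy example: three sets and two "paths" {0,1} and {1,2}; the shared set U_1
# carries more weight in the minimizer
value, rho = discrete_modulus(3, [{0, 1}, {1, 2}], p=2.0)
print(round(value, 3), np.round(rho, 3))
```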
We recall some basic properties of modulus, whose proofs are standard. For similar arguments, see e.g. [10, Section 1]. The existence of minimizers follows directly from the fact that \(\mathcal{U}\) must be finite, and the optimization is done in a finite dimensional space.
**Lemma 3.1**.: _Let \(\mathcal{U}\) be a \(\kappa\)-round collection of \(X\) and let \(p\geq 1\)._
1. _Monotonicity: If_ \(\mathcal{P}\subset\mathcal{P}^{\prime}\) _are two collections of sets, then_ \[\operatorname{Mod}_{p}^{D}(\mathcal{P},\mathcal{U})\leq\operatorname{Mod}_{p}^{D}(\mathcal{P}^{\prime},\mathcal{U}).\]
2. _Sub-additivity: If_ \(\mathcal{P},\mathcal{P}^{\prime}\) _are two collections of subsets, then_ \[\operatorname{Mod}_{p}^{D}(\mathcal{P}\cup\mathcal{P}^{\prime},\mathcal{U}) \leq\operatorname{Mod}_{p}^{D}(\mathcal{P}^{\prime},\mathcal{U})+ \operatorname{Mod}_{p}^{D}(\mathcal{P},\mathcal{U}).\]
3. _Majorization: If_ \(\mathcal{P},\mathcal{P}^{\prime}\) _are two collections of subsets so that every set_ \(P\in\mathcal{P}\) _contains a subset in_ \(\mathcal{P}^{\prime}\)_, then_ \[\operatorname{Mod}_{p}^{D}(\mathcal{P},\mathcal{U})\leq\operatorname{Mod}_{p} ^{D}(\mathcal{P}^{\prime},\mathcal{U}).\]
4. _Existence of minimizers: If_ \(X\) _is compact, then there exists a_ \(\rho\wedge_{\mathcal{U}}\mathcal{P}\) _with_ \[\operatorname{Mod}_{p}^{D}(\mathcal{P},\mathcal{U})=\sum_{U\in\mathcal{U}}\rho( U)^{p}.\]
In what follows, since these properties are so standard, we will often simply apply these facts without explicit reference to this Lemma.
### Modulus of annulus
Let \(B\) be a ball in \(X\) and \(L>1\). Consider a \(\kappa\)-round collection \(\mathcal{U}\). We say that \(P=\{U_{1},\ldots,U_{n}\}\subset\mathcal{U}\) is a \((\mathcal{U},B,L)\)-path, if \(U_{1}\cap B\neq\emptyset\), \(U_{n}\cap X\setminus LB\neq\emptyset\) and if \(U_{i}\cap U_{i+1}\neq\emptyset\) for all \(i=1,\ldots,n-1\). Let \(\mathcal{P}_{\mathcal{U},B,L}\) be the collection of all \((\mathcal{U},B,L)\)-paths.
Then, we define the Keith-Laakso modulus as
\[\operatorname{Mod}_{p,L\mathcal{U}}^{KL}(B):=\operatorname{Mod}_{p}^{D}( \mathcal{P}_{\mathcal{U},B,L},\mathcal{U}).\]
This notion of modulus coincides with that of [7, 14, 18] if we use the collection \(\mathcal{U}\) indicated in Remark 2.8.
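In practice, admissibility of a given \(\rho\) for \(\mathcal{P}_{\mathcal{U},B,L}\) can be tested on the incidence graph: \(\rho\) is admissible exactly when the minimum of \(\sum_{U\in P}\rho(U)\) over all \((\mathcal{U},B,L)\)-paths \(P\) is at least \(1\), and this minimum is a vertex-weighted shortest path from the sets meeting \(B\) to the sets meeting \(X\setminus LB\). A minimal sketch (Python, standard library only; the incidence graph, the weights and the set labels are hypothetical):

```python
import heapq

def min_path_weight(neighbors, rho, sources, targets):
    """Minimum of sum_{U in P} rho(U) over all (U, B, L)-paths P, where
    neighbors[u] lists the sets intersecting u, `sources` are the sets meeting
    B and `targets` those meeting X \\ LB.  rho is admissible for P_{U,B,L}
    iff the returned value is >= 1 (float('inf') means there is no such path)."""
    best = {u: rho[u] for u in sources}   # a path already pays for its first set
    heap = [(rho[u], u) for u in sources]
    heapq.heapify(heap)
    while heap:
        w, u = heapq.heappop(heap)
        if w > best.get(u, float("inf")):
            continue                      # stale heap entry
        if u in targets:
            return w                      # first target popped is optimal
        for v in neighbors[u]:
            nw = w + rho[v]               # entering v pays rho(v)
            if nw < best.get(v, float("inf")):
                best[v] = nw
                heapq.heappush(heap, (nw, v))
    return float("inf")

# toy incidence graph: a chain of four sets 0-1-2-3; set 0 meets B, set 3 meets X \ LB
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
rho = {0: 0.4, 1: 0.3, 2: 0.3, 3: 0.4}
print(min_path_weight(neighbors, rho, sources={0}, targets={3}))  # 1.4, so rho is admissible
```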
In [4], a slightly different form of modulus is obtained by using a more restrictive admissibility constraint. Note that this modulus is defined for collections of curves, while the previous one is only defined for balls (and corresponds to a family of objects which traverse an annulus). Let \(\Gamma\) be a family of curves, and let \(\mathcal{P}_{\Gamma}=\{P_{\gamma}:\gamma\in\Gamma\}\), where \(P_{\gamma}=\{U\in\mathcal{U}:U\cap\gamma\neq\emptyset\}\). We define the (Bourdon-Kleiner) modulus of the curve family as
\[\operatorname{Mod}_{p,\mathcal{U}}(\Gamma)=\operatorname{Mod}_{p}^{D}( \mathcal{P}_{\Gamma},\mathcal{U}).\]
We relate the two moduli via the following lemma. This justifies using the Bourdon-Kleiner modulus, instead of the Keith-Laakso modulus, in the context of conformal dimension, see Theorem 3.5. Let \(\Gamma_{B,L}\) be the collection of curves \(\gamma\) connecting \(B\) to \(X\setminus LB\).
**Lemma 3.2**.: _Suppose that \(X\) is compact, metrically doubling and LLC. Then, for any \(\kappa\)-approximation \(\mathcal{U}\), any ball \(B\subset X\) and any \(L>1\), we have_
\[\operatorname{Mod}_{p,\mathcal{U}}(\Gamma_{B,L})\sim\operatorname{Mod}_{p,L \mathcal{U}}^{KL}(B).\]
Proof.: We have \(\mathcal{P}_{\Gamma}\subset\mathcal{P}_{\mathcal{U},B,L}\), where \(\Gamma=\Gamma_{B,L}\). From the definition of the modulus, and Lemma 3.1 it is thus direct that
\[\operatorname{Mod}_{p,\mathcal{U}}(\Gamma_{B,L})=\operatorname{Mod}_{p}^{D}(\mathcal{P}_{\Gamma},\mathcal{U})\leq\operatorname{Mod}_{p}^{D}(\mathcal{P}_{\mathcal{U},B,L},\mathcal{U})=\operatorname{Mod}_{p,L\mathcal{U}}^{KL}(B).\]
For the other direction of the proof we need the assumptions of \(X\) being LLC and metrically doubling. Let \(X\) be \(C\)-LLC and \(D\)-metrically doubling. Next, let \(\rho\wedge_{\mathcal{U}}\mathcal{P}_{\Gamma}\) be arbitrary. We will define another \(\tilde{\rho}\) so that \(\tilde{\rho}\wedge_{\mathcal{U}}\mathcal{P}_{\mathcal{U},B,L}\) and so that
\[\sum_{U\in\mathcal{U}}\tilde{\rho}(U)^{p}\leq M\sum_{U\in\mathcal{U}}\rho(U)^{p} \tag{3.3}\]
for a constant \(M\) depending on \(C\), the local boundedness constants and \(D\). From these the claim of the Lemma follows by taking an infimum over all \(\rho\wedge_{\mathcal{U}}\mathcal{P}_{\Gamma}\). The rest of the proof consists of defining \(\tilde{\rho}\), showing (3.3) and proving \(\tilde{\rho}\wedge_{\mathcal{U}}\mathcal{P}_{\mathcal{U},B,L}\).
We do a small preliminary estimate. Fix \(V\in\mathcal{U}\) and any constant \(S\geq 1\) and let
\[\mathcal{U}_{V,S}=\{U\in\mathcal{U}:U\cap B(z_{V},(1+2S)r_{V})\neq\emptyset\}.\]
If \(U_{1},U_{2}\in\mathcal{U}_{V,S}\) are distinct, we have by local boundedness
\[\kappa_{1+2S}^{-1}r_{V}\leq r_{U_{1}},r_{U_{2}}\leq\kappa_{1+2S}r_{V}. \tag{3.4}\]
Thus, the balls \(B(z_{U_{i}},r_{V}\kappa_{1+2S}^{-1}\kappa^{-1})\) are disjoint for \(i=1,2\) and are contained in \(B(z_{V},(1+2S+2\kappa_{1+2S})r_{V})\). Thus, by metric doubling, there are at most \(D^{m}\) sets contained in \(\mathcal{U}_{V,S}\) for any \(V\in\mathcal{U}\) as long as \(2^{m}\geq 4\kappa\kappa_{1+2S}(1+2S+2\kappa_{1+2S})\).
We will next consider \(\mathcal{U}_{V,C}\). Let \(L^{\prime}=1+\kappa_{1+2C}(1+2C)\). Choose \(k,l\in\mathbb{N}\) with \(l\leq k\) and so that \(4\kappa\kappa_{1+2C}(1+2C+2\kappa_{1+2C})\leq 2^{l}\) and so that \(4\kappa\kappa_{1+2L^{\prime}}(1+2L^{\prime}+2\kappa_{1+2L^{\prime}})\leq 2^{k}\). By the argument after (3.4) with \(S=C\), we have that \(|\mathcal{U}_{V,C}|\leq D^{l}\leq D^{k}\). Let
\[\tilde{\rho}(V)=D^{k}\max\{\rho(U):U\in\mathcal{U}_{V,C}\}.\]
For each \(V\in\mathcal{U}\) choose a \(U_{V}\in\mathcal{U}_{V,C}\) so that \(\tilde{\rho}(V)=D^{k}\rho(U_{V})\). Let \(\tilde{\mathcal{U}}_{U,C}=\{V\in\mathcal{U}:U_{V}=U\}\). For every \(V\in\tilde{\mathcal{U}}_{U,C}\), we have \(U_{V}=U\) and thus \(U\in\mathcal{U}_{V,C}\). Thus \(B(z_{U},r_{U})\cap B(z_{V},(1+2C)r_{V})\neq\emptyset\), so \(d(z_{U},z_{V})\leq r_{U}+(1+2C)r_{V}\). Consequently, using (3.4) we get \(V\cap B(z_{U},(1+\kappa_{1+2C}(1+2C))r_{U})\neq\emptyset\). In particular, we have \(\tilde{\mathcal{U}}_{U,C}\subset\mathcal{U}_{U,L^{\prime}}\). Thus, we have by the argument after (3.4) with \(S\) replaced with \(L^{\prime}\) that \(|\tilde{\mathcal{U}}_{U,C}|\leq D^{k}\).
Let \(P=\{U_{1},\ldots,U_{n}\}\in\mathcal{P}_{\mathcal{U},B,L}\). Define a sequence of points \((x_{i})_{i=1}^{n+1}\) as follows. Let \(x_{1}\in U_{1}\cap B,x_{n+1}\in U_{n}\cap X\setminus LB\), and let \(x_{i}\in U_{i}\cap U_{i-1}\) for \(i=2,\ldots n\). Since \(X\) is \(C\)-LLC, we can find curves \(\gamma_{i}\) connecting
\(x_{i}\) to \(x_{i+1}\) with \(\operatorname{diam}(\gamma_{i})\leq Cd(x_{i},x_{i+1})\leq C\operatorname{diam}(U_{i})\) for \(i=1,\dots,n\). Let \(\gamma\) be the concatenation of the \(\gamma_{i}\). We have that \(\gamma\in\Gamma_{B,L}\).
Now, let \(P_{\gamma}=\{U\in\mathcal{U}:U\cap\gamma\neq\emptyset\}\). We have
\[\sum_{U\in P_{\gamma}}\rho(U)\geq 1,\]
since \(\rho\wedge_{\mathcal{U}}\mathcal{P}_{\Gamma}\). Now, for each \(U\in P_{\gamma}\) we have some \(i=1,\dots,n\) so that \(U\cap\gamma_{i}\neq\emptyset\). Therefore, we have \(d(U,U_{i})\leq C\operatorname{diam}(U_{i})\leq 2Cr_{U_{i}}\). Thus, \(U\in\mathcal{U}_{U_{i},C}\) for some \(i\). Let \(P_{\gamma,i}=\mathcal{U}_{U_{i},C}\cap P_{\gamma}\). Since \(|\mathcal{U}_{U_{i},C}|\leq D^{k}\), we get
\[\tilde{\rho}(U_{i})\geq\sum_{U\in P_{\gamma,i}}\rho(U).\]
Summing these, we get
\[1\leq\sum_{U\in P_{\gamma}}\rho(U) \leq\sum_{i=1}^{n}\sum_{U\in P_{\gamma,i}}\rho(U)\] \[\leq\sum_{i=1}^{n}\tilde{\rho}(U_{i}).\]
Thus, \(\tilde{\rho}\wedge\mathcal{P}_{\mathcal{U},B,L}\), since \(P\) was arbitrary.
Finally, we show (3.3) for \(M=D^{k(1+p)}\). We have by the size bound for \(\tilde{U}_{U,C}\) that
\[\sum_{V\in\mathcal{U}}\tilde{\rho}(V)^{p} \leq\sum_{U\in\mathcal{U}}\sum_{V\in\tilde{\mathcal{U}}_{U,C}} \tilde{\rho}(V)^{p}\] \[\leq\sum_{U\in\mathcal{U}}D^{k}D^{kp}\rho(U)^{p}\leq D^{k(1+p)} \sum_{U\in\mathcal{U}}\rho(U)^{p}.\]
### Relationship to Conformal dimension
The proof of the main Theorem 1.6 is based on the following Theorem of Carrasco-Piaggio. An interested reader may see also [21] and [18] for slightly different versions and proofs of this statement. We have used Lemma 3.2 and Remark 2.8 to reformulate the theorem using our notion of \(\kappa\)-approximations and moduli.
**Theorem 3.5** (Theorem 1.3 in [7]).: _Suppose that \(X\) is a compact, metrically doubling LLC space, and let \(\mathcal{U}_{k}\) be \(\kappa\)-approximations at level \(2^{-k}\). Then_
\[\operatorname{dim}_{CAR}(X)=\inf\left\{Q>0:\liminf_{m\to\infty}\sup_{z\in X,k \geq 0}\operatorname{Mod}_{Q,\mathcal{U}_{m+k}}(\Gamma_{B(z,2^{-k}),2})=0 \right\}.\]
### New discrete modulus
In this subsection, we introduce a new notion of modulus, which allows for an arbitrary \(\kappa\)-round collection \(\mathcal{U}\), which may or may not be a \(\kappa\)-approximation. Indeed, formally we shall permit that the collection even fails to be a cover. This brings the definition closer to that considered by Pansu and Tyson in [19, 22]. We note that it may be interesting to study more carefully the relationships between their modulus and the one presented here. However, since it would be a side track in the present paper, we do not pursue this here.
**Definition 3.6**.: Fix \(\tau\geq 4\). Let \(\mathcal{U}\) be a \(\kappa\)-round collection, and let \(\Gamma\) be a family of sets in \(X\). We say that \(\rho:\mathcal{U}\to[0,\infty)\) is strongly discretely \(\tau\)-admissible for \(\Gamma\), and write \(\rho\,\overline{\wedge}_{\tau,\mathcal{U}}\,\Gamma\), if for every \(\gamma\in\Gamma\) there exists a collection \(\mathcal{U}_{\gamma}\subset\mathcal{U}\) with the following properties:
* \(\{B(z_{U},\tau r_{U}):U\in\mathcal{U}_{\gamma}\}\) pairwise disjoint;
* \(U\cap\gamma\neq\emptyset\) for all \(U\in\mathcal{U}_{\gamma}\); and
* we have \[\sum_{U\in\mathcal{U}_{\gamma}}\rho(U)\geq 1.\]
Define
\[\overline{\operatorname{Mod}}_{p,\tau}(\Gamma,\mathcal{U})=\inf_{\rho\,\overline{\wedge}_{\tau,\mathcal{U}}\,\Gamma}\sum_{U\in\mathcal{U}}\rho(U)^{p}.\]
The first result we prove, is that the new modulus bounds from above the notion of discrete modulus defined before, when the collections are roughly at the same level.
**Proposition 3.7**.: _Let \(k\in\mathbb{N}\). Assume that \(X\) is metrically doubling, and that \(\kappa\geq 1\), \(\tau\geq 4\). There exists a constant \(C>0\) so that the following holds for \(r>0\)._
_Suppose that \(\mathcal{U}\) is a \(\kappa\)-approximation at level \(r\) and \(\mathcal{V}\) is a \(\kappa\)-round collection with \(\kappa^{-1}r\leq r_{V}\leq r\) for every \(V\in\mathcal{V}\). If \(\Gamma\) is a collection of curves in \(X\), then_
\[\operatorname{Mod}_{p,\mathcal{U}}(\Gamma)\leq C\overline{\operatorname{Mod }}_{p,\tau}(\Gamma,\mathcal{V}).\]
Proof.: Since \(X\) is metrically doubling, and by an argument similar to that in Lemma 3.2, there is a constant \(D\) so that each \(U\in\mathcal{U}\) intersects at most \(D\) pairwise disjoint sets in \(\mathcal{V}\). Similarly, for each \(V\in\mathcal{V}\) there are at most \(D\) many \(U\in\mathcal{U}\) with \(U\cap V\neq\emptyset\).
Without loss of generality, assume that \(\overline{\operatorname{Mod}}_{p,\tau}(\Gamma,\mathcal{V})<\infty\). Let \(\overline{\rho}\,\overline{\wedge}_{\tau,\mathcal{V}}\,\Gamma\) be any admissible function so that
\[\sum_{V\in\mathcal{V}}\overline{\rho}(V)^{p}<\infty.\]
For each \(U\in\mathcal{U}\), define
\[\rho(U)=D\max\{\overline{\rho}(V):U\cap V\neq\emptyset,V\in\mathcal{V}\},\]
if there exists some \(V\in\mathcal{V}\) with \(U\cap V\neq\emptyset\). If there does not exist any \(V\in\mathcal{V}\) with \(V\cap U\neq\emptyset\), set \(\rho(U)=0\).
For each \(U\in\mathcal{U}\) for which it is possible, choose one \(V_{U}\in\mathcal{V}\) so that \(U\cap V_{U}\neq\emptyset\) and \(\rho(U)=D\overline{\rho}(V_{U})\). Let \(\mathcal{U}_{V}=\{U\in\mathcal{U}:V_{U}=V\}\) for \(V\in\mathcal{V}\). We have, by the first paragraph of the proof, that \(|\mathcal{U}_{V}|\leq D\).
We claim that \(\rho\wedge_{\mathcal{U}}\mathcal{P}_{\Gamma}\). Let \(\gamma\in\Gamma\) be arbitrary. Let \(\mathcal{V}_{\gamma}\subset\mathcal{V}\) be a collection such that each set in \(\mathcal{V}_{\gamma}\) intersects \(\gamma\), the balls \(\{B(z_{V},\tau r_{V}):V\in\mathcal{V}_{\gamma}\}\) are pairwise disjoint, and
\[\sum_{V\in\mathcal{V}_{\gamma}}\overline{\rho}(V)\geq 1.\]
Let \(\mathcal{U}_{\gamma}:=\{U\in\mathcal{U}:U\cap\gamma\neq\emptyset\}\). Since \(\mathcal{U}\) is a cover of \(X\), for each \(V\in\mathcal{V}_{\gamma}\), we may choose a \(U^{V}\in\mathcal{U}\) so that \(U^{V}\cap(V\cap\gamma)\neq\emptyset\). For each \(U\in\mathcal{U}\), let \(\mathcal{V}_{\gamma,U}=\{V\in\mathcal{V}_{\gamma}:U^{V}=U\}\). This means that
\[\mathcal{V}_{\gamma}\subset\bigcup_{U\in\mathcal{U}_{\gamma}}\mathcal{V}_{ \gamma,U} \tag{3.8}\]
We also have by the first paragraph of the proof that \(|\mathcal{V}_{\gamma,U}|\leq D\) for every \(U\in\mathcal{U}_{\gamma}\). Thus, for every \(U\in\mathcal{U}_{\gamma}\), we have
\[\rho(U)\geq\sum_{V\in\mathcal{V}_{\gamma,U}}\overline{\rho}(V). \tag{3.9}\]
By applying (3.8) and (3.9) we get:
\[\sum_{U\cap\gamma\neq\emptyset}\rho(U) \geq\sum_{U\cap\gamma\neq\emptyset}\sum_{V\in\mathcal{V}_{\gamma,U }}\overline{\rho}(V)\] \[\geq\sum_{V\in\mathcal{V}_{\gamma}}\overline{\rho}(V)\geq 1.\]
Note that every \(U\in\mathcal{U}\) with \(\rho(U)>0\) belongs to \(\mathcal{U}_{V}\) for some \(V\in\mathcal{V}\). By using this, we estimate the \(p\)-energy of \(\rho\) using the bound \(|\mathcal{U}_{V}|\leq D\) for every \(V\in\mathcal{V}\).
\[\sum_{U\in\mathcal{U}}\rho(U)^{p}\leq\sum_{V\in\mathcal{V}}\sum_{U\in\mathcal{U}_{V}}\rho(U)^{p}\leq\sum_{V\in\mathcal{V}}D^{p}|\mathcal{U}_{V}|\overline{\rho}(V)^{p}\leq\sum_{V\in\mathcal{V}}D^{p+1}\overline{\rho}(V)^{p}.\]
Thus, the claim holds for \(C=D^{p+1}\) after we take an infimum over \(\overline{\rho}\wedge_{\tau,\mathcal{V}}\Gamma\).
One of the benefits of this notion of modulus, is that we can give simple bounds for it in terms of the Hausdorff measure of the space. The following will be an example of such a bound, which we will use. Recall the definition of Hausdorff content \(\mathcal{H}^{p}_{\delta}\) from (1.3).
**Proposition 3.10**.: _Let \(\kappa\geq 1,\tau\geq 4\). Let \(X\) be any connected compact metric space, and suppose that \(\Gamma\) is a family of curves, where each curve in \(\Gamma\) is contained in a ball \(B(x,R)\subset X\) and has diameter at least \(r\). Then, for every \(\epsilon\in(0,1),\delta\in(0,\operatorname{diam}(X)/2)\) there exists a \(\kappa\)-round collection \(\mathcal{V}\) for which_
\[\overline{\operatorname{Mod}}_{p,\tau}(\Gamma,\mathcal{V})\leq\frac{(20\tau)^ {p}\mathcal{H}^{p}_{\delta}(B(x,R))+\epsilon}{r^{p}},\]
_and \(\sup_{V\in\mathcal{V}}r_{V}\leq\delta\) and each ball in \(\mathcal{V}\) intersects \(B(x,R)\) as well as some curve in \(\Gamma\)._
Proof.: Fix \(\epsilon>0\). From the definition of Hausdorff content in (1.3), and by replacing each set in the cover by an enclosing ball, we can find a covering \(\mathcal{V}\) of \(B(x,R)\) by balls \(V=B(z_{V},r_{V})\) with \(r_{V}\leq\delta\) so that
\[\sum_{V\in\mathcal{V}}\operatorname{diam}(V)^{p}\leq 2^{p}\mathcal{H}^{p}_{ \delta}(B(x,R))+(10\tau)^{-p}\epsilon.\]
Moreover, by possibly making the collection smaller, we assume that each ball \(V\in\mathcal{V}\) intersects \(B(x,R)\) and some curve in \(\Gamma\). This modified collection still covers every curve \(\gamma\in\Gamma\). Now, \(\mathcal{V}\) is \(\kappa\)-round with \(\kappa=1\). Let \(\rho(V)=10\tau\operatorname{diam}(V)/r\). We have by the choice of \(\mathcal{V}\) and \(\rho\) that
\[\sum_{V\in\mathcal{V}}\rho(V)^{p}\leq\sum_{V\in\mathcal{V}}(10\tau)^{p} \operatorname{diam}(V)^{p}r^{-p}\leq\frac{(20\tau)^{p}\mathcal{H}^{p}_{\delta }(B(x,R))+\epsilon}{r^{p}}.\]
Therefore, the claim will follow once we show that \(\rho\,\overline{\wedge}_{\tau,\mathcal{V}}\,\Gamma\). Let \(\gamma\in\Gamma\). We need to find a collection \(\mathcal{V}_{\gamma}\) so that properties i, ii and iii from Definition 3.6 hold. The balls in \(\mathcal{V}\) that intersect \(\gamma\) cover \(\gamma\), and hence so do the dilated balls \(\{B(z_{V},\tau r_{V}):V\in\mathcal{V},V\cap\gamma\neq\emptyset\}\). Applying the \(5r\)-covering lemma to this family, we get a finite subcollection \(\mathcal{V}_{\gamma}\subset\mathcal{V}\) so that i) \(\{B(z_{V},\tau r_{V}):V\in\mathcal{V}_{\gamma}\}\) is pairwise disjoint, ii) \(\gamma\cap V\neq\emptyset\) for all \(V\in\mathcal{V}_{\gamma}\), and so that \(\mathcal{V}^{\prime}=\{B(z_{V},5\tau r_{V}):V\in\mathcal{V}_{\gamma}\}\) is a covering of \(\gamma\). Note that
\[\operatorname{diam}(B(z_{V},5\tau r_{V}))\leq 10\tau r_{V}\leq 10\tau \operatorname{diam}(V)=\rho(V)r.\]
Since the balls \(\{B(z_{V},5\tau r_{V}):V\in\mathcal{V}_{\gamma}\}\) cover \(\gamma\), we get
\[\sum_{V\in\mathcal{V}_{\gamma}}\rho(V)\geq\sum_{V\in\mathcal{V}_{\gamma}}\operatorname{diam}(B(z_{V},5\tau r_{V}))/r\geq\operatorname{diam}(\gamma)/r\geq 1.\]
Thus, \(\rho\,\overline{\wedge}_{\tau,\mathcal{V}}\,\Gamma\) and the claim follows.
The new notion of modulus is also invariant under quasisymmetries, except for adjusting the \(\tau\) parameter. In the following, if \(\Gamma\) is a collection of curves in \(X\) and \(f:X\to Y\) is a homeomorphism, we write \(f(\Gamma)=\{f\circ\gamma:\gamma\in\Gamma\}\). The opposite inequality can be obtained by adjusting \(\tau\), and applying this lemma to the inverse mapping \(f^{-1}\).
**Lemma 3.11**.: _Let \(\tau\geq 4\) and let \(f:X\to Y\) be an \(\eta\)-quasisymmetry. If \(\mathcal{V}\) is a \(\kappa\)-round collection, and if \(\Gamma\) is any collection of curves in \(X\), then_
\[\overline{\operatorname{Mod}}_{p,\tau}(\Gamma,\mathcal{V})\leq\overline{\operatorname{Mod}}_{p,\max\{4,\eta(2\tau)\}}(f(\Gamma),f(\mathcal{V})).\]
Proof.: Let \(\tau^{\prime}=\max\{4,\eta(2\tau)\}\). By Lemma 2.9, we have that \(f(\mathcal{V})\) is a \(\kappa^{\prime}\)-round collection for some \(\kappa^{\prime}\). Let \(\rho\,\overline{\wedge}_{\tau^{\prime},f(\mathcal{V})}\,f(\Gamma)\). Define \(\overline{\rho}(V)=\rho(f(V))\) for \(V\in\mathcal{V}\). We clearly have
\[\sum_{V\in\mathcal{V}}\overline{\rho}(V)^{p}=\sum_{V\in f(\mathcal{V})}\rho(V)^ {p}.\]
Thus, the claim will follow, if we can show that \(\overline{\rho}\,\overline{\wedge}_{\tau,\mathcal{V}}\,\Gamma\).
In the following, elements of \(f(\mathcal{V})\) will be written as \(f(V)\), where \(V\in\mathcal{V}\). Let \(\gamma\in\Gamma\). Then, \(f\circ\gamma\in f(\Gamma)\) and, since \(\rho\overline{\gamma}_{\tau^{\prime},f(\mathcal{V})}f(\Gamma)\), there exists a collection \(\mathcal{U}_{f(\gamma)}\subset f(\mathcal{V})\) so that
* \(\{B(z_{f(V)},\tau^{\prime}r_{f(V)}):f(V)\in\mathcal{U}_{f(\gamma)}\}\) is pairwise disjoint;
* \(f(V)\) intersects \(f\circ\gamma\) for every \(f(V)\in\mathcal{U}_{f(\gamma)}\);
* we have \[\sum_{U\in\mathcal{U}_{f(\gamma)}}\rho(U)\geq 1.\] Let \(\mathcal{U}_{\gamma}=\{V\in\mathcal{V}:f(V)\in\mathcal{U}_{f(\gamma)}\}\). We need to check the three properties from Definition 3.6 for \(\overline{\rho}\,\overline{\wedge}_{\tau,\mathcal{V}}\,\Gamma\): a) \(\{B(z_{V},\tau r_{V}):V\in\mathcal{U}_{\gamma}\}\) is pairwise disjoint; b) \(V\) intersects \(\gamma\) for every \(V\in\mathcal{U}_{\gamma}\); and c) we have \[\sum_{U\in\mathcal{U}_{\gamma}}\overline{\rho}(U)\geq 1.\] Here b) and c) follow immediately from the properties ii) and iii) above. By Lemma 2.3, we have \[f(B(z_{V},\tau r_{V}))\subset B(z_{f(V)},\eta(2\tau)r_{f(V)}),\] for every \(V\in\mathcal{U}_{\gamma}\). Since \(\tau^{\prime}\geq\eta(2\tau)\), the disjointness in a) follows from that in i).
## 4. Combinatorially Loewner Spaces
### Definition and basic property
For two closed sets \(E,F\), let \(\Gamma(E,F)\) be the collection of curves which join them. We adapt the definition of Bourdon and Kleiner of the combinatorial Loewner property slightly, as modified by Clais in [8, Definition 2.6]. Let \(\mathcal{U}_{k}\) be a sequence of \(\kappa\)-approximations at level \(2^{-k}\).
**Definition 4.1**.: Fix \(p>1\). We say that a compact LLC space \(X\) satisfies the combinatorial \(p\)-Loewner property, if there exist some increasing continuous functions \(\phi,\psi:(0,\infty)\to(0,\infty)\) with \(\lim_{t\to 0}\psi(t)=0\), with the following two properties.
1. For every pair of disjoint continua \(E,F\subset X\) and all \(k\geq 0\) with \(2^{-k}\leq\min\{\operatorname{diam}(E),\operatorname{diam}(F)\}\), we have \[\phi(\Delta(E,F)^{-1})\leq\operatorname{Mod}_{p,\mathcal{U}_{k}}(\Gamma(E,F)).\]
2. For every \(z\in X\) and \(0<r<R\) and all \(k\geq 0\) with \(2^{-k}\leq r\), we have \[\operatorname{Mod}_{p,\mathcal{U}_{k}}(\Gamma(\overline{B(z,r)},X\setminus B (z,R)))\leq\psi\left(\frac{r}{R-r}\right).\]
Spaces with the combinatorial \(p\)-Loewner property are also called CLP -spaces or \(p\)-CLP spaces, if we wish to explicate the exponent \(p>1\).
We first note that a combinatorially \(p\)-Loewner space has conformal Assouad dimension, as well as Ahlfors regular conformal dimension, equal to \(p\). This lemma is quite well known and is a rather direct consequence of Theorem 3.5. However, we present a proof for the sake of clarity, and since a proof does not appear to have been published elsewhere. The proof is very similar to, or rather a localized version of, the proof of [4, Corollary 3.7]. Later, we will prove Theorem 1.8, which is one of our main contributions, and which improves the following statement by showing that also \(\dim_{CH}(X)=p\).
**Lemma 4.2**.: _For a compact LLC space \(X\), which is combinatorially \(p\)-Loewner, it holds that_
\[\dim_{CA}(X)=\dim_{CAR}(X)=p.\]
Proof.: Let \(\phi\) and \(\psi\) be the functions appearing in Definition 4.1.
Since \(X\) is a compact LLC space, it is uniformly perfect, and by [17, Proposition 2.2.6] and [11, Chapters 14 and 15] we have \(\dim_{CA}(X)=\dim_{CAR}(X)\). Next, let \(\mathcal{U}_{k}\) be a sequence of \(\kappa\)-approximations at levels \(2^{-k}\) for \(k\in\mathbb{N}\). Let \(z\in X\) and \(0<r\leq\operatorname{diam}(X)/4\). Then, by the LLC property, there exists a continuum \(E\subset\overline{B(z,r)}\) with \(\operatorname{diam}(E)\geq r\) and another continuum \(F\subset\overline{B(z,3r)}\setminus B(z,2r)\) with \(\operatorname{diam}(F)\geq r\). Since every curve connecting \(E\) to \(F\) contains a sub-curve within \(\Gamma(\overline{B(z,r)},X\setminus B(z,2r))\), we have
\[\operatorname{Mod}_{p}(\Gamma(E,F),\mathcal{U}_{k})\leq\operatorname{Mod}_{p }(\Gamma(\overline{B(z,r)},X\setminus B(z,2r)),\mathcal{U}_{k}).\]
However, by the CLP property and since \(\Delta(E,F)\leq 6\), we get for all \(k\geq 0\) such that \(2^{-k}\leq r\) that
\[\phi(6^{-1})\leq\operatorname{Mod}_{p}(\Gamma(E,F),\mathcal{U}_{k})\leq \operatorname{Mod}_{p}(\Gamma(\overline{B(z,r)},X\setminus B(z,2r)),\mathcal{U} _{k}).\]
We thus get:
\[\liminf_{m\to\infty}\sup_{z\in X,k\geq 0}\operatorname{Mod}_{p,\mathcal{U}_{m+ k}}(\Gamma_{B(z,2^{-k}),2})\geq\phi(6^{-1}).\]
Thus, \(p\leq\dim_{CAR}(X)\) by Theorem 3.5.
The inequality \(p\geq\dim_{CAR}(X)\) follows by showing that for all \(\epsilon>0\), we have
\[\lim_{m\to\infty}\sup_{z\in X,k\geq 0}\mathrm{Mod}_{p+\epsilon,\mathcal{U}_{m+k}}(\Gamma_{B(z,2^{-k}),2})=0.\]
The idea in showing this is to compare the discrete moduli with exponents \(p+\epsilon\) and \(p\). Indeed, we will show that for all \(m\geq 3\) we have
\[\mathrm{Mod}_{p+\epsilon,\mathcal{U}_{m+k}}(\Gamma_{B(z,2^{-k}),2})\leq\psi(2^{2-m})^{\epsilon}\mathrm{Mod}_{p,\mathcal{U}_{m+k}}(\Gamma_{B(z,2^{-k}),2})\leq\psi(2^{2-m})^{\epsilon}\psi(1). \tag{4.3}\]
Then, since \(\lim_{t\to 0}\psi(t)=0\), the claim follows.
Let \(\rho\) be the optimal function for \(\mathrm{Mod}_{p,\mathcal{U}_{m+k}}(\Gamma_{B(z,2^{-k}),2})\), which exists by Lemma 3.1. We will show that \(\rho(U)\leq\psi(2^{2-m})\) for every \(U\in\mathcal{U}_{m+k}\). This uses a bound for modulus coming from [4, Lemma 2.3], which in turn relies on estimating the modulus of the curves which pass through the set \(U\). Let \(U\in\mathcal{U}_{m+k}\). Let \(\Gamma_{U}\) be the collection of curves in \(\Gamma_{B(z,2^{-k}),2}\) which intersect \(U\). Then any curve in \(\Gamma_{B(z,2^{-k}),2}\) which intersects \(U\) will contain a sub-curve connecting \(\overline{B(z_{U},r_{U})}\) to \(X\setminus B(z_{U},2^{m-1}r_{U})\). Thus,
\[\mathrm{Mod}_{p,\mathcal{U}_{m+k}}(\Gamma_{U})\leq\mathrm{Mod}_{p,\mathcal{U}_{m+k}}(\Gamma(\overline{B(z_{U},r_{U})},X\setminus B(z_{U},2^{m-1}r_{U})))\leq\psi(2^{2-m}).\]
By [4, Lemma 2.3], we get for all \(U\in\mathcal{U}_{m+k}\)
\[\rho(U)\leq\mathrm{Mod}_{p,\mathcal{U}_{m+k}}(\Gamma_{U})\leq\psi(2^{2-m}).\]
This, together with the optimality of \(\rho\) yields
\[\sum_{U\in\mathcal{U}_{m+k}}\rho(U)^{p+\epsilon}\leq\max_{U\in\mathcal{U}_{m+k}}\rho(U)^{\epsilon}\sum_{U\in\mathcal{U}_{m+k}}\rho(U)^{p}\leq\psi(2^{2-m})^{\epsilon}\mathrm{Mod}_{p,\mathcal{U}_{m+k}}(\Gamma_{B(z,2^{-k}),2}),\]
which is the desired estimate (4.3).
### Estimates for Modulus
If the space is combinatorially Loewner, then we can give a lower bound for our modulus, which we introduced in Subsection 3.4, in terms of the Bourdon-Kleiner modulus. This is a strengthening of Proposition 3.7. In a sense, the following proposition is the starting point of our paper, since its argument was the first to be discovered.
**Proposition 4.4**.: _Let \(k\in\mathbb{N}\), \(p>1\). Assume that \(X\) is metrically doubling, LLC and combinatorially \(p\)-Loewner, and that \(\kappa\geq 1\), \(\tau\geq 4\). There exists a constant \(C>0\) so that the following holds for \(r>0\)._
_Suppose that \(\mathcal{U}\) is a \(\kappa\)-approximation at level \(r\) and \(\mathcal{V}\) is a \(\kappa\)-round collection with \(\inf\{r_{V}:V\in\mathcal{V}\}\geq 2r\). If \(\Gamma\) is a collection of curves in \(X\) with \(2\tau\sup_{V\in\mathcal{V}}r_{V}\leq\mathrm{diam}(\gamma)\) for all \(\gamma\in\Gamma\), then_
\[\mathrm{Mod}_{p,\mathcal{U}}(\Gamma)\leq C\overline{\mathrm{Mod}}_{p,\tau}( \Gamma,\mathcal{V}).\]
Proof.: Again, assume that \(\overline{\mathrm{Mod}}_{p,\tau}(\Gamma,\mathcal{V})<\infty\), and that \(\overline{\rho}\wedge_{\tau,\mathcal{V}}\Gamma\) with
\[\sum_{V\in\mathcal{V}}\overline{\rho}^{p}(V)<\infty.\]
For each \(V\in\mathcal{V}\) consider the collection of curves \(\Gamma_{V}=\Gamma(\overline{B(z_{V},r_{V})},X\setminus B(z_{V},(\tau-1)r_{V}))\). By the \(p\)-combinatorial Loewner assumption and since \(r\leq r_{V}/2\), we have
\[\mathrm{Mod}_{p,\mathcal{U}}(\Gamma_{V})\leq C, \tag{4.5}\]
for \(C=\psi(\frac{1}{\tau-2})>0\), where \(\psi\) is from Definition 4.1. Let \(\rho_{V}:\mathcal{U}\to[0,\infty)\) be such that \(\rho_{V}\wedge_{\mathcal{U}}\Gamma_{V}\) and so that
\[\sum_{U\in\mathcal{U}}\rho_{V}(U)^{p}\leq 2C. \tag{4.6}\]
Let
\[\rho(U)=\max\{\rho_{V}(U)\overline{\rho}(V):V\in\mathcal{V}\}.\]
We claim that \(\rho\wedge_{\mathcal{U}}\Gamma\). Let \(\gamma\in\Gamma\). Since \(\overline{\rho}\wedge_{\tau,\mathcal{V}}\Gamma\), there exists a collection \(\mathcal{V}_{\gamma}\) of \(V\in\mathcal{V}\) with
1. \(V\cap\gamma\neq\emptyset\) for all \(V\in\mathcal{V}_{\gamma}\);
2. \(\{B(z_{V},\tau r_{V}):V\in\mathcal{V}_{\gamma}\}\) is a pairwise disjoint collection of balls; and
3. \[\sum_{V\in\mathcal{V}_{\gamma}}\overline{\rho}(V)\geq 1. \tag{4.7}\]
For each \(V\in\mathcal{V}_{\gamma}\), let \(\gamma|_{V}\) be a minimal subcurve which connects \(\overline{B(z_{V},r_{V})}\) to \(X\setminus B(z_{V},(\tau-1)r_{V})\). Such a subcurve exists since \(\operatorname{diam}(\gamma)\geq 2\tau r_{V}\) and \(\gamma\cap B(z_{V},r_{V})\neq\emptyset\). These subcurves are disjoint and \(d(\gamma|_{V},\gamma|_{V^{\prime}})\geq 2\min\{r_{V},r_{V^{\prime}}\}\geq 4r\), for distinct \(V,V^{\prime}\in\mathcal{V}_{\gamma}\). Therefore, if we let \(\mathcal{U}_{V}=\{U\in\mathcal{U}:U\cap\gamma|_{V}\neq\emptyset\}\) for \(V\in\mathcal{V}_{\gamma}\), then \(\mathcal{U}_{V}\cap\mathcal{U}_{V^{\prime}}=\emptyset\) for distinct \(V,V^{\prime}\in\mathcal{V}_{\gamma}\). We also have, since \(\rho_{V}\wedge_{\mathcal{U}}\Gamma_{V}\) and \(\rho\geq\rho_{V}\overline{\rho}(V)\), that
\[\sum_{U\in\mathcal{U}_{V}}\rho(U)\geq\sum_{U\in\mathcal{U},U\cap\gamma|_{V} \neq\emptyset}\rho_{V}(U)\overline{\rho}(V)\geq\overline{\rho}(V). \tag{4.8}\]
Now, let \(\mathcal{U}_{\gamma}=\{U\in\mathcal{U}:U\cap\gamma\neq\emptyset\}\). We also have
\[\bigcup_{V\in\mathcal{V}_{\gamma}}\mathcal{U}_{V}\subset\mathcal{U}_{\gamma}.\]
By the disjointness of the collections \(\mathcal{U}_{V}\), for distinct \(V\in\mathcal{V}_{\gamma}\), and by applying (4.7), (4.8) and the choice of \(\rho\), we get
\[\sum_{U\in\mathcal{U}_{\gamma}}\rho(U) \geq\sum_{V\in\mathcal{V}_{\gamma}}\sum_{U\in\mathcal{U}_{V}}\rho (U)\] \[\geq\sum_{V\in\mathcal{V}_{\gamma}}\overline{\rho}(V)\geq 1.\]
Thus, since \(\gamma\) is arbitrary, \(\rho\wedge_{\mathcal{U}}\Gamma\).
Next, we show a mass-bound for \(\rho\). For each \(U\in\mathcal{U}\) let \(V_{U}\in\mathcal{V}\) be such that \(\rho(U)=\rho_{V_{U}}(U)\overline{\rho}(V_{U})\). This yields a partition of \(\mathcal{U}\) into sets \(\mathcal{U}^{V}=\{U\in\mathcal{U}:V_{U}=V\}.\) Thus, we have, since \(\mathcal{U}^{V}\subset\mathcal{U}\)
\[\operatorname{Mod}_{p}(\Gamma,\mathcal{U})\leq\sum_{U\in\mathcal{U }}\rho(U)^{p} =\sum_{V\in\mathcal{V}}\sum_{U\in\mathcal{U}^{V}}\rho_{V}(U)^{p} \overline{\rho}(V)^{p}\] \[\leq\sum_{V\in\mathcal{V}}\overline{\rho}(V)^{p}\sum_{U\in \mathcal{U}}\rho_{V}(U)^{p}\] \[\leq\sum_{V\in\mathcal{V}}2C\overline{\rho}(V)^{p}=2C\sum_{V\in \mathcal{V}}\overline{\rho}(V)^{p}.\]
By infimizing over \(\overline{\rho}\) such that \(\overline{\rho}\wedge_{\tau,\mathcal{V}}\Gamma\) the claim follows.
We obtain the following proposition, which gives a lower bound for the Hausdorff measure of a combinatorially Loewner space. In this way, this generalizes to combinatorially Loewner spaces the classical estimate of Heinonen and Koskela, [12, Theorem 3.6]. That result is much easier to show using continuous modulus. For discrete modulus one needs to do some extra work.
**Proposition 4.9**.: _Let \(X\) be a \(p\)-combinatorially Loewner, LLC and metrically doubling space. Then, there exists a constant \(C>0\) so that for every \(r\in(0,\operatorname{diam}(X))\) and any \(x\in X\) we have_
\[\mathcal{H}^{p}(B(x,r))\geq Cr^{p}.\]
Proof.: Let \(x\in X\). It is sufficient to prove
\[\mathcal{H}^{p}(B(x,2L^{\prime}r))\geq Cr^{p}. \tag{4.10}\]
for some uniform constants \(L^{\prime}\geq 1,C>0\) for all \(r\in(0,\operatorname{diam}(X)/8)\). Since \(X\) is LLC, we can find a continuum \(E\subset B(x,r)\) with \(r\geq\operatorname{diam}(E)\geq r/2\) and with \(x\in E\). Further, there exists a continuum \(F\subset\overline{B(x,4r)}\setminus B(x,3r)\) with \(8r\geq\operatorname{diam}(F)\geq r\). We have
\[1\leq\Delta(E,F)\leq 16.\]
Let \(\Gamma_{X}\) be the collection of curves connecting \(E\) to \(F\).
Next, our strategy in proving (4.10) is to show three estimates. We will show that
* **A)** There is a collection \(\Gamma_{B}\) so that for any \(\kappa\)-approximation \(\mathcal{U}\) at a small enough level the quantity \(\operatorname{Mod}_{p,\mathcal{U}}(\Gamma_{X}\setminus\Gamma_{B})\) can be bounded from below by using the CLP property, and each curve in \(\Gamma_{X}\setminus\Gamma_{B}\) is contained in a ball.
* **B)** Proposition 3.10 gives a lower bound for the Hausdorff measure in terms of the discrete modulus \(\overline{\operatorname{Mod}}_{p,\tau}(\Gamma_{X}\setminus\Gamma_{B},\mathcal{V})\).
* **C)** Finally, Proposition 4.4 is used to bound \(\overline{\operatorname{Mod}}_{p,\tau}(\Gamma_{X}\setminus\Gamma_{B},\mathcal{V})\) from below by \(\operatorname{Mod}_{p,\mathcal{U}}(\Gamma_{X}\setminus\Gamma_{B})\) for some \(\kappa\)-approximation \(\mathcal{U}\) at a small enough level. These estimates together yield the desired bound.
We focus on **A)** first and determine \(\Gamma_{B}\). Let \(\mathcal{U}\) be a \(\kappa\)-approximation at level \(2^{-k}\) for some \(k\in\mathbb{N}\) s.t. \(2^{-k}\leq\min\{\operatorname{diam}(E),\operatorname{diam}(F)\}\). We have
\[\operatorname{Mod}_{p,\mathcal{U}}(\Gamma_{X})\geq\phi(16^{-1}).\]
Let \(L\geq 2\) be such that \(\psi(2L^{-1})\leq 2^{-1}\phi(16^{-1})\). Let \(\Gamma_{B}\) be the collection of curves \(\gamma\in\Gamma_{X}\) with a subcurve in \(\Gamma(\overline{B(x,r)},X\setminus B(x,Lr))\). We have, since \(X\) is CLP, that
\[\operatorname{Mod}_{p,\mathcal{U}}(\Gamma_{B})\leq\operatorname{Mod}_{p, \mathcal{U}}(\Gamma(\overline{B(x,r)},X\setminus B(x,Lr)))\leq\psi(2L^{-1}) \leq\frac{\phi(16^{-1})}{2}.\]
Thus, by subadditivity of modulus, we get for \(\Gamma_{G}:=\Gamma_{X}\setminus\Gamma_{B}\) the estimate
\[\operatorname{Mod}_{p,\mathcal{U}}(\Gamma_{G})\geq\frac{\phi(16^{-1})}{2}. \tag{4.11}\]
Next, we deduce **B)** in our strategy. Let \(\tau\geq 4\). Choose \(\delta\in(0,4^{-1}\tau^{-1}r)\). Each of the curves in \(\Gamma_{G}\) has diameter at least \(r\) and is contained in \(B(x,Lr)\). So, we can apply Proposition 3.10 to find for any \(\epsilon>0\) a \(1\)-round collection \(\mathcal{V}\) of balls which intersect \(B(x,Lr)\) and some curve in \(\Gamma_{G}\) with \(\operatorname{rad}(\mathcal{V})\leq\delta\) and with
\[\overline{\operatorname{Mod}}_{p,\tau}(\Gamma_{G},\mathcal{V})\leq(20\tau)^{p}\left(\mathcal{H}_{\delta}^{p}(B(x,Lr))+\epsilon\right)r^{-p}\leq(20\tau)^{p}\left(\mathcal{H}^{p}(B(x,Lr))+\epsilon\right)r^{-p}. \tag{4.12}\]
Finally, we deduce **C)**. Each curve in \(\Gamma_{G}\) connects \(E\) to \(F\), and thus we have \(\operatorname{diam}(\gamma)\geq r\) for all \(\gamma\in\Gamma_{G}\). This means that \(2\tau\sup_{V\in\mathcal{V}}r_{V}\leq\inf_{\gamma\in\Gamma_{G}}\operatorname{ diam}(\gamma)\). Thus, by Proposition 4.4, there exists a constant \(C\) so that
\[\operatorname{Mod}_{p,\mathcal{U}}(\Gamma_{G})\leq C\overline{\operatorname{Mod}}_{p,\tau}(\Gamma_{G},\mathcal{V}), \tag{4.13}\]
provided \(k\) is so large that \(2^{-k}\leq 2^{-1}\inf\{r_{V}:V\in\mathcal{V}\}\).
By combining Estimates **A-C)**, we get the following once \(k\) is large enough
\[\frac{\phi(16^{-1})}{2}\overset{(4.11)}{\leq}\operatorname{Mod}_{p,\mathcal{U}}(\Gamma_{G})\overset{(4.13)}{\leq}C\,\overline{\operatorname{Mod}}_{p,\tau}(\Gamma_{G},\mathcal{V})\overset{(4.12)}{\leq}(20\tau)^{p}C\left(\mathcal{H}^{p}(B(x,Lr))+\epsilon\right)r^{-p}.\]
Consequently, since this holds for all \(\epsilon>0\), we get
\[\frac{\phi(16^{-1})}{2(20\tau)^{p}C}r^{p}\leq\mathcal{H}^{p}(B(x,Lr)).\]
This yields the desired estimate (4.10) with \(L^{\prime}=L\).
### Proof of Theorem 1.8
Using the previous properties, we are able to prove the equality of different forms of conformal dimension for CLP spaces.
Proof of Theorem 1.8.: Assume that \(X\) is combinatorially \(p\)-Loewner. By Lemma 4.2, we have
\[\operatorname{dim}_{CA}(X)=\operatorname{dim}_{CAR}(X)=p.\]
We also have \(\operatorname{dim}_{CH}(X)\leq\operatorname{dim}_{CAR}(X)=p\). Thus, we only need to show that \(\operatorname{dim}_{CH}(X)\geq p\). Let \(f:X\to Y\) be a quasisymmetry. The space \(Y\) is \(p\)-combinatorially Loewner, since the combinatorial Loewner property is invariant under quasisymmetries, see [4, Theorem 2.6 (2)]. It is also easy to see that the LLC and metric doubling properties are invariant under quasisymmetries, and thus \(Y\) is LLC and metric
doubling. Then, by Proposition 4.9 there exists a constant \(C\) so that we have for every \(y\in Y\) and any \(r\in(0,\operatorname{diam}(Y))\) that
\[\mathcal{H}^{p}(B(y,r))\geq Cr^{p}>0.\]
From the definition of Hausdorff dimension, and since \(\mathcal{H}^{p}(B(y,r))>0\) if and only if \(\mathcal{H}^{p}_{\infty}(B(y,r))>0\), we have \(\dim_{H}(Y)\geq p\). Consequently, by taking an infimum over all \(Y\) which are quasisymmetric to \(X\), we get \(\dim_{CH}(X)\geq p\).
## 5. Quasiself-similar Spaces
### Uniform bound for Annuli
Define an annulus as \(A(x,r,R):=B(x,R)\setminus\overline{B(x,r)}\).
**Definition 5.1**.: Let \(p\in(1,\infty)\). Let \(\tau\geq 4\). We say that a metric space \(X\) has uniformly small \(p\)-moduli of annuli, if there exists \(\epsilon\in(0,1)\) and constants \(0<\delta_{-}<\delta_{+}<\tau^{-1}\), so that the following holds. For every annulus \(A(x,r,(\tau-2)r)\) in \(X\), with \(x\in X,r\in(0,2^{-1}\tau^{-1}\operatorname{diam}(X))\), there exists a finite collection of balls \(\mathcal{V}_{x,r}\) contained in \(B(x,\tau r)\) and which intersect \(B(x,(\tau-2)r)\), with \(r_{V}\in[\delta_{-}r,\delta_{+}r]\) for each \(V\in\mathcal{V}_{x,r}\), and there exists a function \(\rho_{x,r}:\mathcal{V}_{x,r}\to[0,\infty)\) with
\[\rho_{x,r}\overline{\wedge}_{\tau,\mathcal{V}_{x,r}}\Gamma(\overline{B(x,r)}, X\setminus B(x,(\tau-2)r))\]
and with
\[\sum_{V\in\mathcal{V}_{x,r}}\rho_{x,r}(V)^{p}\leq\epsilon.\]
The following Lemma is a refinement of Proposition 3.10 to the quasiself-similar setting.
**Lemma 5.2**.: _Suppose that \(\dim_{CH}(X)<p\), that \(p\in(1,\infty)\), and that \(X\) is an arcwise connected quasiself-similar compact metric space. Then \(X\) has uniformly small \(p\)-moduli of annuli._
Proof.: Assume that \(X\) is \(\eta\)-quasiself-similar and let \(\tau\geq 4\). Fix any \(\delta_{+}\in(0,\tau^{-1})\). Since \(\dim_{CH}(X)<p\), there exists a compact space \(Y\) with \(\dim_{H}(Y)<p\) and a quasisymmetry \(g:X\to Y\). Fix \(C\geq 1,\sigma\in(0,2^{-1})\) to be determined. By adjusting \(\eta\), we may assume that \(g\) is an \(\eta\)-quasisymmetry. Let \(\epsilon>0\), and choose a covering of \(Y\) by a collection of balls \(\mathcal{B}_{Y}\) with
\[\sum_{B\in\mathcal{B}_{Y}}\operatorname{diam}(B)^{p}\leq\epsilon C^{-p} \operatorname{diam}(Y)^{p},\]
and for which \(\operatorname{rad}(B)\leq\sigma\operatorname{diam}(Y)\) for every \(B\in\mathcal{B}_{Y}\). Let \(A(x,r,(\tau-2)r)\) be an annulus in \(X\) with \(x\in X\) and \(r\in(0,2^{-1}\tau^{-1}\operatorname{diam}(X))\). There is a homeomorphism \(f:B(x,2\tau r)\to U\), for some open set \(U\subset X\), which is an \(\eta\)-quasisymmetry, where \(\operatorname{diam}(U)\geq\delta\operatorname{diam}(X)\).
We first define the collection \(\mathcal{V}_{x,r}\) used in Definition 5.1. For each \(B=B(y,s)\in\mathcal{B}_{Y}\) with
\[B\cap g(f(B(x,(\tau-2)r)))\neq\emptyset,\]
choose \(x_{V_{B}}\in(g\circ f)^{-1}(B)\cap B(x,(\tau-2)r)\), and let \(r_{V_{B}}=\sup\{d(w,x_{V_{B}}):w\in(g\circ f)^{-1}(2B)\cap B(x,\tau r)\}\). Define \(V_{B}:=B(x_{V_{B}},r_{V_{B}})\). Let
\[\mathcal{V}_{x,r}:=\{V_{B}:B\in\mathcal{B}_{Y},B\cap g(f(B(x,(\tau-2)r)))\neq \emptyset\}\]
be the collection of balls we seek. Next, we give bounds for \(r_{V_{B}}\) by using the fact that \(X\) is connected and that \(g\circ f\) is a \(\tilde{\eta}\)-quasisymmetry with \(\tilde{\eta}=\eta\circ\eta\).
Since \(\operatorname{diam}(U)\geq\delta\operatorname{diam}(X)\), we can choose \(a,b\in U\) with \(d(a,b)\geq 2^{-1}\delta\operatorname{diam}(X)\). Choose a point \(c\in X\) so that \(d(g(c),g(a))\geq\operatorname{diam}(Y)2^{-1}\). Since \(g\) is an \(\eta\)-quasisymmetry, we have
\[\frac{d(g(a),g(c))}{d(g(a),g(b))}\leq\eta\left(\frac{d(c,a)}{d(b,a)}\right)\leq \eta(2\delta^{-1}).\]
Thus,
\[d(g(a),g(b))\geq\eta(2\delta^{-1})^{-1}2^{-1}\operatorname{diam}(Y). \tag{5.3}\]
We will use (5.3) to give an upper bound for \(r_{V_{B}}\) for each \(V_{B}\in\mathcal{V}_{x,r}\), where \(B\in\mathcal{B}_{Y}\). Let \(u,v\in B(x,2\tau r)\) be such that \(f(u)=a,f(v)=b\). Choose \(s,t\in(g\circ f)^{-1}(2B)\cap B(x,\tau r)\) so that \(d(s,t)\geq r_{V_{B}}/2\). Up to possibly switching \(u\) and \(v\), and \(a,b\), we can assume by (5.3) that
\[d(g(f(s)),g(a))\geq\frac{d(g(a),g(b))}{2}\geq\eta(2\delta^{-1})^{-1} \operatorname{diam}(Y)2^{-2}. \tag{5.4}\]
We have
\[\frac{d(g(f(s)),g(f(u)))}{d(g(f(s)),g(f(t)))}\leq\tilde{\eta}\left(\frac{d(s,u)}{d (s,t)}\right). \tag{5.5}\]
Since \(g(f(s)),g(f(t))\in 2B\), we get \(d(g(f(s)),g(f(t)))\leq 4\text{rad}(B)\leq 4\sigma\,\text{diam}(Y)\). Thus, from (5.4), we get
\[\frac{1}{2^{4}\eta(2\delta^{-1})\sigma}\leq\frac{\text{diam}(Y)}{2^{4}\text{ rad}(B)\eta(2\delta^{-1})}\leq\frac{d(g(f(s)),g(a))}{d(g(f(s)),g(f(t)))}.\]
By combining this with (5.5), we deduce
\[\tilde{\eta}^{-1}\left(\frac{1}{2^{4}\sigma\eta(2\delta^{-1})}\right)\leq \frac{d(s,u)}{d(s,t)}\leq\frac{4\tau r}{r_{V_{B}}}.\]
Thus,
\[r_{V_{B}}\leq\frac{4\tau}{\tilde{\eta}^{-1}\left(\frac{1}{2^{4}\sigma\eta(2 \delta^{-1})}\right)}r.\]
Choose now \(\sigma\leq\tilde{\eta}(\frac{4\tau}{\delta_{+}})^{-1}\eta(2\delta^{-1})^{-1}2 ^{-4}\). We then have, \(r_{V_{B}}\leq\delta_{+}r\). Since \(\delta_{+}<1\), we also have \(r_{V_{B}}\leq r\) and since \(x_{V_{B}}\in B(x,(\tau-2)r)\) we clearly have \(V_{B}\subset B(x,\tau r)\).
Next, we give a uniform lower bound for the radii \(r_{V_{B}}\) for \(V_{B}\in\mathcal{V}_{x,r}\). Since \(\mathcal{B}_{Y}\) is finite, there exists a constant \(\beta>0\) so that \(\text{rad}(B)\geq\beta\,\text{diam}(Y)\) for all \(B\in\mathcal{B}_{Y}\). Choose \(\delta_{-}=\tilde{\eta}^{-1}(\beta)/2\). Let \(c\in B(x_{V_{B}},\delta_{-}r)\) be an arbitrary point. Also, choose \(b\in B(x,2\tau r)\) with \(d(b,x_{V_{B}})\geq r\), which is possible by connectivity. Then, by the quasisymmetry condition, we get
\[\frac{d(g(f(c)),g(f(x_{V_{B}})))}{d(g(f(b)),g(f(x_{V_{B}})))}\leq\tilde{\eta} \left(\frac{d(c,x_{V_{B}})}{d(b,x_{V_{B}})}\right)\leq\tilde{\eta}\left( \delta_{-}\right).\]
The choice of \(\delta_{-}\) guarantees \(\tilde{\eta}(\delta_{-})\leq\beta\), and thus
\[d(g(f(c)),g(f(x_{V_{B}})))\leq\tilde{\eta}(\delta_{-})\operatorname{diam}(Y)\leq\operatorname{rad}(B).\]
Therefore, since \(g(f(x_{V_{B}}))\in B\), we get \(g(f(c))\in 2B\). This holds for all \(c\in B(x_{V_{B}},\delta_{-}r)\), and thus
\[g(f(B(x_{V_{B}},\delta_{-}r)))\subset 2B.\]
This yields, by connectivity and the definition of \(r_{V_{B}}\) that \(r_{V_{B}}\geq\delta_{-}r\).
Finally, we define the admissible function \(\rho\). Define \(\rho(V)=\max\{C\,\text{diam}_{Y}(B)\,\text{diam}_{Y}(Y)^{-1}:V_{B}=V,B\in \mathcal{B}_{Y}\}\) for \(V\in\mathcal{V}_{x,r}\). We have
\[\sum_{V\in\mathcal{V}_{x,r}}\rho(V)^{p}\leq C^{p}\sum_{B\in\mathcal{B}_{Y}}\operatorname{diam}_{Y}(B)^{p}\operatorname{diam}_{Y}(Y)^{-p}\leq\epsilon, \tag{5.6}\]
since for every \(V\in\mathcal{V}_{x,r}\) there exists at least one \(B\in\mathcal{B}_{Y}\) so that \(V_{B}=V\), and for every \(B\in\mathcal{B}_{Y}\) there is only one \(V\in\mathcal{V}_{x,r}\) for which \(V_{B}=V\).
Next, we show that \(\rho\,\overline{\wedge}_{\tau,\mathcal{V}_{x,r}}\,\Gamma(\overline{B(x,r)},X\setminus B(x,(\tau-2)r))\). Let \(\gamma\in\Gamma(\overline{B(x,r)},X\setminus B(x,(\tau-2)r))\) be arbitrary. Let \(\sigma\) be a sub-curve of \(\gamma\) so that \(\sigma\subset\overline{B(x,(\tau-2)r)}\) and \(\sigma\in\Gamma(\overline{B(x,r)},X\setminus B(x,(\tau-2)r))\). To show admissibility, we will combine the fact that \(\mathcal{B}_{Y}\) covers \(g\circ f\circ\sigma\) with a lower bound for the diameter of \(g\circ f\circ\sigma\).
Since \(\sigma\) connects \(B(x,r)\) to \(X\setminus B(x,(\tau-2)r)\) there exist \(j,k\in\sigma\) with \(d(j,k)\geq(\tau-3)r\). Let \(a,b\) and \(u,v\) be as before. By possibly switching \(j\) and \(k\), we can assume that \(d(j,u)\geq d(j,k)/2\geq 2^{-1}(\tau-3)r\). We get
\[\frac{d(g(f(j)),g(f(u)))}{d(g(f(j)),g(f(k)))}\leq\tilde{\eta}\left(\frac{d(j,u)}{d(j,k)}\right)\leq\tilde{\eta}\left(\frac{4\tau r}{(\tau-3)r}\right)\leq\tilde{\eta}(16).\]
Thus,
\[\operatorname{diam}(g\circ f\circ\sigma)\geq d\left(g(f(j)),g(f(k))\right)\geq d\left(g(f(j)),g(f(u))\right)\tilde{\eta}(16)^{-1}. \tag{5.7}\]
Next, \(d(j,u)\geq 2^{-1}(\tau-3)r\geq d(u,v)/8\). Thus, by a similar reasoning that uses the quasisymmetry of \(g\circ f\) and by employing (5.3), we get
\[d(g(f(j)),g(f(u)))\geq d(g(f(u)),g(f(v)))\tilde{\eta}(8)^{-1}\geq\tilde{\eta}(48)^{-1}\eta(2\delta^{-1})^{-1}2^{-1}\operatorname{diam}(Y). \tag{5.8}\]
By combining (5.7) and (5.8), we obtain
\[\operatorname{diam}(g\circ f\circ\sigma)\geq d(g(f(j)),g(f(k)))\geq\tilde{\eta}(16)^{-1}\tilde{\eta}(8)^{-1}\eta(2\delta^{-1})^{-1}2^{-1}\operatorname{diam}(Y). \tag{5.9}\]
Recall that \(\mathcal{V}_{x,r}\) consists of balls. The open sets in \(\mathcal{V}_{x,r}\) cover the ball \(B(x,(\tau-2)r)\), and thus the curve \(\sigma\). Therefore, by the Vitali covering theorem, there exists a finite collection of balls \(\mathcal{V}_{\gamma}\subset\mathcal{V}_{x,r}\) with \(\sigma\subset\bigcup 5\tau\mathcal{V}_{\gamma}\), for which the balls in \(\tau\mathcal{V}_{\gamma}\) are pairwise disjoint, and so that each ball in \(\mathcal{V}_{\gamma}\) intersects \(\gamma\). For each \(V\in\mathcal{V}_{\gamma}\), choose a ball \(B(V)\in\mathcal{B}_{Y}\) so that \(V=V_{B(V)}\) and \(\rho(V)=C\operatorname{diam}_{Y}(B(V))\operatorname{diam}_{Y}(Y)^{-1}\).
First, we note that by the quasisymmetry condition and Lemma 2.3, we have \(g(f(5\tau V))\subset\tilde{\eta}(10\tau)B(V)\). Therefore, the balls \(\tilde{\eta}(10\tau)B(V)\) for \(V\in\mathcal{V}_{\gamma}\) cover \(g(f(\sigma))\). Thus,
\[\begin{split}\sum_{V\in\mathcal{V}_{\gamma}}\rho(V)&=\sum_{V\in\mathcal{V}_{\gamma}}C\operatorname{diam}_{Y}(B(V))\operatorname{diam}_{Y}(Y)^{-1}\\ &\geq\sum_{V\in\mathcal{V}_{\gamma}}C(2\tilde{\eta}(10\tau))^{-1}\operatorname{diam}_{Y}(Y)^{-1}\operatorname{diam}_{Y}(\tilde{\eta}(10\tau)B(V))\\ &\geq\operatorname{diam}(g\circ f\circ\sigma)\,C(2\tilde{\eta}(10\tau))^{-1}\operatorname{diam}_{Y}(Y)^{-1}\\ &\overset{(5.9)}{\geq}C2^{-2}\tilde{\eta}(16)^{-1}\tilde{\eta}(8)^{-1}\eta(2\delta^{-1})^{-1}\tilde{\eta}(10\tau)^{-1}.\end{split}\]
If \(C\geq 4\tilde{\eta}(16)\tilde{\eta}(8)\eta(2\delta^{-1})\tilde{\eta}(10\tau)\), then \(\rho\,\overline{\wedge}_{\tau,\mathcal{V}_{x,r}}\,\Gamma(\overline{B(x,r)},X\setminus B(x,(\tau-2)r))\), and the claim follows.
### Algorithm for pushing down a cover
The following lemma describes a "push down" algorithm. It uses admissible functions for annuli in order to push down a collection of balls \(\mathcal{B}\) and a strongly discretely \(\tau\)-admissible function \(\rho\). This is done by replacing a ball \(\mathbf{B}\in\mathcal{B}\) by a collection \(\mathcal{B}_{\mathbf{B}}\) and an associated function \(\rho_{\mathbf{B}}\). A new admissible function \(\overline{\rho}\) is defined by taking a maximum over \(\mathbf{B}\in\mathcal{B}\), and a new collection by taking a union of all the new balls. The arguments for admissibility and the construction of \(\overline{\rho}\) are similar to Proposition 4.4. To distinguish the "parent" balls from the "descendant" balls, we will bold the parent balls. This replacement algorithm is depicted and explained more in Figure 5.2. As seen in this figure, we permit all sorts of overlaps, and balls of different sizes. This is one of the technical reasons for using the new modulus from Subsection 3.4.
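Before the formal statement, the following is a minimal sketch of one replacement step in Python (the data structures — the parent weights, the child collections \(\mathcal{B}_{\mathbf{B}}\) and the child weights \(\rho_{\mathbf{B}}\) — are hypothetical stand-ins for the objects in Lemma 5.10 below):

```python
def push_down(rho, children, child_rho):
    """One replacement step of the push-down: a parent ball B with weight rho[B]
    is replaced by the balls in children[B], and a child b is assigned the
    weight rho[B] * child_rho[(B, b)]; when several parents produce the same
    ball, the maximum over parents is kept.  A parent that is not refined keeps
    itself as its only child with child weight 1.  All inputs are hypothetical
    stand-ins for the collections in the lemma."""
    new_collection = set()
    new_rho = {}
    for B, kids in children.items():
        for b in kids:
            new_collection.add(b)
            candidate = rho[B] * child_rho[(B, b)]
            new_rho[b] = max(new_rho.get(b, 0.0), candidate)
    return new_collection, new_rho

# toy usage: parent "P" is refined into two children, parent "Q" is left as is
rho = {"P": 0.8, "Q": 0.5}
children = {"P": ["p1", "p2"], "Q": ["Q"]}
child_rho = {("P", "p1"): 0.6, ("P", "p2"): 0.6, ("Q", "Q"): 1.0}
print(push_down(rho, children, child_rho))
```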
Recall that \(\Gamma_{B,L}\) denotes the collection of curves \(\gamma\) connecting \(B\) to \(X\setminus LB\).
**Lemma 5.10**.: _Let \(\epsilon,\eta\in(0,1)\). Assume that \(\mathcal{B}\) is a finite collection of balls, \(\Gamma\) is a collection of curves, \(2(\tau-2)\mathrm{rad}(\mathcal{B})\leq\inf_{\gamma\in\Gamma}\operatorname{diam}(\gamma)\) and \(\rho\,\overline{\wedge}_{\tau,\mathcal{B}}\,\Gamma\). Suppose further that \(\mathcal{C}\subset\mathcal{B}\) is any finite collection of balls, and for every \(\mathbf{B}\in\mathcal{C}\), there exists a finite collection of balls \(\mathcal{B}_{\mathbf{B}}\) and a function \(\rho_{\mathbf{B}}:\mathcal{B}_{\mathbf{B}}\to[0,\infty)\) with_
1. \(\mathrm{rad}(\mathcal{B}_{\mathbf{B}})\leq\tau^{-1}\mathrm{rad}(\mathbf{B})\)_,_
2. \(\rho_{\mathbf{B}}\overline{\wedge}_{\tau,\mathcal{B}_{\mathbf{B}}}\Gamma_{ \mathbf{B},(\tau-2)}\)_,_
3. _every ball in_ \(\mathcal{B}_{\mathbf{B}}\) _intersects_ \((\tau-2)\mathbf{B}\)_, and satisfies_ \[\sum_{B\in\mathcal{B}_{\mathbf{B}}}\rho_{\mathbf{B}}(B)^{p}\leq\eta.\]
_For \(\mathbf{B}\not\in\mathcal{C}\) assume that \(\mathcal{B}_{\mathbf{B}}=\{\mathbf{B}\}\) and \(\rho_{\mathbf{B}}(\mathbf{B})=1\)._
_For the collection \(\overline{\mathcal{B}}:=\bigcup_{\mathbf{B}\in\mathcal{B}}\mathcal{B}_{ \mathbf{B}}\), and function_
\[\overline{\rho}(B):=\max\{\rho(\mathbf{B})\rho_{\mathbf{B}}(B):\boldsymbol{B} \in\mathcal{B}\text{ s.t. }B\in\mathcal{B}_{\mathbf{B}}\},\]
_we have \(\overline{\rho}\overline{\wedge}_{\tau,\overline{\mathcal{B}}}\Gamma\) and_
\[\sum_{B\in\overline{\mathcal{B}}}\overline{\rho}(B)^{p}\leq\sum_{\mathbf{B}\in \mathcal{C}}\eta\rho(\mathbf{B})^{p}+\sum_{\mathbf{B}\in\mathcal{B}\setminus \mathcal{C}}\rho(\mathbf{B})^{p}.\]
Proof of Lemma 5.10.: We first show that \(\overline{\rho}\,\overline{\wedge}_{\tau,\overline{\mathcal{B}}}\,\Gamma\). Let \(\gamma\in\Gamma\). Since \(\rho\,\overline{\wedge}_{\tau,\mathcal{B}}\,\Gamma\), there exists a collection \(\mathcal{B}_{\gamma}\subset\mathcal{B}\) so that \(\tau\mathcal{B}_{\gamma}\) is pairwise disjoint, so that \(\mathbf{B}\cap\gamma\neq\emptyset\) for every \(\mathbf{B}\in\mathcal{B}_{\gamma}\) and
\[\sum_{\mathbf{B}\in\mathcal{B}_{\gamma}}\rho(\mathbf{B})\geq 1. \tag{5.11}\]
We next define \(\overline{\mathcal{B}}_{\gamma}\subset\overline{\mathcal{B}}\). First, set \(\mathcal{B}_{\gamma}^{1}=\mathcal{B}_{\gamma}\setminus\mathcal{C}\). Next, for each \(\mathbf{B}\in\mathcal{B}_{\gamma}\cap\mathcal{C}\) we have \(\rho_{\mathbf{B}}\overline{\wedge}_{\tau,\mathcal{B}_{\mathbf{B}}}\Gamma_{ \mathbf{B},(\tau-2)}\). Since \(\gamma\cap\mathbf{B}\neq\emptyset\), and \(\operatorname{diam}(\gamma)\geq 2(\tau-2)\mathrm{rad}(\mathbf{B})\), we have that \(\gamma\cap(X\setminus(\tau-2)\mathbf{B})\neq\emptyset\). Thus, \(\gamma\) contains
a sub-arc in \(\Gamma_{\mathbf{B},(\tau-2)}\). Therefore, there exists a collection \(\mathcal{B}_{\gamma,\mathbf{B}}\subset\mathcal{B}_{\mathbf{B}}\) so that \(\tau\mathcal{B}_{\gamma,\mathbf{B}}\) is pairwise disjoint, so that \(B\cap\gamma\neq\emptyset\) for every \(B\in\mathcal{B}_{\gamma,\mathbf{B}}\) and
\[\sum_{B\in\mathcal{B}_{\gamma,\mathbf{B}}}\rho_{\mathbf{B}}(B)\geq 1.\]
Since \(\overline{\rho}(B)\geq\rho_{\mathbf{B}}(B)\rho(\mathbf{B})\) for every \(B\in\mathcal{B}_{\gamma,\mathbf{B}}\), we have
\[\sum_{B\in\mathcal{B}_{\gamma,\mathbf{B}}}\overline{\rho}(B)\geq\rho(\mathbf{B}). \tag{5.12}\]
Set \(\mathcal{B}_{\gamma}^{2}=\bigcup_{\mathbf{B}\in\mathcal{B}_{\gamma}\cap \mathcal{C}}\mathcal{B}_{\gamma,\mathbf{B}}\). Finally, let \(\overline{\mathcal{B}}_{\gamma}=\mathcal{B}_{\gamma}^{1}\cup\mathcal{B}_{ \gamma}^{2}\). Note that for every \(\mathbf{B}\in\mathcal{B}_{\gamma}\cap\mathcal{C}\), we have \(\operatorname{rad}(\tau\mathcal{B}_{\gamma,\mathbf{B}})\leq\tau\operatorname{rad}( \mathcal{B}_{\mathbf{B}})\leq\operatorname{rad}(\mathbf{B})\) and \(B\cap(\tau-2)\mathbf{B}\neq\emptyset\) for every \(B\in\mathcal{B}_{\gamma,\mathbf{B}}\). Thus, every \(B\in\mathcal{B}_{\gamma,\mathbf{B}}\) satisfies \(\tau B\subset\tau\mathbf{B}\). This inclusion implies that the dilated collections \(\tau\mathcal{B}_{\gamma,\mathbf{B}}\) are pairwise disjoint for distinct \(\mathbf{B}\in\mathcal{B}_{\gamma}\cap\mathcal{C}\), and each of them is disjoint from \(\tau\mathcal{B}_{\gamma}^{1}\). Thus, the collection \(\tau\overline{\mathcal{B}}_{\gamma}\) is pairwise disjoint.
Next,
\[\begin{split}\sum_{B\in\overline{\mathcal{B}}_{\gamma}}\overline{\rho}(B)&=\sum_{B\in\mathcal{B}_{\gamma}^{1}}\overline{\rho}(B)+\sum_{B\in\mathcal{B}_{\gamma}^{2}}\overline{\rho}(B)\\ &\geq\sum_{B\in\mathcal{B}_{\gamma}^{1}}\rho(B)+\sum_{\mathbf{B}\in\mathcal{B}_{\gamma}\cap\mathcal{C}}\sum_{B\in\mathcal{B}_{\gamma,\mathbf{B}}}\overline{\rho}(B)\\ &\stackrel{{(5.12)}}{{\geq}}\sum_{\mathbf{B}\in\mathcal{B}_{\gamma}\setminus\mathcal{C}}\rho(\mathbf{B})+\sum_{\mathbf{B}\in\mathcal{B}_{\gamma}\cap\mathcal{C}}\rho(\mathbf{B})\\ &=\sum_{\mathbf{B}\in\mathcal{B}_{\gamma}}\rho(\mathbf{B})\stackrel{{(5.11)}}{{\geq}}1.\end{split}\]
Thus, since \(\gamma\) was arbitrary, we have \(\overline{\rho}\,\overline{\wedge}_{\tau,\overline{\mathcal{B}}}\,\Gamma\).
Finally, we compute the \(p\)-energy of \(\overline{\rho}\). First, by construction, for every \(\mathbf{B}\in\mathcal{C}\), we have
\[\sum_{B\in\mathcal{B}_{\mathbf{B}}}\rho_{\mathbf{B}}(B)^{p}\leq\eta, \tag{5.13}\]
and for every \(\mathbf{B}\in\mathcal{B}\setminus\mathcal{C}\), we have
\[\sum_{B\in\mathcal{B}_{\mathbf{B}}}\rho_{\mathbf{B}}(B)^{p}=\rho_{\mathbf{B}}(\mathbf{B})^{p}=1. \tag{5.14}\]
For every \(B\in\overline{\mathcal{B}}\), there may be multiple \(\mathbf{B}\in\mathcal{B}\) so that \(B\in\mathcal{B}_{\mathbf{B}}\). However, for every \(B\in\overline{\mathcal{B}}\), we have:
\[\overline{\rho}(B)=\max\{\rho(\mathbf{B})\rho_{\mathbf{B}}(B):\mathbf{B}\in \mathcal{B}\text{ s.t. }B\in\mathcal{B}_{\mathbf{B}}\}\leq\left(\sum_{ \begin{subarray}{c}\mathbf{B}\in\mathcal{B}\\ \text{s.t. }B\in\mathcal{B}_{\mathbf{B}}\end{subarray}}(\rho(\mathbf{B}) \rho_{\mathbf{B}}(B))^{p}\right)^{\frac{1}{p}}. \tag{5.15}\]
By combining these two we get:
\[\sum_{B\in\overline{\mathcal{B}}}\overline{\rho}(B)^{p}\stackrel{{(5.15)}}{{\leq}}\sum_{B\in\overline{\mathcal{B}}}\sum_{\begin{subarray}{c}\mathbf{B}\in\mathcal{B}\\ \text{s.t. }B\in\mathcal{B}_{\mathbf{B}}\end{subarray}}\rho(\mathbf{B})^{p}\rho_{\mathbf{B}}(B)^{p}\leq\sum_{\mathbf{B}\in\mathcal{B}}\rho(\mathbf{B})^{p}\sum_{B\in\mathcal{B}_{\mathbf{B}}}\rho_{\mathbf{B}}(B)^{p}\stackrel{{(5.13),(5.14)}}{{\leq}}\sum_{\mathbf{B}\in\mathcal{C}}\eta\rho(\mathbf{B})^{p}+\sum_{\mathbf{B}\in\mathcal{B}\setminus\mathcal{C}}\rho(\mathbf{B})^{p},\]
which is the claimed mass bound. This completes the proof of Lemma 5.10.
Further, for any ball \(\mathbf{B}=B(x,s)\) in \(X\), let \(\mathcal{B}_{\mathbf{B}}\) and \(\rho_{\mathbf{B}}\) denote the collection \(\mathcal{V}_{x,s}\) and the function \(\rho_{x,s}\) given in Definition 5.1 with
\[\sum_{B\in\mathcal{B}_{\mathbf{B}}}\rho_{\mathbf{B}}(B)^{p}\leq\epsilon, \tag{5.17}\]
and \(\rho_{\mathbf{B}}\overline{\wedge}_{\tau,\mathcal{B}_{\mathbf{B}}}\Gamma_{\mathbf{B},\tau-2}\).
Let \(\mathcal{B}_{0}\) be a finite cover of \(X\) by balls with \(\inf_{\gamma\in\Gamma}\operatorname{diam}(\gamma)\geq\tau\mathrm{rad}( \mathcal{B}_{0})\) for which \(\overline{\mathrm{Mod}}_{p,\tau}(\Gamma,\mathcal{B}_{0})<M\). Then, there exists a \(\rho_{0}:\mathcal{B}_{0}\to[0,\infty)\) with \(\rho_{0}\overline{\wedge}_{\tau,\mathcal{B}_{0}}\Gamma\) and with mass
\[\sum_{B\in\mathcal{B}_{0}}\rho_{0}(B)^{p}<M.\]
First, we replace \(\mathcal{B}_{0}\) through a finite sequence of replacements by a collection of balls with respect to which \(\Gamma\) has small modulus. This we call the "weight reduction phase". We construct a sequence of covers \(\mathcal{B}_{k}\), \(k\in\mathbb{N}\), as follows. Proceed inductively and apply Lemma 5.10 for each \(k\in\mathbb{N}\) with \(\mathcal{B}=\mathcal{B}_{k}\), \(\rho=\rho_{k}\) and \(\mathcal{C}=\mathcal{B}_{k}\), and with \(\rho_{\mathbf{B}},\mathcal{B}_{\mathbf{B}}\) satisfying (5.17) and \(\eta=\epsilon\), to obtain a collection \(\mathcal{B}_{k+1}=\overline{\mathcal{B}}\) and a function \(\rho_{k+1}=\overline{\rho}\) with \(\rho_{k+1}\overline{\wedge}_{\tau,\mathcal{B}_{k+1}}\Gamma\) and
\[\sum_{B\in\mathcal{B}_{k+1}}\rho_{k+1}(B)^{p}\leq\epsilon^{k+1}M.\]
We note that for each ball \(B\in\mathcal{B}_{\mathbf{B}}\) we have \(\mathrm{rad}(B)\geq\delta_{-}\mathrm{rad}(\mathbf{B})\), by the assumption of uniformly small moduli of annuli. Therefore, for all \(k\in\mathbb{N}\) we get
\[\inf\{\mathrm{rad}(B):B\in\mathcal{B}_{k+1}\}\geq\delta_{-}\inf\{\mathrm{rad}( B):B\in\mathcal{B}_{k}\}. \tag{5.18}\]
By iteration of this inequality, we get
\[\inf\{\mathrm{rad}(B):B\in\mathcal{B}_{N}\}\geq\delta_{-}^{N}\inf\{\mathrm{rad} (B):B\in\mathcal{B}_{0}\}. \tag{5.19}\]
By the choice of \(N\), we have \(\epsilon^{N}M\leq\epsilon_{0}\). Thus, \(\rho_{N}\) satisfies the desired mass bound:
\[\sum_{B\in\mathcal{B}_{N}}\rho_{N}(B)^{p}\leq\epsilon_{0}. \tag{5.20}\]
Figure 2. The equalizing algorithm: By using replacement and a uniform bound on moduli of annuli, we can “uniformize” a wild cover \(\mathcal{B}\). Let \(\mathcal{B}\) be a covering using balls, where the size of the largest ball is much bigger than the smallest. We take all the “large” balls, and form a collection \(\mathcal{C}\) of them. To them, we apply the push-down procedure to reduce their size. We repeat this process until all large balls have been pushed down to a size comparable to the smallest ball in our collection. In the figure \(\mathcal{B}\) consists of balls filled with white. The two large balls have solid line boundaries, and are replaced by smaller light gray filled balls. Two of these light gray balls are still too large, and are replaced by even smaller dark gray filled balls.
The balls in \(\mathcal{B}_{N}\) have various different sizes. Next, we will embark on a "size reduction phase". Let \(\overline{\mathcal{B}}_{0}=\mathcal{B}_{N}\), and \(\overline{\rho}_{0}=\rho_{N}\). Let \(s:=\min\{\operatorname{rad}(B):B\in\mathcal{B}_{N}\}\), and let \(S_{0}=\operatorname{rad}(\overline{\mathcal{B}}_{0})\). From the assumption and (5.19), we obtain
\[s\geq\delta_{-}^{N}\inf\{\operatorname{rad}(B):B\in\mathcal{B}_{0}\}\geq r. \tag{5.21}\]
If \(S_{0}\leq\kappa r\), then we do not do anything and we let \(L=0\). If on the other hand \(S_{0}>\kappa r\), we start running the following algorithm.
Set \(k=0\). While \(S_{k}>\kappa r\), let \(\mathcal{C}_{k}=\{B\in\overline{\mathcal{B}}_{k}:\operatorname{rad}(B)> \kappa r\}\). Apply Lemma 5.10 with \(\mathcal{B}=\overline{\mathcal{B}}_{k}\), \(\rho=\overline{\rho}_{k}\) and \(\mathcal{C}=\mathcal{C}_{k}\), and with \(\rho_{\mathbf{B}},\mathcal{B}_{\mathbf{B}}\) satisfying (5.17) and \(\eta=\epsilon\). This gives a collection \(\overline{\mathcal{B}}_{k+1}\) and strongly admissible function \(\overline{\rho}_{k+1}\). Set \(S_{k+1}=\operatorname{rad}(\overline{\mathcal{B}}_{k+1})\), and increment \(k\) by one. Once \(S_{k}\leq\kappa r\), terminate the algorithm. We will show shortly that the algorithm terminates in finite time; let \(L=k\) be the time at which it terminates.
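For illustration only (this is not part of the proof), the worst-case bookkeeping behind the termination of this loop can be phrased as a short routine: only the largest radius \(S_{k}\) is tracked, via the recursion \(S_{k+1}\leq\max\{\kappa r,\delta_{+}S_{k}\}\) established below, and the numerical values of \(\kappa\), \(\delta_{+}\), \(r\) and \(S_{0}\) are hypothetical placeholders.

```python
# Illustrative sketch only: worst-case bookkeeping of the largest radius S_k in the
# size-reduction loop.  kappa, delta_plus, r, S0 are hypothetical numerical values.
def size_reduction_steps(S0, r, kappa=2.0, delta_plus=0.5):
    """Number of iterations L after which S_L <= kappa*r."""
    S, L = S0, 0
    while S > kappa * r:
        S = max(kappa * r, delta_plus * S)  # bound on the new largest radius
        L += 1
    return L

# The radii decay geometrically while they exceed kappa*r, so the loop terminates.
print(size_reduction_steps(S0=1000.0, r=1.0))  # -> 9
```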
We have, as part of Lemma 5.10, that \(\overline{\rho}_{k}\overline{\wedge}_{\tau,\overline{\mathcal{B}}_{k}}\Gamma\) for every \(k\in[0,L]\cap\mathbb{Z}\). Further, by noting that \(\epsilon\in(0,1)\), we get
\[\sum_{B\in\overline{\mathcal{B}}_{k+1}}\overline{\rho}_{k+1}(B)^{p}\leq\sum_{B\in\overline{\mathcal{B}}_{k}}\overline{\rho}_{k}(B)^{p}.\]
By iterating this \(k\) times, we get from (5.20) that
\[\sum_{B\in\overline{\mathcal{B}}_{k}}\overline{\rho}_{k}(B)^{p}\leq\sum_{B\in\overline{\mathcal{B}}_{0}}\overline{\rho}_{0}(B)^{p}\leq\epsilon_{0}. \tag{5.22}\]
Let us analyse the effect of the algorithm on the radii of the collections \(\overline{\mathcal{B}}_{k}\), and the termination of the algorithm. Assume that \(k\geq 0\). At each step, a ball \(B\) in \(\overline{\mathcal{B}}_{k+1}\) is either equal to a ball \(\mathbf{B}\in\overline{\mathcal{B}}_{k}\) with \(\operatorname{rad}(\mathbf{B})\leq\kappa r\), or \(B\in\mathcal{B}_{\mathbf{B}}\) for some \(\mathbf{B}\in\overline{\mathcal{B}}_{k}\) with \(\kappa r<\operatorname{rad}(\mathbf{B})\leq S_{k}\). By construction, in either case \(\operatorname{rad}(B)\leq\max\{\delta_{+}\operatorname{rad}(\mathbf{B}),\kappa r\}\). Thus, by taking a supremum over all balls \(B\in\overline{\mathcal{B}}_{k+1}\), we get that \(S_{k+1}\leq\max\{\kappa r,\delta_{+}S_{k}\}\). In particular, while \(S_{k}>\kappa r\), the values \(S_{k}\) form a geometrically decreasing sequence. This can only last for finitely many steps. Therefore, there must exist some \(L\geq 0\) at which the algorithm terminates with \(S_{L}\leq\kappa r\).
We now show by induction that each ball \(B\in\overline{\mathcal{B}}_{k}\), for \(k=0,\ldots,L\), satisfies \(\operatorname{rad}(B)\in[r,S_{k}]\). The upper bound is obvious, so we focus on the lower bound. The case \(k=0\) is also obvious, so we focus on the induction step. During the algorithm, for \(k=0,\ldots,L-1\), each ball \(B\in\overline{\mathcal{B}}_{k+1}\) is either equal to a ball \(\mathbf{B}\in\overline{\mathcal{B}}_{k}\), or \(B\in\mathcal{B}_{\mathbf{B}}\) for some \(\mathbf{B}\in\overline{\mathcal{B}}_{k}\) with \(\operatorname{rad}(\mathbf{B})>\kappa r\). In the first case, \(\operatorname{rad}(B)\in[r,\kappa r]\). In the second case \(\operatorname{rad}(B)\in[\delta_{-}\operatorname{rad}(\mathbf{B}),\delta_{+} \operatorname{rad}(\mathbf{B})]\), and thus \(\operatorname{rad}(B)\geq\delta_{-}\kappa r>r\) since \(\delta_{-}>\kappa^{-1}\) by the choice of \(\kappa\) at the beginning of the proof. In either case \(r\leq\operatorname{rad}(B)\leq S_{k+1}\). Therefore, for all \(B\in\overline{\mathcal{B}}_{k}\), for \(k=0,\ldots,L\), we have \(\operatorname{rad}(B)\in[r,S_{k}]\).
Now, for \(k=L\), we have \(\operatorname{rad}(B)\in[r,\kappa r]\), since \(S_{L}\leq\kappa r\). Now set \(\mathcal{V}=\overline{\mathcal{B}}_{L}\). We thus get the desired claim, since (5.22) gives the desired mass bound for \(\overline{\operatorname{Mod}}_{p,\tau}(\Gamma,\mathcal{V})\), since \(\overline{\rho}_{L}\overline{\wedge}_{\tau,\overline{\mathcal{B}}_{L}}\Gamma\), and we have already observed that for all \(V\in\mathcal{V}\) we have \(\kappa^{-1}r\leq r_{V}\leq r\).
### Estimate for Bourdon-Kleiner modulus
In this section, we use the algorithm in the previous subsection to give an explicit relationship between the Bourdon-Kleiner modulus from Subsection 3.1 and our new discrete modulus from Subsection 3.4. The basic idea is to use doubling to give an initial collection \(\mathcal{V}\), and then to use Lemma 5.16 to push the collection down to roughly uniform size with small modulus. This push-down operation is quantitative. Once the collection consists of balls of roughly the same size, we can apply Proposition 3.7 to compare the modulus to the Bourdon-Kleiner modulus of the same collection.
**Proposition 5.23**.: _Fix \(\kappa\geq 1,p\in(1,\infty)\). For each \(k\in\mathbb{N}\), let \(\mathcal{U}_{k}\) be a \(\kappa\)-approximation at scale \(2^{-k}\) for a compact LLC space \(X\). If \(X\) has uniformly small \(p\)-moduli of annuli, then for every \(\epsilon>0\), there exists an \(l\in\mathbb{N}\) such that for all \(z\in X\) and all \(k\geq 0\), we have_
\[\operatorname{Mod}_{p,\mathcal{U}_{l+k}}(\Gamma_{B(z,2^{-k}),2})\leq\epsilon.\]
Proof.: Fix \(k\in\mathbb{N}\) and \(\epsilon>0\). Let \(\tau=4\) and let \(l_{0}=\lceil\log_{2}(\tau)\rceil+4\). Let \(X\) have uniformly small \(p\)-moduli of annuli with constant \(\delta_{-}\in(0,\tau^{-1})\). Let \(\kappa^{\prime}\geq\kappa\) be the constant from Lemma 5.16, and let \(C\) be the constant associated to \(\kappa^{\prime},\tau\) and the space \(X\) which comes from Proposition 3.7. Set \(\epsilon_{0}=C^{-1}\epsilon\).
By doubling, we have that there is a constant \(D\) independent of \(k\) so that there are at most \(D\) many sets in \(\mathcal{U}_{k+l_{0}}\) which intersect \(B(z,2^{1-k})\). Set
\[\mathcal{B}_{0}=\{B(x_{U},r_{U}):U\in\mathcal{U}_{k+l_{0}},U\cap B(z,2^{1-k})\neq \emptyset\}\]
and set \(\rho_{0}(B)=1\) for all \(B\in\mathcal{B}_{0}\). Then, by applying the definition, and since \(\mathcal{B}_{0}\) covers \(B(z,2^{1-k})\), we see \(\rho_{0}\overline{\wedge}_{\tau,\mathcal{B}_{0}}\Gamma_{B(z,2^{-k}),2}\). By the size bound for \(\mathcal{B}_{0}\), we get
\[\sum_{B\in\mathcal{B}_{0}}\rho_{0}(B)^{p}\leq D.\]
By Lemma 5.16, there exists an integer \(N\in\mathbb{N}\) (which depends on \(\epsilon\), \(D\) and the constants in the uniformly small moduli condition) with the following property. For any \(r>0\) with \(\delta_{-}^{N}\inf\{\operatorname{rad}(B):B\in\mathcal{B}_{0}\}\geq r\) there is a collection of balls \(\mathcal{V}\) with
\[\overline{\operatorname{Mod}}_{p,\tau}(\Gamma,\mathcal{V})\leq\epsilon_{0}=C ^{-1}\epsilon.\]
and \(r_{V}\in[\kappa^{\prime-1}r,r]\) for all \(V\in\mathcal{V}\).
Now, if \(l\geq l_{0}+N\lceil\log_{2}(\delta_{-}^{-1})\rceil+1\), then we can choose \(r=2^{-k-l}\). Then, by Proposition 3.7, we get for the \(\kappa\)-approximation \(\mathcal{U}_{l+k}\) at level \(r\) that
\[\operatorname{Mod}_{p,\mathcal{U}_{l+k}}(\Gamma_{B(z,2^{-k}),2})\leq C \overline{\operatorname{Mod}}_{p,\tau}(\Gamma,\mathcal{V})\leq\epsilon.\]
### Proof of main theorem
Proof of Theorem 1.6.: Because the Ahlfors regular conformal dimension is always at least the conformal Hausdorff dimension, we have \(\dim_{CH}(X)\leq\dim_{CAR}(X)\). We are left to prove the converse inequality. Since \(X\) is connected, compact, locally connected and quasiself-similar, by Lemma 2.5, \(X\) is LLC.
Suppose that \(p\) is arbitrary and \(\dim_{CH}(X)<p\). Fix any sequence of \(\kappa\)-approximations \(\{\mathcal{U}_{k}\}_{k\in\mathbb{N}}\), where \(\mathcal{U}_{k}\) is at scale \(2^{-k}\). By Lemma 5.2, we have that \(X\) has uniformly small moduli of annuli. Then, by Proposition 5.23, we have that
\[\liminf_{m\to\infty}\sup_{x\in X,k\in\mathbb{N}}\{\operatorname{Mod}_{p}( \Gamma_{B(x,2^{-k}),2},\mathcal{U}):\mathcal{U}\text{ is a }\kappa-\text{ approximation at level }2^{-k-m}\}=0.\]
Then, Theorem 3.5 implies that \(\dim_{CAR}(X)\leq p\). Since \(p>\dim_{CH}(X)\) is arbitrary, this completes the proof.
|
2309.09208 | Data-driven control of nonlinear systems from input-output data | The design of controllers from data for nonlinear systems is a challenging
problem. In a recent paper, De Persis, Rotulo and Tesi, "Learning controllers
from data via approximate nonlinearity cancellation," IEEE Transactions on
Automatic Control, 2023, a method to learn controllers that make the
closed-loop system stable and dominantly linear was proposed. The approach
leads to a simple solution based on data-dependent semidefinite programs. The
method uses input-state measurements as data, while in a realistic setup it is
more likely that only input-output measurements are available. In this note we
report how the design principle of the above mentioned paper can be adjusted to
deal with input-output data and obtain dynamic output feedback controllers in a
favourable setting. | Xiaoyan Dai, Claudio De Persis, Nima Monshizadeh, Pietro Tesi | 2023-09-17T08:24:59Z | http://arxiv.org/abs/2309.09208v1 | # Data-driven control of nonlinear systems from input-output data*
###### Abstract
The design of controllers from data for nonlinear systems is a challenging problem. In a recent paper, De Persis, Rotulo and Tesi, "Learning controllers from data via approximate nonlinearity cancellation," IEEE Transactions on Automatic Control, 2023, a method to learn controllers that make the closed-loop system stable and dominantly linear was proposed. The approach leads to a simple solution based on data-dependent semidefinite programs. The method uses input-state measurements as data, while in a realistic setup it is more likely that only input-_output_ measurements are available. In this note we report how the design principle of the above mentioned paper can be adjusted to deal with input-output data and obtain dynamic output feedback controllers in a favourable setting.
## I Introduction
Learning controllers from data is of utmost importance and a fascinating topic, with foundations in both control theory and data science. Several recent approaches have been proposed for data-driven control, initially focusing, as is natural, on linear systems, e.g. [1, 2, 3, 4]. For nonlinear systems, some results have appeared as well, mostly focusing on special classes of nonlinear systems: bilinear [5, 6], polynomial [7, 8, 9], rational [10] or with quadratic nonlinearities [11, 12]. Other approaches consist of approximating general nonlinear control systems by classes for which data-driven design is possible [13, 14], or of expressing nonlinear systems via a dictionary of known functions, in which case the design can aim at making the closed-loop system dominantly linear [15] or at prescribing a desired output signal [16].
The understanding of the topic is far from having reached a mature phase, even in the case where full measurements of the state are available. Yet, it can be argued that the practical use of these data-dependent design schemes relies very much on the possibility that they work with output measurement data only, which frees the designer from the requirement of knowing the state of the system - a very restrictive prior in many cases. In this paper we report on some early results on using data-driven control techniques in conjunction with input/output data for discrete-time nonlinear systems.
_Related work._ Even when a model is known, output feedback control for nonlinear systems is a challenging open problem [17, Section 8.7]. The certainty equivalence principle, which is valid for linear systems, is hard to extend to a nonlinear setting. Nonetheless, certain nonlinear discrete-time versions of the certainty equivalence principle have been obtained [18]. In [19], the state in a globally stabilizing state feedback (possibly generated by a finite horizon model predictive scheme) is replaced by an estimate provided by an observer under a uniform observability assumption to obtain a globally stabilizing output feedback controller.
The important uniform observability property [20, 21, 22] can be explored in different ways in the context of learning control from data. Since it guarantees the existence of an injective map from input/output sequences to the state, deep neural networks can be trained to approximate such a map and provide estimates of the state to be used in the given input-to-state stabilizing feedback, obtaining a locally asymptotically stable closed-loop system [23]. The injective map can also be used to define the regression relating the input/output sequences of the system and deep neural networks can be used to learn such a regression [24]. However, to the best of our knowledge there are very few other attempts at designing controllers for nonlinear system from input/output data.
_Contribution._ The aim of this note is to start the investigation of feedback design from input/_output_ data for nonlinear discrete-time systems. We adopt the notion of uniform observability, which allows us to extend some of the design procedures introduced in [2]. Namely, we consider past inputs and outputs as fictitious state variables and obtain a form of the system for which the data-driven "state" feedback design techniques for nonlinear systems of [15] can be used. The implementation of the controller is then carried out by replacing the past input/output measurements with the quantities returned by a dead-beat observer of the output and a chain of integrators driven by the input. A formal analysis of the stability of the overall closed-loop system is then presented along with a discussion about the proposed solution.
In Section II we recall the notion of observability that we adopt for our analysis and introduce an auxiliary system that reproduces the input/output behaviour of the system to control. The auxiliary system is extended in Section III-A with a chain of integrators that provides the past inputs of the system to be used in the controller. The design of the output feedback dynamic controller based on input/output data is presented in Section III. The analysis of the closed-loop system to show the convergence of the system's and the controller's state to the origin is the topic of Section IV, along with a discussion of the result.
## II Preliminaries
We consider the single-input single-output nonlinear discrete-time system
\[\begin{array}{rl}x^{+}=&f(x,u)\\ y=&h(x)\end{array} \tag{1}\]
where \(x\in\mathbb{R}^{n}\), \(u,y\in\mathbb{R}\), \(f(0,0)=0\) and \(h(0)=0\). \(f,h\) are continuous functions of their arguments with domains \(\mathbb{R}^{n}\times\mathbb{R}\) and \(\mathbb{R}^{n}\). These functions are unknown. The dimension of the state-space \(n\) is not necessarily known.
### _Dataset_
A dataset consisting of open-loop input-output measurements
\[\mathcal{D}:=\left\{(u(k),y(k))\right\}_{k=0}^{N+T-1} \tag{2}\]
is available, where the positive integers \(N,T\) will be specified later. The samples in the dataset are obtained from off-line experiment(s) conducted on system (1), hence they satisfy the equations (1), namely
\[\begin{array}{rl}x(k+1)=&f(x(k),u(k))\\ y(k)=&h(x(k)),\quad\forall k=0,1,\ldots,N+T-1\end{array}\]
For our purpose of designing an output feedback controller from \(\mathcal{D}\) it is not required that all the samples of the dataset are sequentially obtained in a single experiment. In fact, even multiple experiments collecting \(N+T\) samples suffice. This is useful especially when dealing with unstable dynamics.
### _Uniform Observability_
The problem of interest is to design an output feedback controller that stabilizes the nonlinear system, based on the dataset \(\mathcal{D}\). To this purpose, we need to infer the behavior of the state \(x\) from input-output measurements, for which suitable "observability" conditions on the system (1) are required. Before stating them, we introduce some notation. We let
\[\begin{array}{rl}F^{0}(x):=&x\\ F^{1}(x,v_{0}):=&f(x,v_{0})\\ F^{k+1}(x,v_{0},\ldots,v_{k}):=&f(F^{k}(x,v_{0},\ldots,v_{k-1}),v_{k}),k\geq 1 \end{array} \tag{3}\]
Note that (3) gives \(x(k)=F^{N}(x(k-N),u_{[k-N,k-1]})\). To reduce the notational complexity, we introduce \(v_{[0,k]}\), which denotes the sequence of values \(v_{0},\ldots,v_{k}\). Hence, the last identity above is rewritten as \(F^{k+1}(x,v_{[0,k]}):=f(F^{k}(x,v_{[0,k-1]}),v_{k})\). In what follows, we will use symbols like \(v_{[0,k]}\) also to denote the vector \(\left[\begin{smallmatrix}v_{0}&v_{1}&\ldots&v_{k}\end{smallmatrix}\right]^{\top}\).
The following is the main assumption on system (1).
**Assumption 1**: _Let \(\mathcal{X}\subset\mathbb{R}^{n}\) and \(\mathcal{U}\subset\mathbb{R}\) be compact sets such that \(\mathcal{X}\times\mathcal{U}\) contains the origin of \(\mathbb{R}^{n+1}\). There exists \(N\in\mathbb{Z}_{>0}\) such that, for any \(v_{[0,N-2]}\in\mathcal{U}^{N-1}\), the mapping_
\[\Phi_{N}(x,v_{[0,N-2]})=\begin{bmatrix}h\circ F^{0}(x)\\ h\circ F^{1}(x,v_{0})\\ \vdots\\ h\circ F^{N-1}(x,v_{[0,N-2]})\end{bmatrix} \tag{4}\]
_is injective as a function of \(x\) on \(\mathcal{X}\). \(\Box\)_
Following [22, Definition 1], we refer to the assumption above as a uniform observability on \(\mathcal{X}\) property. It is observed in [22] that, if \(f,h\) are continuously differentiable functions, uniform observability is not restrictive in the sense that a nonuniform distinguishability property and a nonuniform observability rank condition imply uniform observability. Since for any \(M\geq N\) the mapping \(\Phi_{M}\) remains injective, we do not need to know the smallest \(N\) for which Assumption 1 holds.
For any \(v_{[0,N-2]}\in\mathcal{U}^{N-1}\), the function
\[\Phi_{N}(\cdot,v_{[0,N-2]})\colon\mathcal{X}\to\mathbb{R}^{N}\]
such that \(x\mapsto w=\Phi_{N}(x,v_{[0,N-2]})\), is injective on \(\mathcal{X}\) and one can define a left inverse
\[\Psi_{N}(\cdot,v_{[0,N-2]})\colon\Phi_{N}(\mathcal{X},v_{[0,N-2]})\to\mathbb{R }^{n}\]
such that \(\Psi_{N}(\Phi_{N}(x,v_{[0,N-2]}),v_{[0,N-2]})=x\) for all \(x\in\mathcal{X}\).
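As an illustration of this construction, the map \(\Phi_{N}\) in (4) can be evaluated numerically by iterating the maps \(f\) and \(h\); the sketch below assumes user-supplied functions `f(x, u)` and `h(x)` (they are not provided by the note) and is only meant to make the stacking in (4) explicit.

```python
# A minimal sketch of evaluating Phi_N of (4) for user-supplied maps f(x,u), h(x);
# f, h and the input sequence v are assumptions of this illustration.
import numpy as np

def phi_N(f, h, x, v, N):
    """Stack h(F^0(x)), h(F^1(x, v_0)), ..., h(F^{N-1}(x, v_0,...,v_{N-2}))."""
    outputs, state = [], np.asarray(x, dtype=float)
    for k in range(N):
        outputs.append(h(state))
        if k < N - 1:              # F^{k+1} consumes the inputs v_0, ..., v_k
            state = f(state, v[k])
    return np.array(outputs)
```

Under Assumption 1, a numerical left inverse \(\Psi_{N}(\cdot,v_{[0,N-2]})\) could then be obtained, for instance, by solving \(\Phi_{N}(x,v_{[0,N-2]})=w\) for \(x\) with a root-finding routine.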
### _An auxiliary system_
We introduce a system equivalent to (1) which is better suited for control design. By equivalent it is meant that the new system has the same input-output behavior of system (1) when properly initialized. We use this auxiliary system for control design purposes. Later on we show the effect of the designed controller on the actual system (1).
For any \(v_{[0,N-1]}\in\mathbb{R}^{N}\), define the functions
\[\begin{array}{rl}\psi(w,v_{[0,N-1]}):=&F^{N}(\Psi_{N}(w,v_{[0,N-2]}),v_{[0, N-1]})\\ \tilde{h}(w,v_{[0,N-1]}):=&h\circ\psi(w,v_{[0,N-1]})\\ \tilde{f}(w,v_{[0,N-1]}):=&A_{c}w+B_{c}\tilde{h}(w,v_{[0,N-1]})\end{array} \tag{5}\]
with the pair \((A_{c},B_{c})\in\mathbb{R}^{N\times N}\times\mathbb{R}^{N}\) in the Brunovsky form. The domain of \(\psi(\cdot,v_{[0,N-1]}),\tilde{h}(\cdot,v_{[0,N-1]})\), \(\tilde{f}(\cdot,v_{[0,N-1]})\) is \(\Phi_{N}(\mathcal{X},v_{[0,N-2]})\). Under the standing assumptions on \(f,h\), these functions are continuous and zero at \((w,v)=(0,0)\).
In the result below, for a \(k\in\mathbb{Z}\), we let \(u_{[k-N,k-1]}\) be an input sequence applied to system (1) and \(y_{[k-N,k-1]}\) its output response from some initial condition \(x(k-N)\).
**Lemma 1**: _Let system (1) satisfy Assumption 1. Consider arbitrary \(k_{0}\in\mathbb{Z}\), \(x(k-N)\in\mathcal{X}\) and \(u_{[k-N,k-1]}\in\mathcal{U}^{N}\) for all \(k\in\mathbb{Z}_{\geq k_{0}}\). Consider the system_
\[\begin{array}{rl}w^{+}=&\tilde{f}(w,v)\\ y_{w}=&\tilde{h}(w,v)\end{array} \tag{6}\]
_with \(\tilde{f},\tilde{h}\) defined in (5). If the input \(v(k)\) applied to (6) satisfies \(v(k)=u_{[k-N,k-1]}\) for all \(k\in\mathbb{Z}_{\geq k_{0}}\) and the initial condition of (6) is set to \(w(k_{0})=y_{[k_{0}-N,k_{0}-1]}\), then_
\[w(k)=y_{[k-N,k-1]},\quad y_{w}(k)=y(k),\quad\forall k\in\mathbb{Z}_{\geq k_{0}}.\]
_Furthermore, \(x(k)=\psi(w(k),v(k))\), for all \(k\in\mathbb{Z}_{\geq k_{0}}\). \(\Box\)_
_Proof._ For the sake of completeness, it is given in Appendix VI-A.
**Example 1**: _We consider [15, Example 5]_
\[x_{1}^{+} =x_{1}+T_{s}x_{2} \tag{7a}\] \[x_{2}^{+} =\frac{T_{s}g}{\ell}\sin x_{1}+\left(1-\frac{T_{s}\mu}{m\ell^{2}} \right)x_{2}+\frac{T_{s}}{m\ell}(\cos x_{1})u\,, \tag{7b}\]
_with \(y=x_{1}\). We compute_
\[\Phi_{2}(x,v)=\begin{bmatrix}x_{1}\\ x_{1}+T_{s}x_{2}\end{bmatrix}\]
_which is globally invertible (Assumption 1 holds with \(N=2\)), with_
\[\Psi_{2}(w,v)=\begin{bmatrix}w_{1}\\ \frac{w_{2}-w_{1}}{T_{s}}\end{bmatrix}\]
_Hence \(\psi(w,v_{0},v_{1})=\operatorname{col}(\psi_{1}(w,v_{0},v_{1}),\psi_{2}(w,v_{ 0},v_{1}))\), where_
\[\psi_{1}(w,v_{0},v_{1})= w_{2}+\frac{T_{s}^{2}g}{\ell}\sin w_{1}+\left(1-\frac{T_{s}\mu}{m \ell^{2}}\right)(w_{2}-w_{1}) \tag{8}\] \[+\frac{T_{s}^{2}}{m\ell}(\cos w_{1})v_{0}\] \[\psi_{2}(w,v_{0},v_{1})= \frac{T_{s}g}{\ell}\sin w_{2}+\left(1-\frac{T_{s}\mu}{m\ell^{2}} \right)\left(\frac{T_{s}g}{\ell}\sin w_{1}\right.\] \[\left.+\left(1-\frac{T_{s}\mu}{m\ell^{2}}\right)\frac{w_{2}-w_{1 }}{T_{s}}+\frac{T_{s}}{m\ell}(\cos w_{1})v_{0}\right)+\frac{T_{s}}{m\ell}( \cos w_{2})v_{1}\]
_From which, one computes_
\[\tilde{h}(w,v_{0})=w_{2}+\frac{T_{s}^{2}g}{\ell}\sin w_{1}+\left( 1-\frac{T_{s}\mu}{m\ell^{2}}\right)(w_{2}-w_{1})\] \[+\frac{T_{s}^{2}}{m\ell}(\cos w_{1})v_{0}\] \[=\left(-1+\frac{T_{s}\mu}{m\ell^{2}}\right)w_{1}+\left(2-\frac{T_ {s}\mu}{m\ell^{2}}\right)w_{2}+\frac{T_{s}^{2}g}{\ell}\sin w_{1}\] \[+\frac{T_{s}^{2}}{m\ell}(\cos w_{1})v_{0}\]
_Hence the equivalent representation is given by_
\[w^{+}=\begin{bmatrix}w_{2}\\ \tilde{h}(w,v_{0})\end{bmatrix},\quad y_{w}=\tilde{h}(w,v_{0})\]
_The original state \(x\) is obtainable from the solution of the system above via the expression_
\[x=\psi(w,v_{0},v_{1})\]
_where \(\psi\) is as in (8). For this example \(\mathcal{X}=\mathbb{R}^{2}\) and \(\mathcal{U}=\mathbb{R}\)._
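A quick numerical sanity check of Lemma 1 on this example is sketched below (this is not the authors' code; the parameter values are the ones used in the numerical example of Section V, while the input sequence and initial state are arbitrary choices of this illustration).

```python
# Sketch: verify that the w-system reproduces the output of the pendulum (7) (Lemma 1, N=2).
import numpy as np

Ts, m, l, g, mu = 0.1, 1.0, 1.0, 9.8, 0.01

def f(x, u):                                   # pendulum dynamics (7)
    return np.array([x[0] + Ts * x[1],
                     Ts * g / l * np.sin(x[0]) + (1 - Ts * mu / (m * l**2)) * x[1]
                     + Ts / (m * l) * np.cos(x[0]) * u])

def h(x):                                      # output map y = x1
    return x[0]

def h_tilde(w, v0):                            # equivalent representation derived above
    return ((-1 + Ts * mu / (m * l**2)) * w[0] + (2 - Ts * mu / (m * l**2)) * w[1]
            + Ts**2 * g / l * np.sin(w[0]) + Ts**2 / (m * l) * np.cos(w[0]) * v0)

u = np.random.default_rng(0).uniform(-0.5, 0.5, size=30)
x, y = np.array([0.2, -0.1]), []
y.append(h(x))
for uk in u:                                   # simulate (7)
    x = f(x, uk)
    y.append(h(x))

w = np.array([y[0], y[1]])                     # w(2) = (y(0), y(1))
for k in range(2, len(u)):
    y_w = h_tilde(w, u[k - 2])                 # h_tilde only uses v_0 = u(k-2)
    assert abs(y_w - y[k]) < 1e-8              # y_w(k) = y(k)
    w = np.array([w[1], y_w])                  # w^+ = (w_2, h_tilde(w, v_0))
```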
## III Design of an output feedback controller from data
### _A dynamic extension_
System (6) is driven by the past \(N\) samples of \(u\), which is the input to (1). These past values are obtained by adding a chain of integrators to the dynamics (6)
\[\xi^{+}= A_{c}\,\xi+B_{c}u \tag{9}\]
with the interconnection condition
\[v=\xi\]
which returns the system
\[\begin{array}{rl}w^{+}=&\tilde{f}(w,\xi)\\ \xi^{+}=&A_{c}\,\xi+B_{c}u\\ y=&\tilde{h}(w,\xi)\end{array} \tag{10}\]
Once the system's state satisfies \((w(\overline{k}),\xi(\overline{k}))=(y_{[\overline{k}-N,\overline{k}-1]},u_{[\overline{k}-N,\overline{k}-1]})\) for some \(\overline{k}\in\mathbb{Z}\), the input-output behavior of this system matches the one of (1) for all \(k\geq\overline{k}\). We will discuss later on the availability of such an initial condition at a time \(\overline{k}\).
### _Control input design_
To obtain \(u\) that drives the chain of integrators making the dynamic controller, we argue as in [2, 15]. We first introduce the following:
**Assumption 2**: _For any \(\xi\in\mathcal{U}^{N}\) and any \(w\in\Phi_{N}(\mathcal{X},\xi_{[1,N-1]})\), where \(\xi_{[1,N-1]}\) denotes the first \(N-1\) entries of \(\xi\), it holds that \(\tilde{h}(w,\xi)=\alpha Z(w,\xi)\), where \(Z(w,\xi)\in\mathbb{R}^{S}\) is a vector of known continuous functions and \(\alpha\in\mathbb{R}^{1\times S}\) is an unknown vector._
This is a technical assumption due to the need to give the nonlinearities of (10) a form for which the controller design is possible. Although it is restrictive, [15, Section VI.B] bypasses such an assumption by expressing \(\tilde{h}(w,\xi)\) as \(\alpha Z(w,\xi)+d(w,\xi)\), where the term \(d(w,\xi)\) represents the nonlinearities that were excluded from \(Z(w,\xi)\), and then analyzing the stability of the system in the presence of the neglected nonlinearity \(d(w,\xi)\). This analysis goes beyond the scope of this paper.
We consider the case in which the function \(Z(w,\xi)\) comprises both a linear part and a nonlinear part \(Q(w,\xi)\), i.e.
\[Z(w,\xi)=\begin{bmatrix}w\\ \xi\\ Q(w,\xi)\end{bmatrix}\]
The system (10) can then be written as
\[\begin{bmatrix}w^{+}\\ \xi^{+}\end{bmatrix}= A\begin{bmatrix}w\\ \xi\end{bmatrix}+B_{1}u+B_{2}\alpha Z(w,\xi) \tag{11}\] \[y= \alpha Z(w,\xi)\]
where
\[A:=\begin{bmatrix}A_{c}&0\\ 0&A_{c}\end{bmatrix},\;B_{1}:=\begin{bmatrix}0\\ B_{c}\end{bmatrix},\;B_{2}:=\begin{bmatrix}B_{c}\\ 0\end{bmatrix}\]
and the pair \((A_{c},B_{c})\) is in the Brunovsky canonical form.
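For concreteness, a possible numerical construction of these matrices (an assumed implementation, not prescribed by the note) is sketched below; \(A_{c}\) acts as a shift register on the last \(N\) samples, consistently with (9) and (27).

```python
# Sketch of the Brunovsky pair (A_c, B_c) and the block matrices A, B_1, B_2 of (11).
import numpy as np

def brunovsky(N):
    Ac = np.diag(np.ones(N - 1), k=1)          # shift: (v_{k-N},...,v_{k-1}) -> (v_{k-N+1},...,v_k)
    Bc = np.zeros((N, 1)); Bc[-1, 0] = 1.0     # the new sample enters the last entry
    return Ac, Bc

N = 2
Ac, Bc = brunovsky(N)
A  = np.block([[Ac, np.zeros((N, N))], [np.zeros((N, N)), Ac]])
B1 = np.vstack([np.zeros((N, 1)), Bc])         # the input u drives the integrator chain xi
B2 = np.vstack([Bc, np.zeros((N, 1))])         # the nonlinearity alpha*Z drives the w-block
```

For \(N=2\) this gives \(A_{c}=\begin{bmatrix}0&1\\ 0&0\end{bmatrix}\) and \(B_{c}=\begin{bmatrix}0\\ 1\end{bmatrix}\).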
We focus on the case in which the input \(u\) is designed as a function of \(Z(w,\xi)\), i.e.
\[u=\kappa Z(w,\xi) \tag{12}\]
where \(\kappa\in\mathbb{R}^{1\times S}\) is the control gain. Write the closed-loop system (11)-(12) as
\[\begin{bmatrix}w^{+}\\ \xi^{+}\end{bmatrix}= A\begin{bmatrix}w\\ \xi\end{bmatrix}+B_{1}\kappa Z(w,\xi)+B_{2}\alpha Z(w,\xi) \tag{13}\] \[y= \alpha Z(w,\xi)\]
The system is defined for any \(\xi\in\mathcal{U}^{N}\) and any \(w\in\Phi_{N}(\mathcal{X},\xi_{[1,N-1]})\).
### _Data-dependent representation of the closed-loop system_
Preliminary to the design of the controller is a data-dependent representation of the closed-loop system. We first introduce some notation. Recall the dataset in (2) and introduce, for \(i=0,\ldots,T-1\),
\[U(i):=\begin{bmatrix}u(i)\\ u(i+1)\\ \vdots\\ u(i+N-1)\end{bmatrix},Y(i):=\begin{bmatrix}y(i)\\ y(i+1)\\ \vdots\\ y(i+N-1)\end{bmatrix}\]
We assume that the samples of the dataset evolve in the domain of definition of (13).
**Assumption 3**: _For any \(i=0,\ldots,T-1\), \(U(i)\in\mathcal{U}^{N}\) and \(Y(i)\in\Phi_{N}(\mathcal{X},u_{[i,i+N-2]})\). \({}_{\Box}\)_
We let:
\[\begin{array}{rcll}Y_{0}&:=\begin{bmatrix}Y(0)&Y(1)&\ldots&Y(T-1)\end{bmatrix} \\ V_{0}&:=\begin{bmatrix}U(0)&U(1)&\ldots&U(T-1)\end{bmatrix}\\ Y_{1}&:=\begin{bmatrix}Y(1)&Y(2)&\ldots&Y(T)\end{bmatrix}\\ V_{1}&:=\begin{bmatrix}U(1)&U(2)&\ldots&U(T)\end{bmatrix}\\ Q_{0}&:=\begin{bmatrix}Q(0)&Q(1)&\ldots&Q(T-1)\end{bmatrix}\\ U_{0}&:=\begin{bmatrix}u(N)&u(N+1)&\ldots u(N+T-1)\end{bmatrix}\end{array} \tag{14}\]
In the definition of \(Q_{0}\), we are using the shorthand notation \(Q(i)\) for \(Q(Y(i),U(i))\). Under Assumption 3, bearing in mind the dynamics (11), the dataset-dependent matrices introduced in (14) satisfy
\[\begin{bmatrix}Y_{1}\\ V_{1}\end{bmatrix}=A\begin{bmatrix}Y_{0}\\ V_{0}\end{bmatrix}+B_{1}U_{0}+B_{2}\alpha\begin{bmatrix}Y_{0}\\ V_{0}\\ Q_{0}\end{bmatrix} \tag{15}\]
**Remark 1**: _(Multiple experiments) This identity is obtained from the \(T\) identities_
\[\begin{bmatrix}Y(i+1)\\ U(i+1)\end{bmatrix}=A\begin{bmatrix}Y(i)\\ U(i)\end{bmatrix}+B_{1}u(i+N)+B_{2}\alpha\begin{bmatrix}Y(i)\\ U(i)\\ Q(i)\end{bmatrix},\] \[i=0,\ldots,T-1\]
_We note that, for each \(i\), the identity does not require the quantities \(Y(i),U(i),Y(i+1),U(i+1),u(i)\) to be related to the corresponding quantities for \(i+1\). In other words, we could run \(T\)\(N\)-long independent experiments and collect the resulting input-output samples in_
\[Y_{0}^{j} :=\begin{bmatrix}y^{j}(0)\\ y^{j}(1)\\ \vdots\\ y^{j}(N-1)\end{bmatrix},\;U_{0}^{j}:=\begin{bmatrix}u^{j}(0)\\ u^{j}(1)\\ \vdots\\ u^{j}(N-1)\end{bmatrix},\] \[Y_{1}^{j} :=\begin{bmatrix}y^{j}(1)\\ y^{j}(2)\\ \vdots\\ y^{j}(N)\end{bmatrix},\;U_{1}^{j}:=\begin{bmatrix}u^{j}(1)\\ u^{j}(2)\\ \vdots\\ u^{j}(N)\end{bmatrix}\]
_where \(j=0,\ldots,T-1\) denotes the number of the experiment, and \(\{u^{j}(k),y^{j}(k)\}_{k=0}^{N}\) are the input-output samples of the experiment \(j\). We could then redefine the matrices in (14) as_
\[Y_{0} :=\begin{bmatrix}Y_{0}^{0}&Y_{0}^{1}&\ldots&Y_{0}^{T-1}\end{bmatrix}\] \[V_{0} :=\begin{bmatrix}U_{0}^{0}&U_{0}^{1}&\ldots&U_{0}^{T-1}\end{bmatrix}\] \[Y_{1} :=\begin{bmatrix}Y_{1}^{0}&Y_{1}^{1}&\ldots&Y_{1}^{T-1}\end{bmatrix}\] \[V_{1} :=\begin{bmatrix}U_{1}^{0}&U_{1}^{1}&\ldots&U_{1}^{T-1}\end{bmatrix}\] \[Q_{0} :=\begin{bmatrix}Q_{0}^{0}&Q_{0}^{1}&\ldots&Q_{0}^{T-1}\end{bmatrix}\] \[U_{0} :=\begin{bmatrix}u^{0}(N)&u^{1}(N)&\ldots u^{T-1}(N)\end{bmatrix}\]
_and the identity (15) would still apply. \({}_{\blacksquare}\)_
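For illustration, the data matrices of (14), in the multiple-experiment form of this remark, can be assembled as sketched below; the function `Q_func`, returning the nonlinear part \(Q(w,\xi)\) of the dictionary, is an assumption of the sketch (in Section V it consists of \(\sin w_{1}-w_{1}\) and \(\xi_{1}\cos w_{1}-\xi_{1}\)).

```python
# Sketch: building Y_0, V_0, Y_1, V_1, Q_0, U_0 from T independent experiments of length N+1.
import numpy as np

def build_data_matrices(u_exps, y_exps, Q_func, N):
    """u_exps, y_exps: lists of length-(N+1) arrays; Q_func(w, xi) returns Q of the dictionary."""
    Y0, V0, Y1, V1, Q0, U0 = [], [], [], [], [], []
    for u, y in zip(u_exps, y_exps):
        Y0.append(y[0:N]);      V0.append(u[0:N])
        Y1.append(y[1:N + 1]);  V1.append(u[1:N + 1])
        Q0.append(Q_func(y[0:N], u[0:N]))
        U0.append(u[N])
    cols = np.column_stack
    return cols(Y0), cols(V0), cols(Y1), cols(V1), cols(Q0), np.array(U0)[None, :]
```

With \(T=7\), \(N=2\) and the dictionary of Section V, this produces \(2\times 7\) blocks \(Y_{0},V_{0},Y_{1},V_{1}\), a \(2\times 7\) block \(Q_{0}\) and the \(1\times 7\) row \(U_{0}\).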
We establish the following:
**Lemma 2**: _Let Assumptions 1, 2 and 3 hold. Consider any matrices \(\kappa\in\mathbb{R}^{1\times S},G\in\mathbb{R}^{T\times S}\) that satisfy the relation_
\[\begin{bmatrix}\kappa\\ \hline I_{S}\end{bmatrix}=\begin{bmatrix}U_{0}\\ \hline Y_{0}\\ V_{0}\\ Q_{0}\end{bmatrix}G \tag{16}\]
_and partition \(G\) as_
\[G=\begin{bmatrix}G_{1}&G_{2}\end{bmatrix}\]
_where \(G_{1}\in\mathbb{R}^{T\times 2N},G_{2}\in\mathbb{R}^{T\times(S-2N)}\). Then the closed-loop system (13) can be written as_
\[\begin{bmatrix}w^{+}\\ \xi^{+}\end{bmatrix}=M\begin{bmatrix}w\\ \xi\end{bmatrix}+NQ(w,\xi)\]
_where_
\[M=\mathcal{X}_{1}G_{1},\;N=\mathcal{X}_{1}G_{2},\;\mathcal{X}_{1}=\begin{bmatrix} Y_{1}\\ V_{1}\end{bmatrix}. \tag{17}\]
\({}_{\Box}\)_
_Proof._ By (16), \(\kappa=U_{0}G\) and \(\begin{bmatrix}Y_{0}\\ V_{0}\\ Q_{0}\end{bmatrix}G=I_{S}\); in particular \(\begin{bmatrix}Y_{0}\\ V_{0}\end{bmatrix}G_{1}=I_{2N}\) and \(\begin{bmatrix}Y_{0}\\ V_{0}\end{bmatrix}G_{2}=0\). Moreover, by (15),
\[B_{1}\kappa+B_{2}\alpha=\left(B_{1}U_{0}+B_{2}\alpha\begin{bmatrix}Y_{0}\\ V_{0}\\ Q_{0}\end{bmatrix}\right)G=\left(\begin{bmatrix}Y_{1}\\ V_{1}\end{bmatrix}-A\begin{bmatrix}Y_{0}\\ V_{0}\end{bmatrix}\right)G.\]
Hence,
\[A+\left(\begin{bmatrix}Y_{1}\\ V_{1}\end{bmatrix}-A\begin{bmatrix}Y_{0}\\ V_{0}\end{bmatrix}\right)G_{1}=\mathcal{X}_{1}G_{1},\qquad\left(\begin{bmatrix}Y_{1}\\ V_{1}\end{bmatrix}-A\begin{bmatrix}Y_{0}\\ V_{0}\end{bmatrix}\right)G_{2}=\mathcal{X}_{1}G_{2},\]
so that the closed-loop system (13) reads as claimed, with \(M=\mathcal{X}_{1}G_{1}\) and \(N=\mathcal{X}_{1}G_{2}\).
Let the set of real-valued symmetric matrices of dimension \(n\times n\) be denoted by \(\mathbb{S}^{n\times n}\). This data-dependent representation leads to the following local stabilization result:
**Proposition 1**: _Let Assumptions 1, 2 and 3 hold. Consider the following SDP in the decision variables \(\mathcal{P}_{1}\in\mathbb{S}^{2N\times 2N}\), \(\mathcal{Y}_{1}\in\mathbb{R}^{T\times 2N}\), and \(G_{2}\in\mathbb{R}^{T\times(S-2N)}\):_
\[\operatorname{minimize}_{\mathcal{P}_{1},\mathcal{Y}_{1},G_{2}} \|\mathcal{X}_{1}G_{2}\| \tag{18a}\] \[\text{subject to}\quad\begin{bmatrix}Y_{0}\\ V_{0}\\ Q_{0}\end{bmatrix}\mathcal{Y}_{1}=\begin{bmatrix}\mathcal{P}_{1}\\ 0_{(S-2N)\times 2N}\end{bmatrix},\] (18b) \[\begin{bmatrix}\mathcal{P}_{1}&(\mathcal{X}_{1}\mathcal{Y}_{1})^{ \top}\\ \mathcal{X}_{1}\mathcal{Y}_{1}&\mathcal{P}_{1}\end{bmatrix}\succ 0\,,\] (18c) \[\begin{bmatrix}Y_{0}\\ V_{0}\\ Q_{0}\end{bmatrix}G_{2}=\begin{bmatrix}0_{2N\times(S-2N)}\\ I_{S-2N}\end{bmatrix}\,. \tag{18d}\]
_Assume that_
\[\lim_{|(w,\xi)|\to 0}\frac{|Q(w,\xi)|}{|(w,\xi)|}=0\,. \tag{19}\]
_If the SDP is feasible then_
\[\xi^{+}=A_{c}\xi+B_{c}u \tag{20}\]
_with_
\[u=\kappa Z(w,\xi) \tag{21}\]
_and \(\kappa\) as in_
\[\kappa=U_{0}\begin{bmatrix}\mathcal{Y}_{1}&G_{2}\end{bmatrix}\begin{bmatrix} \mathcal{P}_{1}^{-1}&0_{2N\times(S-2N)}\\ 0_{(S-2N)\times 2N}&I_{S-2N}\end{bmatrix} \tag{22}\]
_renders the origin \((\overline{w},\overline{\xi})=(0,0)\) an asymptotically stable equilibrium of_
\[\begin{array}{ll}w^{+}=&\tilde{f}(w,\xi)\\ \xi^{+}=&A_{c}\xi+B_{c}\kappa Z(w,\xi)\\ y=&\tilde{h}(w,\xi).\end{array} \tag{23}\]
_Proof._ Set \(G_{1}=\mathcal{Y}_{1}\mathcal{P}_{1}^{-1}\). Then (18b), (18d) imply
\[I_{S}=\begin{bmatrix}Y_{0}\\ V_{0}\\ Q_{0}\end{bmatrix}\begin{bmatrix}G_{1}&G_{2}\end{bmatrix}\]
which along with the definition of \(\kappa\) in (22), namely \(\kappa=U_{0}\left[\begin{smallmatrix}G_{1}&G_{2}\end{smallmatrix}\right]\), implies (16). Hence, the data-dependent representation of system (13) given in Lemma 2 holds. By Schur complement, the constraint (18c) is equivalent to
\[\mathcal{P}_{1}-(\mathcal{X}_{1}\mathcal{Y}_{1})^{\top}\mathcal{P}_{1}^{-1}( \mathcal{X}_{1}\mathcal{Y}_{1})\succ 0\,,\]
Pre- and post-multiplying by \(\mathcal{P}_{1}^{-1}\) and bearing in mind the definition of \(G_{1}\) we obtain
\[\mathcal{P}_{1}^{-1}-(\mathcal{X}_{1}G_{1})^{\top}\mathcal{P}_{1}^{-1}( \mathcal{X}_{1}G_{1})\succ 0\,,\]
which shows that \(V(w,\xi)=\begin{bmatrix}w^{\top}&\xi^{\top}\end{bmatrix}\mathcal{P}_{1}^{-1} \begin{bmatrix}w\\ \xi\end{bmatrix}\) is a Lyapunov function for the linear part of the closed-loop system. In particular note that the domain of definition of the function \(V(w,\xi)\) is the same as the one of system (23), hence, \(V(w,\xi)\) is defined at the origin. We have
\[V(w^{+},\xi^{+})-V(w,\xi)\] \[= \begin{bmatrix}w^{\top}&\xi^{\top}\end{bmatrix}((\mathcal{X}_{1}G _{1})^{\top}\mathcal{P}_{1}^{-1}(\mathcal{X}_{1}G_{1})-\mathcal{P}_{1}^{-1} )\begin{bmatrix}w\\ \xi\end{bmatrix}\] \[+2\begin{bmatrix}w^{\top}&\xi^{\top}\end{bmatrix}(\mathcal{X}_{1}G _{1})^{\top}\mathcal{P}_{1}^{-1}\mathcal{X}_{1}G_{2}Q(w,\xi)\] \[+Q(w,\xi)^{\top}(\mathcal{X}_{1}G_{2})^{\top}\mathcal{P}_{1}^{-1} \mathcal{X}_{1}G_{2}Q(w,\xi)\]
In view of (19), \(V(w^{+},\xi^{+})-V(w,\xi)<0\) in a neighborhood of the origin. This shows the claim.
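A possible transcription of the SDP (18) in Python/cvxpy is sketched below (the numerical example of Section V uses YALMIP and MOSEK instead); the norm in (18a) is taken here as the Frobenius norm and strict definiteness in (18c) is imposed up to a small margin, both of which are assumptions of this sketch.

```python
# Sketch of the SDP (18) and of the gain (22); Y0, V0, Q0, Y1, V1, U0 are the data matrices of (14).
import numpy as np
import cvxpy as cp

def design_gain(Y0, V0, Q0, Y1, V1, U0, eps=1e-6):
    N2, T = Y0.shape[0] + V0.shape[0], Y0.shape[1]        # N2 = 2N
    S = N2 + Q0.shape[0]
    W0 = np.vstack([Y0, V0, Q0])                          # [Y_0; V_0; Q_0]
    X1 = np.vstack([Y1, V1])                              # mathcal{X}_1

    P1 = cp.Variable((N2, N2), symmetric=True)
    Yv = cp.Variable((T, N2))                             # mathcal{Y}_1
    G2 = cp.Variable((T, S - N2))

    constraints = [
        W0 @ Yv == cp.vstack([P1, np.zeros((S - N2, N2))]),                   # (18b)
        cp.bmat([[P1, (X1 @ Yv).T], [X1 @ Yv, P1]]) >> eps * np.eye(2 * N2),  # (18c)
        W0 @ G2 == np.vstack([np.zeros((N2, S - N2)), np.eye(S - N2)]),       # (18d)
    ]
    cp.Problem(cp.Minimize(cp.norm(X1 @ G2, 'fro')), constraints).solve()     # (18a)

    kappa = U0 @ np.hstack([Yv.value @ np.linalg.inv(P1.value), G2.value])    # (22)
    return kappa
```

The returned \(\kappa\) is the \(1\times S\) gain of (12); whether the program is feasible depends, as in Proposition 1, on the collected data.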
### _Region of Attraction_
Proposition 1 provides a local stabilization result. Following [15], Proposition 1 can be extended to provide an estimate of the Region of Attraction (ROA) of the system (23). First we recall the following definitions.
**Definition 1**: _[_25_, Definition 13.2]_ _Suppose that \(\overline{x}=0\) is an asymptotically stable equilibrium for \(x^{+}=f(x)\). Then the ROA of \(x^{+}=f(x)\) is given by_
\[\mathcal{A}_{0}=\{x_{0}\colon\lim_{k\to\infty}s_{k}(x_{0})=0\}\]
_where \(s_{k}(x_{0})\) is the solution to \(x^{+}=f(x)\) at time \(k\geq k_{0}\) from the initial condition \(x_{0}\). \(\Box\)_
**Definition 2**: _[_25_, Definition 13.4]_ _A set \(\mathcal{M}\subset\mathbb{R}^{n}\) is a positively invariant set for \(x^{+}=f(x)\) if \(s_{k}(\mathcal{M})\subseteq\mathcal{M}\) for all \(k\geq k_{0}\), where \(s_{k}(\mathcal{M})=\{s_{k}(x_{0})\colon x_{0}\in\mathcal{M}\}\). \(\Box\)_
Recall the Lyapunov difference
\[V(w^{+},\xi^{+})-V(w,\xi)\] \[= \begin{pmatrix}M\begin{bmatrix}w\\ \xi\end{bmatrix}+NQ(w,\xi)\end{pmatrix}^{\top}\mathcal{P}_{1}^{-1}\begin{pmatrix} M\begin{bmatrix}w\\ \xi\end{bmatrix}+NQ(w,\xi)\end{pmatrix}\] \[-\begin{bmatrix}w\\ \xi\end{bmatrix}^{\top}\mathcal{P}_{1}^{-1}\begin{bmatrix}w\\ \xi\end{bmatrix}=:\mathcal{W}(w,\xi)\]
with \(M,N\) as in (17).
**Corollary 1**: _Consider the same setting as Proposition 1. Let1\(\mathcal{V}:=\{(w,\xi)\colon\mathcal{W}(w,\xi)<0\}\). Any sublevel set \(\mathcal{R}_{\gamma}=\{(w,\xi)\colon V(w,\xi)\leq\gamma\}\) contained in \(\mathcal{V}\cup\{0\}\) is positively invariant for system (23) and defines an estimate of the ROA of system (23). \(\Box\)_
Footnote 1: Although not indicated explicitly, \(\mathcal{V}\) is a subset of the domain of definition of \(V(w,\xi)\).
As the function \(\mathcal{W}(w,\xi)\) is known from the data, the estimate of the ROA \(\mathcal{R}_{\gamma}\) is computable.
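As an illustration of how \(\gamma\) could be selected in practice, a naive sampling procedure is sketched below; the sampling-based search is an assumption of this sketch and not the procedure of [15] or of this note, and `V`, `W` stand for the data-dependent functions defined above.

```python
# Sketch: pick gamma so that the sublevel set R_gamma avoids the region where W(w, xi) >= 0.
import numpy as np

def estimate_gamma(V, W, sampler, n_samples=100_000, margin=0.99):
    """V, W: callables on stacked (w, xi) points; sampler(n) returns an (n, dim) array of points."""
    pts = sampler(n_samples)
    bad = np.array([W(p) >= 0 for p in pts])   # points violating the Lyapunov decrease
    if not bad.any():
        return np.inf                          # no violation found in the sampled region
    return margin * min(V(p) for p in pts[bad])
```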
## IV Main result
To draw conclusions on the convergence of system (1), we first observe that the dynamical controller (20) uses its own state \(\xi\) and the state \(w\) to generate the control action \(u=\kappa Z(\xi,w)\). At time \(k\) the state \(w(k)\) contains the past \(N\) output measurements from the process (1), from which we
only measure \(y(k)\). To make the past measurements in \(w(k)\) available to the controller, we extend it with the dynamics
\[\eta^{+}=A_{c}\eta+B_{c}y \tag{24}\]
Then, for any \(k_{0}\in\mathbb{Z}\) and any \(\eta(k_{0})\in\mathbb{R}^{N}\), we have that \(\eta(k)=y_{[k-N,k-1]}=w(k)\) for all \(k\geq k_{0}+N\), that is, independently of the initialization of (24), its state \(\eta(k)\) provides the vector \(w(k)\) of the past output measurements from time \(N\) onward. Similarly, for any \(\xi(k_{0})\in\mathbb{R}^{N}\), system (20) is such that \(\xi(k)=u_{[k-N,k-1]}\) for all \(k\geq k_{0}+N\). See [19] for the same structure of the controller (20), (24).
**Remark 2**: _System (24) is the so-called deadbeat observer, since for \(k\geq k_{0}+N\), the mapping \(\psi(\eta(k),\xi(k))\) would return \(x(k)\). If both \(\psi\) and a state-feedback stabilizer for system (1) were known, one could obtain a dynamic output feedback controller for the system (1). Here we are interested to the case in which this knowledge is not available and we design a dynamic output feedback controller under a suitable assumption on the nonlinearity \(\tilde{h}\) (Assumption 2)._
The following statement transfers the result obtained for the system (23) to the actual closed-loop system that includes the process (1).
**Proposition 2**: _Let Assumptions 1, 2 and 3 hold. Consider the SDP (18), assume that it is feasible and let condition (19) hold. For any \((x_{0},\xi_{0},\eta_{0})\in\mathcal{X}\times\mathbb{R}^{N}\times\mathbb{R}^{N}\) for which there exists \(v=(v_{[0,N-2]},v_{N-1})\in\mathcal{U}^{N}\) such that \((\Phi_{N}(x_{0},v_{[0,N-2]}),v)\in\mathcal{R}_{\gamma}\), the solution of the system (1) in closed-loop with the time-varying controller comprised by (20), (24) and_
\[u(k)=\left\{\begin{array}{ll}v_{k-k_{0}}&k_{0}\leq k\leq k_{0}+N-1\\ \kappa Z(\eta(k),\xi(k))&k\geq k_{0}+N\end{array}\right. \tag{25}\]
_that starts from \((x_{0},\xi_{0},\eta_{0})\), asymptotically converges to the origin._
_Proof._ First note that, by definition of the mapping \(\Phi_{N}\) and since \(f(0,0)=0\) and \(h(0)=0\), each entry of \(\Phi_{N}\) is a continuous function of its arguments which is zero when these are zero, hence there exists a neighbourhood of the origin \((x,v)=(0,0)\) such that any point \((x,v)\) in the neighbourhood satisfies \((\Phi_{N}(x,v_{[0,N-2]}),v)\in\mathcal{R}_{\gamma}\).
By definition of the mapping \(\Phi_{N}\) in Assumption 1 and (25), \(\Phi_{N}(x_{0},v_{[0,N-2]})=y_{[k_{0},k_{0}+N-1]}\), where \(y\) denotes the output response of the closed-loop system from the initial condition \((x_{0},\xi_{0},\eta_{0})\).
By the dynamics of the controller (20), (24), we have \(\eta(k)=y_{[k-N,k-1]}\), \(\xi(k)=u_{[k-N,k-1]}\) for all \(k\geq k_{0}+N\) and \((\eta(k_{0}+N),\xi(k_{0}+N))=(\Phi_{N}(x_{0},v_{[0,N-2]}),v)\in\mathcal{R}_{\gamma}\). Hence, by Lemma 1, the solutions of (20), (24) are the same as those of system (23) initialized at \((w(k_{0}+N),\xi(k_{0}+N))=(y_{[k_{0},k_{0}+N-1]},u_{[k_{0},k_{0}+N-1]})\). As \((\eta(k_{0}+N),\xi(k_{0}+N))\in\mathcal{R}_{\gamma}\), by Proposition 1 and Corollary 1, \((\eta(k),\xi(k))\) converges to the origin. By Lemma 1, for all \(k\geq k_{0}+N\), \(x(k)=\psi(\eta(k),\xi(k))\), which implies convergence of \(x(k)\) to the origin by continuity of \(\psi\).
The particular form of \(u(k)\) in (25) is due to the fact that, during the first \(N\)-steps, the controller state does not provide an accurate value of the past input-output measurements of the system, hence the choice to apply an open-loop input sequence. After \(N\) time steps, when such past measurements become available through the controller states \(\eta(k),\xi(k)\), \(u(k)\) is set to the feedback \(\kappa Z(\eta(k),\xi(k))\).
We also remark that, in the result above, if the initial condition \(x_{0}\) is sufficiently close to the origin and the initial sequence of control values \(v_{0},\ldots,v_{N-1}\) does not drive the output response of (1) outside the set \(\mathcal{R}_{\gamma}\), then the designed controller (25) steers the state of the overall closed-loop system to the origin. Note that \(\mathcal{R}_{\gamma}\) is known thanks to Corollary 1, hence the designer can check whether the initial control sequence and the corresponding measured output response are in \(\mathcal{R}_{\gamma}\). For the design of the initial control sequence, the designer could take advantage of some expert knowledge.
**Remark 3**: _(Prior on input/output measurements) The controller is designed under the assumption that the input/output measurements collected during the experiment range over some specified sets - see Assumption 3 - where the measurements provide meaningful information about the system's internal state. These sets are not known, hence, the feature that the evolution of the system during the experiments remains in the sets of interest must be considered as one of the priors under which the design is possible._
## V Numerical example
We continue with Example 1 and consider the equations (7) with output \(y=x_{1}\). The system parameters are \(T_{s}=0.1\), \(m=1,\ell=1,g=9.8\) and \(\mu=0.01\). The problem is to learn a controller for (7) from input-output data that renders the origin of the closed-loop system locally asymptotically stable.
Following [15, Example 5], we choose
\[Z(w,\xi)=\begin{bmatrix}w\\ \xi\\ \sin w_{1}-w_{1}\\ \xi_{1}\cos w_{1}-\xi_{1}\end{bmatrix}\]
and note that Assumption 2 and (19) hold.
We collect data by running \(T=7\) experiments of length \(N=2\), with inputs uniformly distributed in \([-0.5,0.5]\) and initial states in \([-0.5,0.5]^{2}\). For each experiment \(j=0,1,\ldots,T-1\), we collect the samples \(\{u^{j}(k),y^{j}(k)\}_{k=0}^{2}\). Then we construct data matrices \(Y_{1},Y_{0},V_{1},V_{0},U_{0},Q_{0}\), as detailed in Remark 1. The program (18) is feasible and we obtain the controller gain with
\[\kappa=\begin{bmatrix}52.4412&-76.1179&-0.5782&-0.4467&0&0\end{bmatrix} \tag{26}\]
using the YALMIP toolbox [26] and the MOSEK solver [27]. To assess the effectiveness of the designed controller, instead of computing \(\mathcal{R}_{\gamma}\), which for this example provides a conservative estimate of the ROA, we depict in Fig. 1 the set of initial conditions \(x_{0}\) for which, choosing \(v_{k-k_{0}}=0\) for \(k_{0}\leq k\leq k_{0}+N-1\) in (25), the state \((x(k),\eta(k),\xi(k))\) converges to zero. Note that the choice of \(\eta_{0},\xi_{0}\) is inessential. The set is obtained by letting the closed-loop system evolve
for \(200\) time steps and then checking whether or not the norm \(\|(x(k),\eta(k),\xi(k))\|_{\infty}\) is smaller than \(10^{-6}\) on the interval \(195\leq k\leq 200\).
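This convergence check can be scripted directly. The following Python sketch simulates the closed loop (25) with the gain (26) and the dictionary \(Z\) from the example; the plant model `f` is an assumed forward-Euler discretisation of the pendulum equations (7) with the parameters listed above, and the ordering of the entries of \(\eta(k)\), \(\xi(k)\) inside \(Z\) is likewise an assumption, so both should be adapted to the exact definitions given earlier in the paper.

```python
import numpy as np

# Parameters of the example and the controller gain (26).
Ts, m, l, g, mu = 0.1, 1.0, 1.0, 9.8, 0.01
kappa = np.array([52.4412, -76.1179, -0.5782, -0.4467, 0.0, 0.0])

def f(x, u):
    # Assumed forward-Euler discretisation of the pendulum equations (7);
    # replace with the exact model used in the paper if it differs.
    th, om = x
    return np.array([th + Ts * om,
                     om + Ts * ((g / l) * np.sin(th) - mu / (m * l**2) * om + u / (m * l**2))])

def Z(w, xi):
    # Dictionary of functions from the example; w_1 and xi_1 are taken as the
    # first (oldest) entries of the stacked past outputs/inputs (assumption).
    return np.array([w[0], w[1], xi[0], xi[1],
                     np.sin(w[0]) - w[0], xi[0] * np.cos(w[0]) - xi[0]])

def converges(x0, N=2, steps=200, tol=1e-6):
    """Closed loop (25): open-loop input v = 0 for the first N steps, then the
    feedback u = kappa Z(eta, xi); convergence is declared if the state, output
    and input values stay below `tol` over the last few steps."""
    x, ys, us, tail = np.asarray(x0, dtype=float), [], [], []
    for k in range(steps + 1):
        if k < N:
            u = 0.0
        else:
            eta, xi = np.array(ys[-N:]), np.array(us[-N:])
            u = float(kappa @ Z(eta, xi))
        ys.append(x[0]); us.append(u)
        x = f(x, u)
        if k >= steps - 5:
            tail.append(max(np.abs(x).max(), abs(ys[-1]), abs(us[-1])))
    return max(tail) < tol
```

Sweeping `converges` over a grid of initial conditions \(x_{0}\) then reproduces the kind of set depicted in Fig. 1.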
## VI Conclusions
We have examined the design of dynamic output-feedback controllers for nonlinear systems from input/output data. The uniform observability property of the system, a prior in the approach, is instrumental to define a new set of coordinates, from which a data-driven "state"-feedback design can be conducted. The result is local and the size of the region of attraction is limited by the free evolution of the system during the first \(N\) steps during which the dead-beat observer reconstructs the past input/output values that feed the controller. The design and analysis have been carried out in the favourable setting in which measurements are noise-free and the nonlinearities can be expressed via a dictionary of known functions. Regarding future work, besides going beyond this favourable setting, we would like to explore either a more sophisticated observer design or a different data-driven control design method. An option is to express the function \(\psi\) via a dictionary of functions, perform a data-driven design of an observer and follow a certainty equivalence principle in the analysis of the closed-loop system.
## Appendix
### _Proof of Lemma 1_
For any time \(k\), collect the past \(N\) output samples generated by the system (1) in the vector \(y_{[k-N,k-1]}\). The vectors \(y_{[k-N,k-1]},y_{[k-N+1,k]}\) at two successive time instants are related by
\[y_{[k-N+1,k]}=\begin{bmatrix}y(k-N+1)\\ y(k-N+2)\\ \vdots\\ y(k)\end{bmatrix}=A_{c}y_{[k-N,k-1]}+B_{c}y(k) \tag{27}\]
By the dynamics (1) and the definitions (3), the state \(x(k)\) at time \(k\) is given by
\[x(k)=F^{N}(x(k-N),u_{[k-N,k-1]}) \tag{28}\]
the output \(y(k)\) at time \(k\) is given by
\[y(k)=h\circ F^{N}(x(k-N),u_{[k-N,k-1]})\]
and, by the definition (4),
\[y_{[k-N,k-1]}=\Phi_{N}(x(k-N),u_{[k-N,k-2]}). \tag{29}\]
By Assumption 1 and the hypothesis that \(x(k-N)\in\mathcal{X}\) and \(u_{[k-N,k-1]}\in\mathcal{U}^{N}\) for all \(k\in\mathbb{Z}_{\geq k_{0}}\), the mapping (29) is invertible and returns
\[x(k-N)=\Psi_{N}(y_{[k-N,k-1]},u_{[k-N,k-2]}).\]
Hence, the state \(x(k)\) in (28) can be expressed as a mapping of the past input and output samples
\[\begin{array}{ll}x(k)=&F^{N}(\Psi_{N}(y_{[k-N,k-1]},u_{[k-N,k-2]}),u_{[k-N,k -1]})\\ =&\psi(y_{[k-N,k-1]},u_{[k-N,k-1]})\end{array}\]
and similarly for the output
\[\begin{array}{ll}y(k)=&h\circ F^{N}(\Psi_{N}(y_{[k-N,k-1]},u_{[k-N,k-2]}),u_ {[k-N,k-1]})\\ =&\tilde{h}(y_{[k-N,k-1]},u_{[k-N,k-1]})\end{array}\]
If \(y(k)\) is replaced in (27), then
\[y_{[k-N+1,k]}=\tilde{f}(y_{[k-N,k-1]},u_{[k-N,k-1]}),\]
by definition of \(\tilde{f}\) in (5).
By the choice of the input \(v(k)=u_{[k-N,k-1]}\), for all \(k\in\mathbb{Z}_{\geq k_{0}}\) and of the initial condition \(w(k_{0})=y_{[k_{0}-N,k_{0}-1]}\), we have that \(w(k)=y_{[k-N,k-1]}\) for all \(k\in\mathbb{Z}_{\geq k_{0}}\), and this ends the proof.
|
2309.11321 | Face Aging via Diffusion-based Editing | In this paper, we address the problem of face aging: generating past or
future facial images by incorporating age-related changes to the given face.
Previous aging methods rely solely on human facial image datasets and are thus
constrained by their inherent scale and bias. This restricts their application
to a limited generatable age range and the inability to handle large age gaps.
We propose FADING, a novel approach to address Face Aging via DIffusion-based
editiNG. We go beyond existing methods by leveraging the rich prior of
large-scale language-image diffusion models. First, we specialize a pre-trained
diffusion model for the task of face age editing by using an age-aware
fine-tuning scheme. Next, we invert the input image to latent noise and obtain
optimized null text embeddings. Finally, we perform text-guided local age
editing via attention control. The quantitative and qualitative analyses
demonstrate that our method outperforms existing approaches with respect to
aging accuracy, attribute preservation, and aging quality. | Xiangyi Chen, Stéphane Lathuilière | 2023-09-20T13:47:10Z | http://arxiv.org/abs/2309.11321v1 | # Face Aging via Diffusion-based Editing
###### Abstract
In this paper, we address the problem of _face aging_--generating past or future facial images by incorporating age-related changes to the given face. Previous aging methods rely solely on human facial image datasets and are thus constrained by their inherent scale and bias. This restricts their application to a limited generatable age range and makes them unable to handle large age gaps. We propose FADING, a novel approach to address **F**ace **A**ging via **DI**ffusion-based editi**NG**. We go beyond existing methods by leveraging the rich prior of large-scale language-image diffusion models. First, we specialize a pre-trained diffusion model for the task of face age editing by using an age-aware fine-tuning scheme. Next, we invert the input image to latent noise and obtain optimized null text embeddings. Finally, we perform text-guided local age editing via attention control. The quantitative and qualitative analyses demonstrate that our method outperforms existing approaches with respect to aging accuracy, attribute preservation, and aging quality.
## 1 Introduction
on diverse concepts (such as _"woman"/"man"_, _"glasses"_, etc.) that could be potentially exploited for age editing. While some recent research [] has explored the potential of leveraging diffusion models for image editing tasks, these works remain general-purpose editing methods. In contrast, no studies have demonstrated how such approaches can be adapted to tailor highly specific tasks such as face aging.
To this end, we propose FADING : Face Aging via Diffusion-based editiNG. The proposed method consists of two stages: specialization and editing. Specialization is a training stage where we re-target a pre-trained diffusion-based language-image model for face aging. In this stage, we employ an age-aware fine-tuning scheme that achieves better disentanglement of the age from age-irrelevant features (_e.g_. gender). For the editing stage, we first employ a well-chosen inversion technique to invert the input image into latent noise. Subsequently, we leverage a pair of text prompts containing both initial and target age information to perform text-based localized age editing, via attention control. Our contribution can be summarized as follows: (i) FADING is the first method to extend large-scale diffusion models for face aging; (ii) we successfully leverage the attention mechanism for accurate age manipulation and disentanglement; (iii) we qualitatively and quantitatively demonstrate the superiority of FADING over state-of-the-art methods through extensive experiments. 1
Footnote 1: Code available at [https://github.com/MunchkinChen/FADING](https://github.com/MunchkinChen/FADING).
## 2 Related Work
**Face-Aging** Most of the recent methods rely on the well-known Generative Adversarial Networks (GANs) []. On the one hand, _condition-based_ methods follow the conditional GAN framework []. This means they include age as an extra condition into the GAN framework to guide age-aware synthesis []. The age estimator can be embedded into the generator and trained simultaneously with it []. Alternatively, recurrent neural networks are used in [] to iteratively synthesize aging effects. Pre-trained face recognizers are employed to preserve age-irrelevant features (_i.e._ identity) [].
On the other hand, other methods [] resort to _latent space manipulation_ []. An age modulation network is designed to fuse age labels with the latent vectors in HRFAE [], or to output age-aware transformations to apply to the decoder in RAGAN []. SAM relies on the latent space of a pre-trained GAN and employs an age regressor to explicitly guide the encoder in generating age-aware latent codes. Huang _et al._ [] learn a unified embedding of age and identity. Some works also adopt a style-based architecture []. LATS [] follows StyleGAN2 [] to perform modulated convolutions to inject a learned age code into the decoder. CUSP disentangles style and content representations and uses a decoder to combine the two representations with a style-based strategy. We highlight that one drawback of these methods is the significant discrepancy in identity that arises when real images are inverted into the GAN's latent space []. Consequently, the reconstruction of the initial image may be inaccurate, which can lead to suboptimal results.
**Image editing with Diffusion Models (DMs)** Large-scale diffusion models have raised the bar for text-to-image synthesis []. Naturally, works have attempted to adapt text-guided diffusion models to image editing. SDEdit [] is among the first to propose diffusion-based image editing. It adds noise to the input image and then performs a text-guided denoising process from a predefined step. However, SDEdit lacks specific control
over the edited region. With the help of a mask provided by the user, several works [] better address this problem and enable more meaningful local editing. After each denoising step, the mask is applied to the latent image while also adding the noisy version of the original image. DiffEdit [] gets rid of the need for a user-provided mask by automatically generating one that highlights regions to be edited based on the text description. Prompt-to-prompt [] proposes a text-only editing technique based on a pair of _"before-after"_ text descriptions. Null-text inversion [] enables real image editing with prompt-to-prompt thanks to its accurate inversion of real images. Concurrently, Imagic [] enables text-guided real image editing by fine-tuning the diffusion model to capture the input image's appearance. However, it is important to note that all these methods are general-purpose editing techniques. As such, our work aims to showcase the potential for adapting these broad approaches for use in more specific tasks, such as face aging.
## 3 FADING: Face Aging via Diffusion-based editiNG
The objective of this work is to transform an input image \(\mathbf{x}\) to make the person in the image appear to be of a specific target age \(\alpha_{\tau}\). For this, we employ a dataset of \(N\) face images \(\mathbf{x}^{(n)}\in\mathbb{R}^{H\times W\times 3},\ n=1,...,N\) with their corresponding age labels \(\alpha^{(n)}\in\{1,\ldots,K\}\), where \(K\) is the maximum age in our training dataset. The age labels \(\alpha^{(n)}\) can be obtained either via manual labeling or using a pre-trained age classifier.
The proposed approach relies on a specialization and an editing stage illustrated in Figure 1. In the first stage, a pre-trained diffusion model is re-targeted for the task of face age editing. This training procedure is detailed in Sec. 3.1. To better disentangle age information from other age-irrelevant features, our specialization procedure employs an age-aware fine-tuning scheme. Then, our inference consists of two steps: inversion and editing. In the inversion step, we invert the diffusion process using a recent optimization-based inversion [] as detailed in Sec. 3.2. In the editing step, we use a new prompt that contains the target age to guide a localized age editing with attention control (see Sec. 3.3). We also provide a solution to improve the prompts used for editing to achieve higher image quality.
Figure 1: FADING addresses **face aging** via **diffusion**-based editing: In the specialization stage, a pre-trained diffusion model is fine-tuned for the aging task. Editing is achieved via age estimation, image inversion, and attention control.
### Specialization to Face Aging
FADING leverages a pre-trained text-to-image Diffusion Model (DM) []. While the proposed method could be applied to any text-to-image DM, in our experiments, we employ a variant of DM named Latent Diffusion Model (LDM) []. LDMs operate in the latent space of an auto-encoder to achieve lower computational complexity. Like traditional DMs, LDMs are composed of a forward and a backward pass.
In the forward process, the input image \(\mathbf{x}_{0}\) is projected to the auto-encoder latent space, \(\mathbf{z}_{0}=\mathcal{E}(\mathbf{x}_{0})\). Then, random Gaussian noises are added to the original latent embedding \(\mathbf{z}_{0}\) in a stepwise manner to create a sequence of noisy samples \((\mathbf{z}_{1}...,\mathbf{z}_{T})\). Learning an LDM consists in training a neural network \(\epsilon_{\theta}\) to estimate the corresponding noise from a given sample \(\mathbf{z}_{t}\). In the reverse process, on the other hand, new data points are generated by sampling from a normal distribution and gradually denoising the sample using \(\epsilon_{\theta}\). The generated image \(\mathbf{\hat{x}}_{0}\) is obtained by feeding the estimated latent tensor \(\mathbf{\hat{z}}_{0}\) to the decoder. To enable generation conditioned on a text prompt \(\mathcal{P}\), a sequence of token embeddings is extracted from \(\mathcal{P}\) and given to \(\epsilon_{\theta}\) via cross-attention layers, where keys and values are estimated from the token embedding. In the case of unconditional generation, the token embeddings are replaced by fixed embeddings referred to as _null-text embedding_ and denoted by \(\varnothing_{t}\).
Age editing with a pre-trained DM can be performed without any training stage [], but this produces unsatisfactory results since such models are generally not specialized for human faces. Also, while coarse conditioning prompts such as "_man in his thirties_" can capture age-related semantics, we observe that the model often fails to handle more specific numeric descriptions of age, such as "_32-year-old man_". To address these issues, we propose a specialization stage that re-purposes a pre-trained DM toward the aging task. For every face image \(\mathbf{x}\) with its corresponding age \(\alpha\), fine-tuning is performed using an image-prompt pair, with the following prompt: \(\mathcal{P}_{\alpha}\)="_photo of a \([\alpha]\) year old person_", where \(\alpha\) is the age of the person written as numerals. We have observed better performance when adding another age-agnostic prompt \(\mathcal{P}\)="_photo of a person_" at every iteration. We refer to this fine-tuning scheme as the _double-prompt_ scheme. One possible explanation is that it allows a better disentanglement of age information from age-irrelevant features (_i.e._ identity and context features). Regarding the training loss, we employ the reconstruction objective of DMs which, in our case, can be written as follows:
\[\mathcal{L}_{DM}=\mathbb{E}_{\mathbf{z}_{0}\sim\mathcal{E}(x),\alpha,\epsilon, \epsilon^{\prime},t}[\|\epsilon-\epsilon_{\theta}(\mathbf{z}_{t},t,\mathcal{P} )\|_{2}^{2}+\|\epsilon^{\prime}-\epsilon_{\theta}(\mathbf{z}_{t}^{\prime},t, \mathcal{P}_{\alpha})\|_{2}^{2}], \tag{1}\]
where \(\epsilon\) and \(\epsilon^{\prime}\) are random Gaussian noises, and \(\mathbf{z}_{t}\) and \(\mathbf{z}_{t}^{\prime}\) are the respective noisy latent codes obtained from \(\mathbf{z}_{0}\). To preserve the rich image prior learned by the DM, we restrict the number of fine-tuning steps to a small value, typically around 150 steps.
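To make the double-prompt objective (1) concrete, the sketch below shows one fine-tuning step in Python; `unet`, `encode_text` and `add_noise` are hypothetical stand-ins for the noise estimator \(\epsilon_{\theta}\), the text encoder and the forward noising operator of the underlying latent diffusion implementation, so this illustrates the loss rather than reproducing the exact training code.

```python
import torch
import torch.nn.functional as F

def finetune_step(unet, encode_text, add_noise, optimizer, z0, age, num_timesteps=1000):
    """One optimisation step of Eq. (1): reconstruction loss for both the
    age-agnostic prompt P and the age-specific prompt P_alpha."""
    prompts = ["photo of a person", f"photo of a {age} year old person"]
    t = torch.randint(0, num_timesteps, (z0.shape[0],), device=z0.device)
    loss = 0.0
    for p in prompts:
        eps = torch.randn_like(z0)              # fresh Gaussian noise per prompt
        z_t = add_noise(z0, eps, t)             # noisy latent code at step t
        eps_hat = unet(z_t, t, encode_text(p))  # predicted noise
        loss = loss + F.mse_loss(eps_hat, eps)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```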
### Age Editing: Image Inversion
After the specialization stage, our DM can generate face images either unconditionally or conditionally on a target age \(\alpha\) with prompts \(\mathcal{P}\) and \(\mathcal{P}_{\alpha}\) respectively. To enable real image editing, we need to invert the diffusion process of the input image. In this task, we leverage an inversion algorithm, known as _null-text inversion_ [], which consists in modifying the unconditional textual embedding that is used for classifier-free guidance such that it leads to accurate reconstruction. To be specific, we use the specialized model to invert the input image \(\mathbf{x}\) to the noise space through DDIM inversion []. We obtain a diffusion
trajectory \(\{z_{t}^{inv}\},t=1\dots T\) from Gaussian noise to the input image. Unfortunately, previous studies [5] show that classifier-free guidance amplifies the accumulated error of DDIM inversion, resulting in poor reconstruction of \(\mathbf{x}\). _Null-text inversion_ optimizes the null-text embedding \(\varnothing_{t}\) used in classifier-free guidance at every step \(t\) such that, assuming a conditioning prompt \(\mathcal{P}_{im}\) corresponding to the input image, the denoising process leads to an accurate reconstruction of \(\mathbf{x}\). The unconditionally inverted sequence of noisy latents \(\{z_{t}^{inv}\}_{t=1}^{T}\) serves as our pivot trajectory for optimization: the unconditional null embeddings over all time-steps \(\{\varnothing_{t}\}_{t=1}^{T}\) are sequentially optimized such that the noise estimator network \(\varepsilon_{\theta}\) predicts latent codes close to \(z_{t-1}^{inv}\) at every step \(t\). More precisely, for every step \(t\) in the order of the diffusion process \(t=T\to t=1\), the following minimization problems are sequentially considered:
\[\min_{\varnothing_{t}}\lVert z_{t-1}^{inv}-z_{t-1}(\bar{z}_{t},t,\mathcal{P}_{ im};\varnothing_{t})\rVert_{2}^{2} \tag{2}\]
where \(\bar{z}_{t}\) is the noisy latent code obtained by solving the optimization problem of the previous step, and \(z_{t-1}\) is the latent code at step \(t-1\) estimated using \(\bar{z}_{t}\). To enable age editing, we need to provide a prompt corresponding to the content of the input image. In this task, we propose to employ a pre-trained age estimator. Assuming an input image \(\mathbf{x}\), we obtain its estimated age \(\alpha\) and employ as prompt \(\mathcal{P}_{inv}=\mathcal{P}_{\alpha}=\)"_photo of a \([\alpha]\) year old person_".
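A minimal sketch of the per-step optimisation in Eq. (2) is given below. The callable `step_back` stands in for one classifier-free-guided DDIM denoising step of the specialized model and is an assumption about the surrounding implementation; the number of inner iterations and the learning rate are likewise only illustrative.

```python
import torch

def optimize_null_embeddings(step_back, z_inv, prompt_emb, null_init, n_inner=10, lr=1e-2):
    """Sequentially optimise the null-text embedding at each timestep (Eq. (2)).
    `z_inv` is the DDIM-inverted trajectory [z_T, ..., z_0]; `step_back(z_t, t,
    prompt_emb, null_emb)` is a hypothetical guided denoising step returning z_{t-1}."""
    T = len(z_inv) - 1
    null_embs = []
    z_bar = z_inv[0]                        # start from the inverted z_T
    for i in range(T):                      # t = T, T-1, ..., 1
        target = z_inv[i + 1].detach()      # pivot latent z_{t-1}^{inv}
        null_t = null_init.clone().detach().requires_grad_(True)
        opt = torch.optim.Adam([null_t], lr=lr)
        for _ in range(n_inner):
            pred = step_back(z_bar, T - i, prompt_emb, null_t)
            loss = torch.nn.functional.mse_loss(pred, target)
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():               # advance the trajectory with the optimised embedding
            z_bar = step_back(z_bar, T - i, prompt_emb, null_t)
        null_embs.append(null_t.detach())
    return null_embs
```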
### Age Editing: Localized Age Editing with Attention Control
We now explain how we edit an image \(\mathbf{x}\) to make the person in the image appear to be of a target age \(\alpha_{\tau}\). To achieve this, we take inspiration from recent literature [5] and act on the cross-attention maps used for text-conditioning, forcing the model to modify only age-related areas via attention map injection. After inversion, we know that the latent noise \(\mathbf{z}_{T}\) and the optimized unconditional embeddings \(\{\varnothing_{t}\}_{t=1}^{T}\) lead to an accurate reconstruction of \(\mathbf{x}\) when conditioned on prompt \(\mathcal{P}_{\alpha}\). In every cross-attention layer of \(\varepsilon_{\theta}\), we compute the reference cross-attention maps generated during the diffusion process \(\{M_{t}^{\alpha}=\text{Softmax}(Q_{t}^{\mathbf{z}}(K_{t}^{\alpha})^{T})\}_{t=1}^{T}\), where \(Q_{t}^{\mathbf{z}}\) are queries computed from \(\mathbf{z}_{t}\) and \(K_{t}^{\alpha}\) are keys computed from the prompt \(\mathcal{P}_{\alpha}\). As shown in [5, 5], these attention maps contain rich semantic relations between the spatial layout of the image and each word in \(\mathcal{P}_{\alpha}\). In our case, the attention maps corresponding to the token \([\alpha]\) indicate which pixels are related to the age of the person.
Next, we replace the initial estimated age \(\alpha\) in the inversion prompt \(\mathcal{P}_{\alpha}\) with a target age \(\alpha_{\tau}\) and obtain a new target prompt \(\mathcal{P}_{\tau}\)="_photo of a \([\alpha_{\tau}]\) year old person_". We then use \(\mathcal{P}_{\tau}\) to guide the generation: during the new sampling process, we inject the cross-attention maps \(\{M_{t}^{\alpha}\}_{t=1}^{T}\), but keep the cross-attention values from the new prompt \(\mathcal{P}_{\tau}\). In this way, the generated image is conditioned on the target age information provided by the target prompt \(\mathcal{P}_{\tau}\) through the cross-attention values, while preserving the original spatial structure. Specifically, as only age-related words are modified in the new prompt, the pixels that attend to the age-related tokens are the ones most affected by the edit. Note that we follow [5] and perform a soft attention constraint by swapping attention maps only during the first \(t_{M}\) steps, as the attention maps play an important role mostly in the early stages.
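The map-injection mechanism can be sketched as a small controller object in the spirit of prompt-to-prompt; how the cross-attention layers expose their softmaxed maps depends on the concrete diffusion implementation, so the interface below is an assumption rather than the actual code path.

```python
import torch

class AgeAttentionController:
    """Records the cross-attention maps of the source pass (prompt P_alpha) and
    re-injects them during the first t_M steps of the pass guided by P_tau."""

    def __init__(self, swap_ratio=0.8, total_steps=50):
        self.t_swap = int(swap_ratio * total_steps)   # t_M
        self.store = {}                               # (layer, step) -> source map
        self.mode = "record"

    def __call__(self, attn_map, layer, step):
        if self.mode == "record":
            self.store[(layer, step)] = attn_map.detach()
            return attn_map
        # editing pass: inject the stored source map during the first t_M steps
        if step < self.t_swap and (layer, step) in self.store:
            return self.store[(layer, step)]
        return attn_map
```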
**Enhancing prompts** FADING can achieve satisfying aging performance with the very generic prompts given above. Nevertheless, the results can be further improved by using more specific prompts in the inversion and editing stages. While this can be achieved with manual prompt engineering, we propose a simple and automatic way to improve our initial prompts \(\mathcal{P}_{\alpha}\) and \(\mathcal{P}_{\tau}\). First, we can leverage pre-trained gender classifiers to predict the gender
of the person in the input image. Then, the word _"person"_ in both \(\mathcal{P}_{\alpha}\) and \(\mathcal{P}_{\tau}\) can be replaced by either _"woman"_ or _"man"_. Second, our experiments show that in the case of young ages, either in \(\mathcal{P}_{\alpha}\) or \(\mathcal{P}_{\tau}\), the use of words such as _"person"_, _"woman"_ or _"man"_ does not perform well. Therefore, if the target age \(\alpha_{\tau}\) or the age \(\alpha\) estimated by our classifier is below 15, the words _"woman/man"_ are replaced by _"girl/boy"_ in \(\mathcal{P}_{\tau}\) or \(\mathcal{P}_{\alpha}\).
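The prompt-enhancement rule reduces to a few lines of Python; the `gender` label is assumed to come from a pre-trained gender classifier and may be omitted, in which case the generic word "person" is kept.

```python
def build_prompt(age, gender=None):
    """Construct the (enhanced) age prompt used for inversion or editing."""
    if gender is None:
        noun = "person"
    elif age < 15:
        noun = "girl" if gender == "female" else "boy"
    else:
        noun = "woman" if gender == "female" else "man"
    return f"photo of a {age} year old {noun}"

# Example: build_prompt(8, "male") yields "photo of a 8 year old boy".
```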
## 4 Experiments
**Implementation details** We employ Stable Diffusion pre-trained on the LAION-400M dataset. 150 training images are sampled from FFHQ-Aging to finetune the pre-trained model for 150 steps, with a batch size of 2. We used the central age of the true label age group as \(\alpha\) in the finetuning prompt \(\mathcal{P}_{\alpha}\). We employed the Adam optimizer with a learning rate of \(5\times 10^{-6}\) and \(\beta_{1}=0.9\), \(\beta_{2}=0.999\). During attention control, we set the cross-attention replacing ratio \(t_{M}/T\) to 0.8. All experiments are conducted on a single A100 GPU. It takes 1 minute for finetuning, 1 minute for inversion, and 5 seconds for age editing.
**Evaluation protocol** We utilized two widely-used high-resolution **datasets** as in []. _FFHQ-Aging_ is an extension of the NVIDIA FFHQ dataset containing 70k 1024\(\times\)1024 resolution images. Images are manually labeled into 10 age groups ranging from 0-2 to 70+ years old. _CelebA-HQ_ consists of 30k images. This dataset is used only for evaluation, not for training. Age labels are obtained using the DEX classifier as used in previous studies [], []. Images are downsampled to 512\(\times\)512 resolution for our experiments. Regarding the **metrics**, we evaluate aging methods from three perspectives: aging accuracy, age-irrelevant attribute preservation, and aging quality. Following [], we employ: _Mean Absolute Error (MAE)_: the prediction of an age estimator is compared with the target age. _Gender, Smile, and Face expression preservation_: we report the percentage to which the original attribute is preserved. _Blurriness_: quantifies the blurriness of the generated face. _Kernel-Inception Distance_ assesses the discrepancy between generated and real images for similar ages. We report the KID between original and generated images within the same age groups. For evaluation, Face++ is used for aging accuracy, attribute preservation, and blurriness evaluation.
### Comparison with State-of-the-Art
We conduct comparisons with state-of-the-art aging approaches, including HRFAE, LATS, and CUSP. We are unable to include Re-aging GAN, another recent aging method, in our comparison due to the unavailability of its source code. Moreover, the lack of detailed information regarding its evaluation protocol prevents us from conducting a fair and reliable comparison following its evaluation protocol. We start the comparison on the CelebA-HQ dataset. In this case, we follow the evaluation protocol used in [] and sample 1000 test images with _"young"_ labels and translate them to the target age of 60.
**Qualitative comparison** The comparative study on CelebA-HQ is shown in Figure 2. Note that these images are extracted from [], and consequently have not been cherry-picked. We observe that FaderNet introduces only minor modifications, while PAG-GAN and
IPC-GAN [] produce pronounced artifacts or degradation. HRFAE [] generates plausible aged faces with minor artifacts but is mostly limited to skin texture changes, such as adding wrinkles. LATS [], CUSP [], and our approach introduce high-level semantic changes, such as significant receding of the hairline (see third row). But LATS operates only in the foreground; it does not deal with backgrounds or clothing and requires a prior masking procedure. On the other hand, CUSP always introduces glasses with aging. This is likely due to the high correlation between age and glasses in their training set. Our method does not introduce these undesired additional accessories, produces fewer artifacts on backgrounds, and achieves higher visual fidelity to the input image.
We now expand the comparison with the best-performing competitor, namely CUSP [], on FFHQ-Aging []. We translate input images to all age groups and report per-age-group results, providing a more comprehensive analysis of the continuous transformation throughout the lifespan. Figure 3 shows qualitative results. We have the following key observations. (1) In general, our approach introduces fewer artifacts, generates realistic textural and semantic modifications, and achieves better visual fidelity across all age groups. (2) We achieve significant improvement for extreme target ages (infant and elderly, see columns for (4-6) and (70+)). (3) Our model better handles rare cases, such as accessories or occlusions. CUSP fails when the source person wears facial accessories. Typically, for the person on the right who wears sunglasses, CUSP falsely translates sunglasses to distorted facial components. In contrast, our method preserves accessories accurately while correctly addressing structural changes elsewhere. These results confirm our initial hypothesis that utilizing a specialized DM pre-trained on a large-scale dataset increases robustness compared to methods exclusively trained on facial datasets, which are susceptible to data bias.
Interestingly, we observe a slight variation in skin tone when addressing age change with FADING. It is important to note that a similar shift in skin tone is also observed for the training-free baseline (vanilla implementation of prompt-to-prompt editing using pretrained Stable Diffusion, referred to as Training-free in Table 3), as shown in Figure 7(a) (see more results in supplementary material). This suggests that the entanglement between age and skin tone is inherent to the pre-trained Stable Diffusion model and is not a result of our
Figure 2: Qualitative comparison with state-of-the-art methods on CelebA-HQ. Images for the other approaches are extracted from [].
specialization stage.
**Quantitative comparison** Table 1 presents quantitative results on the CelebA-HQ [] dataset. Note that an 8.23-year discrepancy is reported between the DEX classifier utilized for inference and the Face++ classifier utilized for evaluation []. FADING is on par with CUSP for aging accuracy. We achieve the highest gender preservation, proving our capability to retain age-irrelevant features. However, we report lower scores for other attributes. As is discussed in the qualitative analysis, this is because previous methods primarily generate texture-level modifications, which preserve high-level attributes. In contrast, FADING yields more profound but realistic semantic changes, thus slightly compromising preservation metrics.
Table 2 presents quantitative results on the FFHQ-Aging [] dataset. The lower MAE indicates that we have better aging accuracy. FADING also reports better gender preservation for most age groups. Note that, for the middle-aged groups (30-49), an almost perfect preservation rate is achieved. Our qualitative analysis is supported by the quantitative KID analysis, with KID values one order of magnitude lower than CUSP's for nearly all age groups. Again, this demonstrates that FADING achieves higher aging performance.
### Ablation studies
**Specialization (Spec.) and Double-Prompt (DP) scheme** To assess the influence of the design of the specialization step, we consider a variant where we skip the specialization step and directly use a pre-trained Stable Diffusion instead. This baseline can be seen as a vanilla implementation of prompt-to-prompt editing [] with null-text inversion [] in the case of aging. The second variant includes the specialization step but omits the double-prompt scheme. The results shown in Figure 3(a) and Table 3 demonstrate the effectiveness
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Method & Predicted Age & Blur & Gender & Smiling & Neutral & Happy \\ \hline Real images & 68.23 \(\pm\) 6.54 & 2.40 & - & - & - & - \\ \hline FaderNet [] & 44.34 \(\pm\) 11.40 & 9.15 & 97.60 & 95.20 & 90.60 & 92.40 \\ PAGGAN [] & 49.07 \(\pm\) 11.22 & 3.68 & 95.10 & 93.10 & 90.20 & 91.70 \\ IPCGAN [] & 49.72 \(\pm\) 10.95 & 9.73 & 96.70 & 93.60 & 89.50 & 91.10 \\ HRFAE [] & 54.77 \(\pm\) 8.40 & **2.15** & 97.10 & **96.30** & **91.30** & **92.70** \\ HRFAE-2.24 [] & 51.87 \(\pm\) 9.59 & 5.49 & 97.30 & 95.50 & 88.30 & 92.50 \\ LATIS [] & 55.33 \(\pm\) 9.33 & 4.77 & 96.55 & 92.70 & 83.77 & 88.64 \\ CUSP [] & **67.76 \(\pm\) 5.38** & 2.53 & 93.20 & 88.70 & 79.80 & 84.60 \\ FADING (Ours) & 66.49 \(\pm\) 6.46 & 2.35 & **98.40** & 90.20 & 84.50 & 86.80 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative comparison on CelebA-HQ on the young-to-60 task. Except for FADING, the scores are extracted from [].
Figure 3: Qualitative comparison with state-of-the-art methods on FFHQ-Aging. For CUSP, we translate each image to the corresponding age group. For FADING, we translate to the central age of each group. For the oldest age group (70+), we translate to 80 years old.
of our specialization step in generating more realistic images. Our qualitative analysis indicates that the images edited with a non-specialized model exhibit noticeable aberrations, especially around the mouth area and facial contours. The quantitative metrics also support the observation that our method achieves higher aging quality (lowest blurriness and KID). Furthermore, the training-free editing approach reports the highest aging error and a low attribute preservation rate. Regarding our double-prompt scheme, Figure 4a shows that it improves the structural alignment with the original image. Quantitatively, as shown in Table 3, the slight increase in age-MAE brought by _DP_ is vastly outweighed by the large gains in attribute preservation metrics. This improvement suggests that _DP_ indeed enhances the disentanglement of age from age-irrelevant features by keeping them better retained. Besides, the age-MAE metric may be a less strong indicator of disentanglement capability, given that a difference of 0.38 years in facial appearance is often imperceptible in real photos.
**Enhanced Prompts (EP) and Initial Age (IA)** We now analyze the editing stage considering two other variants: one without our enhanced prompts and another which does not use the initial age of the source image and instead uses \((\mathcal{P},\mathcal{P}_{\tau})\) as editing prompts. The positive impacts of enhanced prompts and the use of the estimated initial age are demonstrated in Table 4, where we observe consistent gains in all metrics. Qualitatively, _EP_ plays an important role in preserving age-irrelevant attributes: we observe significant improvements in gender consistency in Figure 7a. Surprisingly, the use of gender information in our enhanced prompts also helps to improve aging accuracy. We hypothesize that this is because more detailed prompts (we assume that "woman" contains more information than "person") lead to more specialized attention maps for each semantic component, resulting in more accurate targeting of age-related pixels. The impact of _IA_ is illustrated in Figure 7b. Without information on the initial age, the appearance of the person barely changes, except for slight variations in hair color. This indicates that the use of initial age (_IA_) in guiding prompts prevents the model from reproducing the original image without effectively addressing the
\begin{table}
\begin{tabular}{l l l l l l l l l l l l l} \hline \hline Metric & Method & 0-2 & 3-6 & 7-9 & 10-14 & 15-19 & 20-29 & 30-39 & 40-49 & 50-69 & 70+ & Mean \\ \hline \multirow{2}{*}{MAE} & CUSP & 9.41 & 16.28 & 20.24 & 18.16 & 11.88 & 10.36 & 12.70 & **11.08** & **8.13** & 8.05 & 12.63 \\ & FADING & **5.70** & **11.72** & **13.66** & **11.22** & **6.86** & **6.23** & **9.60** & 12.04 & 8.39 & **6.20** & **9.16** \\ \hline \multirow{2}{*}{Gender(\%)} & CUSP & 71.5 & **73.5** & **74.5** & **78.0** & 73.5 & 80.5 & 85.5 & 81.5 & 82.0 & 76.0 & 77.7 \\ & FADING & **72.0** & 72.0 & 67.5 & 68.0 & **88.0** & **96.0** & **98.0** & **97.0** & **95.0** & **87.5** & **84.1** \\ \hline \multirow{2}{*}{KID(\(\times 100\))} & CUSP & 4.19 & 3.22 & 3.14 & 3.18 & 3.60 & 3.63 & 3.98 & 4.69 & 4.07 & 4.57 & 3.83 \\ & FADING & **1.41** & **0.11** & **0.45** & **0.25** & **0.52** & **0.16** & **1.00** & **0.59** & **1.50** & **0.61** & **0.66** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Quantitative comparison between CUSP and FADING on FFHQ-Aging.
Figure 4: Qualitative ablation studies on several aspects of FADING: impact of the specialization step (_Spec._), the use of Double Prompt (_DP_), the Enhanced Prompts (_EP_) and the use of the Initial Age (_IA_).
age change.
## 5 Conclusion
In this paper, a novel method for face age editing based on diffusion models was presented. The proposed model leverages the rich image and semantic prior of large-scale text-image models, via a training stage that specializes the diffusion model for aging tasks. Qualitative and quantitative analyses on two different datasets demonstrated that our method produces natural-looking re-aged faces across a wider range of age groups with higher re-aging accuracy, better aging quality, and greater robustness compared to state-of-the-art methods. The effectiveness of each component of our method was also validated through extensive experiments. In future works, we plan to extend our enhanced prompts strategy to preserve other age-agnostic attributes by leveraging corresponding pre-trained attribute classifiers. For example, we could include _"wearing glasses"_ in the editing prompts when glasses are detected.
## Acknowledgments
This paper has been supported by the French National Research Agency (ANR-20-CE23-0027).
|
2308.16767 | Reinforcement learning for safety-critical control of an automated
vehicle | We present our approach for the development, validation and deployment of a
data-driven decision-making function for the automated control of a vehicle.
The decision-making function, based on an artificial neural network, is trained
to steer the mobile robot SPIDER towards a predefined, static path to a target
point while avoiding collisions with obstacles along the path. The training is
conducted by means of proximal policy optimisation (PPO), a state of the art
algorithm from the field of reinforcement learning. The resulting controller is
validated using KPIs quantifying its capability to follow a given path and its
reactivity on perceived obstacles along the path. The corresponding tests are
carried out in the training environment. Additionally, the tests shall be
performed as well in the robotics simulation Gazebo and in real world scenarios.
For the latter the controller is deployed on a FPGA-based development platform,
the FRACTAL platform, and integrated into the SPIDER software stack. | Florian Thaler, Franz Rammerstorfer, Jon Ander Gomez, Raul Garcia Crespo, Leticia Pasqual, Markus Postl | 2023-08-31T14:41:38Z | http://arxiv.org/abs/2308.16767v1 | # Reinforcement learning for safety-critical control of an automated vehicle
###### Abstract
We present our approach for the development, validation and deployment of a data-driven decision-making function for the automated control of a vehicle. The decision-making function, based on an artificial neural network, is trained to steer the mobile robot SPIDER towards a predefined, static path to a target point while avoiding collisions with obstacles along the path. The training is conducted by means of proximal policy optimisation (PPO), a state-of-the-art algorithm from the field of reinforcement learning.
The resulting controller is validated using KPIs quantifying its capability to follow a given path and its reactivity to perceived obstacles along the path. The corresponding tests are carried out in the training environment. Additionally, the tests shall also be performed in the robotics simulation Gazebo and in real-world scenarios. For the latter, the controller is deployed on an FPGA-based development platform, the FRACTAL platform, and integrated into the SPIDER software stack.
Reinforcement learning, Decision-making, Path following, Path tracking, Reactive path tracking, Collision avoidance, Automated driving, Validation, EDDI
## I Introduction
In this work we aim to showcase the implementation, validation and deployment of a machine learning (ML) application in a safety-critical system. For this purpose a data-driven decision-making function for the automated control of the mobile robot SPIDER is developed. It extends the capabilities of a Stanley-based path tracking controller1 which is already integrated in the SPIDER software stack2 to a reactive path tracking controller. Thus, if an obstacle along the path is perceived by the robot, an evasion maneuver to avoid a collision is initiated.
Footnote 1: For a comprehensive description and discussion of how a Stanley controller works we refer to [13] and [6].
Footnote 2: The software stack of the SPIDER is entirely based on ROS 2.
The function is composed of several function blocks (see section IV-A). Its decision-making block, i.e. the unit which is providing the controls to be applied to the vehicle, is based on an artificial neural network (ANN). For the training procedure of this ANN, a state of the art algorithm from the field of reinforcement learning (RL) is used - see section IV-B.
The focal point of this work is on investigating the preservation of safety-relevant driving functions3 while executing the abovementioned decision-making function. Hence, a framework has to be provided which on the one hand supports the execution of computationally intensive vehicle functions, and on the other hand allows their safe execution. This is where the SPIDER and the FRACTAL project come into play.
Footnote 3: This relates in particular to the collision avoidance function, which triggers an emergency brake if a collision is imminent.
The SPIDER4 is a mobile HiL platform developed at the Virtual Vehicle Research GmbH. It is designed for the testing of autonomous driving functions in real-world conditions, e.g. on proving grounds, in an automated and reproducible manner. The integrated safety concept ensures the safety of test drives. The FRACTAL platform5, an FPGA-based development platform, and various components developed in the FRACTAL project enhance the already existing safety concept. This includes monitoring units and a diverse redundancy library. In addition, the integrated hardware accelerators make the platform suitable for the execution of functions with high computing effort.
Footnote 4: [https://www.v2c.at/spider/](https://www.v2c.at/spider/)
Footnote 5: See [https://fractal-project.eu/](https://fractal-project.eu/) and [18]
Due to the open system design of the SPIDER, the FRACTAL platform can be integrated into the SPIDER system. To incorporate the decision-making function into the SPIDER software stack and to run it on the FRACTAL platform, it is integrated into an appropriate ROS 2 node. For the deployment of the ANN the open-source deep learning library EDDL6 is used.
Footnote 6: See [https://github.com/deephealthproject/eddl/](https://github.com/deephealthproject/eddl/)
### _Related work_
The scientific literature knows several non data-driven methods for the solution of the path tracking and obstacle avoidance problem. Some popular and well-known path tracking controllers are the "Pure pursuit controller", the "Carrot chasing
controller", or the "Stanley controller" - for details we refer to [9, 20, 23] or [13]. Approaches for the design of obstacle avoidance controllers, such as the artificial potential field method, can be found in [22, 31, 17].
However, we decided to follow an ML approach to tackle the reactive path tracking task. The main reason for this decision is that it seemed to us difficult to appropriately tune and coordinate a combined controller consisting of a path tracking and a collision avoidance component. In addition, a slim and efficient ML solution promises a low computational effort at runtime. Quoting [15], RL offers to robotics a framework and a set of tools for the design of sophisticated and hard-to-engineer behaviors. Accordingly, RL approaches are well suited to the problem. The application of RL methods for the automated control of vehicles is not new - the topic was already addressed by a variety of researchers. We refer in this regard to [8, 14, 29, 2, 5] as well as [19]. For the sake of completeness, we point out that there are also methods for solving the reactive path tracking problem which are not based on machine learning. See for example [30, 10].
### _Structure of the paper_
This paper is structured as follows. In Section II the main building blocks of this work are presented. Section III gives the formulation of the problem. In Section IV the structure and functioning of the decision-making function is presented. Finally, in the remaining sections the results obtained are presented and discussed.
## II Preliminaries
### _Spider_
The SPIDER (Smart Physical Demonstration and Evaluation Robot) is an autonomous robot prototype developed at Virtual Vehicle Research GmbH. It is a mobile HiL platform designed for the development and testing of autonomous driving functions. It allows reproducible testing of perception systems, vehicle software and control algorithms under real world conditions. Four individually controllable wheels enable almost omni-directional movement, enabling the SPIDER to precisely mimic the movements of target vehicles - see Figure 1.
Due to its adaptable mounting rod system - see Figure 2 - positions of sensors can easily be adapted to the target system.
From a system perspective, the architecture of the SPIDER can be divided into three blocks, as shown in Figure 3. The decision-making function to be developed is located in the HLCU. Using the data provided by the sensor block, it determines control variables which are passed on to the LLCU in the form of a target linear speed and target angular velocity. The LLCU performs safety checks on these signals and forwards them to the corresponding hardware components.
### _The FRACTAL platform_
The FRACTAL platform [18] is a new approach to reliable edge computing. It provides an Open-Safe-Reliable platform to build cognitive edge nodes while guaranteeing extra-functional properties like dependability or security, as visualized in Figure 4. FRACTAL nodes can be deployed to various hardware architectures. The SPIDER use-case is deployed on an FPGA using the open-source SELENE hardware and software platform [11]. SELENE is a heterogeneous multicore processor platform based on the open RISC-V Instruction Set Architecture (ISA). The software stack is built on GNU/Linux. The SELENE platform is extended by various components from FRACTAL to ensure safety properties and allow the execution of computationally intensive machine learning functions.
The developments of SELENE and FRACTAL provide the baseline for the SPIDER to move from a non-safe industrial PC setup to an open-source based, safe platform with a smaller form factor and lower power consumption. To add extra properties in the context of safety and hardware acceleration, the SPIDER includes FRACTAL components at the hardware and software levels. This includes congestion detection at the memory controller, register file randomization, a redundant acceleration scheme, diverse redundancy of cores, and statistics
Fig. 1: SPIDER following the trajectory of a target vehicle. The red dots indicate SPIDER sensors ensuring safety, where the blue dots correspond to sensors of the target vehicle. The green bars represent rods on which the sensors are mounted.
Fig. 2: Smart Physical Demonstration and Evaluation Robot (SPIDER)
units [1].
### _Reinforcement learning_
Reinforcement learning (RL) refers to a subarea of machine learning. The learning principle of methods belonging to this area is based on learning through interaction. Through repeated interaction with its environment, the learning system learns which actions are beneficial in terms of problem solving and which are detrimental in this respect. This is done by means of a numerical reward function tailored to the specific use case. By means of suitable optimization methods the system is encouraged to derive a control strategy, which, given a certain observation, selects the action that promises the maximum reward.
A more detailed description of the RL paradigm can be found for example in [15, 27].
### _Eddl/ledel_
The European Distributed Deep Learning Library (EDDL) is a general-purpose deep learning library initially developed as part of the DeepHealth Toolkit [4] to cover deep learning needs in healthcare use cases within the DeepHealth project [https://deephealth-project.eu/](https://deephealth-project.eu/). The EDDL is a free and open-source software available on a GitHub repository [https://github.com/deephealthproject/eddl/](https://github.com/deephealthproject/eddl/).
EDDL provides hardware-agnostic tensor operations to facilitate the development of hardware-accelerated deep learning functionalities and the implementation of the necessary tensor operators, activation functions, regularization functions, optimization methods, as well as all layer types (dense, convolutional and recurrent) to implement state-of-the-art neural network topologies. Given the requirement for fast computation of matrix operations and mathematical functions, the EDDL is being coded in C++. GPU specific implementations are based on the NVIDIA CUDA language extensions for C++. A Python API is also available in the same GitHub repository and known as pyEDDL.
In order to be compatible with existing developments and other deep learning toolkits, the EDDL uses ONNX [3], the standard format for neural network interchange, to import and export neural networks including both weights and topology.
In the Fractal project, the EDDL is being adapted to be executed on embedded and safety-critical systems. Usually, these systems are equipped with low resources, i.e., with limited memory and computing power, as is the case for devices running on the edge.
When adapted to this kind of systems, the EDDL is renamed as Low Energy DEep Learning library (LEDEL).
Specifically, the EDDL has been ported to run on emulated environments based on the RISC-V CPU. It has been tested both for training models and for inference. However, the use of the LEDEL in this work is only for inference, so that the running time is not a critical issue. The models are trained using the EDDL on powerful computers, and the trained models can then be imported by the EDDL thanks to ONNX.
## III Problem formulation
The decision-making function shall navigate the SPIDER along a predefined path7 from a starting point to a target point while avoiding collisions with obstacles.
Footnote 7: By a path we mean a list of target coordinates, target linear speeds and target headings which shall be reached one after another by the robot. Only static paths are considered, i.e. any path is generated in advance by a path planning module and is not changed during execution time. Furthermore, we assume that the target speeds do not exceed the achievable maximum speed of the vehicle.
Although the SPIDER can be controlled omnidirectionally, in this use case we limit ourselves to developing a car-like control strategy. As a consequence, reaching the target orientation at each of the given waypoints is disregarded.
Fig. 4: FRACTAL project technology pillars and objectives under the Multi-Annual Strategic Plan (”M.A.S.P.”)
Fig. 3: System architecture of the SPIDER.
## IV Decision-making function
### _Design_
The entire control unit is designed as depicted in Figure 5. At any time point it takes as input a cost map, the current state of the vehicle, the control values applied in the previous time step and provides the control values \((u_{1},u_{2})\) to be applied next as output. These values are sampled from the finite subset
\[U=\{(u_{1},u_{2})\ :\ u_{1}=-0.5+1.5i/11,\ u_{2}=-1+2j/11,\ 1\leq i,j\leq 11\}\]
of the control space \([-1/2,1]\times[-1,1]\) according to the probability distribution provided by the decision-making block represented through an ANN. Given the maximal linear acceleration \(a_{max}\) and the maximal steering angle \(\vartheta_{max}\) of the robot, the terms \(u_{1}a_{max}\), \(u_{2}\vartheta_{max}\) determine the acceleration and the steering angle respectively which will be applied to the vehicle.
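A small sketch of this discrete action set and of the mapping from a sampled pair \((u_{1},u_{2})\) to physical controls is given below; the default values of \(a_{max}\) and \(\vartheta_{max}\) are those listed in Section VI.

```python
import numpy as np

# The 11x11 action grid U and the translation of an action index into controls.
U = np.array([(-0.5 + 1.5 * i / 11.0, -1.0 + 2.0 * j / 11.0)
              for i in range(1, 12) for j in range(1, 12)])   # |U| = 121

def controls_from_action(index, a_max=5.0, theta_max=np.pi / 6):
    """Return the acceleration u1*a_max and steering angle u2*theta_max."""
    u1, u2 = U[index]
    return u1 * a_max, u2 * theta_max
```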
We briefly discuss the main building blocks of the control unit next.
**Range finding** The range finding block consists of a module which takes as input a cost map of a fixed dimension centered around the vehicle and determines the distance from obstacles to the vehicle by means of a ray-casting approach. For this purpose, starting from the center of mass (COM) of the robot, \(m\) virtual rays are plotted on the occupancy grid - see Figure 6. Along each of these rays, the corresponding cell entry of the occupancy grid is checked at \(n\) evenly distributed points, the so-called ray nodes. Based on the number of free cells counted from the inside to the outside, the distance (in meters) along a ray from the robot to any obstacle is determined.
It is assumed that the robot is entirely contained within a circular disk of radius \(\rho_{1}\) centered at its COM. For the determination of the distances we thus only take into consideration ray nodes which are not contained in this disk. In addition, the maximal distance is bounded by \(\rho_{2}\). Thus we get distances in the interval \([0,\rho_{2}-\rho_{1}]\).
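The ray-casting step can be sketched as follows; the occupancy-grid layout (row/column indexing, cell resolution) and the use of grid-fixed ray directions are assumptions about the cost-map representation, and the default parameters are those reported in Section VI.

```python
import numpy as np

def ray_distances(grid, center, resolution, m=15, n=17, rho1=1.0, rho2=5.0):
    """Distances along m rays cast from the robot COM on an occupancy grid
    (True = occupied). Along each ray, n evenly spaced ray nodes between rho1
    and rho2 are checked from the inside out; the returned distance is the free
    length beyond rho1, clipped to [0, rho2 - rho1]."""
    dists = np.full(m, rho2 - rho1)
    angles = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    radii = np.linspace(rho1, rho2, n)
    for i, a in enumerate(angles):
        for r in radii:
            row = int(round(center[0] + r * np.sin(a) / resolution))
            col = int(round(center[1] + r * np.cos(a) / resolution))
            if not (0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]):
                break                      # treat cells outside the map as free
            if grid[row, col]:
                dists[i] = r - rho1        # first occupied node along this ray
                break
    return dists
```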
**Reference segment selection** To quantify the spatial proximity of the vehicle to the path the cross track error is used. By definition, the cross track error is the normal distance from the current position of the vehicle to the target trajectory. In the given context, it is determined as the normal distance of the position of the vehicle to the closest line segment which is connecting two consecutive waypoints.8 For the reference segment selection procedure we followed the approach described in [7], Section 9.3.
Footnote 8: By waypoints we understand the target coordinates defined by the path.
**ANN input generation** The neural network takes \(7\) input variables. Let \((x,y)\), \(v\), \(\theta\) denote the current position, the current linear speed and the current heading of the vehicle respectively. We denote by \(u_{0}\ u_{1}\) the control values, which were applied in the previous time step.
Let \(k\in\mathbb{N}\) be such that the line segment \(L_{(x,y)}\) determined by the reference segment selection procedure described above, connects the waypoints \(w_{k},w_{k+1}\). Let \(v_{k+1}\) denote the target velocity at \(w_{k+1}\) and let \(d=(d_{1},\ldots,d_{m})\) be the tuple of distances computed by the range finding unit given the current position of the vehicle. Then the input \(x=(x_{1},\ldots,x_{7})\) to the neural network is defined as follows:
* Clipped cross track error: Let \(e_{x}\) denote the signed normal distance of the vehicle's position to \(L_{(x,y)}\), i.e. \(e_{x}\) defines the current cross track error. Given the clip parameter \(\delta>0\), we define \[x_{1}=\begin{cases}-\delta,\ \text{if}\ e_{x}<-\delta\\ \delta,\ \text{if}\ e_{x}>\delta\\ e_{x},\ \text{else}\end{cases}\]
* Linear speed error: The linear speed error is defined as \(x_{2}=v_{k+1}-v\).
* Waypoint heading error: We introduce the heading error \(x_{3}\) as the cosine of the angle between the vector \(v(\theta)\) indicating the driving direction of the vehicle and the vector connecting \(w_{k}\) and \(w_{k+1}\).
* Previous control values: Define \(x_{4}=u_{0}\) and \(x_{5}=u_{1}\).
* Obstacle heading error: We introduce \(x_{6}\) to be the cosine of the angle between \(v(\theta)\) and the range finding ray sensing the smallest distance to an obstacle.
Fig. 5: Decision-making flow
Fig. 6: Illustration of the range finding procedure.
* Smallest obstacle distance: Define \(x_{7}\) to be the smallest distance to an object measured by the range finding unit.
We note that by these definitions the input variables of the neural network are always contained within a fixed range and are thus bounded. According to [26] such normalisation can accelerate and stabilise the training process.
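The assembly of the seven input features listed above can be sketched as follows; the value of the clip parameter `delta`, the sign convention of the cross track error and the assumption that `ray_angles` hold the absolute directions of the range-finding rays are illustrative choices.

```python
import numpy as np

def ann_inputs(pos, heading, speed, prev_u, w_k, w_k1, v_target,
               ray_dists, ray_angles, delta=2.0):
    """Return the network input (x_1, ..., x_7) for the current time step."""
    seg = np.asarray(w_k1, float) - np.asarray(w_k, float)
    rel = np.asarray(pos, float) - np.asarray(w_k, float)
    # signed normal distance to the reference segment (2D cross product), clipped
    e_x = np.cross(seg, rel) / (np.linalg.norm(seg) + 1e-9)
    x1 = float(np.clip(e_x, -delta, delta))
    x2 = v_target - speed                                      # linear speed error
    head_vec = np.array([np.cos(heading), np.sin(heading)])
    x3 = float(head_vec @ seg / (np.linalg.norm(seg) + 1e-9))  # waypoint heading error
    x4, x5 = prev_u                                            # previous controls
    i_min = int(np.argmin(ray_dists))
    ray_vec = np.array([np.cos(ray_angles[i_min]), np.sin(ray_angles[i_min])])
    x6 = float(head_vec @ ray_vec)                             # obstacle heading error
    x7 = float(ray_dists[i_min])                               # smallest obstacle distance
    return np.array([x1, x2, x3, x4, x5, x6, x7], dtype=np.float32)
```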
**Decision-making** The ANN representing the decision-making block consists of two hidden dense layers of \(64\) neurons each and an output layer of \(|U|=121\) neurons. For the hidden layers \(\tanh\) is used as activation function, whereas for the output layer the softmax function is used.
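For illustration, the network topology can be written down in a few lines; the sketch below uses PyTorch rather than the EDDL/LEDEL used for deployment, so it only mirrors the architecture described above.

```python
import torch
import torch.nn as nn

# Two tanh hidden layers of 64 units and a softmax output over |U| = 121 actions.
policy = nn.Sequential(
    nn.Linear(7, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 121), nn.Softmax(dim=-1),
)

def sample_action(x):
    """Sample a discrete action index from the network's probability distribution."""
    probs = policy(torch.as_tensor(x, dtype=torch.float32))
    return int(torch.distributions.Categorical(probs=probs).sample())
```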
### _Training_
For the training of the decision-making function we use the proximal policy optimisation (PPO) method presented in [24]. The training suite is implemented in Python and is built on the Python package Stable Baselines - see [12]. The driving environment which is used for the training procedure is based on the well known kinematic bicycle model9, which is - according to [21] - a suitable and accurate model for car-like driving manoeuvres at low speeds.
The RL paradigm requires to choose a reward function adapted and aligned to the problem. In the given context it must be constructed in such a way that actions steering the vehicle with target speed along the target trajectory are rewarded. However, the rewarding approach must also reflect the collision avoidance requirement. Thus, actions which may lead to collisions must be penalised.
Footnote 9: For a description and an analysis of the model we refer to [16].
The approach we considered builds on the work of [29, 5] and [19]. Its main lines are described next. The reward function \(r\) is made up of a path following component \(r_{pf}\) and a collision avoidance component \(r_{ac}\). Given the input data \(x=(x_{1},\ldots,x_{7})\) to the ANN and non-negative parameters \(\alpha_{1},\ldots,\alpha_{4}\), \(\beta_{1},\beta_{2}\), \(\lambda\), we define
\[r_{1}(x)=\alpha_{1}\exp\!\left(-x_{1}^{2}/(2\beta_{1})\right),\qquad r_{2}(x)=\alpha_{2}\exp\!\left(-x_{2}^{2}/(2\beta_{2})\right),\qquad r_{3}(x)=\alpha_{3}x_{3}.\]
With this we set
\[r_{pf}(x)=-1+(1+r_{2}(x)r_{3}(x))(1+r_{1}(x)),\qquad r_{ac}(x)=\begin{cases}-\alpha_{4}x_{6},&\text{if }x_{7}\leq\lambda(\rho_{2}-\rho_{1})\\ 0,&\text{else}\end{cases}\]
The definition of \(r_{pf}\) implies that large rewards can be achieved for small values of the cross track error, small deviations from the target velocity, and when the vehicle approaches the upcoming waypoint in a straight line. Due to the additive constants in the definition of the reward function, driving strategies aligned only partially with the desired policy will not be disregarded. Thus, for example, strategies that deviate from the speed profile or from the target trajectory can still collect positive rewards.
The definition of \(r_{ac}\) on the other hand, punishes actions which steer the vehicle in the direction of the smallest distance to an obstacle. If the vehicle crashes into an obstacle an additional penalty \(r_{crash}\) is applied. Combining \(r_{ac}\), \(r_{pf}\) and \(r_{crash}\) via
\[r(x)=r_{ac}+r_{pf}+r_{crash}\]
we obtain a reward signal which honours actions that maximise the distance to obstacles and proximity to the path in the absence of obstacles in the vehicle's surrounding.
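A sketch of the composite reward is given below; reading \(x_{i}^{2}/2\beta_{i}\) as \(x_{i}^{2}/(2\beta_{i})\) is an assumption, and the parameter defaults follow the values reported in Section VI.

```python
import numpy as np

def reward(x, crashed=False, alpha=(1.0, 1.0, 1.0, 1.5), beta=(0.25, 0.25),
           lam=0.75, rho=(1.0, 5.0), r_crash=-250.0):
    """Composite reward r = r_ac + r_pf + r_crash for the input x = (x1,...,x7)."""
    a1, a2, a3, a4 = alpha
    b1, b2 = beta
    r1 = a1 * np.exp(-x[0] ** 2 / (2 * b1))
    r2 = a2 * np.exp(-x[1] ** 2 / (2 * b2))
    r3 = a3 * x[2]
    r_pf = -1.0 + (1.0 + r2 * r3) * (1.0 + r1)
    # x[5] is the obstacle heading error x6, x[6] the smallest obstacle distance x7
    r_ac = -a4 * x[5] if x[6] <= lam * (rho[1] - rho[0]) else 0.0
    return r_ac + r_pf + (r_crash if crashed else 0.0)
```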
## V Evaluation and performance criteria
The validation of the decision-making function is based on KPIs. Two classes of KPIs are considered: KPIs for the assessment of the path tracking capability and KPIs for the assessment of the collision avoidance capability. For the validation we choose a specific path and a distribution of obstacles on or near the target trajectory. Let \((x_{1}^{(k)},x_{2}^{(k)},x_{3}^{(k)},x_{4}^{(k)},x_{5}^{(k)},x_{6}^{(k)},x_{7}^ {(k)})_{0\leq k\leq N}\) denote the set of input terms to the ANN obtained by applying the decision-making function to the specified scenario in an episode of \(N+1\) time steps10.
Footnote 10: If after \(M<N\) time steps the terminal position is reached or the vehicle collides with an obstacle, then only the input terms up to time point \(M\) are considered.
### _Path tracking_
For the assessment of the path tracking performance we consider on the one hand the mean \(l^{2}\) total tracking error \(\kappa_{2}\) which is defined by
\[\kappa_{2}=\frac{1}{N}\sum_{k=1}^{N}\|(x_{1}^{(k)},x_{2}^{(k)})\|_{2}^{2}.\]
In addition we measure the path tracking capability of the decision-making function by means of the waypoint reach rate \(\kappa_{reach}\): Consider a tuple \((z_{1},\ldots,z_{L})\) of points on the target trajectory arranged from the starting point towards the terminal point. Then \(\kappa_{reach}\) is defined as the quotient of the number of points \(z_{l}\), \(1\leq l\leq L\), which could be approximately reached in succession, and the total number \(L\) of points considered.
### _Collision avoidance_
To validate the collision avoidance capabilities of the decision-making function we use the metrics \(\kappa_{danger}\) and \(\kappa_{dist}\). The latter is defined by means of
\[\kappa_{dist}=\min\{x_{7}^{(k)}\ :\ 0\leq k\leq N\}.\]
We emphasize at this point that \(\kappa_{dist}\) is inversely proportional to the so-called safety cost function introduced in [25]. The definition of \(\kappa_{danger}\) is based on the collision danger introduced in [28]. Let \(\eta_{k}\) be given by
\[\eta_{k}=\begin{cases}1,\text{ if }x_{7}^{(k)}\leq(\rho_{2}-\rho_{1})/2\\ 0,\text{ else}\end{cases}\]
and define
\[\kappa_{danger}=\frac{1}{N}\sum_{k=1}^{N}\eta_{k}.\]
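Taken together, the four KPIs can be computed from the logged inputs of an episode as sketched below; stacking the inputs into a single array and passing the number of successively reached validation points as an argument are implementation assumptions.

```python
import numpy as np

def kpis(X, waypoints_reached, L, rho1=1.0, rho2=5.0):
    """X: array of shape (N+1, 7) with the ANN inputs per time step;
    waypoints_reached: validation points reached in successive manner;
    L: total number of validation points on the target trajectory."""
    x12 = X[1:, :2]                                      # (x1, x2) for k = 1..N
    kappa_2 = float(np.mean(np.sum(x12 ** 2, axis=1)))   # mean l2 tracking error
    kappa_reach = waypoints_reached / L                  # waypoint reach rate
    kappa_dist = float(np.min(X[:, 6]))                  # smallest obstacle distance
    eta = (X[1:, 6] <= (rho2 - rho1) / 2).astype(float)  # collision danger indicator
    kappa_danger = float(np.mean(eta))
    return kappa_2, kappa_reach, kappa_dist, kappa_danger
```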
## VI Results
For the evaluation of the decision-making function two scenarios are examined - a scenario containing obstacles and an obstacle-free scenario. In both cases the path depicted in Figure 7 is considered.
In the driving simulation, the parameter values
\[a_{max}=5,\vartheta_{max}=\pi/6,\rho_{1}=1,\rho_{2}=5,m=15,n=17.\]
were used. In the reference segment selection procedure a lookahead distance of \(3\) meters was used to obtain the current reference segment and the corresponding cross track error. For details we refer again to [7]. Regarding the reward function we used the following parameter values:
\begin{tabular}{c|c|c|c|c|c|c|c} \(\alpha_{1}\) & \(\alpha_{2}\) & \(\alpha_{3}\) & \(\alpha_{4}\) & \(\beta_{1}\) & \(\beta_{2}\) & \(\lambda\) & \(r_{crash}\) \\ \hline \(1\) & \(1\) & \(1\) & \(1.5\) & \(0.25\) & \(0.25\) & \(0.75\) & \(-250\) \\ \end{tabular}
The PPO implementation from the Stable Baselines package (version 2.10.0) is applied using the default parameter settings.
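A minimal training sketch with Stable Baselines 2.x is shown below; the Gym environment wrapper around the bicycle-model simulation and the total timestep budget are placeholders, not part of the published setup.

```python
from stable_baselines import PPO2
from stable_baselines.common.policies import MlpPolicy

def train(env, total_timesteps=1_000_000):
    """Train the decision-making policy with default PPO2 settings;
    `env` is the (hypothetical) Gym wrapper of the driving simulation."""
    model = PPO2(MlpPolicy, env, verbose=1)   # default hyper-parameters
    model.learn(total_timesteps=total_timesteps)
    model.save("reactive_path_tracking_ppo")
    return model
```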
### _Pure path following_
Figure 8 shows the path following performance of the decision-making function in an obstacle-free environment. We observe that the driven trajectory almost matches the target trajectory. The deviations in the curved parts of the target trajectory can be attributed to the waypoint selection procedure and the controller's effort to minimise the cross track error. Moreover we note that the target velocity profile is followed very precisely. Summing up, we expect a small value of \(\kappa_{2}\) and a value of \(\kappa_{reach}\) close to \(1\). Considering \(50\) randomly generated points on the target trajectory and a positional tolerance of \(1\) meter, we obtain
\[\kappa_{2}=0.04,\qquad\kappa_{reach}=0.94.\]
### _Reactive path following_
To illustrate the ability of the decision-making function to detect and evade an obstacle along the target trajectory, we consider a scenario with one obstacle placed on the target trajectory. The resulting trajectory of the vehicle is given in Figure 9. With regard to the KPIs we obtain the following values
\[\kappa_{2}=0.36,\qquad\kappa_{reach}=0.14\] \[\kappa_{dist}=0.75,\qquad\kappa_{danger}=0.02\]
We point out that, caused by the evasion maneuver, the value of \(\kappa_{2}\) increases and the waypoint reach rate \(\kappa_{reach}\) drops rather strongly. The latter is a consequence of the wide swerving necessary to avoid a collision.
## VII Discussion and outlook
### _Discussion_
Using a state-of-the-art RL algorithm, we were able to define and train a decision-making function solving the reactive path tracking problem introduced in Section III. For the performance and safety assessment of the controller, KPIs were considered, and according to these KPIs decent results were obtained. This is confirmed by the plots depicted in
Fig. 8: The plot shows the trajectory driven by the robot in the Python training simulation in an obstacle-free scenario. The plot shows moreover the controls applied over time, the cross track errors at each time point and the linear speed profile of the vehicle.
Fig. 7: The plot shows the target trajectory given by the parametric curve \(\gamma=(\gamma_{1},\gamma_{2}):[-\pi,\pi]\rightarrow\mathbb{R}^{2}\), where \(\gamma_{1}(t)=40+20\cos(t)\), \(\gamma_{2}(t)=22.5+20\sin(t)\cos(t)\), and the target speed profile.
Figure 8 and Figure 9. Even though the target trajectory could be tracked in a sufficient manner, a smoother driving behaviour could be obtained by further adjusting the reward function.
To obtain a more valid and more reliable statement regarding the safety performance of the decision-making function, further KPIs may be studied. Additionally, unit tests for the examination of the driving behaviour in selected (critical) scenarios could be considered. To complete the picture, the above validation and safety assessment procedure has to be applied to a larger set of different scenarios. Only then can possible weaknesses of the approach be identified. Based on these results, conclusions can be drawn about the quality and the completeness of the set of scenarios considered in the training process. In order to achieve a balanced and robust result, it is important to use samples from a uniform distribution over the whole input space of the ANN. This can be achieved by considering a wide variety of training scenarios.
### _Outlook_
At the time of publication of this paper, the integration of the decision-making function into the SPIDER software stack had not yet been completed. Results from the tests carried out in Gazebo and in real-world scenarios could therefore not be considered. The publication will be supplemented in this respect during the remainder of the FRACTAL project.
## VIII Acknowledgments
This project has received funding from the ECSEL Joint Undertaking (JU) under grant agreement No 877056. The JU receives support from the European Union's Horizon 2020 research and innovation programme and Spain, Italy, Austria, Germany, Finland, Switzerland. In Austria the project was also funded by the program "IKT der Zukunft" of the Austrian Federal Ministry for Climate Action (BMK). The publication was written at Virtual Vehicle Research GmbH in Graz and partially funded within the COMET K2 Competence Centers for Excellent Technologies from the Austrian Federal Ministry for Climate Action (BMK), the Austrian Federal Ministry for Digital and Economic Affairs (BMDW), the Province of Styria (Dept. 12) and the Styrian Business Promotion Agency (SFG). The Austrian Research Promotion Agency (FFG) has been authorised for the programme management.
|
2309.13536 | Tackling the Unlimited Staleness in Federated Learning with Intertwined
Data and Device Heterogeneities | The efficiency of Federated Learning (FL) is often affected by both data and
device heterogeneities. Data heterogeneity is defined as the heterogeneity of
data distributions on different clients. Device heterogeneity is defined as the
clients' variant latencies in uploading their local model updates due to
heterogeneous conditions of local hardware resources, and causes the problem of
staleness when being addressed by asynchronous FL. Traditional schemes of
tackling the impact of staleness consider data and device heterogeneities as
two separate and independent aspects in FL, but this assumption is unrealistic
in many practical FL scenarios where data and device heterogeneities are
intertwined. In these cases, traditional schemes of weighted aggregation in FL
have been proved to be ineffective, and a better approach is to convert a stale
model update into a non-stale one. In this paper, we present a new FL framework
that leverages the gradient inversion technique for such conversion, hence
efficiently tackling unlimited staleness in clients' model updates. Our basic
idea is to use gradient inversion to get estimations of clients' local training
data from their uploaded stale model updates, and use these estimations to
compute non-stale client model updates. In this way, we address the problem of
possible data quality drop when using gradient inversion, while still
preserving the clients' local data privacy. We compared our approach with the
existing FL strategies on mainstream datasets and models, and experiment
results demonstrate that when tackling unlimited staleness, our approach can
significantly improve the trained model accuracy by up to 20% and speed up the
FL training progress by up to 35%. | Haoming Wang, Wei Gao | 2023-09-24T03:19:40Z | http://arxiv.org/abs/2309.13536v2 | Tackling the Unlimited Staleness in Federated Learning with Intertwined Data and Device Heterogeneities
###### Abstract
The efficiency of Federated Learning (FL) is often affected by both data and device heterogeneities. Data heterogeneity is defined as the heterogeneity of data distributions on different clients. Device heterogeneity is defined as the clients' variant latencies in uploading their local model updates due to heterogeneous conditions of local hardware resources, and causes the problem of staleness when being addressed by asynchronous FL. Traditional schemes of tackling the impact of staleness consider data and device heterogeneities as two separate and independent aspects in FL, but this assumption is unrealistic in many practical FL scenarios where data and device heterogeneities are intertwined. In these cases, traditional schemes of weighted aggregation in FL have been proved to be ineffective, and a better approach is to convert a stale model update into a non-stale one. In this paper, we present a new FL framework that leverages the gradient inversion technique for such conversion, hence efficiently tackling unlimited staleness in clients' model updates. Our basic idea is to use gradient inversion to get estimations of clients' local training data from their uploaded stale model updates, and use these estimations to compute non-stale client model updates. In this way, we address the problem of possible data quality drop when using gradient inversion, while still preserving the clients' local data privacy. We compared our approach with the existing FL strategies on mainstream datasets and models, and experiment results demonstrate that when tackling unlimited staleness, our approach can significantly improve the trained model accuracy by up to 20% and speed up the FL training progress by up to 35%. The source codes of our work have been made publicly available at: [https://github.com/pittisl/FL-with-intertwined-heterogeneity](https://github.com/pittisl/FL-with-intertwined-heterogeneity).
## 1 Introduction
Federated Learning (FL) [14] uses multiple clients to collaboratively train a global machine learning (ML) model, while retaining their local data privacy. In a vanilla FL framework, each client downloads the global model from the server and trains it with the local data. Clients then upload their locally trained models as updates to the server for aggregation to update the global model.
FL could be affected by both data and device heterogeneities. _Data heterogeneity_ is defined as the heterogeneity of data distributions on different clients, which makes local data distributions to be non-i.i.d. and deviate from the global data distribution [9]. This disparity could make the aggregated global model biased and reduces model accuracy [25]. Most existing work addresses data heterogeneity by adopting different training strategies at local clients, such as adding a regularizer [1] or an extra correction term to address client drifts [8].
_Device heterogeneity_, on the other hand, arises from clients' heterogeneous conditions of local hardware resources (e.g., computing power, memory space, network link speed, etc), which result
in clients' variant latencies in uploading their model updates. If the server waits for slow clients to complete aggregation and update the global model, the speed of training will be slowed down. An intuitive solution to device heterogeneity is asynchronous Federated Learning (AFL), which immediately updates the global model whenever having received a client update [18]. Since a model update from a slow client is computed based on an outdated global model, this update will be _stale_ when aggregated at the server, affecting model convergence and reducing model accuracy. To tackle such _staleness_, weighted aggregation can be used in AFL, to apply a reduced weight on a stale model update in aggregation.
All the existing techniques, even when simultaneously tackling data and device heterogeneities [27], consider these two heterogeneities as separate and independent aspects in FL. This assumption, however, is unrealistic in many FL scenarios where data and device heterogeneity are _intertwined_. For example, data samples in certain classes or with particular features may only be produced from some slow clients, such as embedded and IoT devices that are deployed in special conditions (e.g., at remote sites or inside human bodies) and hence have strict resource constraints. In these cases, if reduced weights are applied to stale model updates from these slow clients, some important knowledge in these updates may not be sufficiently learned, leading to low prediction accuracy in these classes or with these features.
Instead, a better approach is to convert a stale model update into a non-stale one. Existing techniques for such conversion, however, are limited to a small amount of staleness. For example, Asynchronous Stochastic Gradient Descent with Delay Compensation (DC-ASGD) can be used for first-order compensation of the gradient delay [26], but it assumes that staleness is sufficiently small to ignore all the high-order terms in the difference between stale and non-stale model updates. Hence, staleness is usually limited to communication delays and is always smaller than one epoch [28]. In contrast, our experiments show that when staleness grows beyond one epoch, the compensation error will quickly increase (Section 2).
In practical FL scenarios, it is not uncommon to witness excessive or even unlimited staleness in clients' model updates, especially in the aforementioned cases where the client devices have very limited computing power, local energy budget, or communication capabilities. To efficiently tackle such _unlimited staleness_, in this paper we present a new FL framework which uses gradient inversion at the server to convert stale model updates. Gradient inversion [29] is a ML technique that recovers training data from a trained model by mimicking the model's gradient produced with the original training data. With such training data being recovered by gradient inversion from stale model updates, we can use the recovered data to retrain the current global model, as an estimation to the client's unstale model update. Compared to other model inversion methods, such as training an extra generative model [19] or optimizing input data with extra constraints [21], gradient inversion does not require any auxiliary dataset nor the client model to be fully trained.
The major challenge, however, is that quality of recovered data will drop and hence affect the FL performance, especially when a large amount of data samples is recovered, because gradient inversion permutes the learned information from the model's gradient across data samples. To minimize the impact of such data quality drop, our basic idea is to avoid any direct use of recovered data samples. Instead, we use gradient inversion at the server to obtain an estimated distribution of clients' training data from their stale model updates, such that the model with such estimated data distribution will exhibit a similar loss surface as that of using the clients' original training data. Such estimated data distributions are then used to compute the non-stale model updates. In this way, the server will not obtain any client's raw data samples, hence protecting the clients' data privacy.
Another challenge is the gradient inversion's own impact on FL, which is produced due to imperfect estimation of clients' training data. While such impact is invariant throughout the whole FL procedure, the impact of staleness could gradually diminish as the global model converges. As a result, we will need to switch back to classic AFL aggregation at the late stage of FL. We adaptively determine this switching point by timely evaluating the impact of staleness and comparing it with the estimation error in gradient inversion, and also enforce a smooth switching to prevent sudden drop of model accuracy in training.
We evaluated our proposed technique by comparing with the mainstream FL strategies on multiple mainstream datasets and models. Experiment results show that when tackling unlimited staleness, our technique can significantly improve the trained model accuracy by up to 20% and speed up the FL training progress by up to 35%. Even when the clients' local data distributions are continuously
variant, our technique can still ensure high performance of the trained model and largely outperforms the existing FL methods.
## 2 Background and Motivation
In this section, we present background knowledge and preliminary results that demonstrate the ineffectiveness of existing AFL techniques in tackling staleness with intertwined data and device heterogeneities, hence motivating our proposed technique using gradient inversion.
### Tackling Staleness in FL with Intertwined Data and Device Heterogeneities
The most intuitive solution to staleness is weighted aggregation, but it results in an improper bias towards fast clients and misses important knowledge in slow clients' model updates when data and device heterogeneities are intertwined. To demonstrate this, we conducted experiments by using the MNIST dataset [11] on 100 clients to train a 3-layer CNN model. We set data heterogeneity such that each client only contains samples from one data class, and set device heterogeneity as a staleness of 40 epochs on clients with data samples in class 5. Results in Figure 1 show that staleness leads to a large degradation of model accuracy, and that using weighted aggregation further enlarges the degradation. These results motivate us to tackle the staleness in FL by converting stale model updates to unstale ones.
Existing techniques for such conversion, such as DC-ASGD [26], suggest compensating the errors in stale model updates, but are only applicable to limited amounts of staleness. As shown in Figure 2, when we use the same experiment setting as above and increase the amount of staleness from 0 to 60 epochs, DC-ASGD's first-order compensation error, measured by both cosine similarity and L1-norm difference with the unstale model updates, significantly increases. The basic reason is that the higher-order terms in the compensation are no longer negligible as staleness becomes large. These results motivate us to further design better techniques that support accurate conversion with unlimited staleness.
Figure 1: The impact of staleness in AFL
Figure 2: DC-ASGD’s compensation error to staleness
### Gradient Inversion
Our proposed approach to addressing the above limitations builds on existing techniques of gradient inversion. Gradient inversion (GI) [29] aims to recover the original training data from gradients of a model under the white box setting, which means that the trained model's architecture and the setting for training are both known. Its basic idea is to minimize the difference between the trained model's gradient and the gradient computed from the recovered data. More specifically, denote a batch of training data as \((x,y)\) where \(x\) denotes the input data samples and \(y\) denotes the corresponding labels, gradient inversion aims to solve the following optimization problem:
\[(x^{\prime*},y^{\prime*})=\arg\min_{(x^{\prime},y^{\prime})}\|\frac{\partial L [(x^{\prime},y^{\prime});w^{t-1}]}{\partial w^{t-1}}-g^{t}\|_{2}^{2}, \tag{1}\]
where \((x^{\prime},y^{\prime})\) is the recovered data, \(w^{t-1}\) is the trained model, \(L[\cdot]\) is model's loss function, and \(g^{t}\) is the gradient calculated with the raw training data and \(w^{t-1}\). This problem can be solved using gradient descent to iteratively update \((x^{\prime},y^{\prime})\). Since a gradient only contains knowledge from data samples involved in this gradient calculation, gradient inversion can ensure transferring only the relevant knowledge from the training data to the recovered data, achieving higher data quality and minimizing compensation errors.
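A minimal DLG-style sketch of this optimisation loop in PyTorch is given below; treating the labels as optimisable soft logits and the choice of optimiser, learning rate and step count are implementation assumptions.

```python
import torch
import torch.nn.functional as F

def invert_gradient(model, target_grad, batch_size, x_shape, n_classes,
                    steps=3000, lr=0.1):
    """Optimise a dummy batch (x', y') so that its gradient matches the
    observed gradient g^t (Eq. 1); target_grad is a list of detached tensors
    ordered like model.parameters().  Soft labels need PyTorch >= 1.10."""
    x = torch.randn(batch_size, *x_shape, requires_grad=True)
    y_logits = torch.randn(batch_size, n_classes, requires_grad=True)
    opt = torch.optim.Adam([x, y_logits], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y_logits.softmax(dim=-1))
        grad = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        gi_loss = sum(((g - t) ** 2).sum() for g, t in zip(grad, target_grad))
        gi_loss.backward()   # gradients flow into x and y_logits
        opt.step()
    return x.detach(), y_logits.softmax(dim=-1).detach()
```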
The quality of recovered data relates to the amount of data samples recovered. Recovering a larger dataset will confuse the knowledge across different data samples, reducing the quality of recovered data. All the existing methods are limited to a small batch (\(<\)48) of data samples [22; 6; 24]. This limitation contradicts with the typical size of clients' datasets in FL, calling for alternative ways of better utilizing the recovered data samples.
## 3 Problem Settings
We consider a FL problem with one server and \(N\) clients. At time \(t\)1, a normal client \(i\) provides its model update as
Footnote 1: In the rest of this paper, without loss of generality, we use the notation of time \(t\) to indicate the \(t\)-th epoch in FL training.
\[w_{i}^{t}=LocalUpdate(w_{global}^{t};D_{i}),\]
where \(LocalUpdate[\cdot]\) is client \(i\)'s local training program, which uses the current global model \(w_{global}^{t}\) and client \(i\)'s local dataset \(D_{i}\) to produce \(w_{i}^{t}\). On the other hand, when the client \(i\)'s model update is delayed due to either insufficient computing power (i.e., computing delay) or unstable network connection (i.e., communication delay), the server will receive a stale model update from \(i\) at time \(t\) as
\[w_{i}^{t-\tau}=LocalUpdate(w_{global}^{t-\tau};D_{i}),\]
where the amount of staleness is indicated by \(\tau\) and \(w_{i}^{t-\tau}\) is computed from an outdated global model \(w_{global}^{t-\tau}\).
Due to intertwined data and device heterogeneities, we generally consider that \(w_{i}^{t-\tau}\) contains some unique knowledge about \(D_{i}\) that is only available from client \(i\), and such knowledge needs to be sufficiently incorporated into the current global model. To do so, the server computes an estimation of \(w_{i}^{t}\) from the received \(w_{i}^{t-\tau}\), namely \(\hat{w}_{i}^{t}\), and then uses this estimation in aggregation. During this procedure, the server only receives the stale model update \(w_{i}^{t-\tau}\) from client \(i\), which does not expose any part of its local dataset \(D_{i}\) to the server. The client \(i\) does not need to perform any extra computations for such estimation of \(\hat{w}_{i}^{t}\), either.
## 4 Methodology
As shown in Figure 3, our proposed technique consists of three key components: _1)_ recovering an intermediate dataset from the received stale model update via gradient inversion to represent the distribution of the client's training data; _2)_ estimating the unstale model update using the recovered dataset; and _3)_ deciding when to switch back to vanilla FL in the late stage of FL training, to avoid the excessive estimation error from gradient inversion.
### Data Recovery from Stale Model Updates
At time \(t\), when the server receives a stale model update \(w_{i}^{t-\tau}\) from client \(i\) with staleness \(\tau\), we adopt gradient inversion described in Eq. (1) into FL, to recover an intermediate dataset \(D_{rec}\) from \(w_{i}^{t-\tau}\). We expect that \(D_{rec}\) represents the similar data distribution with the client \(i\)'s original training dataset \(D_{i}\). To achieve so, we first fix the size of \(D_{rec}\) and randomly initialize each data sample and label in \(D_{rec}\). Then, we iteratively update \(D_{rec}\) by minimizing
\[Disparity[LocalUpdate(w_{global}^{t-\tau};D_{rec}),w_{i}^{t-\tau}], \tag{2}\]
using gradient descent, where \(Disparity[\cdot]\) is a metric to evaluate how much \(w_{i}^{t-\tau}\) changes if being retrained using \(D_{rec}\). In FL, a client's model update comprises multiple local training steps instead of a single gradient. Hence, to use gradient inversion for data recovery in FL, we substitute the single gradient computed from \(D_{rec}\) in Eq. (1) with the local training outcome using \(D_{rec}\). In this way, since the loss surface in the model's weight space computed using \(D_{rec}\) is similar to that using \(D_{i}\), we can expect a similar gradient being computed. To verify this, we conducted preliminary experiments by using the MNIST dataset to train the LeNet model. Results in Figure 4 show that, the loss surface computed using \(D_{rec}\) is highly similar to that using \(D_{i}\), in the proximity of the current global model (\(w_{global}^{t-\tau}\)), and the computed gradient is hence very similar, too.
A key issue is how to decide the proper size of \(D_{rec}\). Since gradient inversion is equivalent to data resampling in the original training data's distribution, a sufficiently large size of \(D_{rec}\) would be necessary to ensure unbiased data sampling and sufficient minimization of gradient loss through iterations. On the other hand, when the size of \(D_{rec}\) is too large, the computational overhead of each iteration would be unnecessarily too high. We experimentally investigated such tradeoff by using the MNIST and CIFAR-10 [10] datasets to train a LeNet model. Results in Tables 1 and 2, where the size of \(D_{rec}\) is represented by its ratio to the size of original training data, show that when the size of \(D_{rec}\) is larger than 1/2 of the size of the original training data, further increasing the size of \(D_{rec}\) only results in little extra reduction of the gradient inversion loss but dramatically increase the computational overhead. Hence, we believe that it is a suitable size of \(D_{rec}\) for FL. Considering that clients' local dataset in FL contain at least hundreds of samples, we expect a big size of \(D_{rec}\) in most FL scenarios.
Figure 4: Comparing the loss surface and gradient computed using \(D_{rec}\), \(D_{i}\), and random noise data
Figure 3: The overall picture of our proposed method of tackling unlimited staleness in FL
Such a big size of \(D_{rec}\) directly decides our choice of how to evaluate the change of \(w_{i}^{t-\tau}\) in Eq. (2). Most existing works use cosine similarity between \(LocalUpdate(w_{global}^{t-\tau};D_{rec})\) and \(w_{i}^{t-\tau}\) to evaluate their difference in the direction of gradients, so as to maximize the quality of individual data samples in \(D_{rec}\)[3]. However, since we aim to recover a large \(D_{rec}\), this metric is not applicable, and instead we use the L1-norm as the metric to evaluate how using \(D_{rec}\) to retrain \(w_{global}^{t-\tau}\) will change its magnitude of gradient, to make sure that \(D_{rec}\) incurs the minimum impact on the state of training.
With such a big \(D_{rec}\), the similarity between data samples in \(D_{rec}\) and \(D_{i}\) is minimal, hence protecting client \(i\)'s local data privacy. To verify this, we conducted experiments with the CIFAR-10 dataset and ResNet-18 model, and matched each data sample in \(D_{rec}\) with the most similar data sample in \(D_{i}\) by computing their LPIPS perceptual similarity score [23]. As shown in Figure 5, these matching data samples are highly dissimilar, and the recovered data samples in \(D_{rec}\) are mostly meaningless to humans.
As shown in Tables 1 and 2, gradient inversion is computationally expensive because it needs a large amount of iterations to converge. In our approach, we reduce this high computational overhead in the following two ways. First, we simplify \(LocalUpdate[\cdot]\) in Eq. (2). In the original \(LocalUpdate[\cdot]\) used in FL, the client performs training epochs via mini-batch SGD, and a random data augmentation is usually applied to client data before each epoch. To reduce such overhead, in gradient inversion only full-batch gradient descent will be performed. Second, in most FL scenarios, the clients' local datasets remain fixed, implying that \(D_{rec}\) will also be invariant over time. Therefore, instead of starting iterations from a random initialization, we can optimize \(D_{rec}\) from those calculated in previous training epochs. In our experiments using the MNIST dataset and the LeNet model, when the client data remains fixed, we observe a reduction in the required iterations in gradient inversion
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Size & 1/64 & 1/16 & 1/4 & 1/2 & 2 & 10 \\ \hline Time(s) & 193 & 207 & 214 & 219 & 564 & 2097 \\ \hline GI loss & 27 & 4.1 & 2.56 & 1.74 & 1.62 & 1.47 \\ \hline \end{tabular}
\end{table}
Table 1: Tradeoff between gradient inversion (GI) loss and computing time with different sizes of \(D_{rec}\) after 15k iterations, with the MNIST dataset
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Size & 1/64 & 1/16 & 1/4 & 1/2 & 2 & 10 \\ \hline Time(s) & 423 & 440 & 452 & 474 & 1330 & 4637 \\ \hline GI Loss & 1.97 & 0.29 & 0.16 & 0.15 & 0.15 & 0.12 \\ \hline \end{tabular}
\end{table}
Table 2: Tradeoff between gradient inversion (GI) loss and computing time with different sizes of \(D_{rec}\) after 15k iterations, with the CIFAR-10 dataset
Figure 5: The 5 best matches between samples in \(D_{rec}\) and \(D_{i}\). The top row represents samples in \(D_{rec}\), and the bottom row represents samples in \(D_{i}\).
by 43%. When the client data is only partially fixed, Figure 6 shows that we can still achieve more than a 15% reduction when 20% of the client data changes in every epoch.
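A sketch of the recovery step of Eq. (2) with the simplifications described above (unrolled full-batch gradient-descent local steps and L1 disparity) is given below; the unrolling depth, the learning rates and the use of torch.func for the functional forward pass are assumptions.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call

def recover_dataset(model, theta_old, w_stale, x, y, lr_local=0.01,
                    local_steps=5, outer_steps=15000, lr_rec=0.1):
    """Optimise D_rec = (x, y) so that retraining the outdated global model
    theta_old on it reproduces the stale client update w_stale.

    theta_old, w_stale: dicts of model parameters (e.g. dict(m.named_parameters()));
    x, y: randomly initialised tensors with requires_grad=True."""
    opt = torch.optim.Adam([x, y], lr=lr_rec)
    for _ in range(outer_steps):
        opt.zero_grad()
        params = {k: v.detach().clone().requires_grad_(True)
                  for k, v in theta_old.items()}
        for _ in range(local_steps):            # simplified, unrolled LocalUpdate
            out = functional_call(model, params, (x,))
            loss = F.cross_entropy(out, y.softmax(dim=-1))
            grads = torch.autograd.grad(loss, tuple(params.values()),
                                        create_graph=True)
            params = {k: p - lr_local * g
                      for (k, p), g in zip(params.items(), grads)}
        disparity = sum((params[k] - w_stale[k].detach()).abs().sum()
                        for k in params)        # L1 disparity of Eq. (2)
        disparity.backward()                    # gradients flow into x and y
        opt.step()
    return x.detach(), y.detach()
```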
### Estimating the Unstale Model Updates
Having obtained \(D_{rec}\), the server uses it to retrain its current global model to estimate the client \(i\)'s unstale model update:
\[\hat{w}_{i}^{t}=LocalUpdate[w_{global}^{t};D_{rec}]\]
After that, the server aggregates \(\hat{w}_{i}^{t}\) with model updates from other clients, to update its global model in the current epoch.
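A sketch of this server-side estimation step is given below; the local-training settings mirror the clients' configuration from Section 5.1, while the batch size is an assumption.

```python
import copy
import torch
import torch.nn.functional as F

def estimate_unstale_update(global_model, d_rec, epochs=5, lr=0.01,
                            momentum=0.5, batch_size=64):
    """Retrain a copy of the *current* global model on the recovered dataset
    D_rec = (x, y) to approximate the client's unstale update."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=momentum)
    x, y = d_rec                                    # y may be soft labels
    loader = torch.utils.data.DataLoader(
        torch.utils.data.TensorDataset(x, y), batch_size=batch_size, shuffle=True)
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            F.cross_entropy(model(xb), yb).backward()
            opt.step()
    return model.state_dict()   # used in place of the stale update in aggregation
```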
To verify the accuracy of using \(\hat{w}_{i}^{t}\) to estimate \(w_{i}^{t}\), we compare this estimation with DC-ASGD's first-order estimation, by computing their discrepancies with the true unstale model update under different amounts of staleness, using the MNIST dataset and LeNet model. Results in Figure 7 show that, compared to DC-ASGD's first-order estimation, our estimation based on gradient inversion can reduce the estimation error by up to 50%, especially when the amount of staleness excessively increases to more than 50 epochs.
### Deciding and Updating the Switching Point back to Vanilla FL
As shown in Figure 7, the estimation made by gradient inversion also contains errors, because the gradient inversion loss can not be minimized down to zero. As the FL training goes and the global model converges, the difference between stale and unstale model updates diminishes, implying that the error in our estimated model update (\(\hat{w}_{i}^{t}\)) will exceed that of the original stale model update \(w_{i}^{t-\tau}\) in the late stage of FL training. To verify this, we conducted experiments by training the LeNet model with the MNIST dataset, and evaluated the average values of \(E_{1}(t)=Disparity[\hat{w}_{i}^{t};w_{i}^{t}]\) and \(E_{2}(t)=Disparity[w_{i}^{t-\tau};w_{i}^{t}]\) across different clients, using both cosine similarity and L1-norm
Figure 6: The relationship between reduction in the number of iterations and the percentage of changed data
Figure 7: Our estimation method has smaller error compared to that of DC-ASGD’s estimation
difference as the metric. Results in Figure 8 show that at the final stage of training in FL, \(E_{2}(t)\) is always larger than \(E_{1}(t)\).
Hence, in the late stage of FL training, it is necessary to switch back to vanilla FL and directly use stale model updates in aggregation. The difficulty in deciding such a switching point is that the true unstale model update (\(w_{i}^{t}\)) is unknown at time \(t\). Instead, the server is likely to receive \(w_{i}^{t}\) at a later time, namely \(t+\tau^{\prime}\). Therefore, if we find that \(E_{1}(t)>E_{2}(t)\) when the server receives \(w_{i}^{t}\) at time \(t+\tau^{\prime}\), we can use \(t+\tau^{\prime}\) as the switching point instead of \(t\). Doing so will result in a delay in switching, but our experiment results in Figure 9 with different switching points show that the FL training is insensitive to such a delay.
In practice, when we make such a switch, the model accuracy in training will experience a sudden drop, as shown in Figure 9, due to the inconsistency of gradient between \(\hat{w}_{i}^{t}\) and \(w_{i}^{t-\tau}\). To avoid such a sudden drop, at time \(t+\tau^{\prime}\), instead of immediately switching to using \(w_{i}^{t-\tau}\) in the server's model aggregation, we use a weighted average \(\alpha\hat{w}_{i}^{t}+(1-\alpha)w_{i}^{t-\tau}\) in aggregation, so as to ensure smooth switching.
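The switching test and the smooth blending can be sketched as follows; the annealing schedule for \(\alpha\) (from 1 towards 0 after the switch) is an assumption, since the text does not specify it.

```python
def l1_distance(w_a, w_b):
    """L1 distance between two model updates given as parameter dicts."""
    return sum((w_a[k] - w_b[k]).abs().sum().item() for k in w_a)

def should_switch(w_hat_old, w_stale_old, w_true):
    """Evaluated at t + tau', when the delayed true update w_i^t arrives:
    E1 is the error of the GI-based estimate, E2 that of the raw stale
    update; switch back to vanilla aggregation once E1 > E2."""
    return l1_distance(w_hat_old, w_true) > l1_distance(w_stale_old, w_true)

def blended_update(w_hat, w_stale, alpha):
    """Smooth transition: alpha * w_hat + (1 - alpha) * w_stale, with alpha
    annealed from 1 towards 0 over a few epochs after the switch."""
    return {k: alpha * w_hat[k] + (1 - alpha) * w_stale[k] for k in w_hat}
```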
## 5 Experiments
We evaluated our proposed technique in two FL scenarios. In the first scenario, all clients' local datasets are fixed. In the second scenario, we consider a more practical FL setting, where clients' local data is continuously updated and data distributions are variant over time. We believe that the second setting matches the condition of many real-world applications, such as embedded sensing or camera surveillance scenarios, where environmental contexts are changing over time and introducing new knowledge into clients' local datasets. We compare our proposed technique with the following FL training strategies, and also included the case of FL without any staleness as the baseline in comparison:
* **Direct aggregation with staleness**: Directly aggregating stale model updates without applying weights onto them.
Figure 8: Comparison of model updates’ estimation error as the FL training progresses
Figure 9: FL training results with different switching points. \(E_{1}(t)>E_{2}(t)\) when \(t\)=155, but different switching points exhibit very similar training performance.
* **Weighted aggregation with staleness**: Applying weights onto the stale model updates in aggregation, and these weights generally decay with the amount of staleness [4].
* **First-order compensation**: Compensating the model update errors caused by staleness using first-order Taylor expansion and Hessian approximation as described in DC-ASGD [26].
### Experiment Setup
In all experiments, we consider an FL scenario with 100 clients. Each local model update on a client is trained for 5 epochs using the SGD optimizer, with a learning rate of 0.01 and a momentum of 0.5.
To emulate data heterogeneity, we use a Dirichlet distribution to sample client datasets with different label distributions [7], and use a tunable parameter (\(\alpha\)) to adjust the amount of data heterogeneity: as shown in Figure 10, the smaller \(\alpha\) is, the more biased these label distributions will be and hence the higher amount of data heterogeneity exists. When \(\alpha\) is very small, the local dataset of each client only contains data samples of few data classes.
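One common implementation of such a Dirichlet-based split is sketched below; the exact sampling procedure used in the experiments is not specified in the text.

```python
import numpy as np

def dirichlet_partition(labels, n_clients=100, alpha=0.1, seed=0):
    """For every class, sample client shares from Dir(alpha) and distribute
    that class's samples accordingly; smaller alpha gives more skewed
    per-client label distributions."""
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        shares = rng.dirichlet(alpha * np.ones(n_clients))
        splits = (np.cumsum(shares)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, splits)):
            client_indices[client].extend(part.tolist())
    return client_indices
```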
To emulate device heterogeneity intertwined with data heterogeneity, we select one data class to be affected by staleness, and apply different amounts of staleness, measured by the number of epochs that the clients' model updates are delayed, to the top 10 clients whose local datasets contain the most data samples of the selected data class. The impact of staleness, on the other hand, can be further enlarged by applying staleness in the similar way to more data classes.
Based on this experiment setup, in all experiments we assess our approach's performance improvement by measuring the increase of model accuracy in the selected data class affected by staleness. We expect that our approach can either improve the final model accuracy when the training completes, or achieve the same level of model accuracy as the existing methods but with fewer training epochs.
Figure 11: Model accuracy in data class 5 affected by staleness, using MNIST dataset to train a LeNet model
Figure 10: Emulating data heterogeneity using the Dirichlet Distribution. Data distributions on 10 clients are shown.
### FL Performance in the Fixed Data Scenario
In the fixed data scenario, we conduct experiments in two FL settings: 1) the MNIST [11] dataset to train a LeNet model and 2) the CIFAR-10 [10] dataset to train a ResNet-8 model.
When staleness is 40 epochs, Figures 11 and 12 show that our proposed technique results in much better model accuracy in both FL settings. At the early stage of FL, the model accuracy achieved by our technique of _gradient inversion based model estimation_ is very close to that of FL without staleness, indicating that our technique can fully remove the impact of staleness. In contrast, directly aggregating stale model updates could result in a 7.5% drop in model accuracy, and using weighted aggregation with staleness could even increase such a model accuracy drop to up to 20%.
Our technique can also greatly speed up the training progress. As shown in Figures 11 and 12, compared to direct aggregation or weighted aggregation with staleness, to achieve the same model accuracy in the early stage of training (before 400 epochs in Figure 11 and before 1000 epochs in Figure 12), our technique can reduce the required training time by 27.5% and 35%, for the MNIST and CIFAR-10 dataset, respectively. This speedup is particularly important in many time-sensitive and mission-critical applications such as embedded sensing, where coarse-grained but fast model availability is needed.
Furthermore, we also conducted experiments with different amounts of data heterogeneity and device heterogeneity (i.e., staleness) using the MNIST dataset and the LeNet model. Results in Tables 3 and 4 show that, compared with the existing schemes including direct aggregation and first-order compensation, our proposed gradient inversion (GI)-based estimation can generally achieve higher model accuracy using a smaller number of training epochs. Especially when the amount of staleness increases to an unlimited level (e.g., 20-40 epochs) or the data distributions among different clients are highly heterogeneous, the improvement of model accuracy could be up to 5% with \(>\)30% fewer training epochs. These results demonstrate that our proposed method can be widely applied to different FL scenarios with unlimited amounts of staleness and data heterogeneity.
\begin{table}
\begin{tabular}{l r r r} \hline \hline \multirow{2}{*}{**training time**} & \multicolumn{3}{c}{**Staleness (epoch)**} \\ \cline{2-4} & **10** & **20** & **40** \\ \hline \multirow{2}{*}{**GI-based Estimation**} & **720** & **660** & **580** \\ & **73.3\%** & **75.2\%** & **77.7\%** \\ \hline \multirow{2}{*}{**Direct Aggregation**} & **800** & **800** & **800** \\ & **72.6\%** & **73.1\%** & **73.9\%** \\ \hline \multirow{2}{*}{**First Order compensation**} & **790** & **780** & **820** \\ & **72.8\%** & **73.1\%** & **73.7\%** \\ \hline \hline \end{tabular}
\end{table}
Table 3: The trained model accuracy and amount of training time spent with different amounts of staleness, measured in the number of delayed epochs
Figure 12: Model accuracy in data class 2 affected by staleness, using CIFAR-10 dataset to train a ResNet-8 model
### FL Performance in the Variant Data Scenario
To continuously vary the data distributions of clients' local datasets, we use two public datasets, namely MNIST and SVHN [15], which are for the same learning task (i.e., handwriting digit recognition) but with different feature representations as shown in Figure 13. Each client's local dataset is initialized as the MNIST dataset in the same way as in the fixed data scenario. Afterwards, during training, each client continuously replaces random data samples in its local dataset with new data samples in the SVHN dataset.
Experiment results in Figure 14 show that in such variant data scenario, since clients' local data distributions continuously change, the FL training will never converge. Hence, the model accuracy improvements by the existing FL training strategies, including both direct aggregation with staleness and first-order compensation, exhibit significant fluctuations over time and stay low (\(<\)40%). In comparison, our proposed gradient inversion based estimation can better depict the variant data patterns and hence achieve much higher model accuracy, which is comparable to FL without staleness and 20% higher than those in existing FL schemes.
In addition, we also conducted experiments with different amounts of staleness and rates of data variation. Results in Tables 5 and 6 demonstrated that our proposed method outperformed the existing FL strategies in different scenarios with different dynamics of local data patterns.
\begin{table}
\begin{tabular}{l r r r} \hline \hline \multirow{2}{*}{**training time**} & \multicolumn{3}{c}{\(\alpha\)} \\ \cline{2-4}
**accuracy** & **100** & **1** & **0.1** \\ \hline \multirow{2}{*}{**GI-based Estimation**} & **800** & **760** & **580** \\ & **82.3\%** & **78.1\%** & **78.3\%** \\ \hline \multirow{2}{*}{**Direct Aggregation**} & **800** & **800** & **800** \\ & **82.3\%** & **77.2\%** & **73.2\%** \\ \hline \multirow{2}{*}{**First Order compensation**} & **800** & **800** & **820** \\ & **82.3\%** & **77.2\%** & **72.7\%** \\ \hline \hline \end{tabular}
\end{table}
Table 4: The trained model accuracy and amount of training time spent with different amounts of data heterogeneity, controlled by the tunable parameter \(\alpha\))
Figure 14: Model accuracy with variant data distributions in clients’ local datasets
Figure 13: Datasets for digit recognition: MNIST and SVHN
## 6 Related Work
**Staleness in asynchronous FL (AFL):** Most existing solutions to staleness in AFL are based on weighted aggregation. For example, [4] suggests that a model update's weight exponentially decays with its amount of staleness, and some others use different staleness metrics to decide model updates' weights [17]. [5] decides these weights based on a feature learning algorithm. These existing solutions are always biased towards fast clients, and will hence affect the trained model's accuracy when data and device heterogeneities in FL are intertwined. Some other researchers suggest using semi-asynchronous FL, where the server either aggregates client model updates at a lower frequency [16] or clusters clients into different asynchronous "tiers" according to their update rates [2]. However, doing so cannot completely eliminate the impact of staleness because the server's aggregation still involves stale model updates.
**Recovering training data:** We can recover the training data from stale model updates and use the recovered data to transfer knowledge from stale model updates to the global model. Such recovery can be done by training a generative model and compelling its generated data to exhibit high predictive values on the original model [20; 12; 30]. Another approach is to directly optimize the randomly initialized input data until it has good performance on the original model [21]. However, the quality of recovered data from these methods remains low. Other efforts enhance data quality by incorporating natural image priors [13] or using another public dataset to introduce general knowledge [19], but require involvement of extra datasets. Moreover, all these methods require that the original model update to be fully trained, which is usually infeasible in FL.
## 7 Conclusion
In this paper, we present a new FL framework to tackle the unlimited staleness when data and device heterogeneities are intertwined, by using gradient inversion to compute non-stale model updates from clients' stale model updates. Experiment results show that our technique can largely improve the model accuracy and speed up FL training.
\begin{table}
\begin{tabular}{l r r r} \hline \hline \multirow{2}{*}{
\begin{tabular}{} \end{tabular} } & \multicolumn{3}{c}{**streaming rate**} \\ \cline{2-4} & 1/4 & 1/3 & 1/2 \\ \hline \multirow{2}{*}{**GI-based Estimation**} & 330 & 330 & 440 \\ & 32.2\% & **41.9\%** & **60.2\%** \\ \hline \multirow{2}{*}{**Direct Aggregation**} & **800** & **800** & **800** \\ & **27.8\%** & **32.6\%** & **40.7\%** \\ \hline \multirow{2}{*}{**First Order Compensation**} & **830** & **800** & **820** \\ & **27.5\%** & **32.6\%** & **40.9\%** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Model accuracy and amount of training time with different rates of clients’ data variation, measured as the number of local data samples being replaced in each epoch (e.g., 1/2 is to replace one data sample every 2 epochs)
\begin{table}
\begin{tabular}{l r r r} \hline \hline \multirow{2}{*}{
\begin{tabular}{} \end{tabular} } & \multicolumn{3}{c}{Staleness (epoch)} \\ \cline{2-4} & **10** & **20** & **40** \\ \hline \multirow{2}{*}{**GI-based Estimation**} & 760 & **710** & **440** \\ & 54.0\% & 52.4\% & 60.2\% \\ \hline \multirow{2}{*}{**Direct Aggregation**} & **800** & **800** & **800** \\ & 51.4\% & 46.6\% & **40.7\%** \\ \hline \multirow{2}{*}{**First Order compensation**} & **800** & **800** & **820** \\ & 51.4\% & 46.5\% & **40.9\%** \\ \hline \hline \end{tabular}
\end{table}
Table 5: The trained model accuracy and amount of training time spent with different amounts of staleness |
2309.03725 | Immersive Virtual Reality Platform for Robot-Assisted Antenatal
Ultrasound Scanning | Maternal health remains a pervasive challenge in developing and
underdeveloped countries. Inadequate access to basic antenatal Ultrasound (US)
examinations, limited resources such as primary health services and
infrastructure, and lack of skilled healthcare professionals are the major
concerns. To improve the quality of maternal care, robot-assisted antenatal US
systems with teleoperable and autonomous capabilities were introduced. However,
the existing teleoperation systems rely on standard video stream-based
approaches that are constrained by limited immersion and scene awareness. Also,
there is no prior work on autonomous antenatal robotic US systems that automate
standardized scanning protocols. To that end, this paper introduces a novel
Virtual Reality (VR) platform for robotic antenatal ultrasound, which enables
sonologists to control a robotic arm over a wired network. The effectiveness of
the system is enhanced by providing a reconstructed 3D view of the environment
and immersing the user in a VR space. Also, the system facilitates a better
understanding of the anatomical surfaces to perform pragmatic scans using 3D
models. Further, the proposed robotic system also has autonomous capabilities;
under the supervision of the sonologist, it can perform the standard six-step
approach for obstetric US scanning recommended by the ISUOG. Using a 23-week
fetal phantom, the proposed system was demonstrated to technology and academia
experts at MEDICA 2022 as a part of the KUKA Innovation Award. The positive
feedback from them supports the feasibility of the system. It also gave an
insight into the improvisations to be carried out to make it a clinically
viable system. | Shyam A, Aparna Purayath, Keerthivasan S, Akash S M, Aswathaman Govindaraju, Manojkumar Lakshmanan, Mohanasankar Sivaprakasam | 2023-09-07T14:12:04Z | http://arxiv.org/abs/2309.03725v1 | # Immersive Virtual Reality Platform for Robot-Assisted Antenatal Ultrasound Scanning
###### Abstract
Maternal health remains a pervasive challenge in developing and underdeveloped countries. Inadequate access to basic antenatal Ultrasound (US) examinations, limited resources such as primary health services and infrastructure, and lack of skilled healthcare professionals are the major concerns. To improve the quality of maternal care, robot-assisted antenatal US systems with teleoperated and autonomous capabilities were introduced. However, the existing teleoperation systems rely on standard video stream-based approaches that are constrained by limited immersion and scene awareness. Also, there is no prior work on autonomous antenatal robotic US systems that automate standardized scanning protocols. To that end, this paper introduces a novel Virtual Reality (VR) platform for robotic antenatal ultrasound, which enables sonologists to control a robotic arm over a wired network. The effectiveness of the system is enhanced by providing a reconstructed 3D view of the environment and immersing the user in a VR space. Also, the system facilitates a better understanding of the anatomical surfaces to perform pragmatic scans using 3D models. Further, the proposed robotic system also has autonomous capabilities; under the supervision of the sonologist, it can perform the standard six-step approach for obstetric US scanning recommended by the ISUOG. Using a 23-week fetal phantom, the proposed system was demonstrated to technology and academia experts at MEDICA 2022 as a part of the KUKA Innovation Award. The positive feedback from them supports the feasibility of the system. It also gave an insight into the improvisations to be carried out to make it a clinically viable system.
## I Introduction
Maternal mortality is one of the widely accepted key indicators of a country's health and socioeconomic development [1]. It is often higher in rural settings than urban areas due to inadequate access and unaffordable healthcare. Also, the availability of skilled healthcare professionals and the access to health resources [2], like primary health services, medicines, infrastructure, etc, are limited. The World Health Organisation's (WHO) Antenatal Care (ANC) model recommends eight ANC contacts during the period of pregnancy [3]. Early and regular pregnancy scans can detect the majority of fetal structural defects (59%), chromosomal defects (78%) [4] and improve the overall maternal care management.
Access to quality maternal and fetal care can be enhanced by equipping health centers with robotic ultrasound systems. Antenatal robotic ultrasound technology is the fusion of US imaging and robotics for non-invasive fetal imaging during pregnancy. Systems with teleoperation, collaborative assistance, and autonomous capabilities at varied levels of robot autonomy (LORA) [5] exist. These robotic systems allow for more precise and consistent imaging [6] and standardized scanning, and improve comfort and safety for patients as well as sonologists [7]. Further, telemedicine and teleconsultation provide remote medical consultations in rural areas. The comparative studies of teleoperated US imaging from Arbeille et al. [8] and Xie et al. [9] suggest that US-based remote diagnosis is as effective and useful as manual interventions.
Research and clinical studies on robotic fetal ultrasonography are limited. iFIND - intelligent Fetal Imaging and Diagnosis system [10] aims at automating ultrasound fetal examinations. It follows a customized workflow to scan the desired anatomical location in a consistent way. The robotic US acquisition follows a generic path that is not specific to any scan pattern prescribed for antenatal scanning, like the six-step approach recommended by the International Society of US in Obstetrics and Gynecology (ISUOG) [11].
Tsumura et al. [12] and Arbeille et al. [13] proposed teleoperated robotic systems for fetal scanning. The majority of such implementations of robot-assisted remote US systems use an audio-visual channel for examination. However, these standard approaches lack a sufficient degree of immersion and scene awareness [14]. Although VR technology can address these shortcomings, it has not been implemented before. The current research on VR for medicine mostly focuses on surgical training, psychiatric treatment, pain management, and rehabilitation [15] but not on antenatal ultrasound scanning.
Fig. 1: Proposed system architecture
A novel platform to address the shortcomings mentioned above is proposed in this work. As shown in Fig. 1, it combines the use of robotics with VR technology for antenatal US examinations. The significant contributions in that regard are:
1. An immersive virtual reality platform for the sonologist to control the robotic arm over a wired network is developed. It provides an enhanced visual representation of the clinical setting, including the robot and patient's anatomy, and offers haptic feedback-based robotic manipulation, resulting in a more realistic experience. Additionally, real-time US acquisition and streaming allows for instant and accurate diagnosis.
2. An autonomous robotic system, which automates the ISUOG's six-step approach for obstetric US scanning is developed. These standardized scans are autonomously performed by the robot under the supervision of the sonologist, who can observe the robotic movements through the VR headset and command the course of the probe at any point of time.
This paper is organized as follows: Section II provides an overview of the system, including its components and communication methods. Section III describes the design and development of manual contact and autonomous modes. Section IV presents the observations related to the demonstration of the proposed system on a fetal phantom. Lastly, Section V has the conclusions and future work.
## II System Overview
### _System Components_
The system comprises a primary and a secondary site. The primary site consists of a 7 Degrees Of Freedom (DOF) KUKA LBR Med robot arm attached with two end effectors: a 3D stereo camera and a curvilinear US probe. Additionally, a 2D camera has been integrated into the system to enable real-time patient interaction. The secondary site, operated primarily by a sonologist, features an Oculus VR headset to provide an immersive user interface and enable the robot to be steered manually or autonomously using the VR controllers. The primary and secondary sites were connected via a wired network. A Unity-based VR application (shown in Fig. 2) was developed to provide Graphical User Interface (GUI) and facilitate communication between the system components. An oval-shaped abdomen US phantom with a 23-week fetus was used for the preliminary trials.
### _Robot Communication_
The communication channel between the robot and the VR application primarily uses the Fast Robot Interface (FRI). As depicted in Fig. 2, through the FRI's data read channel, the Robot Data Receiver fetches the robot's current status (joint and cartesian values, error status, etc.) in real-time at a rate of 500 Hz. The Robot Control Interface uses the FRI's write channel to command and overlay the robot's motion. A Java application is deployed and externally controlled from the VR application over a TCP/IP network. It encloses and commands state changes in the FRI connection.
### _Interfacing 3D Camera_
The system utilizes a stereo camera from Roboception (rc visard 65 monochrome) equipped with a pattern projector to reconstruct the patient's anatomy. Communication with the camera was established using the Robotic Operating System (ROS) via the GenICam interface for seamless data transfer. In addition, the ROS bridge interface is utilized to enable effective communication between ROS and the Unity software for transferring data.
### _Real-time Ultrasound Streaming_
FAST (Framework For Heterogeneous Medical Image Computing And Visualization) interface was used to stream live US images at 30 fps from a US sensor - Clarius C3 HD. The US sensor uses Wi-Fi Direct for streaming data to the application.
### _Immersive Virtual Reality Environment_
The VR space offers an immersive and enhanced visual experience that enables sonologists to improve patient care quality. The robot model is represented using the Unified Robotics Description Format (URDF) inside the virtual environment. The URDF file contains a range of kinematic and dynamic parameters, including linear and angular friction, damping, and stiffness. Thus an accurate representation of the robot's physical behavior is simulated. The reconstructed patient anatomy is loaded as a mesh file into the VR space, as shown in Fig. 2(b), and its coordinates are mapped to the robot's base frame. The user interface dashboard consists of three segments: the first segment streams live video from the patient site. The second segment streams real-time US, with the option to tune imaging parameters, such as gain, depth, and brightness. The third segment is drive mode selection, which allows the sonologist to switch between manual contact mode and autonomous mode for distinct scan patterns. The US probe orientations - longitudinal or transverse, can also be selected from this segment.
## III Methodologies
### _Anatomical Surface Reconstruction_
As shown in Fig. 2(a), the robot is initialized to a configuration that facilitates the 3D camera to have adequate coverage of the site of phantom placement. Next, the position of the phantom is adjusted to ensure that all the ArUco markers are in the vicinity of the 3D camera. Multiple perspectives of the phantom are captured as Point Cloud Data (PCD) using the 3D camera. The outliers in the acquired PCD are filtered using RANdom SAmple Consensus (RANSAC). Then, the filtered point cloud data is merged as a single PCD using the Iterative Closest Point (ICP) technique [16]. For visualization purposes, a mesh is reconstructed from the PCD using the Poisson Surface Reconstruction algorithm [17].
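As an illustrative sketch of this reconstruction pipeline (not the deployed ROS/Unity implementation), the snippet below merges several captured views and meshes the result with Open3D; a statistical outlier filter stands in for the RANSAC-based filtering described above, and the file paths, voxel size, and ICP distance are placeholder assumptions.

```python
import numpy as np
import open3d as o3d

def merge_and_mesh(pcd_paths, voxel=0.005, icp_dist=0.02):
    """Merge multiple phantom captures into one point cloud and reconstruct a mesh."""
    clouds = [o3d.io.read_point_cloud(p) for p in pcd_paths]
    # Simple outlier filtering (stand-in for the RANSAC step described above)
    clouds = [c.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)[0]
              for c in clouds]
    merged = clouds[0]
    for src in clouds[1:]:
        # Pairwise ICP registration of each additional view onto the merged cloud
        reg = o3d.pipelines.registration.registration_icp(
            src, merged, icp_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        merged += src.transform(reg.transformation)
    merged = merged.voxel_down_sample(voxel)
    merged.estimate_normals()  # Poisson reconstruction requires normals
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(merged, depth=8)
    return merged, mesh
```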
### _Manual Contact Mode_
The manual mode enables the sonologist to manipulate and control the robot in real-time via a wired network. The key feature of this mode is that it enables the US probe to
maintain contact with the patient's anatomy throughout the scan, thereby ensuring good-quality US imaging. By utilizing position control and force monitoring, the robot can maintain a permissible contact force at the end effector (i.e., the US probe) while maneuvering.
To achieve real-time control of the robot, the hand gestures of the sonologist are captured using the constellated IR LEDs within the VR controller. These movements are read as position and orientation data in VR space using the Unity software. A coordinate mapping is constructed from the VR space to the robot space, which allows the VR controller's position and orientation values to be represented in robot space. As a preprocessing step, a sequence of filtering algorithms is applied to these values to prevent unintended robot motions. Initially, the position and orientation values are given as input to a workspace filter to validate whether those values are within the robot's dexterous workspace [18]. This workspace was determined by limiting the probe's orientation to a 60-degree cone arc [19]. After the workspace filter, acceleration and velocity filters are applied to prevent jerks. The linear and angular velocity parameters are limited to 20 mm/s and 30 deg/s, respectively. Normalized linear interpolation (lerp) and spherical linear interpolation (slerp) are used to create smooth transitions between the filtered poses.
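A minimal sketch of the velocity limiting and interpolation step is given below using SciPy's rotation utilities; the 20 mm/s and 30 deg/s limits are the ones quoted above, while the function and variable names are illustrative rather than taken from the Unity implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

V_MAX = 0.020            # m/s, linear velocity limit quoted above
W_MAX = np.deg2rad(30)   # rad/s, angular velocity limit quoted above

def limit_pose_step(prev_pos, prev_rot, target_pos, target_rot, dt):
    """Clamp one control step to the velocity limits, then blend the pose
    with linear interpolation (position) and slerp (orientation)."""
    # Linear part: clamp the step length to V_MAX * dt
    step = target_pos - prev_pos
    dist = np.linalg.norm(step)
    lin_frac = min(1.0, (V_MAX * dt) / dist) if dist > 0 else 0.0

    # Angular part: clamp the rotation angle to W_MAX * dt
    delta = target_rot * prev_rot.inv()
    angle = np.linalg.norm(delta.as_rotvec())
    ang_frac = min(1.0, (W_MAX * dt) / angle) if angle > 0 else 0.0

    new_pos = prev_pos + lin_frac * step
    key_rots = Rotation.from_quat(np.vstack([prev_rot.as_quat(), target_rot.as_quat()]))
    new_rot = Slerp([0.0, 1.0], key_rots)([ang_frac])[0]
    return new_pos, new_rot
```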
_Position Control_: The PCD provides an excellent geometric approximation of the patient's anatomy. In order to accurately determine the mapping of camera coordinates to the robot's frame of reference, a Hand-Eye Calibration [20] was performed between them. This mapping allows the PCD to be represented in the robot space. The filtered position values are superimposed on the PCD, and their vertical components (Z axis) are updated to match the PCD contour. This ensures that the cartesian position of the robot is confined to the contour of the PCD, so any variations along the vertical component (up and down movements) of the VR controller are not reflected in robot motion.
It is crucial to ensure that the robot's motion avoids reaching any singular configurations. The rank of the Jacobian matrix was continuously verified to detect singularities. Since the system uses a redundant manipulator, the pseudoinverse of the Jacobian matrix needs to be computed, and it is expressed as:
\[J^{+}=J^{T}{[JJ^{T}]}^{-1}, \tag{1}\]
where \(J\) represents the Jacobian matrix, and \(J^{+}\) represents the Moore-Penrose inverse.
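A compact NumPy rendering of Eq. (1), together with a rank check serving as the singularity guard described above, might look as follows; the function name and tolerance are illustrative assumptions rather than the deployed controller code.

```python
import numpy as np

def right_pseudoinverse(J, rank_tol=1e-6):
    """Moore-Penrose pseudoinverse J+ = J^T (J J^T)^(-1) of a redundant
    manipulator Jacobian, guarded by a rank check against singularities."""
    if np.linalg.matrix_rank(J, tol=rank_tol) < J.shape[0]:
        raise RuntimeError("Near-singular Jacobian: halting motion")
    return J.T @ np.linalg.inv(J @ J.T)

# Usage: map a commanded 6-D task-space twist x_dot to 7 joint velocities,
# theta_dot = right_pseudoinverse(J) @ x_dot, for a 6x7 Jacobian J.
```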
_Force Monitoring_: The US probe used in the current system lacks force-sensing capabilities at the contact point. Hence, the contact forces were monitored using the robot's joint torque sensors. Using the model of the robot dynamics, the joint torques are converted into end-effector forces. As a safety measure, both the resultant force of the end effector \((F_{r})\) and its component \((F_{s})\) along the probe axis are continuously monitored to ensure that they remain within the minimum \((F_{c})\) and maximum \((F_{m})\) permissible values, i.e.,
\[F_{c}<(F_{r},F_{s})<F_{m} \tag{2}\]
Fig. 3: (a) Robot initialization (b) Reconstructed phantom anatomy (c) VR environment
Fig. 2: Schematic of communication between different components of the system
Also, the force \(F_{s}\) along the probe axis is used for monitoring the contact with the anatomy.
By combining position control and force monitoring, the robot traverses the probe along the anatomy contour while maintaining the permissible contact force, thereby allowing the sonologist to capture US images without causing discomfort to the patient.
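A sketch of the force gate implied by Eq. (2) is shown below, using as defaults the 2 N and 5 N limits reported later for the phantom demonstration; the function is illustrative and not taken from the controller code.

```python
def contact_force_ok(F_r, F_s, F_c=2.0, F_m=5.0):
    """True if both the resultant end-effector force F_r and its component
    F_s along the probe axis lie inside the permissible band of Eq. (2)."""
    return F_c < F_r < F_m and F_c < F_s < F_m
```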
### _Autonomous Mode_
Autonomous robotic US systems mitigate the repetitive nature of standard procedures for sonologists by automating US scans, thus providing an efficient and consistent solution to streamline the diagnostic process. In the case of antenatal US scanning, ISUOG recommends a standard six-step approach for determining various fetal parameters during the second and third trimesters [11]. These steps include determining the fetal presentation, detecting fetal cardiac activity, identifying the number of fetuses in the uterus, determining the location and position of the placenta, estimating amniotic fluid, and measuring fetal biometrics such as the Biparietal Diameter (BPD), Head Circumference (HC), Abdominal Circumference (AC), and Femur Length (FL). ISUOG also specifies the recommended US probe scanning position and orientation on the anatomy to determine each parameter. These scanning patterns are well-standardized and have become a regular part of the sonologist's examination routine.
The developed system assists sonologists by automating these scans. Like manual contact mode, the system uses position control and force monitoring to maintain skin contact and autonomously scan the segment. All these scans can be interpolated as geometric patterns using 5 key points, namely, the Umbilicus point (U), Bottom Left (BL), Bottom Right (BR), Top Left (TL), and Top Right (TR). The Umbilicus (anatomical landmark) has to be manually selected by the sonologist. The application has the provision to choose the other key points manually, or they can be geometrically computed using the Umbilicus point and the ArUco markers. Any scanning pattern can be approximated by lines and curves using these key positions. Fig. 4 illustrates the position of all 5 key points computed for the fetus phantom.
```
Input: p_s, p_e, PCD_data, sd            ▷ sd → minimum distance between two points
v_se ← p_e − p_s
m_se ← ||v_se||₂
n ← m_se / sd
P ← [ ],  N ← [ ]
i ← 0
while i < n do
    if i = 0 then                        ▷ pz_{i−1} → vertical component of p_{i−1}
        pz_{i−1} ← 0
    end if
    pSeudo_i ← (p_s + (v̂_se · sd · i)) + [0, 0, pz_{i−1}]   ▷ unit vector v̂_se
    p_i ← NN(pSeudo_i, PCD_data)
    P.append(p_i)
    n_{z,i} ← Normal(p_i)
    i ← i + 1
end while
P = [p_0, p_1, …, p_{n−1}]               ▷ path points from PCD
N = [n_{z,0}, n_{z,1}, …, n_{z,n−1}]     ▷ point normals from PCD
pathPoints = PolyFit(P, sd)
pathNormals = Smoothen(N)
```
**Algorithm 1** Path Finding Algorithm
Each pattern's probe positions and orientations are computed using a path planning algorithm, i.e., Algorithm 1. The path planner is defined by path points \(\mathbf{P}\) and normals \(\mathbf{N}\). A directional vector \(\mathbf{v}_{se}\) is formed from the starting point \(\mathbf{p}_{s}\) pointing towards the ending point \(\mathbf{p}_{e}\) on the PCD. The vector \(\mathbf{v}_{se}\) is discretized into \(n\) pseudo-points \((\mathbf{pSeudo}_{i})\) based on sampling distance \(sd\). A KDTree search algorithm, denoted by NN, is used to find the closest points to the pseudo points on the PCD. These points are connected to form a smooth path using polynomial fitting methods. The probe's orientation is calculated based on the normal vector of each path point and the scan type (longitudinal or transverse) using the axis-angle formulation. The desired positions and orientations of the probe are transformed into the robot's space using the established coordinate mapping. The linear velocities are obtained by numerical differentiation of the position values. The space-fixed angular velocities are derived from the orientations using the expression \(\dot{R}R^{T}\), where R is the rotation matrix corresponding to the robot's current orientation.
Finally, the Jacobian matrix of the robot is used to map the obtained task-space velocities (\(\dot{X}\)) to joint velocities (\(\dot{\Theta}\)), using the relation \(\dot{\Theta}=J^{+}\dot{X}\).
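A condensed Python version of the nearest-neighbour projection at the core of Algorithm 1 is sketched below using SciPy's KD-tree; the final polynomial fitting and normal smoothing steps are omitted, and argument names and the default sampling distance are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def plan_path(p_start, p_end, pcd_points, pcd_normals, sd=0.01):
    """Project equally spaced points on the start->end segment onto the PCD
    surface, returning path points and their normals (core of Algorithm 1)."""
    tree = cKDTree(pcd_points)
    v = p_end - p_start
    v_hat = v / np.linalg.norm(v)
    n_steps = int(np.linalg.norm(v) / sd)

    path, normals, prev_z = [], [], 0.0
    for i in range(n_steps):
        pseudo = p_start + v_hat * sd * i + np.array([0.0, 0.0, prev_z])
        _, idx = tree.query(pseudo)          # the NN(pSeudo_i, PCD_data) step
        path.append(pcd_points[idx])
        normals.append(pcd_normals[idx])
        prev_z = pcd_points[idx][2]          # carry the surface height forward
    return np.array(path), np.array(normals)
```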
## IV Observations and Discussions
### _Manual Control Mode_
The present study demonstrates the ability to exercise real-time control of a robotic arm through a wired network, as shown in Fig. 5. To ensure a stable connection, the system continuously monitors jitter and packet loss. The robot is only maneuvered when the latency is within the range of 5 to 8 ms. The effectiveness of the manual contact mode heavily relies on the transfer of rigid body motion from
Fig. 4: 5 key points computed on the fetal phantom
the VR controller to the robotic arm. As shown in Fig. 6, the algorithm has eliminated the high-speed variations and accidental drops of the VR controller. During these disturbances, the robot's pose stays intact and prevents unintended motions. For phantom demonstration, the minimum \((F_{c})\) and maximum \((F_{m})\) permissible forces required to maintain skin contact were set to 2N and 5N, respectively. The haptic feedback is given to the VR controller based on the variations in the robot's joint forces and position values. A high vibration alert is given to the user when the interaction forces are closer to \((F_{m})\).
The developed system was demonstrated at MEDICA 2022, and more than 50 participants volunteered to experiment with the system. They were provided with a rudimentary demonstration of the working model. Without any mention of its safety features, participants were asked to use the system. The users were able to actuate the robot along all 6 DOF, involving only translatory, rotary, or simultaneous translatory and rotary movements along the three independent axes. The system exhibited the capability to eliminate all types of disturbances, including workspace limitations and singular configurations. No adverse incidents of VR sickness were reported by any of the participants. However, some individuals experienced a minor degree of discomfort after using the VR for approximately 25 minutes.
The proposed system can be easily extended to a telerobotic platform, provided the connectivity is facilitated through a high-speed internet network. The prospective advancements entail the implementation of telerobotic manipulation with due consideration given to network latency, bandwidth, and security, which are known to pose significant technical challenges.
### _Autonomous Mode_
The developed autonomous system is classified as LORA-5, where the automation provides a predetermined set of options, and the human operator must select one for the system to carry out. In our current setup, once the Umbilicus point (U) has been selected, the system computes the path and corresponding orientation of the probe for each scan pattern. The sonologist is provided with the choice to select any scanning pattern from the user dashboard, and the autonomous robot motion is initiated. For instance, Fig. 7 displays one such computed path on the PCD to implement the number-of-fetuses scan pattern. Position control and force monitoring ensure contact between the anatomy and the US probe by keeping the interaction forces between the \(F_{c}\) and \(F_{m}\) values.
The system allows the user to switch between autonomous and manual contact modes for diagnosis. Additionally, it includes a feature that enables the sonologist to pause the robot's motion and annotate fetal measurements. For
Fig. 5: Demonstration of manual control mode
Fig. 8: Fetal measurements for the phantom
Fig. 6: VR controller input Vs Robot cartesian movement
Fig. 7: Overlay of the computed path on the PCD to scan and identify the number of fetuses
example, the US image obtained during the autonomous scan of the fetus phantom and the measurements annotated by a sonologist at MEDICA are shown in Fig. 8. The system also records the streams of US images, which can be utilized for post-analysis or expert review.
## V Conclusions
This paper presents a new system designed for robot-assisted antenatal scanning using an immersive VR platform. Manual contact mode through a wired network and autonomous mode adapted to the standard six-step approach are used interchangeably in this system. The integration of VR, robotics, and the US in the proposed system enhances the sonologist's perception and experience of the patient environment. In addition, one potential application of VR in fetal monitoring is in training healthcare professionals. It provides a safe and controlled environment to practice and improve skills with a minimal learning curve during the transition from training to real-world scenarios. Another advantage is that the supervised autonomous feature of the system, specialized to the clinically relevant ISUOG scanning protocol, helps the sonologist reduce the time and effort spent on performing these routine scans on all patients. The system was successfully demonstrated at MEDICA 2022 using a 23-week fetal phantom, and the resulting observations are reported in this paper. However, the system's usability and performance need to be comprehensively validated with clinical metrics. The real-world clinical environment poses a significant challenge in achieving seamless communication for telerobotics over a secure network and in addressing the unpredictable fetal movements during autonomous scans. The future scope is to achieve telerobotics and to autonomously manipulate the robot by leveraging US image feedback to compensate for fetal movements. We envisage this technology to be further extended as a surgical diagnostic and interventional platform that can address the lack of skilled resources and infrastructure.
## Acknowledgment
We would like to thank KUKA AG, Germany, for giving us the opportunity to integrate their robotic platform to develop this system. The authors would like to acknowledge Dr. TejaKrishna Mamidi for his assistance in editing the manuscript.
|
2309.14259 | Here Be Livestreams: Trade-offs in Creating Temporal Maps of Reddit | We present a method for mapping Reddit communities that accounts for temporal
shifts, using quantitative and qualitative analyses of clustering techniques to
produce high-quality, stable, and meaningful maps for researchers, journalists
and casual Reddit users. Building on previous work using community embeddings,
we find that only a month of Reddit comments suffices to create snapshot
embeddings that maintain quality while supporting insight into changes in
Reddit communities over time. Comparing different clusterings of community
embeddings with quantitative measures of quality and temporal stability, we
describe properties of the models and what they tell us about the underlying
Reddit data. Moreover, qualitative analysis of the resulting clusters
illuminate which properties of clusterings are useful for analysis of Reddit
communities. Although clusterings of subreddits have been used in many earlier
works, we believe this is the first study to qualitatively analyze how these
clusterings are perceived by social media researchers at a Reddit-wide scale.
Finally, we demonstrate how the temporal snapshots might be used in
exploratory study. We are able to identify particularly stable communities
during 2021-2022, such as the Reddit Public Access Network, as well as emerging
communities, like one focused on NFT trading. This work informed the
development of a webtool for exploring Reddit now available to the public at
RedditMap.social. | Virginia Partridge, Jasmine Mangat, Rebecca Curran, Ryan McGrady, Ethan Zuckerman | 2023-09-25T16:16:11Z | http://arxiv.org/abs/2309.14259v2 | # A Table of Contents for the Front Page of the Internet
###### Abstract
We create monthly snapshot community embeddings that infer relationships between subreddits, allowing clustering as an intuitive way to explore subreddit communities. Through two annotation tasks, we validate that these subreddit clusterings align well with expert judgements. Although embeddings are created independently from different time periods of data, clusterings produced from monthly snapshots change gradually over time, and the stability of a subreddit's nearest neighbors can be analyzed to understand how that subreddit fits in or evolves relative to other communities on Reddit.
## 1 Introduction
The social media platform Reddit consists of thousands of discrete, self-organized, topic-specific communities called subreddits. Individual users, known as Redditors, can post text, images, videos, or links and in turn comment and vote on other users' contributions [11]. In recent years, the site has garnered attention in news media and academic research focused on topics ranging from analyzing political polarization and partisanship [23, 24] to stock and option trading in the subreddit **wallstreetbets1**[10][12]. Unlike other popular social media platforms, there is no process for verifying users' identities, and Redditors may operate largely anonymously from behind as many usernames as they'd like. Users subscribe to subreddits covering any interests, ranging from general topics, such as images or AskReddit, to those focused on specific hobbies, political ideologies, or communities with physical counterparts, like universities or cities. Rather than digitally connecting two people who have already met in a school or workplace, Reddit connects strangers who have common interests by way of subreddits [13].
Footnote 1: Subreddits indicated with sans serif font throughout.
Reddit's moderation policies also distinguish it from other platforms. Subreddit community moderators create and enforce their own rules with relatively little interference from the platform, although subreddits violating Reddit's company policies on spam, anti-harassment, hate-speech or illegal content are banned or "quarantined" (hidden from unsubscribed users) [1, 23, 24, 25]. As a result, community norms, volunteer efforts and self-policing on the part of users all influence participation and content on the site. This community-based approach to moderation, coupled with easy access to years of Reddit data thanks to the Pushshift Reddit Dataset2[1], has made Reddit a fertile ground of study for anyone interested in changes in discourse on social media and how communities evolve in response to mass media attention.
Footnote 2: Note that Reddit recently removed Pushshift’s access to their Data API, [https://www.reddit.com/r/modnews/comments/134jpe/reddit_data_api_update_changes_to_pushshift_access/](https://www.reddit.com/r/modnews/comments/134jpe/reddit_data_api_update_changes_to_pushshift_access/)
As the role of social media in civic life grows, the need for tools to explore and understand this space has also increased. We would like such a tool to be legible not only to academics, but also journalists and activists seeking to inform the public about trends in social media communities. Users must be able to distill the massive number of subreddits to a manageable number of categories to browse and review. In this context, clustering Reddit supports exploratory and qualitative use cases [26], helping users both understand Reddit as a whole and find communities of interest for further analysis. In order to be useful, categories produced by such a system must be meaningful and sensible to both experienced Reddit users and outside observers, otherwise they could undermine trust in the methodology.
Building on recent work on community embeddings for Reddit, we propose a method for grouping subreddit communities that also accounts for temporal shifts by using monthly snapshots of data. Using relatively small amounts of data and models that can be trained on the average laptop, we create monthly snapshot community embeddings that infer relationships between subreddits, allowing clustering as an intuitive way to explore subreddit communities3. Through annotation by human experts and intrinsic evaluation metrics, we validate that these monthly community embeddings consistently produce high quality, understandable clusters, although careful attention must be paid to parameter choices in the clustering models for the best results.
Footnote 3: Code and annotated data are publicly available at [https://github.com/UMassCDS/SHOP-Reddit](https://github.com/UMassCDS/SHOP-Reddit)
Our approach also reveals areas of change and stability in
Reddit communities over time in a way that supports both exploratory and confirmatory analyses. No community is static. Subreddits may be impacted by external influences, such as an explosion in new users due to media attention, platform-wide changes in Reddit's interface or policies, or internal changes in community norms, like changes in moderation rules. We apply _variation of information_ to see that most changes in clusterings are gradual over time, but our methodology still can draw attention to significant changes from one time period to the next. Additionally, representations of Reddit during each time period are produced independently from previous time periods, retaining the ability to make comparisons across time without the overhead of reprocessing years of data.
## 2 Related Work
Launched in 2005, Reddit hosts self-organizing and self-moderating forums, where users can create and participate in subreddits that align with their personal interests, identity and preferred rules of engagement. It is a popular source for studying diverse features of online communities such as community creation, connections and conflicts between different subreddits, and patterns of radicalization and political polarization [11, 12, 13, 14, 15]. Methods of study have been equally diverse, ranging from case studies on a small set of subreddits [14], to vector space models or topic models using features derived from comment text and user metadata [13, 15, 16], to network backbone extraction and community detection [17]. For the purposes of this work, we draw on previous approaches that map and categorize communities on Reddit and chart changes to those communities over time.
Many approaches to mapping Reddit rely on learning a notion of distributional similarity for subreddits based on user behavior, perhaps understood more simply as "birds of a feather flock together". An early large scale interest map of Reddit used the inferred link that occurs between two subreddits when a single user posts to both during a particular time period. Backbone network extraction was applied to those inferred connections between subreddits in order to visualize Reddit circa 2013 as a network graph with 59 distinct _interest meta-communities_ clusters, such as _sports_, _programming_ and _general interest_. Their goal was to provide users with a way to find new subreddits matching their interests using a browse-able network graph exploration tool [17]. Backbone network extraction excels at highlighting strengths of relationships between nodes in social networks. However, the method is sensitive to algorithmic hyperparameters and filtering settings, and results are often difficult to interpret and evaluate, making it challenging to maintain consistent, comparable maps over time [16, 15].
A Redditor's comments across different forums can also be used to learn dense vector representations of subreddits, the approach we will follow here. This method, first introduced as _community2vec_, is based on NLP techniques for creating word embeddings and, like its word2vec and GloVe predecessors with natural language, is capable of solving meaningful analogies and generating useful similarity relationships in the multi-dimensional subreddit space [13, 14, 15]. For example, GloVe-based embeddings of subreddits were shown to successfully retrieve the corresponding sports team when presented with a league and geographic location, warriors = sanfrancisco + nba [13]. This insight led to employing sets of analogies to tune subreddit embeddings, which researchers used to examine which Redditors and communities focus on broad, general interests compared to those that specialize in particular topics and which communities exhibit age, gender and political biases [13, 14].
Distribution of users across subreddits is by no means the only way to create a typology of Reddit. Other studies modeled text similarity using content of comments or posts via TF-IDF [13] or LDA topic models [12], then contrasted subreddits' text similarity with distributional user similarities derived from "bag of users" approaches. Both methods were able to identify pairs of communities with high user similarity and low text similarity, which happens when users of one subreddit are active in another, but discuss a completely different topic there. For example, Redditors who comment in pokemonon are often active in forums discussing other video games with different vocabulary [13]. Similarly, language models can be used to characterize each community's discussion topics on two spectrums, _generic_ to _distinctive_ and _stable_ to _dynamic_, which combine to form a quadrant typology. _Distinctiveness_ was captured by comparing how different a subreddit's word-use is from all forums in the dataset. This study harnessed monthly snapshots to measure linguistic _stability_, comparing a subreddit's within
Figure 1: The total number of comments and unique user contexts present in each month’s data snapshot, as well as the Precision@5 performance of the best community embedding model trained on that snapshot. Notably, the analogy P@5 performance of each month is consistently high, averaging 0.64, with a standard deviation of 0.012, and never drops below 0.61 in a given month.
month word-use to its language model from the entire two year time period and drawing an association between a subreddit's position in the typology and its monthly user retention rate [15].
However, text-based analyses are difficult to apply on Reddit due to its multilingual nature and community norms causing large variation in vocabulary sizes or post lengths between subreddits. We initially attempted to model topics discussed in subreddits using Latent Dirichlet allocation, but were stymied by questions around appropriate document length and difficulty in interpreting topics from the top words alone. For example, consider the impact on vocabulary from widespread use of bots that comment on each post in a subreddit or community rules, like 'Titles and comments need to be exactly "Cat."' from \(\mathsf{CatsStandingUp}\). As a result, text-based analyses of Reddit typically draw data from many years, rely on extensive preprocessing heuristics [14] or limit analysis to a small set of communities that either have large enough vocabulary [15] or are particularly interesting for the study [16, 17].
In fact, we found that previous work on clustering Reddit with user embeddings also relied on long time periods of data, using at least one year's worth of Reddit data to build a single model and often more than five years. In such models, changes over time are obscured, as they are "averaged out" to a single vector weight in the final model. Additionally, to include more recent data, a new model must be trained over the entire time frame. To study temporal trends such as monthly user-retention or community evolution, different methods, such as snapshot language models [15] or genealogy graphs [15] were applied. We aim to extend previous user embedding methods to allow for both clustering Reddit and examining temporal trends without sacrificing the quality and usefulness of community embeddings established by previous authors.
## 3 Datasets
### Reddit Comments
We use the comments portion of the Pushshift Reddit Dataset for a year from April 2021 through March 2022 [1] with additional processing to create monthly snapshots of comment data. Within each \(t\)-th month, we determine the top 10,000 subreddits by number of comments, removing any other subreddits and user profile pages. We also drop comments from deleted users, users that only commented once on Reddit during the month and comments that were themselves deleted or removed. As a final step, we remove users above some portion of the most active remaining users during that time period by number of comments. Although initially intended as a heuristic to remove bots and spam, we found that the percentile could be tuned to increase accuracy on the subreddit analogy tasks. In initial experiments, removing users above the 95th percentile of most active users performed well and is the strategy adopted for this work. A beneficial side effect is that removing the most prolific users also speeds up training community embeddings by excluding the longest contexts from the data.
Finally, the names of subreddits each user commented on during the time period are collected into that user's context. All user contexts make up the data which will contribute to our snapshot of Reddit at month \(t\). Count statistics of comments and unique users for each \(t\)-th snapshot can be seen in figure 1. Overall, 6,950 subreddits appear in every month's snapshot over the course of the year, while 15,292 appear in at least one month.
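A minimal pandas sketch of this preprocessing is shown below; the column names ('author', 'subreddit') follow the Pushshift comment schema, while the function name and the assumption that removed comments have already been dropped are ours.

```python
import pandas as pd

def build_user_contexts(comments: pd.DataFrame, top_n=10_000, pct=0.95):
    """One month of comments -> a Series mapping each user to the list of
    subreddits they commented in (their context for community embeddings)."""
    # Keep only the top-n subreddits by comment volume for the month
    top = comments["subreddit"].value_counts().nlargest(top_n).index
    df = comments[comments["subreddit"].isin(top)]
    df = df[df["author"] != "[deleted]"]

    # Drop single-comment users and the most prolific users above the percentile cut
    counts = df.groupby("author").size()
    keep = counts[(counts > 1) & (counts <= counts.quantile(pct))].index
    df = df[df["author"].isin(keep)]
    return df.groupby("author")["subreddit"].apply(list)
```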
### Subreddit Analogies
In order to evaluate and tune monthly snapshot community embeddings, we use the set of subreddit analogies provided by Waller and Anderson [16], which covers global university-city and city-team pairings for the four major North American sports leagues (MLB, NBA, NFL, NHL). The models are tuned to correctly retrieve appropriate responses, like pittsburgh when presented with Buffalo - buffalobills + steelers =?. Some analogies have multiple correct answers. For example, the Yankees and the
Figure 2: This shows the proportion of analogies solved according to Precision@K out of the total solvable in the top 10,000 most commented on subreddits during the time period. Dark horizontal bars indicate the approximate dates of each sport season, including playoffs and finals.
Mets are both New York City baseball teams, so an analogy is considered correct if the expected subreddit is within the top 5 embeddings nearest to the computed result using cosine similarity. We report the proportion of correct analogies out of the total possible solvable analogies for the month, following the Precision@5 definition used by Waller and Anderson [20]. Although there are 113,486 analogies total, if any subreddit involved in an analogy is missing from the month's data snapshot, that analogy cannot be solved and is excluded for that month, as reflected in the ratios of correctly solved analogies out of the total possibly solvable in figure 2.
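The analogy evaluation can be sketched with Gensim's KeyedVectors as below; the tuple layout of the analogy file is an assumption, but the positive/negative arithmetic matches the Buffalo/Steelers example above.

```python
from gensim.models import KeyedVectors

def precision_at_5(kv: KeyedVectors, analogies):
    """analogies: iterable of (a, b, c, answers), read as a - b + c = one of answers,
    e.g. ('Buffalo', 'buffalobills', 'steelers', {'pittsburgh'}).
    Analogies with subreddits missing from this month's vocabulary are skipped."""
    solvable, correct = 0, 0
    for a, b, c, answers in analogies:
        if any(s not in kv for s in (a, b, c)) or not any(s in kv for s in answers):
            continue
        solvable += 1
        top5 = [w for w, _ in kv.most_similar(positive=[a, c], negative=[b], topn=5)]
        if any(ans in top5 for ans in answers):
            correct += 1
    return correct / solvable if solvable else 0.0
```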
## 4 Methodology
### Community Embeddings
The idea underlying community embeddings is that a user's comment in one subreddit can be used to predict whether they've commented in other subreddits in the same time period. Training a neural network to perform this prediction task results in vector representations for each subreddit. In the community embedding setting, a user is analogous to the context window for prediction, if we recall the parallels to word embeddings for natural language. Subreddits are then analogous to words.
Our strategy for training community embedding models closely follows Waller and Anderson, with one key difference: By treating each month of Reddit comments as a snapshot, we train separate community embedding models for each month \(t\), allowing each model's parameters to be tuned for high performance on solving subreddit analogies, rather than training a single model on the full time range of data. Like Waller and Anderson, we trained each \(t\)-th skip-gram model on month \(t\)'s snapshot with negative sampling for efficient approximation and randomly downsampling high frequency words [12], and used an "infinite-sized window", where all of each user's comments are used to generate skip-grams for prediction. Our models are trained with the Gensim Word2Vec implementation4 and for each month \(t\), we use grid-search to optimize the model parameters as follows:
Footnote 4: [https://radimrehurek.com/gensim/models/word2vec.html](https://radimrehurek.com/gensim/models/word2vec.html)
* Negative samples \(k\): 10, 20
* Threshold for downsampling high frequency subreddits: 0, 0.001, 0.005
* Learning rate: 0.05, 0.08
In initial experiments, we also experimented with vector dimensions for embeddings, but found that dimension 100 generally performed well. All models are trained for 5 epochs. The performance of the best model produced for each month's snapshot is plotted in figure 1 and broken down in detail by type of analogy in figure 2. For the purposes of this paper, we use L2-normed vectors and cosine distance to create clusterings of subreddits.
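As a minimal sketch, the code below trains one snapshot model for a single grid point using the Gensim Word2Vec implementation mentioned above; the "infinite" window is approximated by setting the window to the longest user context, and the function name and worker count are illustrative.

```python
from gensim.models import Word2Vec

# user_contexts: list of lists of subreddit names, one list per user (Sec. 3.1)
def train_snapshot_embedding(user_contexts, negative=10, sample=0.001, alpha=0.05):
    max_context = max(len(ctx) for ctx in user_contexts)
    model = Word2Vec(
        sentences=user_contexts,
        vector_size=100,        # embedding dimension used in this work
        sg=1,                   # skip-gram with negative sampling
        negative=negative,      # number of negative samples
        sample=sample,          # downsampling threshold for frequent subreddits
        alpha=alpha,            # initial learning rate
        window=max_context,     # "infinite" window: the whole user context
        min_count=1,
        epochs=5,
        workers=4,
    )
    return model.wv             # KeyedVectors for analogy solving and clustering
```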
### Clustering Models
Snapshot embedding models can be interacted with directly by examining the nearest neighbors of a subreddit of interest, but unsupervised clustering has added usefulness when the resulting groupings align with end users' goals. In the ideal case, users would be presented with coherent groups of subreddits for each time period without needing to specify any additional input parameters, so they can explore Reddit through clusters of related communities and see how those clusters change in time.
In this work we compare the behavior of two types of unsupervised clustering models created by taking the snapshot community embedding of month \(t\) as input features to produce a clustering \(\mathcal{C}_{t}\):
**K-Means++** is a widely-used improvement over the original k-means method of clustering, which seeks to minimize the average distance of data points to their cluster's center. Greedy k-means++ selects initial centroids probabilistically weighted by data point distributions and includes multiple trials when selecting centers, avoiding the pitfall of selecting "bad" centers that would lead to local minima [13]. Choosing different initial centers may result in different clusterings. Granularity is controlled by specifying the desired number of clusters for each use case.
**Hierarchical Agglomerative Clustering** (HA) recursively merges clusters closest together according to some distance metric to build a hierarchical tree, grouping data points in a bottom-up fashion. Different link criteria can be used to change how clusters are merged at each recursive step. We experimented with Ward-linkage, which minimizes the variance between clusters, average-linkage, taking the average distance between data points in clusters, and complete-linkage, where distance between clusters is determined by the maximum pairwise distance between data points [10]. Granularity can be adjusted by setting a desired number of clusters or a maximum distance allowed between merge-able clusters. Two traits make hierarchical clustering methods particularly appealing. First, they produce a browse-able hierarchy of the data, which is ideally both informative and easy to navigate. Second, so long as a consistent strategy is used to break ties when distances are equal, these clustering methods are deterministic, meaning they are not sensitive to model initialization parameters and easy to reproduce.
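Both clustering models can be run directly on a snapshot's L2-normed vectors with scikit-learn, as in the sketch below; the hyperparameter values shown are illustrative defaults rather than tuned settings.

```python
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.preprocessing import normalize

def cluster_snapshot(kv, n_clusters=100, seed=0):
    """Cluster one month's L2-normed subreddit vectors with both model types."""
    X = normalize(kv.vectors)   # rows follow kv.index_to_key
    km_labels = KMeans(n_clusters=n_clusters, init="k-means++",
                       n_init=10, random_state=seed).fit_predict(X)
    # Newer scikit-learn releases call the distance argument `metric`;
    # older releases use `affinity` instead.
    ha_labels = AgglomerativeClustering(n_clusters=n_clusters, linkage="average",
                                        metric="cosine").fit_predict(X)
    return (dict(zip(kv.index_to_key, km_labels)),
            dict(zip(kv.index_to_key, ha_labels)))
```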
## 5 Clustering Evaluations
Evaluating the quality of clusterings is a notoriously subjective problem, highly dependent on features of a particular dataset and the context in which the clusters will be used [21]. Given that our goal is to support a broad range of exploratory and confirmatory analyses of Reddit, we rely on both qualitative and quantitative approaches to describe characteristics of clusterings produced from community embeddings.
### Intrinsic Measures of Cluster Quality
These measures capture mathematical properties of the groupings of data points in order to describe how compact data points in a single cluster are and how separated different clusters are within the space.
**Silhouette Coefficient** measures both compactness and separation. It is calculated for each data point by taking a ratio of its average distances to data points in the same cluster
to the nearest data point in a different cluster. Silhouette Coefficients of data points are averaged to get an overall score for a clustering model. Values range from -1 to 1, where higher values are indicative of better clustering assignments and scores around 0 mean clusters overlap [15].
**Davies-Bouldin** also measures compactness and separation, but is computed by comparing a cluster's separation from its most similar neighboring cluster. Lower scores correspond to better clusterings, since that indicates within-cluster data points are tight around centroids while the two centroids are separated. The scores for all clusters are averaged as a final score for the clustering. Davies-Bouldin scores have a minimum of 0 and no upper-bound [16].
Although both metrics are designed to measure similar cluster characteristics, there can be different trade-offs depending on the data and use case for clustering. We use the scikit-learn implementations of both measures 5.
Footnote 5: [https://scikit-learn.org/stable/modules/classes.html#clustering-metrics](https://scikit-learn.org/stable/modules/classes.html#clustering-metrics)
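Both measures can be computed from a snapshot's vectors and cluster labels with the scikit-learn implementations cited above; in this sketch we assume cosine distance for the Silhouette Coefficient, while scikit-learn's Davies-Bouldin score is Euclidean, which is monotonically related to cosine distance on L2-normed vectors.

```python
from sklearn.metrics import davies_bouldin_score, silhouette_score

def intrinsic_scores(X, labels):
    """Higher silhouette (range -1..1) is better; lower Davies-Bouldin is better."""
    return {
        "silhouette": silhouette_score(X, labels, metric="cosine"),
        "davies_bouldin": davies_bouldin_score(X, labels),
    }
```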
### Qualitative Evaluations
By gathering input from human experts, we seek to understand how our clusterings would be received by users. Two media researchers and one digital collections librarian were each presented with clusterings from a monthly community embedding snapshot. Would they find the groupings sensible and trustworthy enough to use the models as an exploratory tool? Additionally, instances where annotators disagree with the model or each other will provide insights on both ways to improve the methodology and areas of inherent ambiguity in the Reddit space.
**Cluster coherence judgements** serve as our first qualitative evaluation of cluster quality. Without consulting each other, each annotator was asked to mark a cluster as _coherent_ when the subreddits assigned to it had an identifiable theme. For example, a cluster consisting of the subreddits canada, ontario, PersonalFinanceCanada, vancouver, ottawa might have a _Canada_ theme.
We consider the _coherence score_ of a cluster \(C_{i}\) in \(\mathcal{C}_{t}\), \(\mathrm{CS}^{\mathcal{C}_{i}}_{C_{i}}\), to be the percentage of annotators who marked it as coherent, which can be averaged over all clusters to obtain the _clustering coherence score_:
\[\mathrm{CS}_{\mathcal{C}_{t}}=\frac{1}{|\mathcal{C}_{t}|}\sum_{C_{i}\in \mathcal{C}_{t}}\mathrm{CS}^{\mathcal{C}_{t}}_{C_{i}}\]
A clustering's overall usefulness can also be upper-bounded by the share of clusters that at least one annotator found coherent:
\[\text{C-Upper}_{\mathcal{C}_{t}}=\frac{|\{\mathrm{CS}^{\mathcal{C}_{t}}_{C_{ i}}>0\}|}{|\mathcal{C}_{t}|}\]
For situations where user trust is critical, the share of clusters that all annotators found coherent may be more appropriate:
\[\text{C-Lower}_{\mathcal{C}_{t}}=\frac{|\{\mathrm{CS}^{\mathcal{C}_{t}}_{C_{ i}}=1\}|}{|\mathcal{C}_{t}|}\]
If annotators mark many clusters as coherent, we believe end users will find the clustering model's outputs sensible and have trust in the system's exploratory power, even if not every cluster is useful to their work. We measure inter-annotator agreement using Gwet's AC1 [12] to gauge the level of subjectivity for this task.
**Subreddit intruder detection** builds on a word intrusion task originally used to measure the coherence of topic models [10], and is our second task to qualitatively evaluate subreddit clusters by presenting a set of six subreddits to annotators. Five of the subreddits are clustered together by a model, but the sixth is randomly selected, then a random arrangement of the subreddits is presented to annotators. When all annotators are able to pick out the random intruder, the subreddit cluster can be considered coherent. For example, in the set PokemonGoFriends, religion, pokemongo, PokemonGoRaids, pokemontrade, pokemon, the clear non-Pokemon intruder is religion. If annotators do not agree on a single intruder, they may be choosing randomly because the cluster lacks coherence. Alternatively, annotators might agree on a subreddit to be the
Figure 3: Intrinsic measures of quality of clustering models, where a single model for each type is trained from each month’s snapshot embedding for varying numbers of clusters, showing plots of scores averaged over the year. Notably, HA with average and complete linkage have higher variance in the number of subreddits in each cluster and HA average-linkage models have better Davies-Bouldin scores.
intruder which was not actually the randomly inserted one. In that case, the cluster may have an ambiguous theme or cover multiple, overlapping themes.
To avoid subreddits' relative renown or popularity misleading annotators during this task, we restricted the selection of the random intruder to subreddits that had a similar number of comments during the time period. When setting up the task for a month \(t\), we first calculated the standard deviation of the number of comments in the 10,000 subreddits included in our snapshot during that time period, \(\sigma_{t}\). For each cluster \(C_{i}\) in a clustering \(\mathcal{C}\) created from the community embedding in month \(t\), we selected the top five most popular subreddits in \(C_{i}\) by number of comments that month. Taking \(\mu_{t,i}\) as the average number of comments for those five subreddits, an intruder \(\mathbf{I}_{C_{i}}\) is drawn from all the subreddits not in \(C_{i}\) where the number of comments during time \(t\) is within \(\mu_{t,i}\pm\sigma_{t}\). Clusters with fewer than five subreddits or no valid intruder available were not used for this task.
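The intruder selection can be written compactly as below; comment_counts and sigma_t come from the month's snapshot, the helper simply returns None for clusters that cannot be used, and all names are illustrative.

```python
import random
import numpy as np

def pick_intruder(cluster, all_subreddits, comment_counts, sigma_t, rng=random):
    """Return (top-5 subreddits of the cluster, random intruder) or None."""
    ranked = sorted(cluster, key=lambda s: comment_counts[s], reverse=True)
    if len(ranked) < 5:
        return None
    top5 = ranked[:5]
    mu = np.mean([comment_counts[s] for s in top5])
    # Candidate intruders: outside the cluster, within mu +/- sigma_t comments
    candidates = [s for s in all_subreddits
                  if s not in cluster and abs(comment_counts[s] - mu) <= sigma_t]
    if not candidates:
        return None
    return top5, rng.choice(candidates)
```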
As in the original word intrusion task, _model precision_ measures how frequently annotators picked out the random intruder for cluster \(C_{i}\) in the clustering \(\mathcal{C}_{t}\). The model precision is formulated as
\[\mathrm{MP}^{\mathcal{C}_{t}}_{C_{i}}=\frac{\text{number of annotators to identify }\mathbf{I}_{C_{i}}}{\text{total number of annotators}}\]
### Temporal Stability
Similarity of snapshot models from one month to the next is important for the usability of our models. Consider how topographic and geopolitical maps of the globe from different years show gradual changes over time, while drastic changes in a short period are still visually obvious to the informed map-reader. Similarly, our models' users should trust that groupings they create from January's snapshot will still be largely relevant in February. True changes in communities, such as the emergence of a new community from an existing subreddit or cross-community engagement between previously unrelated subreddits, must also be reflected.
**Jaccard Similarity** is used to compare the membership of two sets, \(A\) and \(B\), ranging from 0 when the sets have no elements in common to 1 when they contain exactly the same members.
\[J(A,B)=\frac{|A\cap B|}{|A\cup B|}\]
Using pairwise comparisons of a particular subreddit's nearest neighbors across the monthly community embedding snapshots, we can describe how the community of users active in that subreddit may be shifting.
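For a single subreddit, this comparison reduces to a few lines over the monthly KeyedVectors; the helper below is a sketch assuming the snapshot models are ordered by month.

```python
def neighbor_stability(monthly_kvs, subreddit, topn=20):
    """Jaccard similarity of a subreddit's top-n neighbors in adjacent monthly
    snapshots. monthly_kvs: list of gensim KeyedVectors ordered by month."""
    def neighbors(kv):
        return {s for s, _ in kv.most_similar(subreddit, topn=topn)}

    scores = []
    for kv_a, kv_b in zip(monthly_kvs, monthly_kvs[1:]):
        if subreddit not in kv_a or subreddit not in kv_b:
            scores.append(None)   # subreddit missing from one snapshot
            continue
        a, b = neighbors(kv_a), neighbors(kv_b)
        scores.append(len(a & b) / len(a | b))
    return scores
```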
**Variation of Information** (VI) is an information theoretic based metric that can be used to compare clusterings of the same data points, subreddits in our case. In simple terms, VI measures how much information is required to turn one clustering into another. If only a few data points need to be reassigned, the clusterings are similar and VI is low. To compare two clusterings \(\mathcal{C}_{i}\) and \(\mathcal{C}_{j}\), VI is computed using entropy \(H\) and mutual information \(I\) as
\[VI(\mathcal{C}_{i},\mathcal{C}_{j})=H(\mathcal{C}_{i})+H(\mathcal{C}_{j})-2I( \mathcal{C}_{i},\mathcal{C}_{j})\]
Several properties of VI make it appealing for our use case. First, it is a metric on clusterings and follows all the axioms of a distance metric for comparing clusterings, namely
Figure 4: Histogram of average Jaccard Similarity of each subreddit’s 20 nearest neighbors under community embedding snapshot models in adjacent months. Vertical lines indicate mean and one standard deviation above and below.
non-negativity, symmetry, evaluating to zero only when clusterings are identical, and obeying the triangle inequality. Moreover, VI does not directly depend on the number of data points in the data set, so VI values from clusterings of different data set sizes can be interpreted on the same scale, so long as there is a fixed upper bound on the number of clusters [12]. For example, if \(\max(|\mathcal{C}_{i}|,|\mathcal{C}_{j}|)=101\), the upper bound on \(VI(\mathcal{C}_{i},\mathcal{C}_{j})\) is 13.32. As a consequence, VI allows us to measure the differences between subreddit clusterings at different temporal snapshots, even when different numbers of subreddits overlap from one month to the next.
In our case, when we want to compare a clustering \(\mathcal{C}_{i}\) of subreddits \(S_{i}\) to a clustering \(\mathcal{C}_{j}\) of subreddits \(S_{j}\), we extend the clusterings to also account for subreddits which only appear in one set by assigning subreddits which did not originally appear in time period \(t_{i}\) to a new cluster, \(C^{\prime}_{i}=\{s|s\in S_{j},s\notin S_{i}\}\), then define \(\mathcal{C}^{\prime}_{i}=\mathcal{C}_{i}\cup\{C^{\prime}_{i}\}\). Similarly, \(C^{\prime}_{j}=\{s|s\in S_{i},s\notin S_{j}\}\) and \(\mathcal{C}^{\prime}_{j}=\mathcal{C}_{j}\cup\{C^{\prime}_{j}\}\). Now \(\mathcal{C}^{\prime}_{i}\) and \(\mathcal{C}^{\prime}_{j}\) partition the same set of data points, \(S=S_{j}\cup S_{i}\). This approach allows us to compare clusterings of the top 10,000 subreddits during different temporal snapshots of Reddit, measuring the stability of clusterings as a user would experience them, seeing clusters merge or shift over time as user comment behavior changes or subreddits change in popularity.
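The extended VI computation can be expressed as below, with entropies and mutual information in nats (divide by ln 2 to recover bit-scaled values such as the 13.32 upper bound quoted above); the 'missing' cluster label is our own convention for subreddits absent from one snapshot.

```python
import numpy as np
from scipy.stats import entropy
from sklearn.metrics import mutual_info_score

MISSING = -1  # extra cluster for subreddits absent from one month's snapshot

def variation_of_information(clustering_i, clustering_j):
    """clustering_i/j: dicts mapping subreddit -> cluster id for two snapshots.
    Subreddits missing from a snapshot are placed in an extra 'missing' cluster."""
    subreddits = sorted(set(clustering_i) | set(clustering_j))
    a = np.array([clustering_i.get(s, MISSING) for s in subreddits])
    b = np.array([clustering_j.get(s, MISSING) for s in subreddits])

    h_a = entropy(np.bincount(a - a.min()))   # entropy of each clustering, in nats
    h_b = entropy(np.bincount(b - b.min()))
    mi = mutual_info_score(a, b)              # mutual information, in nats
    return h_a + h_b - 2.0 * mi               # VI = H(A) + H(B) - 2 I(A, B)
```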
## 6 Experimental Results and Discussion
### Intrinsic Clustering Quality
To gauge which configuration produced the most usable clusters, we trained the clustering models described in Section 4.2 on each monthly snapshot embedding, varying the desired number of clusters as an input parameter. The intrinsic measures, seen in figure 3, favor k-means++ and HA average-linkage models, especially when the clustering size was smaller than 100. The average-linkage HA models had better Davies-Bouldin scores and the highest silhouette scores for low numbers of clusters, while k-means++ models had better silhouette scores for moderately sized clusterings of 50 through 175 clusters. As the number of clusters increased, the differences in intrinsic quality measures between different clustering models became less pronounced, although the average-linkage HA models maintained better Davies-Bouldin scores, even up to settings with 400 clusters. As an effect of the linkage method choice, the average-linkage HA models had much larger variation in the number of subreddits assigned to clusters, resulting in some clusters containing fewer than five subreddits, as well as clusters assigned many more subreddits than the average cluster size. This does not arise in the Ward-linkage HA or k-means++ clusterings.
### Temporal Stability
#### Community Embeddings
Even before applying clustering, the community embeddings snapshots reveal temporal trends around categories of subreddits.
As seen in figure 1, community embeddings produced from each monthly snapshot are able to consistently achieve high P@5 on solving subreddit analogies, yet temporal shifts are obvious when viewing the analogies by category for each month. Collections of subreddits will have more activity during certain times of the year, determining which analogies can be solved, an effect of our data pre-processing approach. Figure 2 shows that there are more solvable analogies involving sports teams during each sport's season, which is particularly pronounced for baseball. University-city pairs have more solvable analogies during August, September, and January, times when students may be actively commenting to find housing and information about their school or neighborhood at the beginning of the semester. Even in time periods when there are fewer subreddits from a category retained, the community embeddings still solve a high proportion of the remaining analogies.
Changes in an individual subreddit's nearest neighbors from one month to the next offer another way to analyze temporal shifts in communities on Reddit. Take the subreddit opensea, dedicated to a marketplace for selling non-fungible tokens (NFTs) associated in particular with digital artwork. In the earliest months of our dataset, April through June, 2021, opensea had a variety of subreddits in its top 20 nearest neighbors: other NFT marketplaces (NFTsMarketplace), general interest cryptocurrency and blockchain subreddits (defi, Metamask, altcoin), and those dedicated to art and artists (AbstractArt, DigitalPainting, Illustration, animation). The Jaccard Similarity of opensea's top 20 nearest neighbors was also fairly low between monthly snapshots from April to June, 0.25 both months, meaning only a quarter of the neighbors were shared between months. Over the course of 2021, as popularity and sales of NFTs exploded [15], the art and cryptocurrency focused subreddits disappeared from opensea's neighbors, replaced by those dedicated solely to exchanging NFTs (OpenseaMarket, NftGiveawayOnly, NFTCollect, NFTExchange). Jaccard Similarity of the top 20 nearest neighbors between months also increased, averaging 0.58 for the last 6 months of our data and peaking at 0.82 between December and January. Our method corroborates the establishment of a Reddit community around exchanging NFTs coinciding with growth in NFT sales on the market.
To see how this observation would generalize, for the 6,950 subreddits that appeared in every monthly snapshot, we analyzed the Jaccard Similarity of their 20 nearest neighbors in adjacent months. For example, the 20 most similar subreddits to aww, the subreddit dedicated to cute animal pictures, under the April 2021 community embedding model would be compared to aww's 20 most similar subreddits in May 2021 using Jaccard Similarity, then May 2021 compared to June 2021, and so on. This results in 11 scores for pairs of adjacent months for every subreddit. A histogram of this average Jaccard Similarity is presented in figure 4.
At its extreme low value, this average Jaccard similarity score identifies subreddits with comments almost exclusively from bots dedicated to that subreddit, making it difficult to anchor them to other subreddits in the embedding models. These include subreddits intended to function like RSS feeds (newsNepal, OzBargainBin), facilitate forum
polls (CelebBattlePolls) or observe a chat bot interacting with itself (SubSimulatorGPT2). They constitute an interesting category of subreddits which exist only to be observed by Redditors. Because there are too few non-bot user comments, the community embedding models cannot learn a position for these subreddits relative to others and give them a random embedding and set of neighbors each month.
Inversely, high similarity of nearest neighbors over time can be used to identify subreddits that have strong connections within a particular Reddit community during the entire year. The top 5 most stable subreddits found with this method, readwithme, RedditSessions, TheArtistStudio, TheYouShow, TheGamerLounge, all belonged to the Reddit Public Access Network (RPAN), a set of 17 subreddits to which users could live-stream, which launched in 2019 (Peters 2020). In a note to the community in November, 2022, Reddit announced discontinuation of the service citing costs, thanking the RPAN's "diehard fans and avid moderation teams"6. Although we weren't previously aware of RPAN, our method was able to detect this community, which had an active, dedicated user-base in 2021, as people facing lockdowns during the COVID-19 pandemic turned to online socializing. Other subreddits with highly stable neighbors also belonged to easily identifiable communities, namely open source software (gnome, kde) and mushroom cultivation for eating and psychedelics (MushroomGrowers, shrooms).
Footnote 6: [https://www.reddit.com/r/pan/comments/yl5zzd/update_on_the_future_of_live_video_broadcasting/](https://www.reddit.com/r/pan/comments/yl5zzd/update_on_the_future_of_live_video_broadcasting/)
At less extreme values, this average Jaccard Similarity is more difficult to interpret. Popular general interest subreddits, like gaming, facepalm and AskWomen, fall near the average, but so do subreddits which one might expect to be part of a dedicated community, like one community for fans of the anime _One Piece_ and another for organizing players of the mobile game _Pokemon Go_. The value is not strongly correlated with a subreddit's popularity by number of comments in the time period (Pearson's \(r=0.063\)), but we encourage future work to explore whether it is related to other features of user behavior, such as a subreddit's user retention or generalist-specialist score (Waller and Anderson 2019).
different months. Summarized in figure 5, these results show how similar groupings of subreddits are over the course of the year within a single clustering setting. Clusterings produced from the same type of model are generally the most stable across the entire year, but clusterings produced from different models are comparable and the choice of clustering model and parameters does matter. HA with average-linkage clusterings are the most similar month-over-month, while HA with complete linkage are the least stable. Clusterings are also fairly stable within k-means++ and HA Ward models.
We repeated this experiment using only k-means++ models, training 10 models on each monthly snapshot and varying the models' initial starting parameters. Low average VI within the same month suggests consistent clusterings can be produced from the community embeddings and the process isn't sensitive to starting parameters. VI values from these inter-month comparisons appear in figure 6, where the gradual change of clusterings is clear. The similarity of clusterings within the same month is seen in the dark diagonal band, which fades out to the most different clusterings between months furthest apart in time. To some extent, this relationship is due to our strategy of computing VI based on the union of the top 10,000 subreddits between months, which also change gradually over time, but this approach captures the way end-users of the clusterings experience the changes when browsing subreddit clusters.
### Qualitative Clustering Evaluations
Based on their strong intrinsic metrics and greater temporal stability, we focused manual annotation efforts on the HA average-linkage and k-means++ clusterings. We arbitrarily selected two monthly snapshots, July 2021 and March 2022, for annotation at a granularity of 100 clusters, seeking a balance between a tractable number of clusters to review and the average number of subreddits which can be judged at a glance for each cluster. In terms of the tasks described in section 5.2, annotators marked the clusters produced by k-means++ as more coherent and also more readily identified intruder subreddits for k-means++ models, summarized in table 1 and figure 7.
Inter-annotator agreement differed between k-means++ and HA average-linkage clusterings on the coherence task, suggesting that judging coherence for the HA average-linkage models was more subjective or challenging for annotators. For the k-means++ clusterings, Gwet's AC1 was 0.85, falling in the excellent range, while on HA average-linkage clusterings, it was in the moderate range at 0.50 [23]. The major source of disagreement was clusters to which HA average-linkage assigned only one or two subreddits, related to the large standard deviation in cluster size for those clusterings, as seen in figure 3. One annotator felt that a cluster consisting of only one subreddit was inherently coherent, its topic being exactly that subreddit's topic of interest, resulting in a creditable C-Upper score of 0.88 for HA average-linkage clusterings. Others thought HA average-linkage failed at the goal of contextualizing that subreddit within a community on Reddit.
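For two raters giving binary coherent/not-coherent judgments, Gwet's AC1 reduces to the short computation below. This is a simplified two-rater sketch of the statistic (the general multi-rater, multi-category form follows Gwet 2008), not necessarily the script used for our annotation analysis.

```python
def gwets_ac1(ratings_a, ratings_b):
    """Gwet's AC1 for two raters and two categories (e.g. 1 = coherent, 0 = not).

    ratings_a, ratings_b: equal-length lists of 0/1 judgments, one pair per cluster.
    """
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # average proportion of positive ratings across both raters
    pi = (sum(ratings_a) + sum(ratings_b)) / (2 * n)
    chance = 2 * pi * (1 - pi)          # AC1 chance-agreement term for two categories
    return (observed - chance) / (1 - chance)
```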
Anecdotally, annotators reported high trust in the k-means++ clusters in particular, citing groupings of subreddits that reflected linguistic or geographic features. In both months the k-means++ models surfaced both _Canada_ (canada, ontario, vancouver...) and _India_ (Cricket, indiasocial, india, unitedstatesofindia...) clusters. In March 2022, they noted two separate clusters for US cities on the East (boston, nyc, philadelphia, AskNYC, washingtondc...) and West coasts (LosAngeles, Seattle, Portland, bayarea, sandiego...), and a German-language cluster (de, ich_iel, FragReddit, Austria, Finanzen, germany, mauerstrassenwetten...). Results for HA average-linkage clusterings with size 100 were underwhelming, but if hierarchical clusters are desired, VI results suggest that Ward linkage may produce clusterings more similar to k-means++.
## 7 Ethical Statement
Although our work gives an overview of Reddit in aggregate at the subreddit level and does not draw examples from specific posts, comments, or users, it can facilitate discovery of particular subreddit communities and detailed analysis of their users or content. We acknowledge the potential for drawing unwanted attention and the observer effect, where attention will change the way people use Reddit. This attention may be quite harmful to users if it results in doxing or abandoning subreddit communities or long-held pseudonyms to avoid embarrassment or scrutiny. In many spaces on the internet, such harmful attention disproportionately affects marginalized communities, women and LGBTQ+ folks. We intentionally selected the top 10,000 subreddits by number of comments as a threshold for this work, as users interacting in these subreddits tacitly understand their posts and comments may get significant public attention.
Figure 7: The histogram shows the distribution of \(\mathrm{MP}^{\mathcal{C}_{t}}_{C_{i}}\) across all clusters annotated in our intruder task. Annotators were more able to identify the random intruder in clusters from k-means++ models than in clusters from hierarchical agglomerative models with average-linkage.

Many Redditors are aware that their public posts are being used for research purposes and have expressed concerns over use of the Pushshift dataset as it relates to GDPR compliance, particularly over the right to be forgotten and having personally identifiable information removed from data releases and the Pushshift API. At the time of this paper's submission, Reddit has indicated that it will no longer allow Pushshift to maintain its dataset through existing methods. Like most researchers focused on Reddit, we see Pushshift as an invaluable research resource, and we may be forced to significantly change our methods or desist from researching Reddit if Pushshift and Reddit are not able to find agreement on methods for archiving and studying the site.
Fittingly, we direct researchers' attention to the discussions on the pushshift subreddit and this request7 that researchers alert subreddit moderators when using data scraped from certain subreddits to minimize harms to communities. We echo Reagle's recommendations, imploring our fellow researchers to directly engage with the subreddits they're studying to protect user privacy or give creative attribution when appropriate [1]. Messaging subreddit moderators on the platform is a great place to start.
Footnote 7: [https://www.reddit.com/r/Drugs/comments/ri4kqd/dear_researchers_scraping_data_from_this/](https://www.reddit.com/r/Drugs/comments/ri4kqd/dear_researchers_scraping_data_from_this/)
## 8 Concluding Discussion
We presented a method for building monthly snapshot community embeddings for popular subreddits that facilitates exploration of Reddit during that time period through similarity relationships and clustering. Despite each month's snapshot being created independently from other months in the dataset, these models capture gradual changes in Reddit over time and can be used in confirmatory and exploratory analyses to understand community trends on Reddit. Cluster coherence and subreddit intruder annotation tasks showed these clusterings align with the judgements of human experts, especially when using k-means++ models, which could provide an intuitive interface for browsing and better understanding discourse on Reddit.
In the near future, we look forward to providing these models to the community as an interactive application and hope that they will facilitate further research on Reddit in partnership with moderators and subreddit communities. We also anticipate many ways to improve these models, such as applying soft clustering to create multiple lenses to view relationships between subreddits or incorporating genealogy graphs or Procrustes alignment to further compare monthly snapshot embeddings. Additional features, especially text and user retention or growth, may be good starting points to generate insights.
|
2309.15147 | The OGLE Collection of Variable Stars. Over 15 000 Delta Scuti Stars in
the Large Magellanic Cloud | We present the OGLE collection of delta Scuti stars in the Large Magellanic
Cloud and in its foreground. Our dataset encompasses a total of 15 256 objects,
constituting the largest sample of extragalactic delta Sct stars published so
far. In the case of 12 delta Sct pulsators, we detected additional eclipsing or
ellipsoidal variations in their light curves. These are the first known
candidates for binary systems containing delta Sct components beyond the Milky
Way. We provide observational parameters for all variables, including pulsation
periods, mean magnitudes, amplitudes, and Fourier coefficients, as well as
long-term light curves in the I- and V-bands collected during the fourth phase
of the OGLE project.
We construct the period-luminosity (PL) diagram, in which fundamental-mode
and first-overtone delta Sct stars form two nearly parallel ridges. The latter
ridge is an extension of the PL relation obeyed by first-overtone classical
Cepheids. The slopes of the PL relations for delta Sct variables are steeper
than those for classical Cepheids, indicating that the continuous PL relation
for first-overtone delta Sct variables and Cepheids is non-linear, exhibiting a
break at a period of approximately 0.5 d.
We also report the enhancement of the OGLE collection of Cepheids and RR Lyr
stars with newly identified and reclassified objects, including pulsators
contained in the recently published Gaia DR3 catalog of variable stars. As a
by-product, we estimate the contamination rate in the Gaia DR3 catalogs of
Cepheids and RR Lyr variables. | I. Soszyński, P. Pietrukowicz, A. Udalski, J. Skowron, M. K. Szymański, R. Poleski, D. M. Skowron, S. Kozłowski, P. Mróz, P. Iwanek, M. Wrona, K. Ulaczyk, K. Rybicki, M. Gromadzki, M. Mróz | 2023-09-26T18:00:01Z | http://arxiv.org/abs/2309.15147v2 | # The OGLE Collection of Variable Stars.
###### Abstract
We present the OGLE collection of \(\delta\) Scuti stars in the Large Magellanic Cloud and in its foreground. Our dataset encompasses a total of 15 256 objects, constituting the largest sample of extragalactic \(\delta\) Sct stars published so far. In the case of 12 \(\delta\) Sct pulsators, we detected additional eclipsing or ellipsoidal variations in their light curves. These are the first known candidates for binary systems containing \(\delta\) Sct components beyond the Milky Way. We provide observational parameters for all variables, including pulsation periods, mean magnitudes, amplitudes, and Fourier coefficients, as well as long-term light curves in the \(I\)- and \(V\)-bands collected during the fourth phase of the OGLE project.
We construct the period-luminosity (PL) diagram, in which fundamental-mode and first-overtone \(\delta\) Sct stars form two nearly parallel ridges. The latter ridge is an extension of the PL relation obeyed by first-overtone classical Cepheids. The slopes of the PL relations for \(\delta\) Sct variables are steeper than those for classical Cepheids, indicating that the continuous PL relation for first-overtone \(\delta\) Sct variables and Cepheids is non-linear, exhibiting a break at a period of approximately 0.5 d.
We also report the enhancement of the OGLE collection of Cepheids and RR Lyr stars with newly identified and reclassified objects, including pulsators contained in the recently published Gaia DR3 catalog of variable stars. As a by-product, we estimate the contamination rate in the Gaia DR3 catalogs of Cepheids and RR Lyr variables.
Stars: variables: delta Scuti - Stars: oscillations - Magellanic Clouds - Catalogs +
Footnote †: Based on observations obtained with the 1.3-m Warsaw telescope at the Las Campanas Observatory of the Carnegie Institution for Science.
## 1 Introduction
The fourth phase of the Optical Gravitational Lensing Experiment (OGLE-IV) has yielded extensive catalogs of variable stars, including nearly complete samples of Cepheids and RR Lyr stars in the Magellanic Clouds (_e.g._, Soszynski _et al._ 2015a, 2016, 2017, 2019). \(\delta\) Scuti variables share the same pulsation mechanism with Cepheids and RR Lyr stars, which consequently places them within the same instability strip in the Hertzsprung-Russell diagram. Recently, Soszynski _et al._ (2022) published a collection of over 2600 \(\delta\) Sct pulsators in the Small Magellanic Cloud (SMC) - the first ever such catalog covering the entire area of this galaxy. In this paper, we extend the OGLE Collection of Variable Stars (OCVS) by about 15 000 \(\delta\) Sct stars carefully selected in the OGLE-IV photometric database in the Large Magellanic Cloud (LMC).
\(\delta\) Sct variables are mid-A to early-F type pulsating stars that populate the Cepheid instability strip on or slightly above the main sequence. They exhibit low-order radial and non-radial pressure modes with periods below 0.3 d that are self-excited through the \(\kappa\)-mechanism. The \(\delta\) Sct class includes a mixture of stars at different evolutionary stages: young stellar objects during their contraction toward the main sequence, stars with core hydrogen burning on the main sequence, subgiants evolving off the main sequence, and Population II blue stragglers, called SX Phe stars.
The first \(\delta\) Sct stars in the LMC were discovered 20 years ago. The OGLE-II catalog of RR Lyr stars in the LMC (Soszynski _et al._ 2003) was supplemented with 37 short-period pulsating variables, of which 29 turned out to be actual \(\delta\) Sct stars, while the remaining ones were ultimately classified as Cepheids or RR Lyr stars. In turn, Kaluzny and Rucinski (2003) reported the discovery of eight small-amplitude short-period variables in the LMC open cluster LW 55 and suggested that seven of them might be \(\delta\) Sct stars.
The largest catalogs of \(\delta\) Sct stars in the LMC to date were published based on photometric data collected by the OGLE-III, SuperMACHO, and EROS-2 surveys. Poleski _et al._ (2010) identified 2788 candidates for \(\delta\) Sct variables in the OGLE-III database, although more than half of them were marked as uncertain due to their low luminosity, close to the detection limit of the 1.3-m OGLE telescope. At the same time, Garg _et al._ (2010) published a list of 2300 high-amplitude \(\delta\) Sct candidates detected by the 4-m Blanco Telescope operated by the SuperMACHO project. Then, Kim _et al._ (2014) reported the discovery of 2481 \(\delta\) Sct stars in the EROS-2 light curve database. These samples were supplemented with 55 \(\delta\) Sct variables found by Salinas _et al._ (2018) in a field centered on the LMC globular cluster NGC 1846. All these catalogs together contain about 6600 \(\delta\) Sct candidates in the LMC.
In this work, we verify these samples and significantly increase the number of known \(\delta\) Sct variables in the LMC. The remainder of this paper is organized as follows. In Section 2, we provide details about the OGLE observations and data
reduction. Section 3 outlines the procedures employed for the identification and classification of \(\delta\) Sct stars in the LMC. In Section 4, we cross-match our collection of \(\delta\) Sct variables with external catalogs of variable stars. Section 5 presents newly detected Cepheids and RR Lyr stars which were also included in the OCVS. As a by-product, we examine the contamination rates of the recently published Gaia DR3 catalogs of Cepheids (Ripepi 2023) and RR Lyr stars (Clementini 2023). In Section 6, we summarize the OGLE collection of \(\delta\) Sct stars in the LMC. The on-sky distribution of \(\delta\) Sct variables in the Magellanic Clouds is presented in Section 7. In Section 8, we derive the period-luminosity (PL) relations for \(\delta\) Sct stars in the LMC. The subsequent section is devoted to multimode \(\delta\) Sct pulsators. Binary systems containing \(\delta\) Sct components are discussed in Section 10. Finally, Section 11 provides a summary of our results.
## 2 Observations and Data Reduction
The OGLE observations were taken with the 1.3-meter Warsaw Telescope located at Las Campanas Observatory in Chile. The observatory is operated by the Carnegie Institution for Science. The Warsaw Telescope is equipped with a mosaic camera consisting of 32 2k\(\times\)4k CCDs with about 268 million pixels in total. The field of view of the OGLE-IV camera is 1.4 square degrees, with a pixel scale of 0.26 arcsec. For our research, we utilized photometric data in two photometric bands obtained within the OGLE-IV project between March 2010 and March 2020. The majority of the observations (typically around 700 data points per object) were collected by the OGLE-IV survey using the \(I\)-band filter from the Cousins photometric system. Additionally, from several to over 300 (typically 120) data points per star have been secured in the \(V\)-band filter, closely reproducing the bandpass from the Johnson photometric system.
The OGLE-IV project regularly observes an area of 765 square degrees in the Magellanic System region, fully covering the LMC, SMC, and the Magellanic Bridge connecting both galaxies. We adopted the celestial meridian of 2\({}^{\rm h}\) 8 as the arbitrary boundary that separates the LMC from the SMC in the sky. In the LMC region, the OGLE survey monitors the brightness of around 70 million stars with \(I\)-band magnitudes ranging from about 13.0 to 21.5. A detailed description of the instrumentation, photometric reductions, and astrometric calibrations of the OGLE-IV observations is provided by Udalski (2015).
## 3 Search for \(\delta\) Sct Stars
Our search for \(\delta\) Sct variables in the LMC followed procedures similar to those described in Soszynski (2022). Firstly, the \(I\)-band time series of each star observed by OGLE in the LMC were passed through the period-search algorithm implemented in the Fnpeaks code+. We explored the frequency range from 0 to
100 cycles per day with a resolution of \(5\times 10^{-5}\) cycles per day. For each light curve, we measured the dominant period and then subtracted it, along with its first two harmonics, from the data. Then, we repeated the periodicity search procedure on the residuals, allowing us to measure the two strongest periods for each source.
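A schematic version of this two-step search, using a Lomb-Scargle periodogram as a stand-in for the Fnpeaks code and a least-squares Fourier fit for the prewhitening, is sketched below; the frequency grid follows the text, while the function names and fitting details are illustrative choices rather than the exact implementation.

```python
import numpy as np
from astropy.timeseries import LombScargle

def strongest_frequency(t, mag, f_max=100.0, df=5e-5):
    """Frequency (cycles/day) of the highest periodogram peak on a fixed grid."""
    freq = np.arange(df, f_max, df)
    power = LombScargle(t, mag).power(freq)
    return freq[np.argmax(power)]

def prewhiten(t, mag, freq, n_terms=3):
    """Subtract a Fourier series (fundamental plus its first two harmonics)
    fitted by least squares at the given frequency."""
    columns = [np.ones_like(t)]
    for k in range(1, n_terms + 1):
        columns += [np.sin(2 * np.pi * k * freq * t),
                    np.cos(2 * np.pi * k * freq * t)]
    A = np.vstack(columns).T
    coeffs, *_ = np.linalg.lstsq(A, mag, rcond=None)
    return mag - A @ coeffs

# f1 = strongest_frequency(t, mag)
# f2 = strongest_frequency(t, prewhiten(t, mag, f1))   # secondary period from residuals
```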
The next stage of our procedure for selecting and classifying \(\delta\) Sct stars in the LMC was based on the visual inspection of the light curves with a dominant period shorter than 0.3 d. After excluding known eclipsing variables and RR Lyr stars, we examined the \(I\)- and \(V\)-band light curves of over 100 thousand stars with the largest signal-to-noise ratios of their periods. Although the OGLE \(V\)-band light curves consist of a smaller number of data points compared to the \(I\)-band time series, this is compensated by lower noise in the \(V\) filter for most \(\delta\) Sct variables. Based on the characteristic shapes of the light curves, we selected an initial sample of candidate \(\delta\) Sct stars. Furthermore, we identified a number of double-mode pulsators based on their high signal-to-noise secondary periods and period ratios in the range of 0.75 to 0.81 (see Section 9).
Figure 1: \((V-I)_{0}\) vs. \(I_{0}\) color–magnitude diagram for \(\delta\) Sct stars in the LMC (blue points). The background yellow points show stars from the field LMC519. Darker colors indicate areas of higher density of the points. The colors and magnitudes have been corrected for interstellar extinction using the reddening maps by Skowron (2021).
In the last stage, we verified our candidates by checking their positions on the color-magnitude and PL diagrams. Fig. 1 shows the color-magnitude diagram for \(\delta\) Sct stars in the LMC. The \(I\)-band magnitudes and \((V-I)\) color index have been corrected for interstellar extinction using the high-resolution reddening maps by Skowron (2021). During the selection process, most of the sources with a dereddened color index outside the range \(0.1<(V-I)_{0}<0.7\) mag (corresponding to the instability strip for \(\delta\) Sct variables) have been removed from our initial list. However, approximately 5% of the stars in the final version of our collection have colors outside this range because we considered that they may be genuine \(\delta\) Sct pulsators blended with other stars. The result of our selection procedure was an initial list of approximately 14 000 \(\delta\) Sct stars in the direction of the LMC.
## 4 Comparison with the Literature
In order to assess the completeness and contamination rate of the OGLE collection of \(\delta\) Sct stars in the LMC, we cross-matched it with several lists of variable stars, including the catalogs published by the OGLE-III (Poleski 2010), SuperMACHO (Garg 2010), and EROS-2 (Kim 2014) projects, the International Variable Star Index (VSX, Watson 2006), and the Gaia DR3 catalog of main-sequence pulsators (Gaia Collaboration 2023). We carefully examined the light curves of \(\delta\) Sct candidates that were not present in the initial version of our collection and supplemented it with over 1000 objects that we identified as genuine \(\delta\) Sct pulsators. The final OGLE collection contains 4712 stars that were identified as \(\delta\) Sct variables in previously released catalogs. This means that 10 544 objects in our sample (69%) are new discoveries. Below, we present detailed results of the comparison between the OGLE collection and other catalogs of \(\delta\) Sct stars in the LMC.
Our collection shares 2309 sources with the OGLE-III catalog of 2788 \(\delta\) Sct candidates in the LMC (Poleski 2010). For the remaining 479 objects (representing about 17% of the OGLE-III catalog), we were unable to confirm the classification provided by Poleski (2010). Several of these stars have been reclassified as classical Cepheids or RR Lyr variables, several dozen other sources turned out to be eclipsing or ellipsoidal variables, but the majority of the rejected stars have an unknown classification. While it is possible that some of these objects are true \(\delta\) Sct stars, we decided to exclude them from the OCVS in order to maintain the purity of our sample. It is worth noting that the vast majority of the rejected stars were marked as uncertain in the OGLE-III catalog.
The SuperMACHO catalog of high-amplitude \(\delta\) Sct stars in the LMC (Garg 2010) contains 2300 objects3. The SuperMACHO project (Rest 2005) was an optical survey of the LMC conducted with the 4-m Blanco telescope at
the Cerro Tololo Inter-American Observatory in Chile. The SuperMACHO photometry is deeper than the OGLE photometry obtained with the 1.3-meter Warsaw telescope, which is the main reason why 308 \(\delta\) Sct stars discovered by Garg _et al._ (2010) are missing in our collection. In turn, we confirm the classification of 1992 brighter variables, which indicates a high level of purity of the SuperMACHO catalog.
The EROS-2 catalog (Kim _et al._ 2014) comprises 117 234 automatically classified variable stars in the LMC, of which 2481 are categorized as \(\delta\) Sct candidates. It is worth noting that the list published by Kim _et al._ (2014) contains in total 150 115 candidates for variable stars; however, objects fainter than \(B_{E}=20\) mag and those with low signal-to-noise ratios (\(\rm{S/N}<20\)) of their periods were considered false positives and thus excluded from the official EROS-2 catalog. Following this approach, we also excluded these faint and low-signal-to-noise EROS-2 sources from our cross-match. Our collection includes 1376 out of 2481 objects classified by Kim _et al._ (2014) as \(\delta\) Sct stars, which represents about 56% of the EROS-2 sample. Among the missing stars, we found over 300 eclipsing binary systems, more than 50 RR Lyr variables, some irregular variables, and constant stars.
Comparison of our \(\delta\) Sct sample with the VSX catalog (Watson _et al._ 2006) revealed 30 common objects, all of which are brighter than \(I=16.5\) mag, indicating they are foreground stars. The vast majority of these variables were discovered by the All-Sky Automated Survey for Supernovae (ASAS-SN, Jayasinghe _et al._ 2019, Christy _et al._ 2023). In the catalog of main-sequence pulsators published as part of the Gaia DR3 (Gaia Collaboration _et al._ 2023), we identified 49 \(\delta\) Sct stars that overlap with our sample. All of these variables also belong to the halo of the Milky Way. Additionally, our collection contains 71 variables classified in the Gaia DR3 catalog as RR Lyr stars, eclipsing binaries, or short-timescale variables.
## 5 New Cepheids and RR Lyr Stars. Comparison to the Gaia DR3 Catalog
Our search for \(\delta\) Sct variables in the Magellanic System and the cross-match of the OGLE databases with the Gaia DR3 catalog of variable stars (Clementini _et al._ 2023, Ripepi _et al._ 2023) allowed us to expand the OGLE collection of Cepheids and RR Lyr stars (Soszynski _et al._ 2015a, 2016, 2017, 2019). Population I \(\delta\) Sct stars and classical Cepheids form a continuous distribution; therefore, we adopted a boundary pulsation period that separates these two types of variables. As reasoned in Soszynski _et al._ (2022), we applied a maximum period of 0.3 d for the fundamental mode and 0.23 d for the first-overtone mode in \(\delta\) Sct stars. Population I pulsators with longer periods are categorized as classical Cepheids in the OCVS.
The adoption of these strict criteria resulted in the reclassification of several pulsating stars already present in the OCVS, although this change is purely formal. Two multimode variables with the first-overtone periods shorter than 0.23 d were
moved from the list of classical Cepheids to the collection of \(\delta\) Sct stars. On the other hand, four stars classified by Poleski (2010) as \(\delta\) Sct stars have pulsation periods that, according to our new criteria, place them among Cepheids, so they were included in the OGLE collection of classical Cepheids in the LMC (Soszynski 2015ab). In addition, the classification of six pulsators with periods above 0.2 d has been changed from \(\delta\) Sct to first-overtone RR Lyr (RRc) stars. In these cases, we mainly relied on their position in the PL diagram, as they fall on the relation for RRc stars, below the PL sequence for delta Scuti stars. Table 1 contains all the reclassified classical pulsators in the LMC that have been moved between the OGLE catalogs of Cepheids, RR Lyr stars, and \(\delta\) Sct stars.
\begin{tabular}{l l l c}
\hline
Old identifier & New identifier & New classification & Subtype \\
\hline
OGLE-LMC-CEP-3367 & OGLE-LMC-DSCT-07569 & \(\delta\) Sct star & 1O/2O \\
OGLE-LMC-CEP-3374 & OGLE-LMC-DSCT-11916 & \(\delta\) Sct star & 1O/2O/3O \\
OGLE-LMC-DSCT-0394 & OGLE-LMC-RRLYR-41407 & RR Lyr star & RRc \\
OGLE-LMC-DSCT-0434 & OGLE-LMC-CEP-4716 & Classical Cepheid & 1O \\
OGLE-LMC-DSCT-0662 & OGLE-LMC-CEP-4717 & Classical Cepheid & F/1O \\
OGLE-LMC-DSCT-0765 & OGLE-LMC-RRLYR-41427 & RR Lyr star & RRc \\
OGLE-LMC-DSCT-0927 & OGLE-LMC-CEP-4718 & Classical Cepheid & F/1O/2O \\
OGLE-LMC-DSCT-1305 & OGLE-LMC-CEP-4720 & Classical Cepheid & 1O \\
OGLE-LMC-DSCT-1428 & OGLE-LMC-RRLYR-41452 & RR Lyr star & RRc \\
OGLE-LMC-DSCT-1709 & OGLE-LMC-RRLYR-41461 & RR Lyr star & RRc \\
OGLE-LMC-DSCT-1955 & OGLE-LMC-RRLYR-41467 & RR Lyr star & RRc \\
OGLE-LMC-DSCT-2716 & OGLE-LMC-RRLYR-41514 & RR Lyr star & RRc \\
\hline
\end{tabular}
Our collection of variable stars has also been enriched with additional Cepheids and RR Lyr stars identified as by-products of the search for \(\delta\) Sct pulsators, as well as through cross-matching with the Gaia DR3 catalog of variable stars (Clementini 2023, Ripepi 2023). The number of classical Cepheids in the LMC increased by five objects (including a very rare case of a double-mode pulsator with the second- and third-overtone modes simultaneously excited), type II Cepheids by one, anomalous Cepheids by two, and RR Lyr stars by 355 previously overlooked variables. These stars were omitted in the earlier editions of the OCVS due to a small number of measurement points in the OGLE light curves, noisy photometry, symmetric light curves, or pulsation periods close to 1/2 d, which led to an erroneous measurement of the period due to a daily alias. This update resulted in an increase of the OGLE sample of Cepheids in the LMC by less than 0.2%, while the list of RR Lyr stars grew by less than 1%, confirming the high completeness of the OCVS in the Magellanic Clouds.
We also utilized the Gaia DR3 catalog to validate the completeness of the OGLE collection of classical pulsating stars in the Galactic bulge and disk (Udalski 2018, Pietrukowicz 2020, Soszynski 2020, 2021). Firstly, we cross-matched the Gaia catalog with the OCVS. Then, we extracted and carefully examined the OGLE light curves of candidate pulsators that were not previously included in our collection. As a result, the OCVS was supplemented with 108 Galactic Cepheids of all types (representing 2.9% of the previously published sample), 1848 RR Lyr stars (2.4%), and 94 \(\delta\) Sct stars (0.4%).
As a by-product of our analysis, we examined the contamination rates of the Gaia DR3 catalogs of Cepheids and RR Lyr stars. The final version of the Gaia catalog contains 15 006 candidates for classical, type II, and anomalous Cepheids in the Milky Way, Magellanic Clouds, M31, and M33 (Ripepi 2023). The OGLE photometric databases provide time-series data for 12 139 of these stars. We confirm that the vast majority of them, over 97%, are indeed Cepheids. Among the remaining stars, we identified more than 50 eclipsing and ellipsoidal variables, over 50 objects classified by the OGLE team as RR Lyr or \(\delta\) Sct stars, as well as some long-period variables, spotted variables, and other types of variable stars. Furthermore, the detailed division into classical and type II Cepheids agrees well between the Gaia DR3 catalog and the OCVS. The only exception is the anomalous Cepheids, as over 40% of the stars categorized as anomalous Cepheids in the Gaia DR3 catalog have a different classification in the OGLE collection.
The Gaia DR3 catalog contains 270 905 candidates for RR Lyr stars (Clementini 2023), out of which 148 401 are observed by the OGLE survey. We confirmed the Gaia classification for 104 346 (approximately 70%) of these stars. Among the remaining \(\approx 44\,000\) objects, we discovered around 300 eclipsing variables and some other types of variable stars. However, the vast majority of the Gaia RR Lyr candidates in this group show no periodic variability at all. We conducted a visual inspection of both the OGLE \(I\)-band and Gaia \(G\)-band light curves of these misclassified stars and noticed that most of them are faint (\(G>20\) mag), close to the detection limit of the Gaia telescope. The Gaia time-series photometry for these objects is usually quite noisy, and the periods provided in the Gaia DR3 catalog appear to be random measurements fitted to outlier data points in the light curves. Further analysis revealed a pronounced correlation between the contamination from non-RR Lyr sources and the brightness of stars in the Gaia DR3 catalog. The contamination rate equals approximately 1% for objects brighter than \(G=19\) mag, it increases to 21% for sources with mean \(G\)-band magnitudes ranging from 19 to 20 mag, and escalates to nearly 90% for stars fainter than \(G=20\) mag.
## 6 The OGLE Collection of \(\delta\) Sct Stars in the LMC
The definitive version of our collection comprises 15 256 \(\delta\) Sct variables found in the OGLE fields toward the LMC. Approximately 15 000 of these stars belong to the LMC, while the remaining objects are part of the Milky Way's halo. The majority
of \(\delta\) Sct stars in our sample exhibit radial pulsation in either the fundamental or first-overtone mode, as indicated by relatively large amplitudes of their light curves and position in the PL diagram (see Section 8). However, we refrain from providing the presumed pulsation modes of our variables due to the challenges associated with their identification in specific cases. Instead, we have divided our \(\delta\) Sct sample into single-mode and multimode pulsators. The latter category encompasses 639 stars (about 4% of the entire catalog) that feature substantial amplitudes of their secondary or tertiary pulsation modes.
The list of our \(\delta\) Sct stars together with their basic parameters (equatorial coordinates, intensity-averaged mean magnitudes in the \(I\) and \(V\) bands, up to three pulsation periods, amplitudes, epochs of the maximum light, and Fourier coefficients), as well as OGLE-IV time-series photometry, can be accessed through the OGLE Internet Archive:
_[https://ogle.astrouw.edu.pl](https://ogle.astrouw.edu.pl) \(\rightarrow\) OGLE Collection of Variable Stars_
_[https://www.astrouw.edu.pl/ogle/ogle4/OCVS/lmc/dsct/_](https://www.astrouw.edu.pl/ogle/ogle4/OCVS/lmc/dsct/_)
For the \(\delta\) Sct candidates published in the OGLE-III catalog of variable stars (Poleski 2010), we maintained their designations in the format of OGLE-LMC-DSCT-NNNNN (where NNNNN represents a consecutive number), while only extending the number of digits in the designation from four to five. The newly added \(\delta\) Sct stars have been organized by their right ascension and given designations from OGLE-LMC-DSCT-02789 to OGLE-LMC-DSCT-15735.
The pulsation periods, along with their uncertainties, were computed using the Tatry code (Schwarzenberg-Czerny 1996) based on the OGLE-IV light curves obtained between 2010 and 2020. To expand the temporal coverage of the photometric data from 10 to even over 20 years, the OGLE-IV light curves can be merged with the OGLE-III and OGLE-II time series provided by Poleski (2010). However, it is crucial to consider the possibility of zero-point offsets between these datasets for individual stars.
Fig. 2 illustrates the distributions of pulsation periods (upper panel), apparent \(I\)-band mean magnitudes (middle panel), and \(I\)-band peak-to-peak amplitudes (lower panel) of \(\delta\) Sct stars in the LMC and SMC (Soszynski 2022). The shapes of these histograms reflect both the intrinsic characteristics of the \(\delta\) Sct population in the Magellanic Clouds and the limitations of the OGLE photometry. The faintest objects in our collection of \(\delta\) Sct stars in the LMC have mean brightness of about \(I=21.3\) mag, but the number of variables in our sample starts to decline beyond a luminosity of \(I=20.5\) mag. It can be attributed to the pronounced correlation between the amplitude detection limits and the observed magnitudes of pulsating stars. For example, for variables with the mean brightness around \(I=20\) mag, the smallest detectable amplitudes are approximately 0.1 mag, while for stars with \(I=21\) mag, the amplitude detection limit increases to about 0.2 mag.
Figure 2: Distributions of dominant pulsation periods (_upper panel_), \(I\)-band mean magnitudes (_middle panel_), and \(I\)-band peak-to-peak amplitudes (_lower panel_) of 15 256 \(\,\delta\) Sct stars in the LMC (blue histograms) and 2810 \(\,\delta\) Sct stars in the SMC (red histograms).
Keeping in mind these luminosity and amplitude detection limits of the OGLE survey, we evaluated the completeness of our collection by considering stars that were identified twice within overlapping regions of adjacent fields. In the final iteration of our catalog, each \(\delta\) Sct pulsator is uniquely represented by a single entry from the OGLE database, typically favoring the one with a greater number of data points in its light curve. We retrospectively checked that 1162 \(\delta\) Sct stars in our collection are located in the overlapping parts of neighboring OGLE-IV fields, implying that we could potentially detect 2324 objects from this group. In practice, we independently confirmed the classification of both components in 624 such pairs, whereas in 538 cases, only one component of the pair was identified. Consequently, this leads to the catalog completeness level of approximately 70%. Once again, we emphasize that this value applies to \(\delta\) Sct stars in the LMC, whose luminosities and amplitudes are sufficiently large to be detectable with the OGLE photometry.
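The quoted value can be reproduced from these counts if the two copies of a star in the overlap regions are assumed to be detected independently with the same probability \(p\): among stars detected at least once, the fraction with both copies recovered is \(p^{2}/[p^{2}+2p(1-p)]=p/(2-p)\). The short check below is our own reconstruction of that arithmetic, not necessarily the exact calculation performed.

```python
# stars in overlap regions: 624 with both copies recovered, 538 with only one
both, only_one = 624, 538
frac_both = both / (both + only_one)        # observed fraction ~ 0.537
# solve frac_both = p / (2 - p) for the per-copy detection probability p
p = 2 * frac_both / (1 + frac_both)
print(f"implied completeness: {p:.2f}")     # ~ 0.70
```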
## 7 On-sky Map
The upper panel of Fig. 3 displays the on-sky distribution of about 17 600 \(\delta\) Sct pulsators in the LMC and SMC (Soszynski 2022), while the lower panel shows the positions of approximately 400 SX Phe variables likely belonging to the Milky Way's halo. The latter group consists of stars that are at least 1.5 mag brighter than the average PL relation fulfilled by \(\delta\) Sct variables in the LMC or SMC, respectively.
The spatial distributions of different stellar populations provide valuable information about their history. For example, the ancient population of RR Lyr pulsators (Soszynski 2016) in the LMC exhibits a structure that can be approximated by a triaxial ellipsoid without any additional substructures (Jacyszyn-Dobrzeniecka 2017). Conversely, classical Cepheids, which are stars younger than 300 million years, tend to concentrate in the LMC bar and spiral arms (Soszynski 2015a, Jacyszyn-Dobrzeniecka 2016). \(\delta\) Sct stars also follow the bar and spiral arms of the LMC (Fig. 3), although this pattern is not as distinct as in the case of classical Cepheids, indicating that the majority of our \(\delta\) Sct sample consists of intermediate-age stars.
In the region of the LMC bar, the maximum surface density of \(\delta\) Sct stars from our catalog exceeds 600 objects per square degree. The surface density drops almost to zero at a distance of 5 degrees from the center of the LMC (toward the South) or 9 degrees (toward the North), while the distribution of RR Lyr stars extends to much larger distances (Soszynski 2016). This indicates that our collection of \(\delta\) Sct pulsators in the Magellanic Clouds is primarily comprised of Population I stars, while the majority of Population II SX Phe variables in the LMC and SMC are too faint to be detected in the OGLE frames. Of course, this statement does not apply to foreground \(\delta\) Sct stars in the halo of the Milky Way, which by definition belong to Population II.
## 8 Period-Luminosity Relations
The investigation of the PL relations followed by \(\delta\) Sct pulsators is one of the most important applications of our collection. Many empirical calibrations of the PL relations for \(\delta\) Scuti stars have been reported in the literature (_e.g._, Nemec _et al._ 1994, Cohen and Sarajedini 2012, Ziaali _et al._ 2019, Jayasinghe _et al._ 2020, Barac _et al._ 2022, Ngeow _et al._ 2023), but most were based on nearby variables with well-determined parallaxes or SX Phe stars identified in the Galactic globular clusters, the distances to which were known from other standard candles, for example RR Lyr stars.
The LMC has many advantages in the context of studying the distribution of various classes of pulsating stars in the PL plane. The close proximity, favorable orientation, and low average reddening toward this galaxy offer a unique opportunity for conducting in-depth analyses of its stellar component. The LMC hosts large and diverse populations of variable stars, including some of the largest known samples of Cepheids, RR Lyr stars, and long-period variables. Moreover, the distance to the LMC is currently known to an unprecedented accuracy of 1% (Pietrzynski _et al._ 2019).
Previous efforts to measure the PL relationships for \(\delta\) Sct stars in the LMC (McNamara _et al._ 2007, Garg _et al._ 2010, Poleski _et al._ 2010, McNamara 2011) relied on small or biased samples of variables. Recently, Martinez-Vazquez _et al._ (2022) gathered data for approximately 4000 extragalactic \(\delta\) Sct variables (primarily from the LMC) and investigated their distribution in the PL plane. They concluded that extragalactic \(\delta\) Sct stars exhibit a single PL relationship with a notable change in slope occurring at a period of around 0.09 d.
The OGLE collection of about 15 000 \(\delta\) Sct stars in the LMC enables us to verify these results. In Fig. 4, we present four versions of the extinction-corrected \(I\)-band PL diagram for our sample. In the upper left panel (a), different colors of points denote different amplitudes of brightness variations. In this diagram, two linear PL relationships can be discerned, with variables of larger amplitudes prevailing along the lower ridge. Due to the significant dispersion of points around these relationships, in panel b of Fig. 4, we provide a density map of the points on this diagram. There is no doubt that \(\delta\) Sct stars in the LMC follow two PL relationships without apparent changes in slope. This contradicts the findings of Martinez-Vazquez _et al._ (2022), who reported a single segmented PL relation, which probably was an illusion resulting from incompleteness of the prior catalogs of extragalactic \(\delta\) Sct stars.
The identification of the pulsation modes corresponding to both PL ridges can be achieved by plotting double-mode \(\delta\) Sct pulsators with period ratios in the range of 0.755-0.785, _i.e._, corresponding to the simultaneous oscillations in the fundamental and first-overtone modes (see Section 9). The color symbols in panel c
of Fig. 4 unambiguously indicate that the lower ridge is populated by \(\delta\) Sct variables pulsating in the fundamental mode, while the upper ridge is composed of the first-overtone pulsators.
In order to fit the most precise regression lines to both relations in the \(I\)-band, we constructed histograms of brightness for consecutive period bins (each with a bin size of 0.05 in \(\log P\), moved by \(\Delta\log P=0.005\)). Then, we found two local maxima (corresponding to the fundamental-mode and the first-overtone ridges) of the magnitude distributions for each period bin. Any unreliable determinations of the maxima (due to a limited number of stars within a bin) were excluded. Finally, we performed linear least-square fits to the obtained points.
Figure 4: Extinction-corrected \(I\)-band PL diagram for \(\delta\) Sct stars in the LMC. _Panel a_: the colors of the points represent peak-to-peak amplitudes of the \(I\)-band light curves, as indicated by the scale in the bottom right corner. _Panel b_: density map of the points in the PL diagram. _Panel c_: PL diagram for double-mode \(\delta\) Sct stars pulsating in the fundamental (purple points) and first-overtone (orange points) modes. Background gray dots represent single-mode \(\delta\) Sct stars. _Panel d_: linear least-square fits to the fundamental-mode (purple line) and first-overtone (orange line) PL relations of \(\delta\) Sct stars in the LMC. Yellow and pink squares indicate local maxima of the magnitude distribution.
The result of our procedure is shown in panel d of Fig. 4 and summarized in Table 2. The same method was used to fit the \(V\)-band PL relations as well as the period-\(W_{I}\) (PW) relations, where \(W_{I}\) is an extinction-insensitive Wesenheit index, defined as \(W_{I}=I-1.55(V-I)\).
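The ridge-finding step can be sketched as follows. The sliding-bin parameters match the text; the magnitude binning, the minimum-star cut, and the use of `scipy.signal.find_peaks` are illustrative choices of ours rather than the exact implementation.

```python
import numpy as np
from scipy.signal import find_peaks

def wesenheit(I, V):
    """Extinction-insensitive Wesenheit index W_I = I - 1.55 (V - I)."""
    return I - 1.55 * (V - I)

def ridge_points(logP, mag, bin_width=0.05, step=0.005, mag_bin=0.05, min_stars=50):
    """Local maxima of the magnitude distribution in sliding log-period bins.

    Returns an array of (logP_center, peak_magnitude) points; up to two maxima
    per bin, corresponding to the fundamental-mode and first-overtone ridges."""
    points = []
    for center in np.arange(logP.min(), logP.max() + step, step):
        sel = mag[np.abs(logP - center) < bin_width / 2]
        if sel.size < min_stars:                      # skip unreliable bins
            continue
        hist, edges = np.histogram(sel, bins=np.arange(sel.min(), sel.max() + mag_bin, mag_bin))
        peaks, props = find_peaks(hist, height=0)
        for k in peaks[np.argsort(props["peak_heights"])[-2:]]:
            points.append((center, 0.5 * (edges[k] + edges[k + 1])))
    return np.array(points)

# after assigning each ridge point to a mode, a straight line gives the PL relation:
# slope, zero_point = np.polyfit(ridge_logP, ridge_mag, 1)
```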
The dispersion of points around the average PL and PW relations is significant (\(\sigma\approx 0.2\) mag), which may stem from measurement errors of the photometry, blending by unresolved sources, the geometry of the LMC, differential interstellar extinction, as well as the diversity of stellar populations present in our sample. In particular, it is recognized that SX Phe stars are systematically underluminous relative to Population I \(\delta\) Sct pulsators (McNamara 2007).
The PL and PW relations for \(\delta\) Sct stars (Table 2) are distinctly steeper than the relations for classical Cepheids (Soszynski 2015a) pulsating in the same modes. In Fig. 5, we display the \(I\)-band PL diagram (left panel) and PW diagram (right panel) for \(\delta\) Sct stars and classical Cepheids in the LMC. The first-overtone variables lay along a continuous ridge in both diagrams, whereas the fundamental-mode pulsators demonstrate a discontinuity within the period range of approximately 0.3-1.0 d, except for several multimode Cepheids with the fundamental-mode periods falling within this range.
The lower panels of Fig. 5 show the residuals with respect to the linear fit applied to the first-overtone classical Cepheids with periods longer than 0.5 d.
Figure 5: PL (_left panels_) and PW (_right panels_) diagrams for \(\delta\) Sct stars and classical Cepheids in the LMC. Blue, orange, and red points mark \(\delta\) Sct variables, fundamental-mode classical Cepheids, and first-overtone classical Cepheids, respectively. Solid lines represent fits to the PL and PW relations. Dashed lines are the extensions of the PL and PW relationships for first-overtone Cepheids with periods longer than 0.5 d. _Lower panels_ show the residuals with respect to the fit applied to the first-overtone classical Cepheids with periods longer than 0.5 d.
The dashed line, which is an extension of this relation, lies above the majority of the \(\delta\) Sct stars, confirming the non-linear nature of the continuous PL and PW relationships for first-overtone Cepheids and \(\delta\) Sct variables. Moreover, most short-period overtone Cepheids also reside below this dashed line, indicating that the change in the slope of the PL and PW relations occurs at a period of about 0.5 d (\(\log P\approx-0.3\)). This break in the PL relation for first-overtone classical Cepheids was first noticed by Soszynski (2008). Recently, Ripepi (2022) confirmed the non-linearity of the first-overtone PL and PW relations in the near-infrared bands, pinpointing the break point at \(P_{\rm 1O}=0.58\pm 0.1\) d.
## 9 Multimode \(\delta\) Sct Stars
Stars that pulsate in multiple modes are attractive targets for asteroseismological research. Within the general population of \(\delta\) Sct stars, most objects are low-amplitude variables with a number of non-radial modes simultaneously excited (Breger, 2000). However, due to the limitations of the OGLE photometry, the proportions of low- and high-amplitude variables are inverted in our collection. The predominant portion of LMC \(\delta\) Sct stars within our sample demonstrates high-amplitude oscillations in the fundamental or first-overtone modes.
In our collection, we provide up to three pulsation periods per star. However, for the majority of variables, only a dominant period could be reliably measured. Secondary or tertiary periods were identified only when their amplitudes exceeded the detection thresholds of the OGLE photometry. As a result, the final version of our catalog includes 621 double-mode and only 18 triple-mode \(\delta\) Sct variables.
Fig. 6 shows the Petersen diagram (a plot of the ratio between two periods against the logarithm of the longer one) for multimode \(\delta\) Sct stars, classical Cepheids (Soszynski 2015b), and RR Lyr variables (Soszynski 2016) in the
LMC. As expected, we predominantly detected stars oscillating in two or three low-order radial modes, particularly in the fundamental and first-overtone modes (F/1O). Roughly two-thirds of all multimode \(\delta\) Sct pulsators in our sample have these two modes excited. Additionally, our dataset includes about 50 double- and triple-mode \(\delta\) Sct stars simultaneously pulsating in the first, second, or third overtones (in various configurations), and about 80 variables exhibiting secondary periods very close to the primary ones. The latter phenomenon may indicate the presence of the non-radial modes.
Fig. 6 vividly illustrates the continuity between \(\delta\) Sct stars and classical Cepheids in the Petersen diagram. The choice of pulsation periods that distinguishes both classes of variable stars is a matter of convention. In the OCVS, we adopted \(P_{\rm F}=0.3\) d for the fundamental mode, \(P_{\rm 1O}=0.23\) d for the first-overtone, and \(P_{\rm 2O}=0.185\) d for the second overtone.
The Petersen diagram is a powerful tool to constrain stellar parameters such as masses or metallicities of multimode pulsators (_e.g._, Petersen and Christensen-Dalsgaard 1996, Poretti 2005, Netzel 2022).
Figure 6: Petersen diagram for multimode \(\delta\) Sct stars (blue points), classical Cepheids (red points), and RR Lyr stars (green points) in the LMC.
Fig. 7 shows a zoom-in of the Petersen diagram focusing on the region occupied by F/1O \(\delta\) Sct stars in the Milky Way (Soszynski 2021), SMC (Soszynski 2022), and LMC (this work).
The Galactic double-mode pulsators exhibit a characteristic splitting of the period-period ratio sequence at the short-period end. This reflects the division of \(\delta\) Sct variables into the Population I stars (\(P_{\rm 1O}/P_{\rm F}\approx 0.773\)) and SX Phe stars (\(P_{\rm 1O}/P_{\rm F}\approx 0.778\), Breger 2000). Double-mode pulsators with such short periods are not present in the OGLE collection of \(\delta\) Sct stars in the Magellanic Clouds due to a selection bias. These stars are too faint to be identified through OGLE photometry. Nonetheless, it is evident that the period-period ratio sequence for \(\delta\) Sct stars in the LMC is situated above the sequence for Galactic stars, and even further above lie the SMC pulsators. Double-mode F/1O classical Cepheids in the Milky Way, LMC, and SMC exhibit analogous behavior (Udalski 2018), which can be attributed to the different metallicities among these three galaxies.
## 10 \(\delta\) Sct Stars in Binary Systems
Eclipsing binary systems offer a unique opportunity to accurately measure the physical parameters of the stellar components, such as masses, radii, luminosities, and temperatures. As a result, binaries comprising pulsating stars serve as excellent testbeds for asteroseismology and stellar evolution theory. \(\delta\) Sct variables within binary systems are not uncommon.
Figure 7: Petersen diagram for F/1O double-mode \(\delta\) Sct stars in the LMC (blue points), SMC (magenta points), and Milky Way (orange points).
Figure 8: Disentangled \(I\)-band light curves of \(\delta\) Sct stars showing additional eclipsing or ellipsoidal modulation. In each pair, _left panel_ displays the pulsation light curve, while _right panel_ shows the eclipsing/ellipsoidal light curve after subtracting the pulsation component.
According to the updated version of the Liakos and Niarchos (2017) catalog\({}^{8}\), a total of 367 binaries with a \(\delta\) Sct component are known so far, including 34 such objects identified by the OGLE team (Pietrukowicz 2020, Soszynski 2021). However, all of these systems are situated within the Milky Way.
Footnote 8: [https://alexiosliakos.weebly.com/catalogue.html](https://alexiosliakos.weebly.com/catalogue.html)
While searching for potential secondary periods in the light curves of \(\delta\) Sct stars in the LMC, we discovered 12 objects demonstrating additional variability caused by binarity: eclipses or ellipsoidal variations. The details of these stars, including their coordinates, mean magnitudes, pulsation and orbital periods, are provided in Table 3. Fig. 8 displays their disentangled pulsating and eclipsing/ellipsoidal light curves. To the best of our knowledge, these stars are the first known extragalactic candidates for binary systems containing \(\delta\) Sct components. The spectroscopic examination of these objects will be challenging due to their low apparent luminosity. Nevertheless, the long-term OGLE photometry could be useful in studying the stability of their orbital periods or the apsidal motion in the systems featuring eccentric orbits.
## 11 Summary
We presented the OGLE collection of about 15 000 \(\delta\) Sct variables in the LMC. Approximately two-thirds of these stars represent new discoveries. This compilation constitutes the most extensive sample of extragalactic \(\delta\) Sct stars published to date. Our catalog is a part of the OCVS, which presently comprises around 1.1 million
manually selected and classified variable stars in the Milky Way and the Magellanic Clouds. The OCVS facilitates comparative research on pulsating stars in different stellar environments. The extensive and well-sampled OGLE light curves in the standard photometric system offer opportunities to investigate exotic modes in pulsating stars, assess the stability of periods, and detect pulsating stars in binary systems.
This paper provides just a glimpse of the potential research that can be carried out on the variables within our collection. We presented the on-sky distribution of \(\delta\) Sct stars in the Magellanic System, derived empirical PL relations for fundamental-mode and first-overtone pulsators and compared them to the PL relations for classical Cepheids. Additionally, we conducted a comparison of period ratios in multimode \(\delta\) Sct variables originated from the Milky Way and the Magellanic Clouds. Finally, we reported the discovery of the first-known candidates for extragalactic eclipsing binaries containing a \(\delta\) Sct component.
This work has been funded by the National Science Centre, Poland, grant no. 2022/45/B/ST9/00243. For the purpose of Open Access, the author has applied a CC-BY public copyright license to any Author Accepted Manuscript (AAM) version arising from this submission.
|
2309.09546 | Training dynamic models using early exits for automatic speech
recognition on resource-constrained devices | The ability to dynamically adjust the computational load of neural models
during inference is crucial for on-device processing scenarios characterised by
limited and time-varying computational resources. A promising solution is
presented by early-exit architectures, in which additional exit branches are
appended to intermediate layers of the encoder. In self-attention models for
automatic speech recognition (ASR), early-exit architectures enable the
development of dynamic models capable of adapting their size and architecture
to varying levels of computational resources and ASR performance demands.
Previous research on early-exiting ASR models has relied on pre-trained
self-supervised models, fine-tuned with an early-exit loss. In this paper, we
undertake an experimental comparison between fine-tuning pre-trained backbones
and training models from scratch with the early-exiting objective. Experiments
conducted on public datasets reveal that early-exit models trained from scratch
not only preserve performance when using fewer encoder layers but also exhibit
enhanced task accuracy compared to single-exit or pre-trained models.
Furthermore, we explore an exit selection strategy grounded in posterior
probabilities as an alternative to the conventional frame-based entropy
approach. Results provide insights into the training dynamics of early-exit
architectures for ASR models, particularly the efficacy of training strategies
and exit selection methods. | George August Wright, Umberto Cappellazzo, Salah Zaiem, Desh Raj, Lucas Ondel Yang, Daniele Falavigna, Mohamed Nabih Ali, Alessio Brutti | 2023-09-18T07:45:16Z | http://arxiv.org/abs/2309.09546v2 | Training Dynamic Models Using Early Exits for Automatic Speech Recognition on Resource-Constrained Devices
###### Abstract
The possibility of dynamically modifying the computational load of neural models at inference time is crucial for on-device processing, where computational power is limited and time-varying. Established approaches for neural model compression exist, but they provide architecturally static models. In this paper, we investigate the use of early-exit architectures, which rely on intermediate exit branches, applied to large-vocabulary speech recognition. This allows for the development of dynamic models that adjust their computational cost to the available resources and recognition performance. Unlike previous works, besides using pre-trained backbones we also train the model from scratch with an early-exit architecture. Experiments on public datasets show that early-exit models trained from scratch not only preserve performance levels when using fewer encoder layers, but also improve task accuracy as compared to single-exit models or pre-trained models. Additionally, we investigate an exit selection strategy based on posterior probabilities as an alternative to frame-based entropy.
George August Wright\({}^{1}\), Umberto Cappellazzo\({}^{1}\), Salah Zaiem\({}^{2}\), Desh Raj\({}^{3}\),
Lucas Ondel Yang\({}^{4}\), Daniele Falavigna\({}^{5}\), Alessio Brutti\({}^{5}\)\({}^{1}\)University of Trento; \({}^{2}\)LTCI, Telecom Paris, Institut Polytechnique de Paris;
\({}^{3}\)Johns Hopkins University; \({}^{4}\)Université Paris-Saclay, LISN, CNRS; \({}^{5}\)Fondazione Bruno Kessler

**Index Terms**: dynamic models, early-exit, Conformer, ASR
## 1 Introduction
The edge-cloud continuum is an emerging complex ecosystem that integrates compute-enabled edge devices, distributing the overall computation workload among them [1]. Computational resources available on the devices considerably differ from each other and are time-varying due to sharing between different services. Therefore, having neural models that can dynamically change their trade-off between computation and performance is crucial. To this end, we investigate the use of early-exit architectures applied to large-vocabulary automatic speech recognition (ASR).
Previous work targeting neural models suitable for on-device processing mainly focused on decreasing the model size through compression [2], knowledge distillation [3, 4], pruning [5], and quantization [6]. Although very effective, these approaches deliver static solutions and require the models to be reconfigured each time the computational budget changes. Instead, it is preferable to dynamically adapt the model architecture to the memory and computational capabilities of each hosting device to avoid handling multiple models with varying trade-offs.
A solution for this task is represented by "early-exit" architectures that introduce intermediate exit branches [7, 8]. The input is not processed by all of the layers of the neural model but only by a subset of them: the result is returned at an intermediate level, thereby saving the operations of the layers that are not traversed. An example is shown in Figure 1, where a layer-specific classifier/decoder (called "Exit Layer") is appended to some intermediate encoder layers. The motivation relies on the observation that, for easier inputs, the lower (earlier) layers have already learned parameters sufficient to effectively predict the correct output. Early-exit architectures allow the development of **resource-aware** processing (Fig. 1, left), where the same model can be used on different devices, as well as **result-aware** processing (Fig. 1, right), where the model selects the earliest exit that would provide the same performance as processing the full network.
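To make the setup concrete, the sketch below shows a multi-exit encoder with a linear CTC head attached after every few layers, together with the joint objective obtained by summing the per-exit losses. It is a simplified stand-in (plain Transformer layers instead of Conformer blocks, arbitrary sizes and names of our choosing), not the exact model used in these experiments.

```python
import torch
import torch.nn as nn

class EarlyExitEncoder(nn.Module):
    """Encoder with an exit branch (linear CTC head) after every `exit_every` layers."""

    def __init__(self, dim=256, n_layers=12, n_exits=6, vocab_size=1000):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
            for _ in range(n_layers))
        self.exit_every = n_layers // n_exits
        self.exit_heads = nn.ModuleList(nn.Linear(dim, vocab_size) for _ in range(n_exits))

    def forward(self, x):                      # x: (batch, time, dim) features
        logits = []
        for i, layer in enumerate(self.layers, start=1):
            x = layer(x)
            if i % self.exit_every == 0:
                logits.append(self.exit_heads[len(logits)](x))
        return logits                          # one (batch, time, vocab) tensor per exit

# joint early-exit objective: sum of CTC losses over all exits
# ctc = nn.CTCLoss(blank=0)
# exit_logits = model(features)
# loss = sum(ctc(l.log_softmax(-1).transpose(0, 1), targets, input_lengths, target_lengths)
#            for l in exit_logits)
```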
In this work, we investigate the use of early-exits applied to Conformer neural architectures, evaluated on three popular ASR benchmarks: LibriSpeech [9], TED-LIUM [10], and VoxPopuli [11]. While previous work on early-exit models for ASR mainly focused on inference exit selection using pre-trained large-scale models [12, 13, 14], we investigate training 3 different models, both from scratch and starting from pre-trained backbones, using different early-exit losses as depicted in Fig. 2. We demonstrate that training the upstream network with the combined early-exit losses outperforms single-exit models that optimize only the loss of the final layer. Interestingly, early-exit training is found to be more effective when training the model from scratch, as opposed to fine-tuning an existing model. Overall, our contributions are:
1. We investigate early-exit training strategies for 3 different models, those trained from scratch as well as those initialized from pre-trained self-supervised models with different losses, showing that training the models from scratch is beneficial.
2. We compare early-exit selection methods based on entropy and confidence, and show that the N-best posterior provides a slightly better trade-off than entropy.
3. To substantiate our claims, we perform experiments on 3 popular ASR benchmarks: LibriSpeech, TED-LIUM, and VoxPopuli.
Figure 1: Example of _resource-aware_ and _result-aware_ use of early-exits. On the left: the micro-controller can afford only two layers; the server can process the whole model. On the right: the first input requires processing the whole network; in the second case after 2 blocks the model already produces the best transcription.
## 2 Related Work
"Early-exit" was introduced for computer vision in BranchyNet [7] by adding two branches to AlexNet [15]. The authors optimized the joint loss of the exits and also defined a confidence measure, based on the entropy of the output class distribution, to decide the exit level. More recently, Scardapane et al. [16] provided a theoretical framework for multi-exit neural architectures. Early classifiers have also been used on tiny (KB-sized) models [17]. Beside early-exit, other methods for dynamically selecting the model architecture for efficient inference (such as HydraNet [18]) have also been explored.
In speech recognition, early-exit was first introduced in HuBERT-EE [12] to speed up inference for a pre-trained HuBERT [19] model according to confidence measures, based on CTC confidence or output entropies, with no significant performance degradation. Similarly, Zaiem et al. [14] investigated different fine-tuning strategies in the context of a large pre-trained WavLM [20] model, comparing them with approaches based on layer removal and input down-sampling. The overthinking issue of ASR encoders was also analysed in [21], where the authors reported theoretical lower bounds of speed/quality trade-offs for early-exit strategies. Exit selection strategies were proposed based on comparison between successive exits in terms of output distribution and transcriptions. Similar investigations using the entropy of the output distribution have also been conducted for recurrent neural networks [22].
All of the above investigations [12, 13, 14] employ pre-trained models by fine-tuning the transformer component, as is common for ASR. They are primarily _focused on efficient inference_ by selecting the best early exit according to some criteria. An analogous observation can be made for natural language processing (NLP), where early-exit research has focused on accelerating inference of large pre-trained language models such as BERT [23, 24, 25]. Conversely, in this work, our objective is to _understand the training dynamics_ of early-exit models (both trained from scratch and initialized from large pretrained models) by conducting exhaustive experiments on multiple datasets. We demonstrate that training the model from scratch, with joint optimisation of all exits, provides significant performance improvements as compared to single-exit and pre-trained models (the latter, in particular, at the lowest exits).
## 3 Early-exit models for ASR
Given an input sequence \(\mathbf{X}\), such as a raw waveform \(\{x_{1},\dots,x_{N}\}\) or acoustic features \(\{\mathbf{x}_{1},\dots,\mathbf{x}_{T}\}\), where \(\mathbf{x}_{t}\in\mathbb{R}^{d}\), an ASR system estimates the output sequence \(\hat{\mathbf{y}}\) as
\[\hat{\mathbf{y}}=\arg\max_{\mathbf{y}}P(\mathbf{y}|\mathbf{X}), \tag{1}\]
where \(\mathbf{y}\in\mathcal{Y}^{\star}\), for some vocabulary \(\mathcal{Y}\), such as graphemes, phonemes, or BPE units. The distribution \(P(\mathbf{y}|\mathbf{X})\) is usually estimated using a parameterized model \(\Theta\) (such as a neural network), i.e., \(P(\mathbf{y}|\mathbf{X};\Theta)\), which is learned using input-output pairs (\(\mathbf{X}\),\(\mathbf{y}\)).
For convenience, \(\Theta\) is often factored into an encoder, which extracts high-dimensional representations \(\mathbf{h}_{1}^{T}\) from \(\mathbf{X}\), and a decoder, which maps \(\mathbf{h}_{1}^{T}\) to the output sequence \(\mathbf{y}_{1}^{U}\). Since \(U\ll T\) in general, ASR decoders either use (i) an _alignment function_ (\(\mathcal{B}:\mathbf{a}_{1}^{T}\to\mathbf{y}_{1}^{U}\)) for sequence training, or (ii) an _attention mechanism_ with label-based cross-entropy training. We apply early-exit to ASR by adding decoders at several intermediate layers of the encoder (as shown in Fig. 2). Assuming that \(M\) such intermediate exits are added (with hypothesis \(\hat{\mathbf{y}}^{1},\dots,\hat{\mathbf{y}}^{M}\)), the overall model is trained by optimizing the joint objective
\[\mathcal{L}_{EE}(\hat{\mathbf{y}}^{1},\dots,\hat{\mathbf{y}}^{M},\mathbf{y})= \sum_{m=1}^{M}\mathcal{L}(\hat{\mathbf{y}}^{m},\mathbf{y}), \tag{2}\]
where \(\mathcal{L}(\hat{\mathbf{y}}^{m},\mathbf{y})=-\log P(\mathbf{y}|\mathbf{X}; \Theta_{m})\), and \(\Theta_{m}\) denotes the subset of \(\Theta\) used for exit \(m\). In this work, we implement early-exit for several choices of the encoder and decoder, resulting in three models with different complexities, as described below. Hyperparameters for the models are summarized in Tab. 1.
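As a concrete illustration of the joint objective in Eq. (2) with CTC as the per-exit loss, a minimal PyTorch-style sketch is given below; the function and variable names are our own and are not taken from the released code.

```python
import torch.nn.functional as F

def early_exit_ctc_loss(exit_log_probs, targets, input_lengths, target_lengths):
    """Joint early-exit objective (Eq. 2): sum of the CTC losses of all exits.

    exit_log_probs : list of (T, batch, vocab) tensors, one per exit,
                     already log-softmax normalized.
    targets        : padded (batch, S) tensor of label indices.
    """
    total = 0.0
    for log_probs in exit_log_probs:
        total = total + F.ctc_loss(
            log_probs, targets, input_lengths, target_lengths,
            blank=0, zero_infinity=True)
    return total
```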
**Conformer-CTC:** The Conformer encoder [26] is used to obtain \(\mathbf{h}_{1}^{T}\), and the decoder is a linear layer with softmax. The intermediate loss function is connectionist temporal classification (CTC) [27], which is given as
\[\mathcal{L}_{\mathrm{CTC}}(\hat{\mathbf{y}},\mathbf{y})=-\log\sum_{\mathbf{a}_ {1}^{T}\in\mathcal{B}^{-1}(\mathbf{y}_{1}^{U})}\prod_{t=1}^{T}P(a_{t}|\mathbf{ h}_{1}^{T}), \tag{3}\]
where \(a_{t}\in\mathcal{Y}\cup\{\phi\}\), and \(\mathcal{B}\) maps \(\mathbf{a}_{1}^{T}\) to \(\mathbf{y}_{1}^{U}\) by removing repeated tokens and \(\phi\). We use a 12-layer encoder, and insert intermediate exits at all even-numbered layers.
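A sketch of how such exit heads could be attached to every second layer of a 12-layer encoder is shown below; `make_layer` is a placeholder factory for a Conformer block (for instance, a `torch.nn.TransformerEncoderLayer(256, 8)` could stand in), and the dimensions are illustrative only.

```python
import torch.nn as nn

class EarlyExitEncoder(nn.Module):
    """Encoder stack with a linear CTC exit head after every even-numbered layer."""

    def __init__(self, make_layer, num_layers=12, d_model=256, vocab_size=256):
        super().__init__()
        self.layers = nn.ModuleList([make_layer() for _ in range(num_layers)])
        # one exit head at layers 2, 4, ..., num_layers
        self.exit_heads = nn.ModuleDict({
            str(i): nn.Linear(d_model, vocab_size)
            for i in range(2, num_layers + 1, 2)
        })

    def forward(self, x):                      # x: (T, batch, d_model)
        exit_log_probs = []
        for i, layer in enumerate(self.layers, start=1):
            x = layer(x)
            if str(i) in self.exit_heads:
                logits = self.exit_heads[str(i)](x)
                exit_log_probs.append(logits.log_softmax(dim=-1))
        return exit_log_probs                  # one (T, batch, vocab) tensor per exit
```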
\begin{table}
\begin{tabular}{l r r r} \hline \hline
**Feature** & \multicolumn{1}{c}{**Conformer**} & \multicolumn{1}{c}{**Conformer**} & \multicolumn{1}{c}{**Wav2Vec2**} \\ & \multicolumn{1}{c}{**CTC**} & \multicolumn{1}{c}{**AED**} & \multicolumn{1}{c}{**CTC**} \\ \hline \# params (M) & 31.0 & 13.3 & 94.0 \\ Encoder & 12-layer Conf. & 12-layer Conf. & 12-layer Transf. \\ Attention dim. & 256 & 144 & 768 \\ Number heads & 8 & 4 & 8 \\ Feed-forward dim. & 2048 & 1024 & 3072 \\ Decoder & Linear & 4-layer Transf. & Linear \\ Inputs & 80-d MFCC & 80-d MFCC & Waveform \\ Loss function & \(\mathcal{L}_{\mathrm{CTC}}\) & \(\mathcal{L}_{\mathrm{CTC}}+\mathcal{L}_{\mathrm{CE}}\) & \(\mathcal{L}_{\mathrm{CTC}}\) \\ Output units & BPE & BPE & Grapheme \\ LM rescoring & ✗ & ✓ & ✗ \\ Data augmentation & ✗ & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 1: Hyperparameters for the early-exit model architectures shown in Fig. 2.
Figure 2: Early-exit model architectures (from left to right): Conformer-CTC, Conformer-AED, and Wav2Vec2-CTC. Conformer-based models are entirely trained from scratch. Wav2Vec2-CTC is initialized with the pre-trained model and fine-tuned with early-exit losses, freezing the convolutional feature extractor.
**Conformer-AED:** To test the robustness of early-exits with complex decoders, we use an attention-based encoder-decoder (AED) model [28]. We retain the Conformer encoder as above, but replace the linear decoder with four transformer layers with cross-attention on \(\mathbf{h}_{1}^{T}\). This decoder contains two output heads, trained with a CTC loss and a sequence-to-sequence cross-entropy loss respectively [29]. The overall loss function is given as
\[\mathcal{L}_{\mathrm{AED}}(\hat{\mathbf{y}},\mathbf{y})=\lambda_{\mathrm{CTC}} \mathcal{L}_{\mathrm{CTC}}(\hat{\mathbf{y}},\mathbf{y})+\lambda_{\mathrm{CE}} \mathcal{L}_{\mathrm{CE}}(\hat{\mathbf{y}},\mathbf{y}), \tag{4}\]
where \(\mathcal{L}_{\mathrm{CE}}(\hat{\mathbf{y}},\mathbf{y})=-\sum_{u=1}^{U}\log P(y _{u}|\mathbf{h}_{1}^{T},\mathbf{y}_{1}^{u-1})\), and \(\lambda\)'s are hyperparameters. Following the SpeechBrain recipe [30], we set \(\lambda_{\mathrm{CTC}}\) and \(\lambda_{\mathrm{CE}}\) to 0.3 and 0.7, respectively. During inference, only the cross-entropy head is used, and a transformer-based language model trained with the same tokenization is used to rescore the hypothesis.
**Wav2Vec2-CTC:** Both the above models are trained _from scratch_ by optimizing Eq. (2). We also apply early-exit fine-tuning on a _pre-trained_ Wav2Vec-2.0 [31] encoder using the joint CTC losses (Eqs. 2-3). Unlike the above models, this model operates on raw waveforms processed by a convolutional feature extractor, which is normally frozen during fine-tuning.
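A minimal sketch of this fine-tuning configuration is shown below; the attribute name `feature_extractor` and the optimizer choice are illustrative assumptions rather than the actual Wav2Vec 2.0 implementation.

```python
import torch

def configure_early_exit_finetuning(model, learning_rate=1e-4):
    """Freeze the convolutional front-end and fine-tune the rest with the joint loss.

    `model` is assumed to expose a `feature_extractor` submodule (conv front-end);
    the transformer encoder and the exit heads remain trainable.
    """
    for p in model.feature_extractor.parameters():
        p.requires_grad = False                       # frozen during fine-tuning
    trainable = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.AdamW(trainable, lr=learning_rate)
```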
## 4 Early-Exit Selection
To decide the exit for an early-exit model, one can use a measure of its uncertainty: an exit layer is selected when its uncertainty drops below a given threshold, which is, in turn, estimated to guarantee a desired performance level. Since the outputs of the encoder layers are converted to posterior probabilities by passing them to a softmax module, a suitable measure of their uncertainty is their average frame entropy:
\[\Xi^{m}=-\frac{1}{T|\mathcal{Y}|}\sum_{t=1}^{T}\sum_{y\in\mathcal{Y}}P[y|\mathbf{h}_{t}^{m}]\log(P[y|\mathbf{h}_{t}^{m}]) \tag{5}\]
where \(P[y|\mathbf{h}_{t}^{m}]\) is the probability in the \(m^{th}\) encoder output at time \(t\) for each output token \(y\in\mathcal{Y}\). While entropy is a common choice in the literature, we also investigate a metric based on an estimate of the sentence confidence. This is computed by applying a softmax to the scores of the N-best hypotheses provided by each decoder:
\[\Psi^{m}=\frac{e^{s_{1}^{m}}}{\sum_{k=1}^{K}e^{s_{k}^{m}}} \tag{6}\]
where \(s_{k}^{m}\) is the log-probability of the \(k^{th}\) hypothesis at layer \(m\), i.e. \(s_{k}^{m}=\log(P[\hat{\mathbf{y}}_{k}^{m}|\mathbf{X};\mathbf{\Theta}_{m}])\), and \(K\) is the number of N-best hypotheses. Preliminary experiments, aimed at finding the optimal performance/complexity trade-off, suggested the value \(K=300\).
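Both selection rules can be sketched as follows, assuming each exit provides log-softmax frame posteriors and a list of N-best hypothesis log-probabilities sorted best-first (names and shapes are illustrative):

```python
import torch

def frame_entropy(log_probs):
    """Average frame entropy of one exit (Eq. 5); log_probs: (T, vocab) log-softmax output."""
    probs = log_probs.exp()
    return -(probs * log_probs).sum() / (log_probs.shape[0] * log_probs.shape[1])

def nbest_confidence(nbest_scores):
    """Softmax weight of the best hypothesis among the N-best scores (Eq. 6)."""
    return torch.softmax(torch.as_tensor(nbest_scores), dim=0)[0].item()

def select_exit(per_exit_log_probs, threshold):
    """Return the first exit whose average frame entropy falls below `threshold`."""
    for m, log_probs in enumerate(per_exit_log_probs, start=1):
        if frame_entropy(log_probs).item() < threshold:
            return m
    return len(per_exit_log_probs)       # fall back to the last exit
```

The confidence-based rule is analogous: the first exit whose \(\Psi^{m}\) exceeds the threshold is selected.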
## 5 Experiments
We carry out experiments using LibriSpeech [9], TED-LIUM [10], and VoxPopuli [11]. LibriSpeech contains around 1,000 hours of read-aloud speech (audiobooks), partitioned into \(\approx\)960h for training and \(\approx\)20h for evaluation. TED-LIUM (release 3, [10]) comprises around 452 training hours of transcribed English speeches (from TED video conferences) and around 6 hours for evaluation. Finally, VoxPopuli is a multi-lingual corpus of around 400K hours of recordings (collected from European Parliament events). For this work, we used its English subset, which consists of around 543 hours of training recordings and around 60 hours for evaluation.
### Implementation Details
As mentioned above, we consider 3 different models: Conformer-CTC (Eq. 3), Conformer-AED (Eq. 4), and Wav2Vec2-CTC. The two Conformer models take as input \(80\) Mel Frequency Cepstral Coefficients (MFCCs). This MFCC sequence is passed through a series of 1D convolution sub-sampling layers. The output of this block is applied to a positional encoding module that feeds a stack of 12 Conformer blocks. The _Wav2Vec2.0_ model (hereinafter referred to as _Wav2Vec2-CTC_) also consists of a convolutional feature extractor followed by a 12-layer self-attention encoder, but takes as input raw waveforms. Both Conformer-CTC and Conformer-AED use a byte pair encoding (BPE) based tokenizer [32], with 256 and 5000 tokens respectively. The exit decoders of Wav2Vec2-CTC instead produce 32 grapheme-based tokens (28 characters + 1 blank token + 2 sentence boundary tokens + 1 unknown token), as per its official recipe.
The code for both training and inference for the Conformer-CTC and Wav2Vec2-CTC models is available1, while the Conformer-AED model is trained following the SpeechBrain recipe. Tab. 1 summarizes the main hyperparameters for the 3 models.
Footnote 1: [https://github.com/augustgw/early-exit-transformer](https://github.com/augustgw/early-exit-transformer) and [https://github.com/augustgw/wav2vec2-ece](https://github.com/augustgw/wav2vec2-ece)
## 6 Results
All results reported in this section are expressed in terms of word error rates (WERs) computed on the standard test partitions of the three datasets2. Tab. 2 reports the performance on LibriSpeech at different exits, both when training the Conformer-CTC and Conformer-AED models from scratch and when fine-tuning Wav2Vec2-CTC. For each model we also report the performance of the corresponding single-exit model for comparison.
Footnote 2: Results on the dev-sets are not reported for the sake of space but are available.
In our settings, the performance for the Conformer-CTC model with 12 layers is 6.6% on test-clean and 17.7% on test-other. As expected, the WER is higher in the lower layers. The performance significantly decreases only in the lowest two exits (exits 2 and 4), while it remains not far from the best one (that of layer 12) in the middle layers (i.e., 6, 8 and 10), which, however, require significantly fewer parameters. Similar trends are achieved with the Conformer-AED model but with significantly better absolute performance (2.3% and 6.0% WER in the highest layer for test-clean and test-other, respectively). This absolute improvement is attributed both to the use of transformer-based decoders as well as to the language model rescoring, allowing the model to reach state-of-the-art on LibriSpeech. Tab. 2 shows that the Wav2Vec2-CTC model exhibits a behaviour similar to the two Conformer models. However, since the pre-trained Wav2Vec2-CTC model has been optimised solely on the loss of the highest layer, it leads to very high WERs at the lower exits with an evident performance gain already at the 8th layer. This degradation is much less evident in the Conformer models trained from scratch.
In summary, although smaller and trained on less data, the Conformer-CTC/AED models perform better than Wav2Vec2-CTC in the last 3 layers. **These results suggest that for early-exit architectures, training a model from scratch is more efficient than fine-tuning a large and accurate model not pre-trained with early-exits.** It is worth noting that the same trends are observed considering different decoders, different training losses, and independently of the use of a language model.
Another important outcome emerges as we compare the performance achieved with the early-exit models with those obtained
with the corresponding single-exit models (column "no-EE" in Tab. 2). Apart from the lowest exits (layers 2 and 4), the single-exit Conformer-CTC/AED models deliver worse WERs than the early-exit counterpart. This indicates the beneficial effects of the compound loss, acting as a regularizer and improving both robustness and generalization. This observation is in line with previous studies applying losses at lower layers, although with different granularity than an ASR decoder [16, 7, 33]. In other words, using a single model with multiple exits not only reduces the computational burden of training multiple single exit models but also delivers better performance. Note that this claim is not valid for the top layer of the Wav2Vec2-CTC model where the performance decreases (from 3.4% to 4.3% on test-clean and from 8.6% to 12.2% on test-other) when fine-tuning with the early-exit loss. In this case, however, it has to be considered that the model has not been trained from scratch but, as usual, its convolutional feature encoder has been frozen and only the transformer encoder module has been fine-tuned.
Finally, experiments on TED-LIUM and VoxPopuli, shown in Tab. 3, confirm the observations drawn on LibriSpeech. In these experiments, we also observe superior performance in models trained with the compound early-exit loss as compared to those trained with single exits, for layers higher than \(4\).
### Exit selection during inference
Having assessed the efficacy of early-exit architectures for resource-aware processing (i.e., for each individual exit), we now analyse the behavior of the different models when selecting the exit, using either the average frame entropy (Eq. 5) or the sentence confidence (Eq. 6), i.e., addressing a result-aware solution. In both cases, we follow the common practice, implementing a thresholding approach: given a predefined threshold, we select the first exit whose entropy is below that value or whose posterior is above. Previous studies [14, 13] have observed that although the overall performance of lower layers is inferior to those processing the whole network (i.e., the final layers), in many cases the performance is on par. Being able to identify those cases would considerably reduce the overall computational cost. Fig. 3 shows the average exit (y-axis) with the corresponding WER (x-axis) when varying the selection threshold for the three models and the two metrics. The closer the curve to the chart origin, the better. We observe that, as expected, better models deliver better performance in exit selection: the Conformer-AED lines are well below the others. Sentence confidence (dotted lines) on average selects lower exits than entropy at the same WER values.
## 7 Conclusion and future works
In this paper, we investigated early-exit architectures for ASR by comparing the training and inference of three models, two based on a Conformer architecture and one based on Wav2Vec2. We demonstrated the benefits of training models from scratch using early-exit, as compared to fine-tuning a pre-trained model, on three datasets. Future works will investigate weighting schemes for the compound loss in Eq. 2 or alternative training strategies [16], including distillation (similar to [33]) from upper layers of the model.
**Acknowledgments.** This work was partially funded by the PNRR ICSC National Research Centre for High Performance Computing, Big Data and Quantum Computing (CN00000013), under the NRRP MUR program funded by the NextGenerationEU. We also acknowledge support from the JSALT 2023 workshop, hosted at Le Mans University, France, and sponsored by Johns Hopkins University with unrestricted gifts from Amazon, Facebook, Google, and Microsoft.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Layer**} & \multicolumn{4}{c}{**Conformer-CTC**} & \multicolumn{4}{c}{**Conformer-AED**} & \multicolumn{4}{c}{**Wav2Vec2-CTC**} \\ \cline{2-13} & \multicolumn{2}{c}{test-clean} & \multicolumn{2}{c}{test-other} & \multicolumn{2}{c}{test-clean} & \multicolumn{2}{c}{test-other} & \multicolumn{2}{c}{test-clean} & \multicolumn{2}{c}{test-other} \\ \cline{2-13} & no-EE & EE & no-EE & EE & no-EE & EE & no-EE & EE & no-EE & EE & no-EE & EE \\ \hline
2 & 17.6 & 23.9 & 36.1 & 43.8 & 18.9 & 20.1 & 38.0 & 40.1 & 35.7 & 33.7 & 56.7 & 56.0 \\
4 & 9.8 & 11.6 & 24.3 & 25.7 & 12.8 & 12.5 & 25.8 & 25.2 & 17.4 & 17.4 & 35.5 & 36.7 \\
6 & 7.6 & 6.8 & 20.0 & 18.1 & 8.4 & 7.7 & 20.1 & 17.1 & 10.7 & 9.6 & 24.8 & 23.7 \\
8 & – & 5.9 & – & 16.3 & – & 4.4 & – & 11.5 & – & 5.8 & – & 15.9 \\
10 & – & 5.2 & – & 15.8 & – & 2.8 & – & 6.9 & – & 4.5 & – & 12.6 \\
12 & 6.5 & 5.1 & 17.7 & 15.1 & 2.5 & 2.3 & 6.1 & 6.0 & 3.4 & 4.3 & 8.6 & 12.2 \\ \hline \hline \end{tabular}
\end{table}
Table 2: WERs on the **LibriSpeech** at different exits, obtained with the 3 models under investigation. “layer” indicates at which layer the exit is located or the number of layers of the single-exit model. EE indicates that the model has been trained with the early-exit losses while no-EE refers to the single-exit model.
Figure 3: Average-exit selection and WER varying the exit selection threshold for the 3 models and using both entropies and sentence confidence as exit metrics.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{**Layer**} & \multicolumn{2}{c}{**TED-LIUM**} & \multicolumn{2}{c}{**VoxPopuli**} \\ \cline{2-5} & no-EE & EE & no-EE & EE \\ \hline
2 & 42.7 & 43.8 & 27.3 & 36.7 \\
4 & 35.4 & 23.4 & 19.7 & 21.1 \\
6 & 25.5 & 18.0 & 18.7 & 17.3 \\
8 & – & 16.1 & – & 15.4 \\
10 & – & 14.9 & – & 14.7 \\
12 & 16.4 & 14.6 & 16.3 & 14.3 \\ \hline \hline \end{tabular}
\end{table}
Table 3: WERs on the **TED-LIUM** and **VoxPopuli** at different exits, obtained by training the Conformer-CTC model from scratch.
2309.08880 | Data-Driven H-infinity Control with a Real-Time and Efficient
Reinforcement Learning Algorithm: An Application to Autonomous
Mobility-on-Demand Systems | Reinforcement learning (RL) is a class of artificial intelligence algorithms
being used to design adaptive optimal controllers through online learning. This
paper presents a model-free, real-time, data-efficient Q-learning-based
algorithm to solve the H$_{\infty}$ control of linear discrete-time systems.
The computational complexity is shown to reduce from
$\mathcal{O}(\underline{q}^3)$ in the literature to
$\mathcal{O}(\underline{q}^2)$ in the proposed algorithm, where $\underline{q}$
is quadratic in the sum of the size of state variables, control inputs, and
disturbance. An adaptive optimal controller is designed and the parameters of
the action and critic networks are learned online without the knowledge of the
system dynamics, making the proposed algorithm completely model-free. Also, a
sufficient probing noise is only needed in the first iteration and does not
affect the proposed algorithm. With no need for an initial stabilizing policy,
the algorithm converges to the closed-form solution obtained by solving the
Riccati equation. A simulation study is performed by applying the proposed
algorithm to real-time control of an autonomous mobility-on-demand (AMoD)
system for a real-world case study to evaluate the effectiveness of the
proposed algorithm. | Ali Aalipour, Alireza Khani | 2023-09-16T05:02:41Z | http://arxiv.org/abs/2309.08880v1 | Data-Driven H-infinity Control with a Real-Time and Efficient Reinforcement Learning Algorithm: An Application to Autonomous Mobility-on-Demand Systems
###### Abstract
Reinforcement learning (RL) is a class of artificial intelligence algorithms being used to design adaptive optimal controllers through online learning. This paper presents a model-free, real-time, data-efficient Q-learning-based algorithm to solve the H\({}_{\infty}\) control of linear discrete-time systems. The computational complexity is shown to reduce from \(\mathcal{O}(\underline{q}^{3})\) in the literature to \(\mathcal{O}(\underline{q}^{2})\) in the proposed algorithm, where \(\underline{q}\) is quadratic in the sum of the size of state variables, control inputs, and disturbance. An adaptive optimal controller is designed and the parameters of the action and critic networks are learned online without the knowledge of the system dynamics, making the proposed algorithm completely model-free. Also, a sufficient probing noise is only needed in the first iteration and does not affect the proposed algorithm. With no need for an initial stabilizing policy, the algorithm converges to the closed-form solution obtained by solving the Riccati equation. A simulation study is performed by applying the proposed algorithm to real-time control of an autonomous mobility-on-demand (AMoD) system for a real-world case study to evaluate the effectiveness of the proposed algorithm.
## 1 Introduction
Nowadays, people are able to use mobility-on-demand (MoD) services to travel and share vehicles with other people by sending requests through mobile devices. MoD can be replaced by AMoD due to the lower costs of autonomous vehicle (AV) operations [1]. The cooperative nature of AVs is in contrast with selfish taxi drivers seeking to maximize their profits. By optimizing routing, rebalancing, charging schedules, etc., central coordination can minimize externalities in AMoD systems. Further, customers do not have to drive, which allows them to save time on commuting. Various companies started to develop this technology in response to such promising benefits.
Fleet management studies focus on optimizing vehicle routing to rebalance empty vehicles and serve customers in the network. They aim to reduce the operational costs and the waiting times of customers. The AMoD system avoids the costs of rebalancing drivers to drive vehicles from oversupplied origins to undersupplied origins. Moreover, similar to the current car-sharing companies such as Car2go, AMoD provides excellent convenience for one-way trips since users do not have to return vehicles to the origin of the trip. AMoD services, therefore, provide opportunities for efficient fleet management.
If the AMoD framework is not appropriately controlled, it can run into an imbalanced system, i.e., the stations frequently chosen as destinations are oversupplied, while regions with a high number of originating trips are undersupplied. To circumvent this issue, a rebalancing strategy is needed to move vehicles to high-demand stations. To design such a rebalancing strategy, we need a model that captures the dynamics of the AMoD system.
### Literature Review
At a high level, approaches to tackling operational AMoD issues may be divided into two categories: model-free and model-based.
#### 1.1.1 Model-free
Model-free techniques for AMoD fleet rebalancing can be characterized as centralized or decentralized. A centralized agent rebalances cars in order to optimize specific objectives, such as travel time. The authors in [2] present a RL approach that uses a dynamic pricing AMoD framework aiming to maximize the profit and rebalance the fleet. On the other hand, [3] emphasizes customer waiting time minimization. A RL approach for the taxi dispatching and rebalancing problem is introduced in [4] to maximize taxi driver long-term revenues utilizing Q-learning and discrete states and actions on a grid-shaped map. A double deep Q-learning architecture is proposed in [5] for vehicle routing in a ride-sharing AMoD system, where idle cars are rebalanced to fulfill future demands. [6] employs RL and mixed integer linear programming (MILP) for fleet rebalancing and uses hierarchical binary partitioning and tabular Q-learning for RL.
Decentralized approaches, as contrasted with centralized methods, enable each vehicle to act as its own agent and be trained in either a cooperative or competitive fashion. In [7], a ride-sharing architecture for vehicles utilizing a deep Q-network to learn the optimal policies is proposed for individual vehicles in a distributed and uncoordinated fashion. Passenger satisfaction and vehicle utilization are the two most important objectives of the framework. The authors in [8] employ multi-agent RL, where each vehicle functions as an individual agent. Similarly, [9] provides a dynamic ride-sharing system in which both passenger assignments and fleet rebalancing are learned and performed by individual agents using multi-agent RL. Furthermore, [10] addresses rebalancing idle vehicles by developing a deep Q-learning approach.
#### 1.1.2 Model-based
Model-based AMoD techniques attribute an explicit model to system dynamics and utilize it to determine optimal decisions. Despite their complexity, they are powerful and allow us to examine the model's properties, including convergence. Numerous studies proposed and developed system models including queuing [11, 12], fluidic [13], network flow [14, 15, 16], and data-driven [17] approaches. Further classifications of model-based approaches include mathematical optimization and simulation-based methods. Various studies have tackled the rebalancing of vehicle fleets as a complex optimization problem. Combining the model predictive control (MPC) algorithms with the network flow model offers an efficient tool for expressing complex constraints. For example, [18] implements an MPC algorithm leveraging historical data and neural networks to develop a model for short-term demand forecasts to address the dispatching and rebalancing problem. Moreover, to improve social welfare, [19] suggests a real-time MPC framework that optimizes a weighted combination of vehicle mileage and passenger travel time. In addition, a scalable MPC control has been developed in [16] to keep the system balanced.
Reinforcement learning (RL) is one of the three fundamental machine learning paradigms, alongside supervised learning and unsupervised learning, which has a long history [20]. The RL discipline has also been reinvented by recent developments in machine learning (ML), particularly employing deep networks. The dynamical system's model is typically unknown in RL settings, and the ideal controller is discovered by engagement with the environment. It is fundamental for the RL algorithms to deliver assured stability and performance as the range of RL extends to more difficult tasks. Due to deep networks' inherent complexity and the intricacy of the tasks, we are still a long way from being able to analyze RL algorithms. This encourages thinking about a case study that is simplified and allows for analysis. There are well-known challenges associated with model-free RL algorithms. Examples of these challenges include the necessity of a trade-off between policy exploitation and exploration, problem-dependent reward shaping, and the design of an appropriate neural architecture for the policy network. In addition, the majority of them are neither theoretically tractable nor can their convergence be investigated.
The H\({}_{\infty}\) problem is a classical control problem where the dynamical system follows linear dynamics and the cost function to be minimized is quadratic. It is a robust control method that is implemented to attenuate the effects of disturbances on the performance of dynamical systems. It is a great benchmark for such studies since the closed-form solution for H\({}_{\infty}\) is available. Moreover, it is theoretically tractable in comparison to the RL algorithms.
As a result of the aforementioned factors, the linear quadratic (LQ) problem has received greater attention from the RL community [21, 22, 23]; see also [24] for a thorough overview of RL methods and their properties for LQ problems. In addition, the convergence of policy gradient methods for the linear quadratic regulator (LQR) problem is shown in [25]. RL has also been applied to solving optimal control problems in an uncertain environment [26, 27, 28]. Inherently, the Q-learning algorithm does not eliminate the impact of the probing noise, which is employed to excite the system, in the Bellman equation when evaluating the value function. The algorithm's convergence may be impacted, and this may lead to bias. In [28], two separate policies are used to update the algorithm to cancel the effects of probing noise. However, enough data must be generated in each iteration to estimate the policies.
### Contributions
In this paper, we propose an RL algorithm to solve the H\({}_{\infty}\) control of linear discrete-time systems. It is model-free, real-time, and data-efficient, i.e., the parameters of the actor and critic networks are updated using a single data sample. This feature reduces the order of computational complexity to square (\(\mathcal{O}(\underline{q}^{2})\)), where \(\underline{q}\) is the number of parameters being estimated, compared to the cube order (\(\mathcal{O}(\underline{q}^{3})\)) of the state-of-the-art algorithms in the literature (e.g., [26, 28]). The proposed RL algorithm does not suffer from bias if probing noise is used. Moreover, a sufficient amount of probing noise is only needed in the first iteration, i.e., the policy used to generate data (the behavior policy) differs from the policy being evaluated and improved (the estimation or target policy) only in the first iteration. The convergence of the proposed algorithm is shown. Moreover, we apply the proposed algorithm to an AMoD system, which can be modeled as an H\({}_{\infty}\) control problem for linear discrete-time systems.
In summary, the contributions of this paper can be expressed as follows:
1. Proposed a model-free, real-time, and data-efficient algorithm to solve the H\({}_{\infty}\) control of linear discrete-time systems.
2. Reduced the order of computational complexity form cube (\(\mathcal{O}(\underline{q}^{3})\)) in the state-of-the-art algorithms in the literature to square (\(\mathcal{O}(\underline{q}^{2})\)).
3. Discussed the properties of the proposed algorithm and proved its convergence.
4. Applied the proposed algorithm to an AMoD system which can be modeled as an H\({}_{\infty}\) control of linear discrete-time systems.
### Organization
The remainder of the paper is organized as follows. Section 4 presents the problem formulation and model for the AMoD system. In Section 2, the discrete-time H\({}_{\infty}\) control problem is formulated. This section is concluded by implementing the Q-learning algorithm. In Section 3, the online implementation of the proposed algorithm and its properties are analyzed. Besides, the convergence of the proposed algorithm is proved. We present the results of the numerical case study example in Section 5. Finally, the paper is concluded in Section 6.
### Notation
The symbols \(\mathbf{1}_{m}\) and \(\mathbf{0}_{n}\) denote column vectors of dimension \(m\) and \(n\) with all entries equal to \(1\) and \(0\), respectively. Given a vector \(p\in\mathbb{R}^{n}\), we define \(\tilde{P}=\mathrm{diag}\left(p\right)\in\mathbb{R}^{n\times n}\) as a diagonal matrix with the elements of the vector \(p\) on the diagonal. \(\mathrm{vecs}(P)=\left[p_{11},...,p_{1n},p_{22},...,p_{2n},...,p_{nn}\right]^{T}\) is the vectorization of the upper-triangular part of a symmetric matrix \(P\in\mathbb{R}^{n\times n}\), and \(\mathrm{vecv}(v)=\left[v_{1}^{2},2v_{1}v_{2},...,2v_{1}v_{n},v_{2}^{2},...,2v_ {2}v_{n},...v_{n}^{2}\right]^{T}\) is the quadratic vector of the vector \(v\in\mathbb{R}^{n}\).
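For reference, a small NumPy sketch of the \(\mathrm{vecs}(\cdot)\) and \(\mathrm{vecv}(\cdot)\) operators defined above (our own helper functions; the orderings are chosen so that \(\mathrm{vecv}(z)^{T}\mathrm{vecs}(S)=z^{T}Sz\) for symmetric \(S\)):

```python
import numpy as np

def vecs(P):
    """Stack the upper-triangular entries (including the diagonal) of a symmetric matrix."""
    n = P.shape[0]
    return np.concatenate([P[i, i:] for i in range(n)])

def vecv(v):
    """Quadratic vector of v: v_i^2 at diagonal positions, 2*v_i*v_j for i < j."""
    v = np.asarray(v, dtype=float).ravel()
    out = []
    for i in range(v.size):
        out.append(v[i] * v[i])
        out.extend(2.0 * v[i] * v[i + 1:])
    return np.array(out)
```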
### Preliminaries
Consider a directed graph \(G\left(N,A\right)\) where \(N=\{1,\ldots,n\}\) is the set of nodes and \(A=\{1,\ldots,m\}\) is the set of links. Let \(E_{\text{in}}\) and \(E_{\text{out}}\in\{0,1\}^{n\times m}\) be the in-neighbors and out-neighbors matrices. The incidence matrix \(E\in\left\{-1,0,1\right\}^{n\times m}\) can be derived by \(E=E_{\text{in}}-E_{\text{out}}\).
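A small helper (our own, for illustration) that assembles \(E_{\text{in}}\), \(E_{\text{out}}\), and \(E\) from a list of directed links could look as follows:

```python
import numpy as np

def incidence_matrices(n, links):
    """Build E_in, E_out, and E = E_in - E_out for a directed graph.

    links: list of (r, s) pairs with 0-based node indices, one pair per directed link.
    """
    m = len(links)
    E_in, E_out = np.zeros((n, m)), np.zeros((n, m))
    for a, (r, s) in enumerate(links):
        E_out[r, a] = 1.0      # link a leaves node r
        E_in[s, a] = 1.0       # link a enters node s
    return E_in, E_out, E_in - E_out
```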
## 2 Discrete-time (DT) H\({}_{\infty}\) Control Problem
Consider the following linear discrete-time system
\[x_{t+1}=\mathcal{A}x_{t}+\mathcal{B}v_{t}+\mathcal{L}d_{t}, \tag{1}\]
where \(x_{t}\in\mathbb{R}^{m_{1}}\) is the system state, \(v_{t}\in\mathbb{R}^{m_{2}}\) is the control input, and \(d_{t}\in\mathbb{R}^{m_{3}}\) is the external disturbance input.
**Assumption 1**.: _The pair \(\left(\mathcal{A},\mathcal{B}\right)\) is stabilizable, i.e., all uncontrollable modes are asymptotically stable._
We consider the standard Q-learning algorithm and discuss its properties. Since system identification is not going to be performed to estimate the parameters of systems, we use the following objective function
\[\mathcal{J}(x_{t},v_{t},d_{t})=\sum\limits_{i=t}^{\infty}r(x_{i},v_{i},d_{i}) \tag{2}\]
where
\[r(x_{i},v_{i},d_{i})=x_{i}^{T}R_{x}x_{i}+v_{i}^{T}R_{v}v_{i}-\gamma^{2}d_{i}^ {T}d_{i},\]
for a prescribed fixed value of \(\gamma\). Matrices \(R_{x}\) and \(R_{v}\) are positive semidefinite (PSD) and positive definite (PD), respectively. In the H\({}_{\infty}\) control problem, \(\gamma\) is an upper bound on the desired \(L_{2}\) gain disturbance attenuation [29]. Note that the formulation we used is similar to min-max LQ in [30] and [31]. In particular, the authors in [31] consider a nonconvex-nonconcave saddle-point problem in the policy space and show that despite its non-convexity and non-concavity, zero-sum LQ games have the property that the stationary point of the objective function with respect to the linear feedback control policies constitutes the Nash equilibrium (NE) of the game. In the zero-sum game LQ problem, it is desired to find the optimal control \(v_{t}^{*}\) and the worst-case disturbance \(d_{t}^{*}\). Note that functions in \(L_{2}\left[0,\infty\right)\) represent the signals having finite energy over infinite interval \(\left[0,\infty\right)\). That is, \(\sum\limits_{t=0}^{\infty}d_{t}^{T}d_{t}<\infty\). Moreover, using (2) and given some fixed policy for an admissible control policy \(v_{t}=K_{v}x_{t}\) and a disturbance policy \(d_{t}=K_{d}x_{t}\) the value function is defined as
\[V(x_{t},K_{v},K_{d})=\sum\limits_{i=t}^{\infty}r(x_{i},K_{v}x_{i},K_{d}x_{i}), \tag{3}\]
and the Bellman equation reads
\[V(x_{t},K_{v},K_{d})=r(x_{t},K_{v}x_{t},K_{d}x_{t})+V(x_{t+1},K_{v},K_{d}). \tag{4}\]
Since \(V(x_{t},K_{v},K_{d})=Q(x_{t},K_{v}x_{t},K_{d}x_{t})\), the Bellman equation under the policy gains \(K_{v}\) and \(K_{d}\) can be rewritten as follows:
\[Q(x_{t},v_{t},d_{t})= r(x_{t},v_{t},d_{t})+V(x_{t+1},K_{v},K_{d}), \tag{5}\]
and the Bellman optimality equation for the Q-function under the optimal policy gains \(K_{v}^{*}\) and \(K_{d}^{*}\) is
\[Q^{*}(x_{t},v_{t},d_{t})= r(x_{t},v_{t},d_{t})+Q^{*}(x_{t+1},K_{v}^{*}x_{t+1},K_{d}^{*}x_{t+1}). \tag{6}\]
### Derivation of Q-learning Algorithm
We use the Q-function to develop a Q-learning algorithm ([20, 32]) to solve for the DT H\({}_{\infty}\) Control Problem using the Bellman equation (5). This routine is an actor-critic class of reinforcement learning, where the critic agent evaluates the current control policy using methods based on the policy evaluation. After this evaluation is completed, the action is updated by an actor agent based on the policy improvement. The learning process starts with an initial Q-function \(Q^{0}(x,v,d)=0\) in the Q-learning that is not necessarily optimal, and then derives \(Q^{1}(x,v,d)\) by solving Eq. (7) with \(i=0\).
#### 2.1.1 Policy evaluation
We evaluate the policy by using \(Q\)-function in (7).
\[Q^{i+1}(x_{t},v_{t},d_{t})= r(x_{t},v_{t},d_{t})+Q^{i}(x_{t+1},K_{v}^{i}x_{t+1},K_{d}^{i}x_{t+1}). \tag{7}\]
#### 2.1.2 Policy improvement
The control and disturbance policies will be improved as follows:
\[K_{v}^{i+1}= \arg\min_{K_{v}}Q^{i+1}(x_{t},v_{t},d_{t})\] \[K_{d}^{i+1}= \arg\max_{K_{d}}Q^{i+1}(x_{t},v_{t},d_{t}).\]
Let \(z_{t}=[x_{t}^{T},v_{t}^{T},d_{t}^{T}]^{T}\) and \(P^{i}=\begin{bmatrix}I&{K_{v}^{i}}^{T}&{K_{d}^{i}}^{T}\end{bmatrix}S^{i} \begin{bmatrix}I&{K_{v}^{i}}^{T}&{K_{d}^{i}}^{T}\end{bmatrix}^{T}.\) Given a linear system, linear policies, and quadratic cost, we can assume the quality function (Q-function) is quadratic in the state, control, and disturbance so that
\[Q^{i+1}(z_{t})=z_{t}^{T}S^{i+1}z_{t}. \tag{8}\]
Applying (8) in (7), the Lyapunov equation yields
\[z_{t}^{T}S^{i+1}z_{t}=r(x_{t},v_{t},d_{t})+x_{t+1}^{T}P^{i}x_{t+1}. \tag{9}\]
Replacing the dynamics (1) in (9), we have:
\[z_{t}^{T}S^{i+1}z_{t}= x_{t}^{T}R_{x}x_{t}+v_{t}^{T}R_{v}v_{t}-\gamma^{2}d_{t}^{T}d_{t}+( \mathcal{A}x_{t}+\mathcal{B}v_{t}+\mathcal{L}d_{t})^{T}P^{i}(\mathcal{A}x_{t} +\mathcal{B}v_{t}+\mathcal{L}d_{t})\] \[= \begin{bmatrix}x_{t}^{T}&v_{t}^{T}&d_{t}^{T}\end{bmatrix} \begin{bmatrix}R_{x}+\mathcal{A}^{T}P^{i}\mathcal{A}&\mathcal{A}^{T}P^{i} \mathcal{B}&\mathcal{A}^{T}P^{i}\mathcal{L}\\ \mathcal{B}^{T}P^{i}\mathcal{A}&R_{v}+\mathcal{B}^{T}P^{i}\mathcal{B}&\mathcal{ B}^{T}P^{i}\mathcal{L}\\ \mathcal{L}^{T}P^{i}\mathcal{A}&\mathcal{L}^{T}P^{i}\mathcal{B}&\mathcal{L}^{T }P^{i}\mathcal{L}-\gamma^{2}I\end{bmatrix}\begin{bmatrix}x_{t}\\ v_{t}\\ d_{t}\end{bmatrix}\] \[= z_{t}^{T}\begin{bmatrix}R_{x}+\mathcal{A}^{T}P^{i}\mathcal{A}& \mathcal{A}^{T}P^{i}\mathcal{B}&\mathcal{A}^{T}P^{i}\mathcal{L}\\ \mathcal{B}^{T}P^{i}\mathcal{A}&R_{v}+\mathcal{B}^{T}P^{i}\mathcal{B}&\mathcal{ B}^{T}P^{i}\mathcal{L}\\ \mathcal{L}^{T}P^{i}\mathcal{A}&\mathcal{L}^{T}P^{i}\mathcal{B}&\mathcal{L}^{T }P^{i}\mathcal{L}-\gamma^{2}I\end{bmatrix}z_{t}.\]
Let us partition matrix \(S^{i+1}\) as
\[S^{i+1}=\begin{bmatrix}S^{i+1}_{xx}&S^{i+1}_{xv}&S^{i+1}_{xd}\\ S^{i+1}_{vx}&S^{i+1}_{vv}&S^{i+1}_{vd}\\ S^{i+1}_{dx}&S^{i+1}_{dv}&S^{i+1}_{dd}\end{bmatrix}. \tag{10}\]
Optimizing \(Q^{i+1}(z_{t})\) over \(v_{t}\) and \(d_{t}\) results in
\[v_{t} =-{S^{i+1}_{vv}}^{-1}(S^{i+1}_{vd}d_{t}+S^{i+1}_{vx}x_{t}),\] \[d_{t} =-{S^{i+1}_{dd}}^{-1}(S^{i+1}_{dv}v_{t}+S^{i+1}_{dx}x_{t}).\]
Substituting \(v_{t}\) in \(d_{t}\) and vice versa yields the equations \(v_{t}^{i+1}=K_{v}^{i+1}x_{t}\) and \(d_{t}^{i+1}=K_{d}^{i+1}x_{t}\) where
\[K_{v}^{i+1}= \left(S^{i+1}_{vv}-S^{i+1}_{vd}{S^{i+1}_{dd}}^{-1}S^{i+1}_{dv}\right)^{-1}\left(S^{i+1}_{vd}{S^{i+1}_{dd}}^{-1}S^{i+1}_{dx}-S^{i+1}_{vx}\right), \tag{11a}\] \[K_{d}^{i+1}= \left(S^{i+1}_{dd}-S^{i+1}_{dv}{S^{i+1}_{vv}}^{-1}S^{i+1}_{vd}\right)^{-1}\left(S^{i+1}_{dv}{S^{i+1}_{vv}}^{-1}S^{i+1}_{vx}-S^{i+1}_{dx}\right). \tag{11b}\]
Using (8) and applying the above result in (9), the following recursion can be concluded:
\[S^{i+1}= \underbrace{\begin{bmatrix}R_{x}&0&0\\ 0&R_{v}&0\\ 0&0&-\gamma^{2}I\end{bmatrix}}_{G}+\begin{bmatrix}\mathcal{A}^{T}\\ \mathcal{B}^{T}\\ \mathcal{L}^{T}\end{bmatrix}\begin{bmatrix}I&{K_{v}^{i}}^{T}&{K_{d}^{i}}^{T} \end{bmatrix}S^{i}\begin{bmatrix}I\\ K_{v}^{i}\\ K_{d}^{i}\end{bmatrix}\begin{bmatrix}\mathcal{A}&\mathcal{B}&\mathcal{L} \end{bmatrix}. \tag{12}\]
Given
\[P^{i}=\begin{bmatrix}I&{K_{v}^{i}}^{T}&{K_{d}^{i}}^{T}\end{bmatrix}S^{i}\begin{bmatrix} I&{K_{v}^{i}}^{T}&{K_{d}^{i}}^{T}\end{bmatrix}^{T},\]
the following equation can be concluded:
\[P^{i+1}=\begin{bmatrix}I&{K_{v}^{i+1}}^{T}&{K_{d}^{i+1}}^{T}\end{bmatrix}S^{i+1} \begin{bmatrix}I&{K_{v}^{i+1}}^{T}&{K_{d}^{i+1}}^{T}\end{bmatrix}^{T}.\]
Substituting (12), (11a), and (11b), one can obtain:
\[P^{i+1}=R_{x}+\mathcal{A}^{T}P^{i}\mathcal{A}-\begin{bmatrix}\mathcal{A}^{T}P^ {i}\mathcal{B}&\mathcal{A}^{T}P^{i}\mathcal{L}\end{bmatrix}\begin{bmatrix}R_{v }+B^{T}P^{i}\mathcal{B}&\mathcal{B}^{T}P^{i}\mathcal{L}\\ \mathcal{L}^{T}P^{i}\mathcal{B}&\mathcal{L}^{T}P^{i}\mathcal{L}-\gamma^{2}I \end{bmatrix}^{-1}\begin{bmatrix}\mathcal{B}^{T}P^{i}\mathcal{A}\\ \mathcal{L}^{T}P^{i}\mathcal{A}\end{bmatrix}. \tag{13}\]
Equation (13) is called Lyapunov Recursion.
In summary, we evaluate the policy gains \(K_{v}\) and \(K_{d}\) by finding the quadratic kernel \(S\) of the \(Q\)-function using (12) and then improved policy gains are given by (11a) and (11b).
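A NumPy sketch of this policy-improvement step, i.e., computing (11a)-(11b) from an estimated kernel \(S\) partitioned as in (10), is given below (the function name and slicing convention are ours):

```python
import numpy as np

def improve_policies(S, m1, m2, m3):
    """Compute K_v and K_d from the quadratic kernel S, ordered as [x; v; d] blocks."""
    ix = slice(0, m1)
    iv = slice(m1, m1 + m2)
    idd = slice(m1 + m2, m1 + m2 + m3)
    Svv, Svd, Svx = S[iv, iv], S[iv, idd], S[iv, ix]
    Sdd, Sdv, Sdx = S[idd, idd], S[idd, iv], S[idd, ix]
    Sdd_inv, Svv_inv = np.linalg.inv(Sdd), np.linalg.inv(Svv)
    K_v = np.linalg.solve(Svv - Svd @ Sdd_inv @ Sdv, Svd @ Sdd_inv @ Sdx - Svx)  # Eq. (11a)
    K_d = np.linalg.solve(Sdd - Sdv @ Svv_inv @ Svd, Sdv @ Svv_inv @ Svx - Sdx)  # Eq. (11b)
    return K_v, K_d
```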
## 3 Online Implementation of the Proposed Algorithm
In this section, we discuss the online implementation of the proposed algorithm and prove its convergence. Algorithm 1 summarizes the steps of the proposed algorithm for the H\({}_{\infty}\) problem (1).
```
1: Initialization: \(i=0\), any arbitrary policy gains \(K_{v}^{0}\), \(K_{d}^{0}\), and \(S=\mathbf{0}\)
2:for\(\tau=-(q-1),\cdots,0\)do
3: Sample \(\lambda\sim\mathcal{N}\left(0,W_{\lambda}\right)\) and set \(v=K_{v}^{0}x+\lambda\).
4: Take \(v\) and \(d\) and observe \(x_{+}\).
5:endfor
6:Estimate \(S^{1}\) by (15)
7:Improve the policies \(K_{v}^{1}\) and \(K_{d}^{1}\) by (11a) and (11b).
8:while\(\|S^{i+1}-S^{i}\|_{2}>\epsilon\)do
9: Take \(K_{v}^{i}\) and \(K_{d}^{i}\) and observe \(x_{i+1}\).
10: Estimate \(S^{i+1}\) by (15).
11: Improve the policies by (11a) and (11b).
12:\(i=i+1\).
13:endwhile
```
**Algorithm 1**
We will parameterize the Q-function in (7) so that we can separate the unknown matrix \(S\). Using parameterization and defining \(s=\mathrm{vecs}(S)\), \(p=\mathrm{vecs}(P)\), \(z_{t}=\begin{bmatrix}x_{t}^{T},v_{t}^{T},d_{t}^{T}\end{bmatrix}^{T}\), \(\phi_{t}(K_{v}^{i})=[x_{t}^{T},(K_{v}^{i}x_{t})^{T},d_{t}^{T}]^{T}\), and \(\phi_{t}(K_{v}^{i},K_{d}^{i})=[x_{t}^{T},(K_{v}^{i}x_{t})^{T},(K_{d}^{i}x_{t}) ^{T}]^{T}\), we have the below equation:
\[\mathrm{vecv}(z_{t})s_{i+1}=r\left(x_{t},v_{t},d_{t}\right)+\mathrm{vecv}( \phi_{t+1}(K_{v}^{i},K_{d}^{i}))s_{i}. \tag{14}\]
To find the optimal policy in each iteration, we need to solve the following least square (LS) problem:
\[s_{i+1}=\min_{s}\quad\left\|\Psi_{i}s-\Gamma_{i}\right\|_{2}^{2}, \tag{15}\]
where:
* \(\xi=\mathrm{vecs}\left(G\right).\)
* \(\Gamma_{i}=\Psi_{i}\xi+\Phi_{i}s^{i}=\begin{bmatrix}\Gamma_{i-1}^{T},\gamma_{ i}^{T}\end{bmatrix}^{T}\) where \(\gamma_{i}=\mathrm{vecv}(\phi_{i}(K_{v}^{i}))\xi+\mathrm{vecv}(\phi_{i+1}(K_{v} ^{i},K_{d}^{i}))s^{i}\).
* \(\Phi_{i}s_{i}\) can be written as \(X_{i}^{+}p_{i}\) where \(X_{i}^{+}=\begin{bmatrix}{X_{i-1}^{+}}^{T},\mathrm{vecv}(x_{i+1})^{T}\end{bmatrix}^{T}\).
* \(\Psi_{i}=\left[\Psi_{i-1}^{T},\mathrm{vecv}(\phi_{i}(K_{v}^{i}))^{T}\right]^{T}\) and \(\Phi_{i}=\left[\Phi_{i-1}^{T},\mathrm{vecv}(\phi_{i+1}(K_{v}^{i},K_{d}^{i}))^{T}\right]^{T}\), for \(i=1,2,\cdots\).
* The initial values are given as \(\Psi_{0}=\left[\mathrm{vecv}(z_{-(q-1)})^{T},\mathrm{vecv}(z_{-(q-2)})^{T},\cdots,\mathrm{vecv}(z_{0})^{T}\right]^{T}\), \(\Phi_{0}=\left[\mathrm{vecv}(\phi_{-(q-2)}(K_{v}^{0},K_{d}^{0}))^{T},\cdots,\mathrm{vecv}(\phi_{1}(K_{v}^{0},K_{d}^{0}))^{T}\right]^{T}\), \(\Gamma_{0}=\Psi_{0}\xi+\Phi_{0}s^{0}\), and \(X_{0}^{+}=\left[\mathrm{vecv}(x_{-(q-2)})^{T},\cdots,\mathrm{vecv}(x_{1})^{T}\right]^{T}\).
Equation (14) is used in the policy evaluation step to solve for the unknown vector \(s\) in the least-squares sense by collecting \(q\geq\underline{q}\) data samples of \(x\), \(v\), and \(d\), where \(\underline{q}=(m_{1}+m_{2}+m_{3})(m_{1}+m_{2}+m_{3}+1)/2\). It should be noted that \(v_{t}\) and \(d_{t}\) are linearly dependent on \(x_{t}\) which means that \(\Psi^{T}\Psi\) is not invertible. To resolve this issue, excitation noise is added in \(v_{t}\) and \(d_{t}\) in only the first iteration such that a unique solution to (15) is guaranteed. On the other hand, \(\text{rank}\left(\Psi\right)=\underline{q}\). In Algorithm (1), instead of getting \(q\) samples in each iteration and updating matrix \(S\), we update the algorithms using only a single data. Another advantage is that persistent excitation is needed only in the initial iteration.
**Remark 1**.: _In section 3, we only have one index, \(i\), since in each iteration we use only a single data. Therefore, we do not require the use of both subscript \(i\) and superscript \(t\) and only use index \(i\)._
### Recursive Least Square (RLS)
Least square (LS) estimation is used when one has an overdetermined system of equations. If data is coming in sequentially, we do not have to recompute everything each time a new data point comes in. Moreover, we can write our new, updated estimate in terms of our old estimate [33].
Consider Eq. (15). The solution can thus be written as
\[\Psi_{i}{}^{T}\Psi_{i}s_{i+1}=\Psi_{i}{}^{T}\Gamma_{i}. \tag{16}\]
By defining \(\Xi_{i}=\Psi_{i}^{T}\Psi_{i}\), we have
\[\Xi_{i}=\Psi_{i}^{T}\Psi_{i}= \Psi_{i-1}^{T}\Psi_{i-1}+\text{vecv}(\phi_{i}(K_{v}^{i}))^{T} \text{vecv}(\phi_{i}(K_{v}^{i}))=\Xi_{i-1}+\text{vecv}(\phi_{i}(K_{v}^{i}))^{ T}\text{vecv}(\phi_{i}(K_{v}^{i})). \tag{17}\]
Rearranging Eq. (16), we get
\[\Xi_{i}s_{i+1}= \Psi_{i-1}^{T}\Gamma_{i-1}+\text{vecv}(\phi_{i}(K_{v}^{i}))^{T} \gamma_{i}=\Xi_{i-1}s_{i}+\text{vecv}(\phi_{i}(K_{v}^{i}))^{T}\gamma_{i}.\]
By denoting \(M_{i}=\Xi_{i}^{-1}\),
\[s_{i+1}=M_{i}\left(\Xi_{i-1}s_{i}+\text{vecv}(\phi_{i}(K_{v}^{i}))^{T}\gamma_ {i}\right).\]
Substituting (17) into the above equation yields
\[s_{i+1}= s_{i}-M_{i}\left(\text{vecv}(\phi_{i}(K_{v}^{i}))^{T}\text{vecv}( \phi_{i}(K_{v}^{i}))s_{i}-\text{vecv}(\phi_{i}(K_{v}^{i}))^{T}\gamma_{i}\right)\] \[= s_{i}+M_{i}\text{vecv}(\phi_{i}(K_{v}^{i}))^{T}\left(\gamma_{i}- \text{vecv}(\phi_{i}(K_{v}^{i}))s_{i}\right),\]
where \(M_{i}\) can be updated in each iteration using Sherman-Morrison formula ([34]) as follows:
\[M_{i}=M_{i-1}-\frac{M_{i-1}\text{vecv}(\phi_{i}(K_{v}^{i}))^{T}\text{vecv}( \phi_{i}(K_{v}^{i}))M_{i-1}}{1+\text{vecv}(\phi_{i}(K_{v}^{i}))M_{i-1}\text{vecv }(\phi_{i}(K_{v}^{i}))^{T}}. \tag{18}\]
The quantity \(M_{i}\text{vecv}(\phi_{i}(K_{v}^{i}))^{T}\) is called the "Kalman filter gain", and \(\gamma_{i}-\text{vecv}(\phi_{i}(K_{v}^{i}))s_{i}\) is called the "innovation" since it measures the difference between the new data and the prediction given the last estimate. If the dimension of \(\Xi_{i}\) is very large, computing its inverse can be computationally expensive, so one would like to have a recursion for \(M_{i}\) as in (18).
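For illustration, a NumPy sketch of one recursive least-squares step, combining (18) with the update of \(s\), is given below; `phi` stands for \(\mathrm{vecv}(\phi_{i}(K_{v}^{i}))\) as a one-dimensional regressor and `gamma_i` for the scalar target \(\gamma_{i}\) (variable names are ours).

```python
import numpy as np

def rls_update(s, M, phi, gamma_i):
    """One RLS step for the Q-function parameters s = vecs(S).

    s       : current estimate, shape (q,)
    M       : current inverse Gram matrix Xi^{-1}, shape (q, q)
    phi     : regressor vecv(phi_i(K_v^i)), shape (q,)
    gamma_i : scalar target for this sample
    """
    denom = 1.0 + phi @ M @ phi
    M_new = M - np.outer(M @ phi, phi @ M) / denom        # Sherman-Morrison, Eq. (18)
    innovation = gamma_i - phi @ s                        # data minus prediction
    s_new = s + (M_new @ phi) * innovation                # Kalman-gain style correction
    return s_new, M_new
```

Each call costs \(\mathcal{O}(\underline{q}^{2})\) operations, which is the source of the complexity reduction discussed in the computational complexity analysis below.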
**Theorem 1** (Convergence of Algorithm 1).: _Assume that the linear quadratic problem (1)-(3) is solvable and has a value under the state feedback information structure or equivalently assume there exists a solution to the game's algebraic Riccati recursion (13). Then, iterating on (12) (equivalent to iterating on (13)) with \(S^{0}=0\), \(K_{v}^{0}=0\), and \(K_{d}^{0}=0\) converges with \(S^{i}\to S^{\star}\) and equivalently \(P^{i}\to P^{\star}\) where the matrix \(P^{\star}\) satisfies the following Recatti equation:_
\[P^{\star}=R_{x}+\mathcal{A}^{T}P^{\star}\mathcal{A}-\begin{bmatrix}\mathcal{A} ^{T}P^{\star}\mathcal{B}&\mathcal{A}^{T}P^{\star}\mathcal{L}\end{bmatrix} \begin{bmatrix}R_{v}+B^{T}P^{\star}\mathcal{B}&\mathcal{B}^{T}P^{\star} \mathcal{L}\\ \mathcal{L}^{T}P^{\star}\mathcal{B}&\mathcal{L}^{T}P^{\star}\mathcal{L}- \gamma^{2}I\end{bmatrix}^{-1}\begin{bmatrix}\mathcal{B}^{T}P^{\star} \mathcal{A}\\ \mathcal{L}^{T}P^{\star}\mathcal{A}\end{bmatrix}. \tag{19}\]
Proof.: Recall the solution of the problem (15):
\[\Psi_{i}^{T}\Psi_{i}s_{i+1}=\Psi_{i}^{T}\Psi_{i}\xi+\Psi_{i}^{T}X_{i}^{+}p_{i}.\]
The following equation can be concluded:
\[s_{i+1}=\xi+M_{i}\Omega_{i}p_{i}, \tag{20}\]
where
\[\Omega_{i}=\Psi_{i}^{T}X_{i}^{+}=\Psi_{i-1}^{T}X_{i-1}^{+}+\mathrm{vecv}( \phi_{i}(K_{v}^{i}))^{T}\mathrm{vecv}(x_{i+1})=\Omega_{i-1}+\mathrm{vecv}( \phi_{i}(K_{v}^{i}))^{T}\mathrm{vecv}(x_{i+1}).\]
Using Sherman-Morrison formula (18) and defining \(W_{i}=M_{i}\Omega_{i}\), we have
\[W_{i} =M_{i}\Omega_{i}=\left(M_{i-1}-\frac{M_{i-1}\mathrm{vecv}(\phi_{i }(K_{v}^{i}))^{T}\mathrm{vecv}(\phi_{i}(K_{v}^{i}))M_{i-1}}{1+\mathrm{vecv}( \phi_{i}(K_{v}^{i}))M_{i-1}\mathrm{vecv}(\phi_{i}(K_{v}^{i}))^{T}}\right) \left(\Omega_{i-1}+\mathrm{vecv}(\phi_{i}(K_{v}^{i}))^{T}\mathrm{vecv}(x_{i+1} )\right)\] \[=W_{i-1}-\frac{M_{i-1}\mathrm{vecv}(\phi_{i}(K_{v}^{i}))^{T} \mathrm{vecv}(\phi_{i}(K_{v}^{i}))}{1+\mathrm{vecv}(\phi_{i}(K_{v}^{i}))M_{i- 1}\mathrm{vecv}(\phi_{i}(K_{v}^{i}))^{T}}W_{i-1}+M_{i}\mathrm{vecv}(\phi_{i+1 }(K_{v}^{i}))^{T}\mathrm{vecv}(x_{i+1})\] \[=W_{i-1}-\frac{M_{i-1}\mathrm{vecv}(\phi_{i}(K_{v}^{i}))^{T} \mathrm{vecv}(\phi_{i}(K_{v}^{i}))}{1+\mathrm{vecv}(\phi_{i}(K_{v}^{i}))M_{i- 1}\mathrm{vecv}(\phi_{i}(K_{v}^{i}))^{T}}W_{i-1}+\frac{M_{i-1}\mathrm{vecv}( \phi_{i}(K_{v}^{i}))^{T}}{1+\mathrm{vecv}(\phi_{i}(K_{v}^{i}))M_{i-1}\mathrm{ vecv}(\phi_{i}(K_{v}^{i}))^{T}}\mathrm{vecv}(x_{i+1}).\]
Recall \(x_{i+1}=\mathcal{A}x_{i}+\mathcal{B}K_{v}^{i}x_{i}+\mathcal{L}d_{i}\). Hence, \(\mathrm{vecv}(x_{i+1})=\mathrm{vecv}(\mathcal{A}x_{i}+\mathcal{B}K_{v}^{i}x_ {i}+\mathcal{L}d_{i})\). Note that \(\mathrm{vecv}(x_{i+1})\) can be partitioned as \(\mathrm{vecv}(\phi_{i}(K_{v}^{i}))f(\mathcal{A},\mathcal{B},\mathcal{L})\), where \(f(\mathcal{A},\mathcal{B},\mathcal{L})\) is a matrix that its entities are a function of the entities of the matrices \(\mathcal{A}\), \(\mathcal{B}\), and \(\mathcal{L}\). Therefore, \(W_{i}\) can be written as follows:
\[W_{i}= W_{i-1}-\frac{M_{i-1}\mathrm{vecv}(\phi_{i}(K_{v}^{i}))^{T} \mathrm{vecv}(\phi_{i}(K_{v}^{i}))}{1+\mathrm{vecv}(\phi_{i}(K_{v}^{i}))M_{i- 1}\mathrm{vecv}(\phi_{i}(K_{v}^{i}))^{T}}W_{i-1}+\frac{M_{i-1}\mathrm{vecv}( \phi_{i}(K_{v}^{i}))^{T}\mathrm{vecv}(\phi_{i}(K_{v}^{i}))f(\mathcal{A}, \mathcal{B},\mathcal{L})}{1+\mathrm{vecv}(\phi_{i}(K_{v}^{i}))M_{i-1}\mathrm{ vecv}(\phi_{i}(K_{v}^{i}))^{T}}.\]
By subtracting \(f(\mathcal{A},\mathcal{B},\mathcal{L})\) from both sides of above equation, we have:
\[W_{i}-f(\mathcal{A},\mathcal{B},\mathcal{L})=W_{i-1}-f(\mathcal{A},\mathcal{B},\mathcal{L})-\frac{M_{i-1}\mathrm{vecv}(\phi_{i}(K_{v}^{i}))^{T}\mathrm{vecv} (\phi_{i}(K_{v}^{i}))}{1+\mathrm{vecv}(\phi_{i}(K_{v}^{i}))M_{i-1}\mathrm{vecv} (\phi_{i}(K_{v}^{i}))^{T}}\left(W_{i-1}-f(\mathcal{A},\mathcal{B},\mathcal{L })\right).\]
Let's denote \(\hat{W}_{i-1}=W_{i-1}-f(\mathcal{A},\mathcal{B},\mathcal{L})\). Above equation yields
\[\hat{W}_{i}=\left(I-\frac{M_{i-1}\mathrm{vecv}(\phi_{i}(K_{v}^{i}))^{T} \mathrm{vecv}(\phi_{i}(K_{v}^{i}))}{1+\mathrm{vecv}(\phi_{i}(K_{v}^{i}))M_{i-1} \mathrm{vecv}(\phi_{i}(K_{v}^{i}))^{T}}\right)\hat{W}_{i-1}=M_{i}M_{i-1}^{-1} \hat{W}_{i-1}.\]
Considering \(\hat{W}_{1}=M_{1}M_{0}^{-1}\hat{W}_{0}\), we can have \(\hat{W}_{2}\) as follows:
\[\hat{W}_{2}=M_{2}M_{1}^{-1}\hat{W}_{1}=M_{2}M_{1}^{-1}M_{1}M_{0}^{-1}\hat{W}_{0 }=M_{2}M_{0}^{-1}\hat{W}_{0}.\]
As a result, we can conclude \(\hat{W}_{i}=M_{i}M_{i-1}^{-1}\hat{W}_{i-1}=M_{i}M_{0}^{-1}\hat{W}_{0}\). By expanding \(\hat{W}_{i}\), we have:
\[W_{i}= M_{i}M_{0}^{-1}\left(W_{0}-f(\mathcal{A},\mathcal{B},\mathcal{L}) \right)+f(\mathcal{A},\mathcal{B},\mathcal{L})\] \[= M_{i}\Psi_{0}^{T}\Psi_{0}\left(M_{0}\Psi_{0}^{T}X_{0}^{+}-f( \mathcal{A},\mathcal{B},\mathcal{L})\right)+f(\mathcal{A},\mathcal{B},\mathcal{L})\] \[= M_{i}\Psi_{0}^{T}\Psi_{0}\left((\Psi_{0}^{T}\Psi_{0})^{-1}\Psi_{0}^ {T}X_{0}^{+}-f(\mathcal{A},\mathcal{B},\mathcal{L})\right)+f(\mathcal{A}, \mathcal{B},\mathcal{L})\] \[= M_{i}(\Psi_{0}^{T}X_{0}^{+}-\Psi_{0}^{T}\Psi_{0}f(\mathcal{A}, \mathcal{B},\mathcal{L}))+f(\mathcal{A},\mathcal{B},\mathcal{L})\] \[= M_{i}\Psi_{0}^{T}(X_{0}^{+}-\Psi_{0}f(\mathcal{A},\mathcal{B}, \mathcal{L}))+f(\mathcal{A},\mathcal{B},\mathcal{L}).\]
The terms \(X_{0}^{+}\) and \(\Psi_{0}f(\mathcal{A},\mathcal{B},\mathcal{L})\) are the vectorized forms of \(x_{t+1}\) and \(\mathcal{A}x_{t}+\mathcal{B}v_{t}+\mathcal{L}d_{t}\), respectively, for the initial \(q\) time steps. Hence,
\[X_{0}^{+}-\Psi_{0}f(\mathcal{A},\mathcal{B},\mathcal{L})=0.\]
Consequently, it results in \(W_{i}=f(\mathcal{A},\mathcal{B},\mathcal{L})\). Therefore, by reconstructing (20), it follows (12) and consequently the following Ricatti recursion:
\[P^{i+1}=R_{x}+\mathcal{A}^{T}P^{i}\mathcal{A}-\begin{bmatrix}\mathcal{A}^{T}P^ {i}\mathcal{B}&\mathcal{A}^{T}P^{i}\mathcal{L}\end{bmatrix}\begin{bmatrix}R_{ v}+B^{T}P^{i}\mathcal{B}&\mathcal{B}^{T}P^{i}\mathcal{L}\\ \mathcal{L}^{T}P^{i}\mathcal{B}&\mathcal{L}^{T}P^{i}\mathcal{L}-\gamma^{2}I \end{bmatrix}^{-1}\begin{bmatrix}\mathcal{B}^{T}P^{i}\mathcal{A}\\ \mathcal{L}^{T}P^{i}\mathcal{A}\end{bmatrix}. \tag{21}\]
By using _Lemma 4.1 and Theorem 4.2_ in [35], it is shown that iterating on (13) with \(P_{0}=0\) converges to \(P^{*}\).
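When the model \((\mathcal{A},\mathcal{B},\mathcal{L})\) is known, the benchmark solution \(P^{\star}\) can be computed offline by iterating (13) from \(P_{0}=0\); a minimal NumPy sketch is given below (the fixed iteration count is an illustrative choice rather than a convergence test):

```python
import numpy as np

def riccati_recursion(A, B, L, Rx, Rv, gamma, iters=500):
    """Iterate the game Riccati recursion (13) starting from P_0 = 0."""
    m1, m3 = A.shape[0], L.shape[1]
    P = np.zeros((m1, m1))
    BL = np.hstack([B, L])                                    # [B  L]
    for _ in range(iters):
        inner = np.block([
            [Rv + B.T @ P @ B, B.T @ P @ L],
            [L.T @ P @ B, L.T @ P @ L - gamma**2 * np.eye(m3)],
        ])
        P = Rx + A.T @ P @ A - (A.T @ P @ BL) @ np.linalg.solve(inner, BL.T @ P @ A)
    return P
```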
### Computational Complexity Analysis
Recall \(\underline{q}=(m_{1}+m_{2}+m_{3})(m_{1}+m_{2}+m_{3}+1)/2\) as the number of parameters to be estimated. In both classical Q-learning and the proposed algorithm, the number of parameters being estimated is the same. For the sake of comparison, assume \(q=\underline{q}\). In the initial iteration, both algorithms have a computational complexity of order \(\mathcal{O}(\underline{q}^{3})\), while in the remaining iterations, Algorithm 1 has a computational complexity of order \(\mathcal{O}(\underline{q}^{2})\), unlike classical Q-learning, which has an \(\mathcal{O}(\underline{q}^{3})\) order of computational complexity. In [26, 28], to update the parameters of the critic network, at least \(\underline{q}\) data samples are required, and because there is a batch of data in each iteration, a pseudo-inverse (with the computational complexity of \(\mathcal{O}(\underline{q}^{3})\)) must be computed in each iteration. In contrast, we emphasize sample complexity and use only a _single data sample_ to update the parameters of the critic network. This is a major advantage for systems that have long time steps or when acquiring data is not trivial. Regarding the order of computational complexity, using the key equation
\[s_{i+1}=s_{i}+M_{i}\mathrm{vecv}(\phi_{i}(K_{v}^{i}))^{T}\left(\gamma_{i}- \mathrm{vecv}(\phi_{i}(K_{v}^{i}))s_{i}\right),\]
the computational complexity of \(M_{i_{\underline{q}\times\underline{q}}}\mathrm{vecv}(\phi_{i}(K_{v}^{i}))^{ T}_{\underline{q}\times 1}\) is \(\mathcal{O}(\underline{q}^{2})\) (considering \(\gamma_{i}-\mathrm{vecv}(\phi_{i}(K_{v}^{i}))s_{i}\) is a scalar). The computational complexity of the key equation reduces to the computational complexity of calculating \(M_{i}\) in (18). The computational complexity of calculating the column vector \(M_{i-1_{\underline{q}\times\underline{q}}}\mathrm{vecv}(\phi_{i}(K_{v}^{i}))^ {T}_{\underline{q}\times 1}\) and the row vector \(\mathrm{vecv}(\phi_{i}(K_{v}^{i}))_{1\times\underline{q}}M_{i-1_{\underline{q }\times\underline{q}}}\mathrm{vecv}(\phi_{i}(K_{v}^{i}))^{T}_{\underline{q} \times 1}\) are \(\mathcal{O}(\underline{q}^{2})\). Considering the computational complexity of the scalar \(\mathrm{vecv}(\phi_{i}(K_{v}^{i}))_{1\times\underline{q}}M_{i-1_{\underline{q }\times\underline{q}}}\mathrm{vecv}(\phi_{i}(K_{v}^{i}))^{T}_{\underline{q} \times 1}\) is \(\mathcal{O}(\underline{q}^{2})\), therefore, the computational complexity of calculating \(M_{i}\) is \(\mathcal{O}(\underline{q}^{2})\), and consequently, the computational complexity of calculating \(s_{i+1}\) is \(\mathcal{O}(\underline{q}^{2})\).
## 4 Autonomous Mobility-on-Demand (AMoD) Model
In this section, a discrete-time linear dynamic model is formulated for the AMoD system. We relax the model in [16] by considering origin-destination demand. The linear discrete-time time-delay dynamic system is as follows:
\[w^{rs}\left(t+1\right)= w^{rs}\left(t\right)+d^{rs}\left(t\right)-U^{rs}\left(t\right) \tag{22a}\] \[p_{r}\left(t+1\right)= p_{r}\left(t\right)-\sum_{s\in N}\left(U^{rs}\left(t\right)+R^{ rs}\left(t\right)\right)+\sum_{q\in N}\left(\frac{g^{qr}\left(t\right)}{T_{qr}}\right)\] (22b) \[g^{rs}\left(t+1\right)= \left(1-\frac{1}{T_{rs}}\right)g^{rs}\left(t\right)+U^{rs}\left(t \right)+R^{rs}\left(t\right), \tag{22c}\]
for \(\forall\,r,s\in N\) where state variable \(w^{rs}\) denotes the waiting customers at \(r\) aiming to go to \(s\). State variable \(p_{r}\) characterizes the waiting or available vehicles at station \(r\). State variable \(g^{rs}\) denotes vehicles moving along the link \(\left\{r,s\right\}\), including both customer-carrying and rebalancing vehicles. Control input \(U^{rs}\) is the number of available vehicles at station \(r\) with a customer that will be dispatched to link \(\left\{r,s\right\}\). \(R^{rs}\) is the number of available vehicles at station \(r\) that will be dispatched to link \(\left\{r,s\right\}\) for rebalancing. The term \(d^{rs}(t)\) represents the arrival of customers in a time step given by the realization of a Poisson process of
parameter \(\lambda^{rs}\). Note that each vehicle serves only one customer request at a time, i.e., sharing/pooling is not considered. We also assume that the travel times \(T_{rs}\) are constant and exogenous, since the AMoD fleet is much smaller than the rest of the traffic. Model (22) is derived using a first-order lag approximation of the time delays: it is assumed that the number of vehicles exiting a link is proportional to the number of vehicles on that link. In other words, at each time instant \(t\), the quantity \(g^{rs}\left(t\right)/T_{rs}\) leaves the link \(\left\{r,s\right\}\). Therefore, \(U^{rs}\left(t-T_{rs}\right)+R^{rs}\left(t-T_{rs}\right)\) can be replaced by \(g^{rs}\left(t\right)/T_{rs}\).
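For illustration, one step of the fluid model (22) can be simulated directly; the travel times, arrival rates, and control inputs below are placeholders chosen only to show the update structure.

```python
import numpy as np

rng = np.random.default_rng(0)

def amod_step(w, p, g, U, R, T, lam):
    """Advance the AMoD model (22) by one time step.

    w, g, U, R, T, lam : dicts keyed by ordered station pairs (r, s), r != s
    p                  : dict keyed by stations r
    """
    d = {rs: rng.poisson(lam[rs]) for rs in lam}                       # arrivals d^{rs}(t)
    w_new = {rs: w[rs] + d[rs] - U[rs] for rs in w}                    # (22a)
    p_new = {r: p[r]
             - sum(U[(r, s)] + R[(r, s)] for s in p if s != r)
             + sum(g[(q, r)] / T[(q, r)] for q in p if q != r)
             for r in p}                                               # (22b)
    g_new = {rs: (1 - 1 / T[rs]) * g[rs] + U[rs] + R[rs] for rs in g}  # (22c)
    return w_new, p_new, g_new
```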
This AMoD system is subject to some constraints that enforce the non-negativity of state and control input variables. The global system associated with graph \(G\) is represented as
\[x_{t+1}=\mathcal{A}x_{t}+\mathcal{B}v_{t}+\mathcal{L}d_{t}, \tag{23}\]
where the vector of all state variables \(x_{t}\in\mathbb{R}^{2n^{2}-n}\) is \(\left[w\left(t\right)^{T},p\left(t\right)^{T},g\left(t\right)^{T}\right]^{T}\) and the vector of all control input variables \(v_{t}\in\mathbb{R}^{2n(n-1)}\) is defined as \(v_{t}=\left[U\left(t\right)^{T},R\left(t\right)^{T}\right]^{T}\). \(d_{t}\in\mathbb{R}^{n(n-1)}\) represents arriving customers. Matrices \(\mathcal{A}\), \(\mathcal{B}\), and \(\mathcal{L}\) can be written as below:
\[\mathcal{A}=\begin{bmatrix}I_{n(n-1)}&0&0\\ 0&I_{n}&E_{\text{in}}\tilde{T}^{-1}\\ 0&0&I_{n(n-1)}-\tilde{T}^{-1}\end{bmatrix},\quad\mathcal{B}=\begin{bmatrix}-I_ {n(n-1)}&0\\ -E_{\text{out}}&-E_{\text{out}}\\ I_{n(n-1)}&I_{n(n-1)}\end{bmatrix},\quad\mathcal{L}=\begin{bmatrix}I_{n(n-1)} \\ 0\\ 0\end{bmatrix}. \tag{24}\]
If graph \(G\) is strongly connected and \(d^{rs}=\lambda^{rs}\) for all \(\left\{r,s\right\}\in A\), where \(\lambda^{rs}\) represents the Poisson arrival rate for the link \(\left\{r,s\right\}\), then the equilibrium points of system (23) are given by \(\bar{x}=\left(\bar{w},\bar{p},\bar{g}\right)\), where \(\bar{w}\) and \(\bar{p}\) can be arbitrary positive vectors, \(\bar{g}=\tilde{T}\left(\lambda+\bar{R}\right)\), \(\bar{U}=\lambda\), and \(\bar{R}\) satisfies \(E\left(\bar{R}+\lambda\right)=0\). If the number of nodes \(n\) is greater than 2, there are infinitely many equilibrium points. The desired equilibrium point that minimizes the amount of rebalancing, \(\bar{R}^{\star}\), can be found by solving the following optimization problem:
\[\min_{\bar{R}} \left\|\tilde{T}^{\frac{1}{2}}\bar{R}\right\|_{2}^{2} \tag{25a}\] \[\text{s.t.} E\left(\bar{R}+\lambda\right)=0,\quad\bar{R}\geq 0. \tag{25b}\]
By changing the coordinates of (23), we aim to regulate the AMoD system around the desired equilibrium points.
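Problem (25) is a small convex quadratic program and can be solved with any off-the-shelf solver; the sketch below uses CVXPY, with the matrix \(E\) (from the equilibrium condition above) and the vectors \(T\) and \(\lambda\) supplied as NumPy arrays.

```python
import numpy as np
import cvxpy as cp

def optimal_rebalancing(E, T, lam):
    """Solve (25): minimize ||T^{1/2} R||_2^2 subject to E (R + lam) = 0 and R >= 0."""
    R = cp.Variable(lam.shape[0], nonneg=True)
    objective = cp.Minimize(cp.sum_squares(cp.multiply(np.sqrt(T), R)))
    problem = cp.Problem(objective, [E @ (R + lam) == 0])
    problem.solve()
    return R.value
```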
## 5 Simulation Study
We first introduce the network on which the test is performed. Then, we apply Algorithm 1, developed in Section 3, to obtain the optimal control and disturbance actions and the value function parameters over time.
### Studied Network
The University of Minnesota-Twin Cities (UMN) campus network is considered as the site on which to perform the test. The network we consider is partitioned into six zones. A digraph with \(n=6\) vertices and \(m=30\) links is produced by partitioning; the graph vertices are superimposed on the map shown in Fig. 1. It should be noted that the rebalancing performance is certainly affected by partitioning, but a detailed analysis is beyond the scope of this article.
Figure 2 shows the histogram of the daily demand for UMN's campuses. The peak occurs between 11:00 AM and 1:00 PM. Since the demand represents the intra-zonal trips on the campuses (not commutes to the campus), it does not necessarily follow the typical morning and afternoon peaks.
### Case Study
A 12-hour historical trip dataset is considered for the case study. Each time step is taken to be two minutes, so the number of iterations is 360. Some origin-destination pairs are used more frequently than
Figure 1: The network zones.
Figure 2: Histogram of the daily demand for the UMN’s campuses.
others, which implies a significant imbalance in demand. The number of vehicles is constant at each time step (including equilibrium) and is equal to \(\mathbf{1}_{n}^{T}p\left(t\right)+\mathbf{1}_{n(n-1)}^{T}g\left(t\right)\) ([13, 16]). Therefore, \(\underline{M}=\mathbf{1}_{n(n-1)}^{T}\bar{g}=T^{T}\left(\lambda+\bar{R}\right)\) can be considered a lower bound for the fleet size. The origin and destination of every trip in the travel data are subsequently assigned to the corresponding zones in the graph. We used Dijkstra's algorithm [36] to compute the shortest path between the zone centers on a real road network (Google Maps).
Initial conditions for the AMoD model are \(x_{0}=\left[0_{n(n-1)}^{T}\quad\frac{M}{n}\mathbf{1}_{n}^{T}\quad 0_{n(n-1)}^{T}\right]^{T}\). In the performed simulation, no congestion effects have been considered, i.e., travel times are treated as exogenous. If congestion is included in the model, travel times become endogenous and a function of the policies produced by Algorithm 1; in that case, the model is no longer linear and a detailed analysis is beyond the scope of this article. The average queue length, the average number of rebalancing vehicles, and the average number of customer-carrying vehicles are the metrics that we are interested in investigating using Algorithm 1. The disturbance attenuation \(\gamma\) is selected to be \(0.1\). Let \(W_{\lambda}=0.01I\). Weight matrices \(R_{x}\) and \(R_{v}\) are chosen as \(R_{x}=\begin{bmatrix}\widetilde{\lambda}&0&0\\ 0&0&0\\ 0&0&0\end{bmatrix}\) and \(R_{v}=\begin{bmatrix}\rho\tilde{T}&0\\ 0&\rho\tilde{T}\end{bmatrix},\) where \(\rho=0.05\). The reference being tracked (\(\lambda\), \(\bar{R}^{*}\)) is recomputed via Problem (25) every 2 hours (60 iterations); therefore, \(R_{x}\) changes every 60 iterations. The recursive least-squares algorithm is used to tune the parameters of the critic network online. The parameters of the action networks are updated according to (11a) and (11b). The parameters of the critic and the action networks are initialized to identity and zero, respectively. Starting from this initialization, the system dynamics move forward in time, and the parameter structures are tuned by observing the states online. In the RLS problems, the persistence of excitation condition required for the recursive least-squares tuning to converge, i.e., to avoid the parameter drift problem, holds. However, in Algorithm 1, the persistence of excitation condition is required only for the initial iteration.
In the studied network depicted in Fig. 1, \(m_{1}=66\), \(m_{2}=60\), and \(m_{3}=30\). Let \(F_{\text{npe}}\) denote the number of parameters estimated in a matrix \(F\). Then \(S_{\text{npe}}=12246\), \(P_{\text{npe}}=2211\), \(K_{v,\text{npe}}=3960\), and \(K_{d,\text{npe}}=1800\).
Fig. 3(a) illustrates the convergence of the critic network, Fig. 3(b) shows the convergence of the control action network, and Fig. 3(c) depicts the convergence of the disturbance action network.
Figure 4 shows the average queue length over all origin-destinations. At the beginning of the peak hours, the queue length increases while after peak hours, it decreases remarkably. Note that each step of the trace in Figures 4, 5, and 6 represents ten minutes.
Figure 5 depicts the average number of customer-carrying vehicles in the network. The dashed (red) curve shows the average of \(\lambda\) over all origin-destination pairs. Note that we showed in Section 4 that \(\bar{U}=\lambda\). Since the arrival of customers \(d_{t}\) is given by the realization of a Poisson process with parameter \(\lambda\), the expected average number of customer-carrying vehicles on the links in the network tracks the average of \(\lambda\).
Figure 6 illustrates the average number of rebalancing vehicles over all origin-destination pairs. The dashed (red) curve shows the average of \(\bar{R}^{*}\) over all origin-destination pairs, obtained from (25). Similar to Fig. 5, since the arrival of customers \(d_{t}\) is not deterministic, the expected average number of rebalancing vehicles on the links in the network tracks the average of \(\bar{R}^{*}\). Moreover, the optimal rebalancing policy of Algorithm 1 changes over time due to the changes in demand across the different time intervals.
## 6 Conclusion
In this paper, we proposed a model-free, real-time, data-efficient Q-learning-based algorithm to solve the H\({}_{\infty}\) control problem for linear discrete-time systems and applied it to an AMoD system formulated as such a problem.
Besides presenting the algorithm, we discussed and proved its convergence and showed that the parameters of the action and critic networks converge to their optimal values. Numerical results from controlling an AMoD system in a real case study showed that the proposed algorithm can be applied to high-dimensional systems thanks to its quadratic computational complexity \(\mathcal{O}(\underline{q}^{2})\) and its use of only a single data point to update the actor and critic networks in each iteration.
Figure 3: Convergence of the parameters of the action and critic networks.
Figure 4: The average queue length.
Figure 5: The average customer-carrying vehicles.
Figure 6: The average rebalancing vehicles.
## Acknowledgement
This research is conducted at the University of Minnesota Transit Lab, currently supported by the following, but not limited to, projects:
* National Science Foundation, award CMMI-1831140
* Freight Mobility Research Institute (FMRI), Tier 1 Transportation Center, U.S. Department of Transportation
* Minnesota Department of Transportation
|
2309.16466 | Entangling extreme ultraviolet photons through strong field pair
generation | Entangled photon pairs are a vital resource for quantum information,
computation, and metrology. Although these states are routinely generated at
optical frequencies, sources of quantum light are notably lacking at extreme
ultraviolet (XUV) and soft X-ray frequencies. Here, we show that strongly
driven systems used for high harmonic generation (HHG) can become versatile
sources of entangled photon pairs at these high frequencies. We present a
general theory of photon pair emission from non-perturbatively driven systems,
which we refer to as "strong field pair generation" (SFPG). We show that
strongly driven noble gases can generate thousands of entangled pairs per shot
over a large XUV bandwidth. The emitted pairs have distinctive properties in
angle and frequency, which can be exploited to discriminate them from the
background HHG signal. We connect SFPG theory to the three-step-model of HHG,
showing that this pair emission originates from the impact of high frequency
vacuum fluctuations on electron recombination. The light produced by SFPG
exhibits attosecond Hong-Ou-Mandel correlations, and can be leveraged as a
source of heralded single photon attosecond pulses. Our findings aid ongoing
efforts to propel quantum optics into the XUV and beyond. | Jamison Sloan, Alexey Gorlach, Matan Even Tzur, Nicholas Rivera, Oren Cohen, Ido Kaminer, Marin Soljačić | 2023-09-28T14:27:39Z | http://arxiv.org/abs/2309.16466v1 | # Entangling extreme ultraviolet photons through strong field pair generation
###### Abstract
Entangled photon pairs are a vital resource for quantum information, computation, and metrology. Although these states are routinely generated at optical frequencies, sources of quantum light are notably lacking at extreme ultraviolet (XUV) and soft X-ray frequencies. Here, we show that strongly driven systems used for high harmonic generation (HHG) can become versatile sources of entangled photon pairs at these high frequencies. We present a general theory of photon pair emission from non-perturbatively driven systems, which we refer to as "strong field pair generation" (SFPG). We show that strongly driven noble gases can generate thousands of entangled pairs per shot over a large XUV bandwidth. The emitted pairs have distinctive properties in angle and frequency, which can be exploited to discriminate them from the background HHG signal. We connect SFPG theory to the three-step-model of HHG, showing that this pair emission originates from the impact of high frequency vacuum fluctuations on electron recombination. The light produced by SFPG exhibits attosecond Hong-Ou-Mandel correlations, and can be leveraged as a source of heralded single photon attosecond pulses. Our findings aid ongoing efforts to propel quantum optics into the XUV and beyond.
## I Introduction
One of the profound surprises of the twentieth century was that particles can become entangled with one another, leading to seemingly non-local correlations that evade description by classical physics [1; 2; 3]. A major area of impact of these ideas is in quantum optics, which was transformed by the measurement of entangled photon pairs generated through nonlinear optical processes [4; 5]. Since then, decades of work have harnessed entangled pairs to interrogate the fundamental quantum nature of light [6], and to move the needle on quantum metrology, imaging, and information [7]. Nowadays, optical and infrared (IR) pair generation is routine, and these sources can even be purchased off the shelf.
There are, however, massive ranges of the spectrum where quantum optics is much less developed. An example of this is at high frequencies (UV - X-ray and beyond), where quantum sources are scarce. Exceptions on the UV end include resonant semiconductor sources [8], and non-degenerate four wave mixing sources which have recently entangled UV and IR frequencies [9]. At much higher energies, facility scale hard X-ray sources enable parametric down conversion into X-ray pairs [10; 11; 12], and highly non-degenerate X-ray - UV/optical pairs [13; 14]. More recent theoretical proposals have also investigated spontaneous pair emission from ions [15], and X-ray pair generation in free electron lasers [16]. Despite this progress toward high frequency sources, there remains a great need for new mechanisms of pair generation from compact sources, especially at frequencies ranging from the extreme ultraviolet (XUV) through the so-called "water window," which is critical to biological imaging.
A leading technique for creating coherent XUV radiation is high harmonic generation (HHG), in which a strong IR field incident on a sample induces the conversion of many pump photons into high harmonic frequencies. HHG has been explored across many platforms (gases [17; 18; 19], solids [20], liquids [21], plasmas [22]), producing light ranging from the XUV to soft X-ray regimes [23]. HHG also makes possible attosecond pulse generation [24; 25], which fuels a vast array of spectroscopy, photoionization, and interferometry experiments that probe the dynamics of electrons on their natural timescale [26; 27]. The majority of experiments are well-described by considering the role of light in HHG from a classical perspective. However, some selected works -- starting decades ago [28; 29], and followed by a more recent resurgence [30; 31; 32; 33; 34; 35] -- have investigated quantum optical aspects of high harmonic generation. A few pioneering experiments have already begun to report quantum features in HHG [36; 37; 38], and yet other theoretical proposals [39; 40; 41] may soon become possible. Even though attosecond quantum optics has been identified as a critical new domain [42; 43], HHG has not been considered as an entangled pair source.
Here, we introduce the concept of "strong field pair generation" (SFPG), which enables broadly tunable sources of high frequency entangled pairs. In this non-perturbative quantum electrodynamical process, many low frequency pump photons in a high intensity field incident on a piece of material are converted into an entangled pair of photons at frequencies much higher than that of the pump (Figs.1a-b). We show that SFPG can yield degenerate XUV/X-ray pairs, or highly non-degenerate pairs which have an XUV signal but UV or optical/IR idler. Our results are based on a quantum optical theory of SFPG which predicts the spectral and angular correlations of entangled high frequency pairs, as well as the efficiency with which they can be generated. We interpret SFPG through the "three-step-model" of HHG, and show that the two-photon nature of SFPG leads to primary and secondary "cutoff" laws. The non-perturbative multi-harmonic nature of SFPG also leads to highly entangled biphoton states with attosecond correlation times, enabling heralded attosecond single photon sources. We show that SFPG should be observable within current HHG platforms, thus providing a new route to
bring quantum optics to the attosecond domain.
Such XUV and soft X-ray entangled pairs would unearth many opportunities, both fundamental and applied. For example, such entangled pairs could lead to quantum-enhanced sensing modalities, such as two-photon spectroscopy or ghost imaging [44]. As another example, pair sources are known to exhibit squeezing of fluctuations below the famed "shot noise limit," which could improve sensitivity in attosecond science. This is especially relevant in the context of recent experimental advances which achieve zeptosecond time resolution [38], and have potential to reach the yoctosecond scale, where it is anticipated that electromagnetic vacuum fluctuations influence electron dynamics [45]. Moreover, sources of quantum XUV or soft X-ray radiation could probe high energy electronic or even low frequency nuclear transitions [46], allowing a quantum optical interface to new degrees of freedom.
## II Results
**The concept of strong field pair generation:** The key ingredient for creating entangled pairs through SFPG is similar to that required for HHG: a sample illuminated by ultrafast laser pulses with sufficient intensity to induce non-perturbative dynamics (Fig. 1a). From a quantum optical perspective (represented with Feynman diagrams in Fig. 1b), HHG is a non-perturbative process in which many photons at the drive frequency \(\omega_{0}\) are converted into a high harmonic photon at frequency \(q\omega_{0}\). In SFPG, the subject of this work, many driving photons at \(\omega_{0}\) are converted into an entangled pair at frequencies \(\omega\), \(\omega^{\prime}\), whose energies sum to some integer multiple \(q\) of the drive (\(\omega+\omega^{\prime}=q\omega_{0}\)).
SFPG is in some ways analogous to spontaneous parametric down conversion (SPDC) in second-order nonlinear media (Fig. 1b). Both SPDC and SFPG yield entangled photon pairs which can exhibit correlations in frequency, angle, and polarization. However, as a strong field phenomenon, the highly non-perturbative nature of SFPG introduces significant differences from SPDC. For example, SPDC suffers from the constraint that to produce entangled photon pairs at a given frequency, one must pump with the sum of the frequencies to be emitted. This poses a significant challenge for entangling high frequency photons through SPDC, due to the scarcity of the intense high frequency sources, and intrinsic high frequency \(\chi^{(2)}\) nonlinearities [47] required to realize these effects. In contrast, the strong field nature of SFPG enables the emission of pairs at frequencies many times higher than that of the drive. In fact, the annihilation of dozens or hundreds of IR photons corresponds to emitted pair frequencies in the XUV and beyond (Fig. 1c). This is possible since the breakdown of
Figure 1: **Concept of strong field pair generation (SFPG).****(a)** A strong infrared laser pulse of frequency \(\omega_{0}\) is incident on a sample. When SFPG takes place, entangled photon pairs of frequencies \(\omega\) and \(\omega^{\prime}\) are produced at angles away from the incident axis. **(b)** Feynman diagrams depict the quantum optical nature of various harmonic generation processes. The top row shows harmonic generation processes which result in the emission of a single photon: second harmonic generation (SHG) results from the conversion of two pump photons at frequency \(\omega_{0}\) into a signal photon at \(2\omega_{0}\); high harmonic generation (HHG) results from the conversion of \(q\) pump photons into a high frequency signal photon at \(q\omega_{0}\). The bottom row shows processes which generate entangled pairs: spontaneous parametric down conversion (SPDC) results from the conversion of one pump photon at \(\omega_{0}\) into an entangled pair of photons \(\omega\), \(\omega^{\prime}\) which satisfy \(\omega+\omega^{\prime}=\omega_{0}\); SFPG results from the conversion of \(q\) pump photons into an entangled pair so that \(\omega+\omega^{\prime}=q\omega_{0}\). **(c)** State of current sources of degenerate and non-degenerate entangled photon pairs across a wide range of the electromagnetic spectrum. SFPG can produce highly tunable degenerate and non-degenerate pairs over a large range of the UV to soft X-ray where no other sources currently exist.
perturbative nonlinear optics in strong-field settings enables high order \(\chi^{(n)}\) processes to occur with an efficiency much higher than would be possible otherwise.
**The physics of strong field pair generation:** We now explain the physical nature of SFPG, using a single 1D Neon atom driven at \(\lambda_{0}=800\) nm as an example. Traditional HHG in such a system exhibits a characteristic "plateau" of harmonics generated with similar intensities, followed by a sharp "cutoff," after which emission drops rapidly (Fig. 2a). These behaviors are commonly described in terms of the so-called "three-step-model," which describes the excursion of a valence electron in the driven atom in three critical steps: (1) tunnel ionization of the electron from its bound state into the continuum due to the strong field, (2) acceleration of the liberated electron in the strong field, and its eventual (3) re-collision with the parent ion, emitting high frequency radiation through recombination. The maximum possible energy gain results in a cutoff harmonic \(q_{c}\) defined by \(q_{c}\hbar\omega_{0}\approx I_{p}+3.17\,U_{p}\)[48; 49]. Here, \(I_{p}\) is the ionization energy, and \(U_{p}=(e^{2}E_{0}^{2})/(4m\omega_{0}^{2})\) is the ponderomotive energy, where \(e\) and \(m\) are the electron charge and mass, and \(E_{0}\) is the peak electric field.
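As a quick numerical check of the cutoff law, the parameters of Fig. 2 can be plugged in directly; the neon ionization energy used below (21.56 eV) is the tabulated atomic value and is assumed here to match the 1D model.

```python
import numpy as np
from scipy.constants import e, m_e, c, epsilon_0, hbar, eV

wavelength = 800e-9              # drive wavelength [m]
intensity = 200e12 * 1e4         # 200 TW/cm^2 in W/m^2
I_p = 21.56 * eV                 # neon ionization energy [J] (assumed value)

omega0 = 2 * np.pi * c / wavelength
E0 = np.sqrt(2 * intensity / (epsilon_0 * c))      # peak field from cycle-averaged intensity
U_p = e**2 * E0**2 / (4 * m_e * omega0**2)         # ponderomotive energy
q_c = (I_p + 3.17 * U_p) / (hbar * omega0)         # cutoff harmonic order

print(f"U_p = {U_p / eV:.1f} eV, q_c = {q_c:.1f}")  # ~12 eV and ~38, consistent with Fig. 2a
```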
HHG originates from the acceleration of the dipole moment \(\langle d(t)\rangle\). For an atom driven by a linearly polarized field, the probability \(P_{\text{HHG}}\) of HHG emission per unit frequency \(\omega\) is
\[\frac{dP_{\text{HHG}}}{d\omega}=\frac{2\alpha}{3\pi}\frac{\omega^{3}|x(\omega )|^{2}}{c^{2}}, \tag{1}\]
where \(\alpha\) is the fine structure constant, and \(x(\omega)\) is the Fourier transform of the position matrix element \(\langle\psi_{0}|x(t)|\psi_{0}\rangle\) taken on the initial electron state \(|\psi_{0}\rangle\). This average dipole moment can be found through the Schrodinger equation for the bound electron driven by a classical electromagnetic field.
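In practice, \(x(\omega)\) is obtained by Fourier transforming the dipole expectation value computed from a TDSE solver; a minimal post-processing sketch in atomic units is shown below, where the Hann window is our own choice to suppress spectral leakage and is not part of Eq. (1).

```python
import numpy as np

def hhg_spectrum(x_t, dt):
    """Evaluate dP_HHG/domega of Eq. (1) from samples of <x(t)> (atomic units)."""
    alpha = 1.0 / 137.036
    c_au = 137.036
    n = len(x_t)
    x_w = np.fft.rfft(np.hanning(n) * x_t) * dt           # x(omega)
    omega = 2 * np.pi * np.fft.rfftfreq(n, dt)
    return omega, (2 * alpha / (3 * np.pi)) * omega**3 * np.abs(x_w)**2 / c_au**2
```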
In contrast, SFPG originates from second-order dipole moment correlations \(\langle d(t)d(t^{\prime})\rangle\). We find that for a 1D model, the probability of SFPG emission \(P_{\text{SFPG}}\) into frequencies \(\omega\) and \(\omega^{\prime}\) is
\[\frac{dP_{\text{SFPG}}}{d\omega d\omega^{\prime}}=\frac{2\alpha^{2}}{9\pi^{2 }}\frac{(\omega\omega^{\prime})^{3}|C_{xx}(\omega,\omega^{\prime})|^{2}}{c^{4}}, \tag{2}\]
where \(C_{xx}\) is the Fourier transform of a time-ordered (\(\mathcal{T}\)) connected correlation function
Figure 2: **Strong field pair generation from single atoms.****(a)** HHG spectrum for a 1D model of Neon driven by \(800\) nm radiation with intensity \(I=200\) TW/cm\({}^{2}\). The system exhibits a plateau over many harmonics, before reaching a cutoff at \(q_{c}\approx 39\). **(b)** Differential emission probability of entangled pairs from a single particle for the system shown in (a). The frequency correlations satisfy \(\omega+\omega^{\prime}=q\omega_{0}\), where \(q\) is an even integer. The pair emission spectrum exhibits two cutoff behaviors: (1) a primary cutoff corresponding to \(\omega+\omega^{\prime}=q_{c}\omega_{0}\), and (2) a secondary cutoff corresponding to \(\omega,\omega^{\prime}=q_{c}\omega_{0}\). **(c)** Three-step-model of entangled pair production from gases. Extreme ultraviolet vacuum fluctuations stimulate the production of entangled pairs at \(\omega\) and \(\omega^{\prime}\). Panel (i) shows pairs emitted during the same re-collision. The maximum energy available for pair production is the maximum attainable value of \(E_{k}+I_{p}\), leading to the primary cutoff in (b). Panel (ii) shows pairs emitted during different half cycles, leading to the secondary cutoff in (b). This correlated emission between different half cycles is indicative of a memory effect in the dipole.
\(\left\langle\psi_{0}|\mathcal{T}x(t)x(t^{\prime})|\psi_{0}\right\rangle-\left\langle\psi_{0}|x(t)|\psi_{0}\right\rangle\left\langle\psi_{0}|x(t^{\prime})|\psi_{0}\right\rangle\). The derivation of this result, as well as its generalization to 3D systems, is provided in the S.I. Regardless of the dimensionality, the emitted spectrum of pairs exhibits a series of diagonal stripes, corresponding to the condition \(\omega+\omega^{\prime}=q\omega_{0}\) for integers \(q\) (Fig. 2b).
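Once \(C_{xx}(t,t^{\prime})\) has been tabulated on a time grid (from the same TDSE calculation, not shown here), Eq. (2) amounts to a two-dimensional Fourier transform; the sketch below is again in atomic units and glosses over Fourier sign conventions and the restriction to positive frequencies.

```python
import numpy as np

def sfpg_pair_spectrum(C_tt, dt):
    """Evaluate d^2P_SFPG/domega domega' of Eq. (2) from C_xx(t, t') on a square grid."""
    alpha = 1.0 / 137.036
    c_au = 137.036
    n = C_tt.shape[0]
    win = np.hanning(n)
    C_ww = np.fft.fft2(np.outer(win, win) * C_tt) * dt**2     # C_xx(omega, omega')
    omega = 2 * np.pi * np.fft.fftfreq(n, dt)
    weight = np.abs(np.outer(omega, omega))**3                # (omega * omega')^3
    return omega, (2 * alpha**2 / (9 * np.pi**2)) * weight * np.abs(C_ww)**2 / c_au**4
```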
To connect SFPG to the three-step-model, we note that a number of past theoretical and experimental works have investigated HHG probed by weak XUV fields [50, 51, 52]. These works revealed that even a very weak XUV probe impacts recombination (3) by weakly modulating the bound state as the free wavepacket collides with the parent ion [53]. Quantum optics provides the critical insight that even when no XUV probe field is applied, high frequency vacuum fluctuations persist; these XUV vacuum fluctuations virtually modulate the un-ionized portion of the bound electron, stimulating the production of pairs during electron recombination (Fig. 2c, left). The primary contribution to SFPG is thus well-explained in terms of the three-step-model: The ionization (1) and acceleration (2) steps are unchanged. However, during vacuum-assisted recombination (3), the energy \(E_{k}\) gained by the free electron is converted into an entangled pair (\(\hbar(\omega+\omega^{\prime})=E_{k}+I_{p}\)).
_Cutoff behavior:_ The physical picture provided above is closely tied to the question of whether or not SFPG also obeys a robust cutoff. Based on the relation of SFPG to the three-step-model described above, it is unsurprising that SFPG into frequencies \(\omega\) and \(\omega^{\prime}\) exhibits a primary cutoff according to
\[\omega+\omega^{\prime}\approx q_{c}\omega_{0}, \tag{3}\]
where \(q_{c}\) is the HHG cutoff harmonic. This is for the simple reason that the maximum energy which is ordinarily available for the production of a single HHG photon may now produce a pair. Frequencies within this primary cutoff lie in the lower triangular region of the pair spectrum (Fig. 2b).
However, the SFPG cutoff exhibits an additional complexity which is not possible in HHG. In particular, the two photon nature of SFPG presents the possibility that _each photon of the pair is emitted during a different half-cycle of the drive_ (Fig. 2c, right). For such events, the electron gains energy during each of the distinct half-cycles so that each photon individually may be emitted up to the cutoff frequency. We refer to this behavior as the "secondary cutoff," which is described by the constraint
\[\omega,\omega^{\prime}\approx q_{c}\omega_{0}. \tag{4}\]
Contributions from these different half-cycle events result in a pair spectrum which does not drop off after the primary cutoff as rapidly as one might imagine. The presence of contributions beyond the primary cutoff (see the upper triangular region of Fig. 2b) indicates a notable memory effect in the temporal dipole correlations \(C_{xx}(t,t^{\prime})\). In other words, the recombination event associated with the first photon imprints a memory onto the bound state which can correlate with future recombinations. By performing a time-frequency analysis of the dipole correlations (see S.I.), we found that for this particular model, the strongest contribution beyond the primary cutoff stems from pair emissions correlated between neighboring half-cycles.
_Selection rules:_ A famous feature of traditional HHG is that an inversion-symmetric sample driven by a linearly polarized laser field of a single frequency emits only odd harmonics. Various symmetry-breaking techniques [54, 55, 56, 57] have been employed to control the selection rules for frequencies and polarizations. We found that similar constraints govern SFPG. For the simple case of a symmetric potential driven by a monochromatic, linearly polarized field, SFPG pairs are subject to the constraint \(\omega+\omega^{\prime}=q\omega_{0}\), where \(q\) is an even integer, rather than an odd integer. This is consistent with the famous result of perturbative nonlinear optics that centro-symmetric materials have no even-order nonlinearities (\(\chi^{(2)}\) for example) [58]. In such a symmetric material, SPDC (\(q=1\)) is forbidden, but spontaneous four-wave-mixing (\(q=2\)) is allowed. It follows that breaking dynamical symmetry would enable SFPG processes with all integers \(q\). We anticipate that further analysis of spatio-temporal symmetries will lead to robust SFPG selection rules.
_Efficiency of SFPG:_ A key area of interest is the efficiency of SFPG compared to ordinary HHG. At the single particle level, the SFPG process is strictly less efficient than HHG, since HHG occurs at first order in the emitted field, while SFPG occurs at second order (Fig. 1b). For a given single particle sample and driving field, it is loosely the case that the probability of a pair emission event (SFPG) at frequencies \(\omega\) and \(\omega^{\prime}\) is on the order of the product of the probabilities for single photon emission events (HHG) at the two frequencies. In noble gases, HHG probabilities for a single harmonic typically range from \(10^{-10}\) - \(10^{-6}\). Correspondingly, pair generation probabilities can take values in the approximate range of \(10^{-20}\) - \(10^{-12}\). In the next section, we show that even though the per-particle emission probability is low, phase matched interactions can lead to detectable numbers of pairs which are distinguishable from HHG.
**Correlations in angle and frequency:** Let us detail how SFPG can be realized on mature HHG platforms. For concreteness, we focus on a hollow-core waveguide filled with a noble gas with variable pressure used for phase matching control (Fig. 3a). Our theory allows for the computation of the numbers of pairs per solid angle and frequency which are emitted from a uniformly illuminated gas sample, taking into account the dispersion of the pump and signal frequencies (see S.I.). We found that for a 1D model, the SFPG signal consists of entangled pairs directed into cones at different angles, carrying some features similar to SPDC. The classically measured angle and frequency spectrum of SFPG pairs exhibits many stripes of correlation between angle and frequency, which arise from energy and momentum conservation constraints (Fig. 3c). In particular, if the emitted photons at \(\omega\) and \(\omega^{\prime}\) have the same refractive index \(n\), then the emission angle satisfies
\[\cos\theta=\frac{2n^{2}\omega-n^{2}q\omega_{0}+n_{0}^{2}q\omega_{0}}{2nn_{0} \omega}, \tag{5}\]
where \(n_{0}\) is the index at the pump frequency \(\omega_{0}\). This constraint between angles and frequencies is the origin of the
stripes seen in Fig. 3c.
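Equation (5) is simple to evaluate once a dispersion model supplies \(n\) and \(n_{0}\); the indices in the example below are placeholders, chosen only so that the pump phase velocity exceeds that of the emitted XUV light.

```python
import numpy as np

def emission_angle(omega, q, omega0, n, n0):
    """Emission angle (rad) from Eq. (5) for a photon at omega with omega + omega' = q * omega0."""
    cos_theta = (2 * n**2 * omega - n**2 * q * omega0 + n0**2 * q * omega0) / (2 * n * n0 * omega)
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# Degenerate pair (omega = q * omega0 / 2) with placeholder indices n0 < n:
theta = emission_angle(omega=20.0, q=40, omega0=1.0, n=1.0000, n0=0.9995)
print(np.degrees(theta))   # ~1.8 degrees off-axis
```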
For the parameters considered, the pairs are emitted a few degrees away from the pump axis. At a fixed frequency of observation, emission is peaked at many discrete angles, corresponding to different integer orders \(q\) (Fig. 3b). Similarly, at a fixed angle of observation, emission is peaked in a frequency comb pattern (Fig. 3d). Emission can be either degenerate (\(\omega=\omega^{\prime}\)) or highly nondegenerate (\(\omega\neq\omega^{\prime}\)), and both contributions are substantial. Additionally, the spectrum of SFPG collected over many angles or frequencies will be broad, since it is the correlation spectrum, not the emitted frequencies themselves, which exhibits harmonics.
An important practical aspect of many nonlinear optical processes is phase-matching, which equivalently corresponds to energy and momentum conservation of incoming and outgoing fields. For ordinary HHG, phase matching amounts to index matching the driving and emitted frequencies (i.e., \(n(\omega_{0})=n(\omega)\)). For SFPG, the considerations are different: the phase velocity of the pump needs to exceed that of the emitted frequencies, giving a Cherenkov-type condition (\(\cos\theta_{0}=n(\omega_{0})/n(\omega)\)). The higher the pump phase velocity compared to that of the emitted frequencies, the larger the angle the phase-matched pairs will make with the optical axis. For these reasons, _SFPG phase matching will be ideal for parameters where ordinary HHG is totally non-phase-matched_. A detailed analysis of phase matching reveals that SFPG can be perfectly phase matched while HHG phase mismatch provides orders of magnitude of suppression (see S.I.). For these reasons, it may actually be desirable to operate with higher gas pressures and higher ionization fractions to increase the
Figure 3: **Strong field pair generation from bulk gas samples.****(a)** Gas filled waveguide pumped with femtosecond laser pulses. The ordinary HHG beam is produced on-axis along the \(z\) axis. SFPG pairs of frequency \(\omega\) and \(\omega^{\prime}\) are produced at angles \(\theta\) and \(\theta^{\prime}\) off-axis. **(b)** Angular rings of radiation at fixed frequencies marked in (c). Each of the rings corresponds to a different integer order \(q\). **(c)** Number of photons per shot \(dN/d\omega d\theta\) produced per unit angle and frequency. Each of the stripes is the result of the angle-frequency correlations described by Eq. 5 for even integers \(q\). **(d)** Photon count rate per harmonic for measurement at the degeneracy angle \(\theta_{0}\). Degenerate pairs are emitted most efficiently until the primary cutoff at \(q_{c}/2\). Gas parameters are the same as those used in Fig. 2. The gas sample is held in a hollow core waveguide which has length \(L=1\) mm, radius \(a=400\mu\)m, and is held at a pressure \(P=1\) atm.
pump phase velocity. The ability to use higher gas pressures \(P\) is also favorable from an efficiency point of view due to the \(\propto P^{2}\) scaling of yield which comes from the density of atoms.
**Quantum states of light from SFPG:** The quantum state of light produced by SFPG features a rich structure of entanglement, primarily due to the possibility of pair emission from many integers \(q\) simultaneously. This is most easily seen through the joint spectral amplitude for detecting a pair of photons at \(\omega\) and \(\omega^{\prime}\), which exhibits many stripes of anti-correlation (Fig. 4b). In contrast, the joint spectral amplitude produced from SPDC consists only of a single stripe of anti-correlation (\(q=1\)). We now highlight three essential features of the quantum state of SFPG pairs.
First, the attosecond pulse nature of the entangled pairs is made evident through a hypothetical Hong-Ou-Mandel (HOM) experiment (Fig. 4a) that counts coincident detection of photons as a function of time delay between arms [6]. The resulting HOM curves for various intensities all show a characteristic dip at zero time delay (Fig. 4c). These dips occur over a \(\sim\)100 as timescale, owing to the large frequency bandwidth covered by the phase matched pairs. In addition to the zero delay HOM dip, all three driving intensities produce a more complex structure of interference fringes, which share a common strong feature at a half cycle delay. We attribute this feature to the memory effect between neighboring half cycles (Fig. 2c, panel (ii)).
Second, the SFPG state can be used to create a heralded single photon attosecond pulse. The fact that many different numbers of pump photons can be converted into an entangled pair means that a photon measured at frequency \(\omega\) has many possible partner photons at \(\omega^{\prime}=q\omega_{0}-\omega\). Thus, by measuring one photon, the other heralded photon lies in a coherent superposition of many frequencies spaced by \(\omega_{0}\). This can be most easily seen by taking a "slice" of the joint spectral amplitude at some fixed frequency. Since this heralded photon consists of many frequencies in the XUV spaced by \(\omega_{0}\), the heralded photon is an _attosecond pulse train carried by a single photon._ These high frequency heralding experiments could be conducted using similar techniques to those used to herald hard X-rays from facility-scale X-ray PDC sources [12].
Finally, the states generated by SFPG also present considerable interest from the standpoint of entanglement structure and quantum information. The simultaneous presence of many integers \(q\) in the joint spectral amplitude gives a richer structure than that seen in traditional SPDC. We quantified this structure through calculations of Schmidt number and entanglement entropy for the entangled states, indeed indicating a high degree of entanglement (see S.I.). A SFPG source based on strongly driven gases also has the unique feature that by tuning the gas pressure, one affects the phase matching, which in turn controls the quantum state and corresponding entanglement. Accordingly, we foresee that these high dimensional entangled states may serve as a platform to represent high dimensional quantum information in the high frequency and attosecond regimes. Although these high frequencies carry the disadvantage that constructing optical elements
Figure 4: **Quantum nature of strong field pair generation.****(a)** Schematic of a Hong-Ou-Mandel (HOM) experiment for SFPG pairs. Pairs are collected from two arms, and then directed through a beam-splitter (BS), and to a coincidence detection apparatus. One of the arms experiences a time delay \(\Delta t\). **(b)** Joint spectral amplitude which shows the quantum state of the collected pairs. Many stripes of frequency anti-correlation are present with comparable amplitude. **(c)** Attosecond HOM dip at different driving intensities, obtained from the coincidence probability from the setup depicted in (a). Varying the driving intensity impacts the fringes which result from interference between different integer orders \(q\).
can be difficult, they come with the advantage of very high efficiency detection -- a critical element for quantum state characterization.
## III Factors for experimental observation
We now outline some key factors for experiments on SFPG. For the parameters we considered (Fig. 3), many thousands of entangled pairs are created in a single shot, with hundreds or more over a 1 eV bandwidth in some cases. If the incoming pulse which contains \(N_{\text{in}}\approx 10^{17}\) IR photons creates \(N_{\text{out}}\approx 100\) photons over some narrow XUV bandwidth, this corresponds to an efficiency of \(N_{\text{out}}/N_{\text{in}}\approx 10^{-15}\). We thus conclude that SFPG should be efficient enough to enable observation, even if counts are at the single photon level [59].
To aid observation, it will be crucial to create frequency and/or angular ranges where SFPG counts exceed any HHG background. This can be done in part by exploiting the differences in angular distribution between HHG and SFPG. In general, momentum conservation dictates that ordinary HHG is emitted along the pump axis, with narrow angular divergence of only a few milliradians in some cases [60; 61]. Conversely, SFPG pairs have the potential to be emitted in cones which are tens to hundreds of milliradians off axis, spatially distinguishing them from HHG.
Another helpful factor is that over a narrow angular range, SFPG can produce pairs peaked at frequencies which are not permitted from ordinary HHG (see Fig. 3c,d). In particular, measurement at the appropriate angle \(\theta_{0}\) for degenerate pairs will result in the detection of both even and odd harmonics in a spatially symmetric noble gas, in contrast to the strictly odd pairs which are allowed for ordinary HHG. More strikingly, measurement of non-degenerate pairs could lead to the observation of frequencies which are not harmonics at all.
Yet another important consideration is that ordinary HHG will be entirely mismatched when SFPG is optimally matched. This factor could provide 4-5 orders of magnitude of suppression of the background HHG signal compared to its ordinary optimal strength. It may also be possible to engineer more complex HHG geometries (such as two pump beams which come in at different angles) to force the SFPG beams to emerge at entirely different angles from the HHG signal. Once a SFPG signal is isolated, it may be possible to use an interferometry scheme such as the one recently reported in [38] to discern the quantum nature of the pairs.
Finally, while we have focused in this work on SFPG from gas samples, much of the fundamental physics in SFPG should carry over into solids used for HHG, frequently driven with mid-IR pulses [20]. In principle, solids could be used to realize non-perturbative conversion of many mid-IR photons into pairs ranging from the optical to UV. These platforms offer the additional advantage of the potential to engineer metasurfaces or other optical structures which could offer degrees of angle and frequency control over emitted pairs [62].
## IV Conclusion and outlook
We have presented the theory of strong field pair generation (SFPG): a non-perturbative nonlinear optical process in which many photons of a high intensity driving field incident on matter are converted into an entangled pair of photons at high frequencies. Such sources have a potential to generate both degenerate and highly non-degenerate entangled photon pairs, covering large swaths of the electromagnetic spectrum over which entangled pairs have never been produced. This method also carries the distinct advantage that it can generate these pairs using mature HHG platforms, and without reliance on a pre-existing source of intense high frequency radiation.
The generation of entangled pairs at high frequencies can enable critical new applications in XUV/X-ray quantum optics. As one example, sources in the water window could produce new modalities of quantum microscopy which exploit correlated pairs to image biological samples with improved phase sensitivity. As another example, the generation of highly non-degenerate pairs consisting of XUV radiation entangled with infrared radiation could yield a quantum optical interface between optical and XUV radiation. Moreover, this work paves the way toward further studies of dipole correlations in strongly time-driven systems. In the future, the measurement of these correlated pairs may provide a lens into aspects of the attosecond dynamics of matter, such as correlation and memory effects, which are not accessible through classical detection schemes.
The eventual experimental realization of SFPG pairs is expected to serve as an important milestone for the broader challenge of bringing quantum optics to the XUV and soft X-ray regimes. We anticipate that such an observation would open new domains of research in attosecond science, and provide important fundamental tests for the nascent field of strong-field quantum optics.
## V Acknowledgments
J.S. acknowledges prior support of a Mathworks graduate fellowship, as well as prior support of a National Defense Science and Engineering (NDSEG) fellowship. This research was supported by Grant No 2022144 from the United States-Israel Binational Science Foundation (BSF). This research project was partially supported by the Helen Diller Quantum Center at the Technion through the Flagship research project (QUBIT). This material is based upon work sponsored in part by the U.S. Army DEVCOM ARL Army Research Office through the MIT Institute for Soldier Nanotechnologies under Cooperative Agreement number W911NF-23-2-0121, and also supported in part by the Air Force Office of Scientific Research under the award number FA9550-21-1-0299.
|
2309.10017 | A Change-Point Approach to Estimating the Proportion of False Null
Hypotheses in Multiple Testing | For estimating the proportion of false null hypotheses in multiple testing, a
family of estimators by Storey (2002) is widely used in the applied and
statistical literature, with many methods suggested for selecting the parameter
$\lambda$. Inspired by change-point concepts, our new approach to the latter
problem first approximates the $p$-value plot with a piecewise linear function
with a single change-point and then selects the $p$-value at the change-point
location as $\lambda$. Simulations show that our method has among the smallest
RMSE across various settings, and we extend it to address the estimation in
cases of superuniform $p$-values. We provide asymptotic theory for our
estimator, relying on the theory of quantile processes. Additionally, we
propose an application in the change-point literature and illustrate it using
high-dimensional CNV data. | Anica Kostic, Piotr Fryzlewicz | 2023-09-18T16:53:37Z | http://arxiv.org/abs/2309.10017v2 | # A Change-Point Approach to Estimating the Proportion of False Null Hypotheses in Multiple Testing
###### Abstract
For estimating the proportion of false null hypotheses in multiple testing, a family of estimators by Storey (2002) is widely used in the applied and statistical literature, with many methods suggested for selecting the parameter \(\lambda\). Inspired by change-point concepts, our new approach to the latter problem first approximates the \(p\)-value plot with a piecewise linear function with a single change-point and then selects the \(p\)-value at the change-point location as \(\lambda\). Simulations show that our method has among the smallest RMSE across various settings, and we extend it to address the estimation in cases of superuniform \(p\)-values. We provide asymptotic theory for our estimator, relying on the theory of quantile processes. Additionally, we propose an application in the change-point literature and illustrate it using high-dimensional CNV data.
_Keywords:_ Multiple testing; change-point detection; p-values
## 1 Introduction
Of interest to us in this work is the problem of estimating the proportion of false null hypotheses when many are tested simultaneously. Under the standard assumption of uniformity of true null \(p\)-values, \(p\)-value distribution is modeled as a mixture, with cumulative distribution function (CDF)
\[F(x)=\pi_{1}F_{1}(x)+\pi_{0}x,\quad x\in[0,1], \tag{1}\]
where \(\pi_{1}\) is the unknown proportion of false null hypotheses, \(\pi_{0}=1-\pi_{1}\) and \(F_{1}\) is the CDF of the \(p\)-values under the alternative (Storey, 2002; Meinshausen and Rice, 2006; Patra and Sen, 2016).
Estimating the false null proportion \(\pi_{1}\) comes down to the problem of estimating the proportion parameter of the mixture distribution, particularly when one component is known. This measure quantifies the overall magnitude of significant deviations from baseline non-significant behavior, making it independently valuable. Furthermore, this measure is of interest in the classification literature, mainly when only positive and unlabeled examples
are available (Blanchard et al., 2010; Jain et al., 2016, 2017). In the multiple testing literature, proportion estimators are mainly of indirect interest as they can be used to increase the power of multiple testing procedures, such as the Benjamini-Hochberg procedure by Benjamini and Hochberg (1995) (BH), that controls the false discovery rate (FDR). Incorporating a proportion estimator into the BH procedure is originally proposed in Benjamini and Hochberg (2000), where the authors suggest increasing the number of rejections by increasing the threshold while keeping the FDR controlled at a desired level approximately. The proportion parameter is also valuable in practical applications, particularly in astronomy and astrophysics (Meinshausen and Rice, 2006; Patra and Sen, 2016; Swanepoel, 1999).
Storey's method (Storey, 2002), initially introduced in Schweder and Spjotvoll (1982), is the most common approach to the problem of estimating the proportion parameter. Assuming that \(F_{1}(x)\approx 1\) for sufficiently large \(x\in(0,1)\), the CDF (quantile function) of the \(p\)-values is approximately linear with slope \(\pi_{0}\) (\(1/\pi_{0}\)). Storey's family of plug-in true null proportion estimators is
\[\hat{\pi}_{0}(\lambda)=\frac{1-\hat{F}_{n}(\lambda)}{1-\lambda},\quad\lambda \in(0,1), \tag{2}\]
and \(\hat{\pi}_{1}(\lambda)=1-\hat{\pi}_{0}(\lambda)\). There are multiple estimators in the literature based on Storey's family, each proposing different tuning parameter values, with no general agreement on the optimal value of \(\lambda\) (Benjamini and Hochberg, 2000; Storey and Tibshirani, 2003; Storey et al., 2004; Jiang and Doerge, 2008). In general, a smaller \(\lambda\) introduces higher bias, while choosing \(\lambda\) close to 1 increases the variance of the proportion estimator. Asymptotically, Storey's estimator is guaranteed not to overestimate \(\pi_{1}\) for any \(\lambda\). The properties of consistency and asymptotic normality of this estimator are examined in Genovese and Wasserman (2004).
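For reference, (2) is a one-line computation from the empirical distribution of the \(p\)-values; the function below is a plain Python/NumPy transcription.

```python
import numpy as np

def storey_pi0(pvalues, lam):
    """Storey's estimator (2) of the true null proportion for a given lambda in (0, 1)."""
    p = np.asarray(pvalues)
    return np.mean(p > lam) / (1.0 - lam)

def storey_pi1(pvalues, lam):
    return 1.0 - storey_pi0(pvalues, lam)
```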
In this paper, we propose a new data-driven method for tuning Storey's estimator, which we call "Difference of Slopes" or DOS. We propose to approximate the plot of sorted \(p\)-values (\(i,p_{(i)}\)), \(i=1,\ldots,n\), with a piecewise linear function with a single change in slope, using a statistic inspired by the change-point literature. If \(1\leq\hat{k}\leq n\) is the estimated change-point location, we set \(\lambda=p_{(\hat{k})}\) in Storey's estimator (2) to obtain the proportion estimator, referred to as "DOS-Storey". This value of \(\lambda\) aims to separate true from false null \(p\)-values. Specifically, we aim for it to be the smallest value at which \(F_{1}(\lambda)\approx 1\), marking the onset of the linear part in the quantile function. By choosing such \(\lambda\) our goal is to reduce the variance while maintaining low bias in the corresponding Storey's estimator. An illustration of the piecewise linear approximation produced by our method is shown in Figure 1.
In the applied literature, adaptive FDR control is primarily achieved using Storey-based proportion estimators proposed in Benjamini and Hochberg (2000) (Xu et al., 2017; Taquet et al., 2021; Wittenbecher et al., 2022) and Storey and Tibshirani (2003) (Cuomo et al., 2020; Klunk et al., 2022; Legut et al., 2022; Gigante et al., 2022). However, it is noted that Benjamini and Hochberg (2000) produces very conservative estimators, while Storey and Tibshirani (2003) produces highly variable estimators (Broberg, 2005; Langaas et al., 2005; Jiang and Doerge, 2008). These two methods are among those employed in the simulation study in Section 4. The simulation results show that the proposed DOS-Storey estimator outperforms other proportion estimators under various settings, most notably when \(\pi_{1}\) is small. Furthermore, the proposed estimator outperforms its competitors in small samples, which is particularly significant when some of the other estimators are inapplicable.
We investigate the asymptotic properties of the DOS-Storey estimator under model (1), using results from the theory of quantile processes. The properties of the estimator depend on the unknown quantile function. However, we only impose weak assumptions on this function.
We extend our contribution by considering the case of superuniform (stochastically larger than uniform) true null \(p\)-value distributions. This scenario arises when the null hypothesis is misspecified or composite. In such cases, the linearity assumption used by Storey's estimator does not hold, making it unsuitable to use. As an alternative, we propose to estimate the change-point in the \(p\)-value plot and to use it directly as a proportion estimator. Specifically, we define \(\hat{k}/n\) as the estimated false null proportion in this method, which we refer to as "uncorrected DOS" or uDOS. A precise definition of uDOS follows in Section 4.2. Simulations show that the uDOS estimator has a uniformly smaller mean squared error than the competing method.
Finally, in our real data example, we propose applying the DOS-Storey method in the change-point literature, illustrating it with high-dimensional copy number variation (CNV) data from neuroblastoma patients.
This paper is organized as follows. Section 2 describes our proposed method. Theoretical results are presented in Section 3, while Section 4 contains simulation results under the standard model (1), as well as in the superuniform case. The real data example can be found in Section 5. Finally, Section 6 covers the discussion and potential extensions. The code implementing the introduced approach, the simulation study, and the real data example is included in the R package MTCP, which is available at [https://github.com/anickostic/MTCP](https://github.com/anickostic/MTCP).
Figure 1: Illustration of our method’s piecewise linear approximation (solid line) applied to \(p\)-value plots in two settings. \(p\)-values come from Gaussian mean testing, \(H_{0}:\mu=0\) versus \(H_{1}:\mu>0\). The test statistics \(T_{i}\) have \(N(0,1)\) distribution under the null and \(N(\mu_{1},1)\) under the alternative. \(p\)-values are calculated as \(p_{i}=1-\Phi(T_{i})\). Left (sparse case): 5 false null (crosses) and 95 true null (points) \(p\)-values, where \(\mu_{1}=3\). Right (dense case): 20 false null (crosses) and 80 true null (points), where \(\mu_{1}=2\).
## 2 DOS Threshold and the DOS-Storey Estimator
Consider the sequence of sorted \(p\)-values, \(p_{(1)},\ldots,p_{(n)}\), and their representation as points \((i,p_{(i)})\) for \(i=1,\ldots,n\), forming a \(p\)-value plot. The proposed piecewise linear approximation of the \(p\)-value plot is determined by the change-point location \(\hat{k}\). It consists of a line connecting \((0,0)\) and \((\hat{k},p_{\hat{k}})\), and another line connecting \((\hat{k},p_{\hat{k}})\) and \((n+1,1)\). To calculate the change-point estimate \(\hat{k}\), we first define the Difference of Slopes (DOS) sequence as
\[d_{\alpha}(i) =\frac{p_{(2i)}-p_{(i)}}{i^{\alpha}}-\frac{p_{(i)}}{i^{\alpha}} \tag{3}\] \[=\frac{p_{(2i)}-2p_{(i)}}{i^{\alpha}},\]
for some \(\alpha\in[1/2,1]\). The DOS statistic, serving as the change-point estimate, is the index of the maximum term in the DOS sequence:
\[\hat{k}_{\alpha}=\underset{nc_{n}\leq i\leq n/2}{\operatorname{argmax}}d_{ \alpha}(i). \tag{4}\]
The choice of the non-random sequence \(c_{n}\) and the value of \(\alpha\) is discussed below. The proposed separation threshold is \(p_{(\hat{k}_{\alpha})}\). To obtain the proportion estimate using the DOS statistic, we plug \(\lambda=p_{(\hat{k}_{\alpha})}\) into Storey's estimator (2) and get the proposed DOS-Storey false null proportion estimator:
\[\hat{\pi}_{1}^{\alpha}=\frac{\hat{k}_{\alpha}/n-p_{(\hat{k}_{\alpha})}}{1-p_{ (\hat{k}_{\alpha})}}. \tag{5}\]
To ensure that the asymptotic results stated in Section 3 hold, we exclude the first \(nc_{n}\) terms of \(d_{\alpha}(i)\) from the search for the maximum in (4). Precisely, the sufficient conditions are
\[\frac{nc_{n}}{\log\log n}\rightarrow\infty,\qquad\frac{\log\log(1/c_{n})}{\log\log n}\to C<\infty.\]
In Remark 1 in the Appendix, we discuss how different rates for \(c_{n}\) affect the asymptotic results. The practical selection of \(c_{n}\) is addressed in Section 4.
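The DOS change-point and the DOS-Storey estimator in (3)–(5) are simple to compute from a sorted \(p\)-value sample. The following base-R sketch is only illustrative: the function names `dos_changepoint` and `dos_storey` are ours and do not correspond to the interface of the MTCP package.

```r
# Illustrative base-R sketch of the DOS change-point (4) and the DOS-Storey estimator (5).
dos_changepoint <- function(p, alpha = 1, c_n = 0) {
  p <- sort(p)
  n <- length(p)
  i <- seq_len(floor(n / 2))                # candidate locations i = 1, ..., n/2
  d <- (p[2 * i] - 2 * p[i]) / i^alpha      # DOS sequence d_alpha(i), equation (3)
  i_min <- max(1, ceiling(n * c_n))         # exclude the first n * c_n terms, as in (4)
  k_hat <- i_min - 1 + which.max(d[i_min:length(d)])
  list(k_hat = k_hat, threshold = p[k_hat]) # change-point index and separation threshold
}

dos_storey <- function(p, alpha = 1, c_n = 0) {
  cp <- dos_changepoint(p, alpha, c_n)
  # Storey's estimator evaluated at lambda = p_(k_hat), equation (5)
  (cp$k_hat / length(p) - cp$threshold) / (1 - cp$threshold)
}
```

For instance, `length(p) * dos_storey(p, alpha = 1)` gives the estimated number of false null hypotheses.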
An illustration of the proposed method with \(\alpha=1\) for the two examples from Figure 1 is given in Figure 2. The two plots in the right column show the dependency of \(\hat{\pi}_{1}(\lambda)\) on \(\lambda\), and the substantial influence of \(\lambda\) due to the bias-variance trade-off, for two different values of \(\pi_{1}\). In the sparse case, the estimated change-point location is at \(\hat{k}_{1}=7\) and the estimated number of false nulls is \(\hat{n}_{1}=4\) (true number is \(n_{1}=5\)). In the dense case, the change-point location is at \(\hat{k}_{1}=28\) with an estimated number of false nulls of \(\hat{n}_{1}=24\) (\(n_{1}=20\)). Figure 2 also shows how the false null \(p\)-values exceeding the threshold \(p_{(\hat{k}_{\alpha})}\) are relatively few compared to the true null \(p\)-values. In both scenarios, it is evident how our approach effectively reduces variance and maintains low bias in the associated Storey's estimator (5). In larger samples, this effect is best seen when the proportion of false null hypotheses is small, as shown in the simulation study in Section 4.
We now provide further explanation of the DOS statistic and discuss the role of the parameter \(\alpha\) by examining its boundary values. We begin with the case where \(\alpha=1\), and then we discuss \(\alpha=1/2\). These values enable us to interpret our statistic within the context of change-point literature.
For \(\alpha=1\), the first term in (3) is the slope of the line connecting the points \((i,p_{(i)})\) and \((2i,p_{(2i)})\), while the second term corresponds to the slope of the line connecting \((0,0)\) and \((i,p_{(i)})\). Therefore, \(d_{1}(i)\) is the sequence of slope differences in the \(p\)-value plot, and \(\hat{k}_{1}\) is the location of the maximum slope difference. Let \(s_{j}=p_{(j)}-p_{(j-1)}\) be the sequence of spacings of the \(p\)-values, with \(p_{(0)}=0\). The DOS sequence can then be written as

\[d_{1}(i)=\frac{1}{i}\sum_{j=i+1}^{2i}s_{j}-\frac{1}{i}\sum_{j=1}^{i}s_{j}.\]

Figure 2: Illustration of the DOS method for \(\alpha=1\) for the two examples from Figure 1. Left column: \(p\)-value plot and the corresponding DOS sequence (solid line) with the estimated change-point location (vertical dashed line) in the sparse (top) and the dense (bottom) case. Right column: corresponding Storey’s sequence of estimators in the sparse (top) and the dense (bottom) case; solid horizontal line is at the unknown false null proportion level, and dashed vertical line marks the proportion estimated by the DOS-Storey method.
Thus, the DOS statistic finds the maximum difference of means on symmetric \(((0,i),(i,2i))\) and increasing intervals \((i=1,\ldots,n/2)\) in the spacings sequence. A similar statistic, aiming to detect shifts in the piecewise constant mean of an ordered sample, has been studied in the nonparametric change-point literature (Brodsky and Darkhovsky, 1993). Therefore, the DOS statistic can be viewed as a technique for fitting a piecewise constant function to the sequence of spacings \(s_{i}\). An illustration of the piecewise constant fit to the spacings sequence is provided in Figure 3.
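The identity above is straightforward to verify numerically; a small sketch with arbitrary simulated \(p\)-values:

```r
# Check numerically that d_1(i) equals the difference of the two spacing means.
set.seed(1)
p <- sort(runif(20)); i <- 5
s <- diff(c(0, p))                          # spacings s_j = p_(j) - p_(j-1), with p_(0) = 0
d_direct  <- (p[2 * i] - 2 * p[i]) / i      # definition (3) with alpha = 1
d_spacing <- mean(s[(i + 1):(2 * i)]) - mean(s[1:i])
all.equal(d_direct, d_spacing)              # TRUE
```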
Similarly, for \(\alpha=1/2\), we can interpret \(d_{1/2}(i)\) as the standardized difference between the means of the first \(i\) and the second \(i\) spacings. In the context of the change-point literature, the statistic \(\max_{i}d_{1/2}(i)\) can be regarded as a CUSUM-like statistic.
In general, we use symmetric intervals \(((0,i),(i,2i))\) for calculating slopes to focus on the local behavior and changes in the quantile function. These intervals expand \((i=1,\ldots,n/2)\) to incorporate information from an increasing number of \(p\)-values until a shift to linearity is observed. As the estimated change-point is located at most at \(n/2\), our method is unsuitable when the proportion of false null hypotheses exceeds one half.
Using a change-point method for the purpose of tuning Storey's estimator has been previously mentioned in the literature. In Benjamini and Hochberg (2000), it is implied that a change-point method can be used for estimating the proportion of false null hypotheses by identifying the end of the linear segment in the \(p\)-value plot. This approach is explored in Turkheimer et al. (2001) and Hwang et al. (2014). However, the simulations included in Section B.5 of the Supplementary Material show that the latter two methods do not perform well when compared to our method, and we do not include them in the simulation study.

Figure 3: Illustration of the piecewise constant fit to the spacings sequence using the DOS method with \(\alpha=1\) for the two examples from Figure 1. The spacings are divided into three groups as follows: If for a given \(s_{i}\), both \(p_{(i)}\) and \(p_{(i-1)}\) are false null, the spacing is denoted by a diamond. If both are true null, circles are used. Squares are used for cases where one \(p\)-value is true null, and the other is false null.
Note that this scenario differs from the conventional change-point in the slope problem. The \(p\)-value plot has no typical change-point; instead, the change-point is linked to the suggested linear approximation. Asymptotically, the change-point location depends on the properties of the unknown quantile function and on our fitting procedure. Additionally, it is reasonable to think of our method as identifying a 'knee' in the quantile plot, a topic briefly discussed in Section C of the Supplementary Material.
## 3 Theoretical Considerations
We begin this section by introducing the assumptions on the \(p\)-value model introduced in (1). Following that, we present the statement of the main Theorem 1, along with two immediate corollaries. The theorem contains results regarding the asymptotic behaviour of the change-point estimator and the proportion estimator. The proof can be found in Appendix A.
Different assumptions on \(F_{1}\) can be found in the literature. Strong assumptions specify a family of distributions for \(F_{1}\), see for example Cai et al. (2007) and Pounds and Morris (2003), while weaker ones only restrict its shape. We impose the following two assumptions on \(F_{1}\) and \(F\):
1. \(F_{1}\) is a continuous distribution stochastically smaller than \(U[0,1]\), with a concave CDF.
2. Let \[h_{\alpha}^{F}(t):=\frac{F^{-1}(2t)-2F^{-1}(t)}{t^{\alpha}},\quad t\in(0,1/2), \tag{6}\] and assume that \(h_{\alpha}^{F}(t)\) has a unique point of local maximum at \(\widetilde{t}_{\alpha}\leq 1/2\).
Assumption (A1) implies that the density of the false null \(p\)-values is decreasing and is a common assumption in the literature (Langaas et al., 2005; Celisse and Robin, 2010). Assumption (A2) is specific to our approach and is necessary for uniquely defining the asymptotic change-point location. As the ordered \(p\)-values are sample quantiles, \(h_{\alpha}^{F}\) represents the "ideal function" that the DOS sequence approximates:
\[\frac{p_{(2i)}-2p_{(i)}}{(i/n)^{\alpha}}\approx h_{\alpha}^{F}(i/n).\]
Assumption (A2) excludes situations where \(h_{\alpha}^{F}(t)\) is constant on an interval where it achieves its maximum value. This constant behavior of \(h_{\alpha}^{F}(t)\) is a highly specific scenario and not easily characterized by conditions on \(F\). (A2) also excludes cases where \(h_{\alpha}^{F}\) is an increasing function, which can happen if the signal is too weak or the false null proportion is too large. In typical \(p\)-value models, these scenarios do not pose an issue.
**Theorem 1**.: _Consider the \(p\)-value distribution given in (1) and assume that conditions (A1) and (A2) hold. Let \(p_{(1)},\ldots,p_{(n)}\) be the order statistics of the iid sample from (1)._
_Let \(\hat{k}_{\alpha}\) and \(\hat{\pi}_{1}^{\alpha}\) be as defined in (4) and (5), respectively, with \(c_{n}\) such that \(\frac{nc_{n}}{\log\log n}\to\infty\), \(\frac{\log\log(1/c_{n})}{\log\log n}\to C<\infty\). It holds that_
\[\hat{k}_{\alpha}/n\stackrel{a.s.}{\to}\tilde{t}_{\alpha}:=\operatorname*{argmax}_{0\leq t\leq 1/2}\frac{F^{-1}(2t)-2F^{-1}(t)}{t^{\alpha}}, \tag{7}\]
\[p_{(\hat{k}_{\alpha})}\stackrel{a.s.}{\to}F^{-1}(\tilde{t}_{\alpha}), \tag{8}\]
\[\hat{\pi}_{1}^{\alpha}\stackrel{a.s.}{\to}\frac{\tilde{t}_{\alpha}-F^{-1}(\tilde{t}_{\alpha})}{1-F^{-1}(\tilde{t}_{\alpha})}\leq\pi_{1}. \tag{9}\]
Theorem 1 explains the asymptotic behavior of the estimated change-point and the estimated proportion in terms of the ideal quantities, which are functionals of the quantile function of the \(p\)-value distribution. The convergence rates of the statistics in Theorem 1 are not stated here but are considered within its proof in Appendix A. These rates depend on the differentiability of \(h_{\alpha}^{F}\) at \(\tilde{t}_{\alpha}\), and on the degree of "flatness" of \(h_{\alpha}^{F}\) at \(\tilde{t}_{\alpha}\), which is quantified using higher order derivatives.
When \(\alpha_{1}<\alpha_{2}\), the decreasing function \(1/t^{\alpha_{2}-\alpha_{1}}\) implies that \(\operatorname*{argmax}_{t}h_{\alpha_{1}}^{F}(t)>\operatorname*{argmax}_{t}h_{\alpha_{2}}^{F}(t)\). This means that for larger \(\alpha\) values, the "change-point" occurs earlier. From Theorem 1 it also follows that the rate of convergence is slower for smaller \(\alpha\).
The following two corollaries follow easily from Theorem 1 and we provide them without proof. They illustrate the behavior of the proposed statistics in two specific cases.
Corollary 1 considers the special case in which the \(p\)-values come from a mixture of two uniform distributions. In that case, \(F\) and \(F^{-1}\) are piecewise linear functions with one change-point where the slope changes. From Theorem 1 it follows that \(\hat{k}_{\alpha}/n\) consistently estimates this change-point and that \(\hat{\pi}_{1}^{\alpha}\) is a consistent estimator of \(\pi_{1}\) (its limit equals \(\pi_{1}\) exactly, rather than being merely a lower bound) for any \(\alpha\in[1/2,1]\), with the consistency rates uniquely determined and provided in the statement.
**Corollary 1**.: _Let \(p_{(1)},\ldots,p_{(n)}\) be the order statistics of the iid sample of \(p\)-values coming from a mixture of two uniform distributions_
\[\pi_{1}U[0,b]+\pi_{0}U[0,1], \tag{10}\]
_where \(0<b<1\). Let \(\hat{k}_{\alpha}\) and \(\hat{\pi}_{1}^{\alpha}\), be the corresponding statistics proposed in (4) and (5), respectively, with \(c_{n}\) such that \(\frac{nc_{n}}{\log\log n}\to\infty\), \(\frac{\log\log(1/c_{n})}{\log\log n}\to C<\infty\). It holds that_
\[\hat{k}_{\alpha}/n\stackrel{{ a.s.}}{{\to}}\pi_{1}+b \pi_{0},\] \[p_{(\hat{k}_{\alpha})}\stackrel{{ a.s.}}{{\to}}b,\] \[\hat{\pi}_{1}^{\alpha}\stackrel{{ a.s.}}{{\to}}\pi_{1}.\]
_For large enough \(n\), with probability one it holds that_
\[\left|p_{(\hat{k}_{\alpha})}-b\right|\leq C\frac{\log\log n}{nc_{n}^{2\alpha-1}}, \tag{11}\]
\[\left|\hat{\pi}_{1}^{\alpha}-\pi_{1}\right|\leq C\frac{\log\log n}{nc_{n}^{2\alpha-1}}. \tag{12}\]
_Thus, \(p_{(\hat{k}_{\alpha})}\) and \(\hat{\pi}_{1}^{\alpha}\) are strongly consistent estimators of the uniform mixture parameters \(b\) and \(\pi_{1}\), respectively._
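The limits in Corollary 1 are easy to check by simulation. The sketch below reuses the illustrative `dos_changepoint()` and `dos_storey()` functions given in Section 2; the parameter values are arbitrary.

```r
# Monte Carlo check of Corollary 1 under the uniform mixture (10).
set.seed(1)
n <- 10000; pi1 <- 0.2; b <- 0.05
false_null <- runif(n) < pi1
p <- ifelse(false_null, runif(n, 0, b), runif(n))
cp <- dos_changepoint(p, alpha = 1)
c(k_over_n = cp$k_hat / n,              # close to pi1 + b * (1 - pi1) = 0.24
  threshold = cp$threshold,             # close to b = 0.05
  pi1_hat   = dos_storey(p, alpha = 1)) # close to pi1 = 0.2
```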
Corollary 2 shows that in general, when the support of the false null distribution is \([0,b]\), \(p_{(\hat{k}_{\alpha})}\) will a.s. not overestimate \(b\), so \(\hat{\pi}_{1}^{\alpha}\) will be a conservative estimator of the proportion.
**Corollary 2**.: _Let \([0,b]\), \(b\leq 1\), be the support of the alternative distribution \(F_{1}\), where \(F_{1}\) is stochastically smaller than the \(U[0,b]\) distribution, in the sense that \(F_{1}(t)\geq t/b\) for all \(0\leq t\leq b\). Then \(p_{(\hat{k}_{\alpha})}\) is an almost surely conservative estimator of the support boundary \(b\)._
Additional remarks and discussions related to Theorem 1 can be found in Section A of the Supplementary material. Additionally, in Section B.3 of the Supplementary material, we perform a numerical examination of the asymptotic quantities defined in Theorem 1 under a specific model.
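The asymptotic quantities appearing in Theorem 1 can also be evaluated numerically for a given \(p\)-value model. A sketch for the Gaussian mean-testing model of Section 4, with \(F^{-1}\) obtained by numerical inversion and purely illustrative parameter values, is:

```r
# Numerically evaluate the limits in Theorem 1 for p_i = 1 - Phi(T_i),
# T_i ~ (1 - pi1) N(0,1) + pi1 N(mu1,1); parameter values are illustrative.
limits_gaussian <- function(pi1, mu1, alpha = 1) {
  F_cdf <- function(x) (1 - pi1) * x +
    pi1 * pnorm(qnorm(1 - x), mean = mu1, lower.tail = FALSE)
  F_inv <- function(u) uniroot(function(x) F_cdf(x) - u, c(0, 1))$root
  h <- function(t) (F_inv(2 * t) - 2 * F_inv(t)) / t^alpha   # the function in (6)
  t_grid <- seq(0.005, 0.495, by = 0.005)
  t_tilde <- t_grid[which.max(sapply(t_grid, h))]            # limit of k_hat / n
  lam <- F_inv(t_tilde)                                      # limit of p_(k_hat)
  c(t_tilde = t_tilde, threshold = lam,
    pi1_limit = (t_tilde - lam) / (1 - lam))                 # limit of the DOS-Storey estimate
}
limits_gaussian(pi1 = 0.1, mu1 = 3)
```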
## 4 Simulations
In this section, we assess the performance of the DOS method by comparing it with various proportion estimators from the literature.
The choice of \(\alpha\) impacts the estimated change-point location. As \(\alpha\) increases, the estimated change-point location tends to occur earlier, leading to a more conservative proportion estimate on average. We conduct the simulations using \(\alpha=1/2\) and \(\alpha=1\).
The simulations in Section B.4 of the Supplementary Material show that excluding the values from the beginning of the sequence \(d(i)\) does not affect the estimates. Therefore, in practice, there is no need to exclude any values when computing the estimates using the DOS method.
Below, we discuss the two simulation settings.
1. In Section 4.1, we consider the Gaussian mean testing problem, \(H_{0}:\mu=0\) against \(H_{1}:\mu>0\). The test statistics \(T_{i}\) follow a \(N(0,1)\) distribution under the null hypotheses and \(N(\mu_{1},1)\) with \(\mu_{1}>0\) under the alternative. One-sided \(p\)-values are calculated from the test statistics as \(p_{i}=1-\Phi(T_{i})\) where \(\Phi\) is the standard Gaussian CDF. We compare the DOS-Storey estimator with various other proportion estimators in terms of their bias, standard deviation (SD), and root mean squared error (RMSE).
2. In Section 4.2, we consider the composite Gaussian mean testing problem, \(H_{0}:\mu\leq 0\) against \(H_{1}:\mu>0\). The distribution under the null is \(N(\mu_{0},1)\), where \(\mu_{0}\leq 0\) and under the alternative \(N(\mu_{1},1)\) for \(\mu_{1}>0\). The \(p\)-values are calculated using the least favorable parameter configuration - when \(\mu_{0}\) is closest to the parameter values under the alternative. This corresponds to \(\mu_{0}=0\), so \(p_{i}=1-\Phi(T_{i})\). This setting produces superuniform \(p\)-values. The uDOS proportion estimator is compared to the method proposed in Hoang and Dickhaus (2020).
In both settings, we consider a fixed proportion of false null hypotheses. That is, for a sample of size \(n\), and a given false null proportion \(\pi_{1}\), the number of false null test statistics is set to \(\lfloor n\pi_{1}\rfloor\). The behavior of the proposed estimators under dependence is considered in Section B.1 of the Supplementary material.
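For reference, the data-generating step of setting (i) can be sketched in a few lines of R; setting (ii) differs only in that the true null statistics are drawn from \(N(\mu_{0},1)\) with \(\mu_{0}<0\), while the \(p\)-values are still computed as \(p_{i}=1-\Phi(T_{i})\). The parameter values below are illustrative.

```r
# Illustrative p-value generation for the Gaussian mean-testing simulations.
simulate_pvalues <- function(n, pi1, mu1, mu0 = 0) {
  n1 <- floor(n * pi1)                        # fixed number of false null hypotheses
  mu <- c(rep(mu1, n1), rep(mu0, n - n1))     # mu0 = 0 gives setting (i); mu0 < 0 gives setting (ii)
  T_stat <- rnorm(n, mean = mu, sd = 1)
  1 - pnorm(T_stat)                           # one-sided p-values
}
set.seed(2)
p <- simulate_pvalues(n = 1000, pi1 = 0.05, mu1 = 3)
```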
### Comparison Under Uniformity
Below, we list and briefly describe the methods used in the simulation study.
1. STS - Storey and Tibshirani's _smoother_ method from Storey and Tibshirani (2003), implemented in the R package qvalue by Storey et al. (2020)
2. MGF - Moment Generating Function method by Broberg (2005), implemented in the R package SAGx by Broberg (2020)
3. LLF - Langaas-Lindqvist-Ferkingstad by Langaas et al. (2005)
4. LSL - Lowest Slope estimator by Benjamini and Hochberg (2000)
5. MR - Meinshausen-Rice by Meinshausen and Rice (2006)
6. JD - Jiang-Doerge by Jiang and Doerge (2008)
7. ST-MED - Storey's estimator (2) with \(\lambda=p_{(n/2)}\), as proposed in Benjamini et al. (2006)
8. ST-1/2 - Storey's estimator (2) with \(\lambda=1/2\)
Among the methods listed above, Storey-based methods include LSL, JD, ST-MED, and ST-1/2. LSL aims to identify the onset of the linear part and results in a conservative estimator. JD uses bootstrap and averages Storey's proportion estimator across several \(\lambda\) values. The statistical literature on adaptive FDR control typically recommends using ST-1/2 (Blanchard and Roquain, 2009; Lei and Fithian, 2016; Ignatiadis and Huber, 2021). Additionally, we have LLF, a density estimation-based method, and MR, a consistent estimator constructed using the empirical processes theory. MGF is a moment generating function-based method that accounts for the behavior under the alternative. STS uses spline smoothing to combine the information from several \(\lambda\) values close to 1. The implementation of STS within the function pi0est from the R package qvalue is not suitable for small sample sizes and typically requires a sample size of \(n\geq 200\).
The data is simulated as described at the beginning of Section 4. Table 1 provides a comparison of various proportion estimators for a sample size \(n=1000\) and different values of \(\mu_{1}\) and \(\pi_{1}\), based on \(N=1000\) repetitions. The results show that, in terms of the RMSE, DOS-Storey with \(\alpha=1\) performs better in sparse cases, whereas \(\alpha=1/2\) is better suited when there is a higher proportion of false nulls. The DOS-Storey method has a lower variance for stronger signals than the other methods (for instance, \(\mu_{1}=3,\pi_{1}=0.1\) in Table 1). However, for smaller nonzero means (\(\mu_{1}=2,\pi_{1}=0.1\)), the objective function is smoother, increasing variability in the estimated change-point location and the proportion estimates. In general, Storey-based estimators outperform the consistent MR estimator, which significantly underestimates the false null proportion.
Simulation results for small sample sizes \(n=50\) and \(n=100\) are presented in Table 2. The STS method is excluded as it requires a larger sample size to compute the estimates. The results indicate that the two DOS-Storey estimators exhibit some of the smallest RMSE values among the considered estimators, for both values of \(\alpha\). As before, the DOS-1 estimator performs better in sparser cases, and DOS-1/2 performs better in dense cases.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} & DOS1 & DOS05 & ST-1/2 & ST-MED & JD & LLF & LSL & MGF & MR & STS \\ \hline \multicolumn{1}{c}{} & \multicolumn{8}{c}{\(\mu_{1}=3.5,\pi_{1}=0.01,n_{1}=10\)} \\ \hline BIAS & **-0.4** & 9.6 & 8.1 & 8.1 & 6.8 & 16.9 & -3.7 & 3.2 & -4.5 & 29.7 \\ SD & 3.9 & 15.6 & 22.3 & 21.5 & 21.2 & 22.0 & **2.4** & 14.2 & 7.0 & 54.0 \\ RMSE & **3.9** & 18.3 & 23.8 & 22.9 & 22.3 & 27.8 & 4.4 & 14.5 & 8.3 & 61.6 \\ \hline \multicolumn{1}{c}{} & \multicolumn{8}{c}{\(\mu_{1}=3.5,\pi_{1}=0.03,n_{1}=30\)} \\ \hline BIAS & -3.0 & 6.3 & 1.8 & 1.5 & **0.4** & 15.3 & -7.3 & **-0.4** & -8.6 & 23.1 \\ SD & 5.5 & 13.7 & 26.9 & 25.3 & 26.1 & 21.1 & **3.5** & 17.3 & 5.6 & 62.2 \\ RMSE & **6.3** & 15.1 & 27.0 & 25.3 & 26.1 & 26.0 & 8.1 & 17.3 & 10.2 & 66.3 \\ \hline \multicolumn{1}{c}{} & \multicolumn{8}{c}{\(\mu_{1}=3.0,\pi_{1}=0.05,n_{1}=50\)} \\ \hline BIAS & -8.9 & 4.3 & **0.3** & 0.9 & -0.9 & 16.4 & -17.5 & -1.2 & -16.6 & 18.6 \\ SD & 8.4 & 16.9 & 29.2 & 27.6 & 29.5 & 23.5 & **5.4** & 18.0 & 7.2 & 67.6 \\ RMSE & **12.2** & 17.4 & 29.2 & 27.6 & 29.5 & 28.6 & 18.3 & 18.1 & 18.1 & 70.2 \\ \hline \multicolumn{1}{c}{} & \multicolumn{8}{c}{\(\mu_{1}=2.0,\pi_{1}=0.1,n_{1}=100\)} \\ \hline BIAS & -37.1 & -4.6 & **-3.6** & -5.0 & -7.8 & 12.2 & -67.8 & -14.4 & -44.9 & 8.9 \\ SD & 19.3 & 24.6 & 31.5 & 27.8 & 32.1 & 31.7 & **9.1** & 18.5 & 14.0 & 77.7 \\ RMSE & 41.8 & 25.0 & 31.7 & 28.3 & 33.0 & 33.9 & 68.4 & **23.4** & 47.0 & 78.2 \\ \hline \multicolumn{1}{c}{} & \multicolumn{8}{c}{\(\mu_{1}=3.0,\pi_{1}=0.1,n_{1}=100\)} \\ \hline BIAS & -13.7 & **0.5** & -0.6 & **-0.5** & -6.0 & 14.6 & -28.0 & -2.6 & -23.4 & -1.1 \\ SD & 10.2 & 16.4 & 29.3 & 26.3 & 30.7 & 23.4 & **7.7** & 17.5 & 8.2 & 72.9 \\ RMSE & 17.1 & **16.4** & 29.3 & 26.4 & 31.2 & 27.6 & 29.1 & 17.6 & 24.8 & 72.9 \\ \hline \multicolumn{1}{c}{} & \multicolumn{8}{c}{\(\mu_{1}=2.0,\pi_{1}=0.2,n_{1}=200\)} \\ \hline BIAS & -48.2 & -17.1 & -9.5 & -14.9 & -13.8 & 7.4 & -117.1 & -29.7 & -63.7 & **-0.3** \\ SD & 26.1 & 23.2 & 28.3 & 22.2 & 30.0 & 32.0 & **15.8** & 16.8 & 17.4 & 77.3 \\ RMSE & 54.8 & 28.8 & 29.8 & **26.7** & 33.0 & 32.8 & 118.1 & 34.1 & 66.0 & 77.3 \\ \hline \multicolumn{1}{c}{} & \multicolumn{8}{c}{\(\mu_{1}=3.0,\pi_{1}=0.2,n_{1}=200\)} \\ \hline BIAS & -20.4 & -3.4 & 1.0 & **-0.5** & -3.7 & 15.2 & -44.0 & -3.9 & -31.7 & 1.3 \\ SD & 12.7 & 16.7 & 28.4 & 21.5 & 31.9 & 24.8 & 10.6 & 16.3 & **10.0** & 80.2 \\ RMSE & 24.0 & **17.0** & 28.4 & 21.5 & 32.1 & 29.1 & 45.2 & 16.8 & 33.3 & 80.2 \\ \hline \multicolumn{1}{c}{} & \multicolumn{8}{c}{\(\mu_{1}=3.0,\pi_{1}=0.3,n_{1}=300\)} \\ \hline BIAS & -24.2 & -6.4 & -1.6 & -3.4 & -5.1 & 12.8 & -53.8 & -7.9 & -37.6 & **-1.0** \\ SD & 13.7 & 14.8 & 25.6 & 16.1 & 30.2 & 23.0 & 14.0 & 14.9 & **9.9** & 74.6 \\ RMSE & 27.8 & **16.1** & 25.7 & 16.4 & 30.6 & 26.3 & 55.6 & 16.8 & 38.9 & 74.6 \\ \end{tabular}
\end{table}
Table 1: Bias, standard deviation, and the RMSE of the estimated number of the false null hypotheses (\(n\times\hat{\pi}_{1}\)), given the proportion of false null \(p\)-values \(\pi_{1}\), and the non-zero mean \(\mu_{1}\), for a sample of size \(n=1000\), based on 1000 repetitions. Bold and underlined values correspond to the smallest values in each row.
### Superuniform \(p\)-values
In this section, we explore superuniform \(p\)-values generated from the composite null model introduced in Section 4. As the uniformity assumption is violated, Storey's estimator is unsuitable. Modifications suggested in Dickhaus (2013) and Hoang and Dickhaus (2020) involve randomizing \(p\)-values to enforce uniformity and applying the ST-1/2 method to the randomized \(p\)-values for proportion estimation.
Under the superuniformity assumption, we propose estimating the false null proportion
\begin{table}
\begin{tabular}{c c c c c c c c c c} & DOS1 & DOS05 & ST-1/2 & ST-MED & JD & LLF & LSL & MGF & MR \\ \hline \multicolumn{8}{c}{\(\mu_{1}=3,\pi_{1}=0.1,n=50\)} \\ \hline BIAS & 0.9 & 2.5 & 0.9 & 1.0 & 0.9 & 3.9 & -2.0 & **0.1** & -3.2 \\ SD & 2.6 & 3.0 & 5.5 & 4.4 & 5.1 & 5.2 & 1.8 & 3.5 & **1.5** \\ RMSE & 2.8 & 3.9 & 5.5 & 4.5 & 5.2 & 6.5 & **2.7** & 3.5 & 3.5 \\ \hline \multicolumn{8}{c}{\(\mu_{1}=2,\pi_{1}=0.2,n=50\)} \\ \hline BIAS & -0.9 & 0.9 & 0.2 & **-0.1** & **0.1** & 3.9 & -4.8 & -1.1 & -6.4 \\ SD & 3.9 & 3.4 & 6.2 & 4.6 & 5.7 & 6.9 & 2.7 & 3.8 & **2.2** \\ RMSE & 4.0 & **3.5** & 6.2 & 4.6 & 5.7 & 8.0 & 5.5 & 4.0 & 6.8 \\ \hline \multicolumn{8}{c}{\(\mu_{1}=2,\pi_{1}=0.4,n=50\)} \\ \hline BIAS & -3.0 & -2.3 & -0.9 & -2.4 & **-1.5** & 2.7 & -6.4 & -2.9 & -9.4 \\ SD & 2.8 & **2.4** & 5.5 & 2.8 & 5.6 & 6.5 & 3.8 & 3.5 & 2.6 \\ RMSE & 4.1 & **3.3** & 5.6 & 3.7 & 5.8 & 7.1 & 7.4 & 4.5 & 9.8 \\ \hline \multicolumn{8}{c}{\(\mu_{1}=3,\pi_{1}=0.1,n=100\)} \\ \hline BIAS & 0.7 & 3.8 & 1.7 & 1.9 & 1.8 & 5.5 & -2.2 & **0.5** & -3.4 \\ SD & 3.3 & 4.8 & 7.2 & 6.3 & 6.8 & 7.4 & 2.0 & 4.7 & **1.9** \\ RMSE & 3.3 & 6.1 & 7.4 & 6.6 & 7.0 & 9.2 & **3.0** & 4.7 & 3.9 \\ \hline \multicolumn{8}{c}{\(\mu_{1}=3,\pi_{1}=0.1,n=100\)} \\ \hline BIAS & 0.5 & 3.5 & 1.0 & 0.7 & 0.9 & 5.5 & -2.8 & **0.1** & -4.4 \\ SD & 3.9 & 4.9 & 8.6 & 7.2 & 7.9 & 7.5 & 2.3 & 5.5 & **2.1** \\ RMSE & 3.9 & 6.0 & 8.7 & 7.2 & 8.0 & 9.3 & **3.7** & 5.5 & 4.8 \\ \hline \multicolumn{8}{c}{\(\mu_{1}=2,\pi_{1}=0.2,n=100\)} \\ \hline BIAS & -2.7 & **0.3** & -1.0 & -1.4 & -1.5 & 4.0 & -9.5 & -3.0 & -11.0 \\ SD & 5.9 & 5.2 & 8.8 & 6.9 & 8.5 & 8.9 & 4.0 & 5.3 & **3.5** \\ RMSE & 6.5 & **5.2** & 8.8 & 7.0 & 8.7 & 9.7 & 10.3 & 6.1 & 11.5 \\ \hline \multicolumn{8}{c}{\(\mu_{1}=2,\pi_{1}=0.4,n=100\)} \\ \hline BIAS & -6.6 & -5.2 & -2.2 & -5.3 & **-2.9** & 3.0 & -14.2 & -6.0 & -16.3 \\ SD & 4.3 & **3.6** & 7.8 & 3.8 & 8.1 & 9.3 & 5.4 & 4.8 & 4.1 \\ RMSE & 7.9 & **6.3** & 8.1 & 6.5 & 8.6 & 9.7 & 15.2 & 7.7 & 16.8 \\ \end{tabular}
\end{table}
Table 2: Bias, standard deviation, and the RMSE of the estimated number of the false null hypotheses (\(n\times\hat{\pi}_{1}\)), given the total number of hypotheses \(n\), the proportion of false null hypotheses \(\pi_{1}\), and the non-zero mean \(\mu_{1}\), based on 1000 repetitions. Bold and underlined values correspond to the smallest values in each row.
directly from the change-point using the uDOS estimator defined as:
\[\hat{\pi}^{\alpha}_{\rm 1,uDOS}=\hat{k}_{\alpha}/n. \tag{13}\]
In this approach, the DOS threshold acts as a separation threshold for classifying the \(p\)-values into two groups. The estimated proportion is the proportion of values in the "small \(p\)-values group".
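In code, uDOS is a one-line wrapper around the change-point estimate; the sketch below reuses the illustrative `dos_changepoint()` from Section 2.

```r
# Illustrative uDOS estimator (13): the estimated change-point location itself.
udos <- function(p, alpha = 1, c_n = 0) {
  dos_changepoint(p, alpha, c_n)$k_hat / length(p)
}
```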
We compare the performance of the uDOS estimator with the proportion estimator by Hoang and Dickhaus (2020) (HD). The data is simulated as described at the beginning of Section 4. We adopt the mean parameter values used in Hoang and Dickhaus (2020) where the mean under the null is \(\mu_{0}=-0.2r\) and under the alternative \(\mu_{1}=1+0.25r\), for \(r\in\{1,...,10\}\). The simulations are conducted for \(n=100\) and based on \(N=10000\) repetitions.
In Figure 4, we compare the performance of the uDOS proportion estimator (13) with the estimator proposed by Hoang and Dickhaus (2020) (HD) and Storey's estimator with \(\lambda=0.5\) applied to the non-randomized \(p\)-value sequence. In terms of the RMSE, the uDOS estimator with \(\alpha=1\) outperforms the HD method overall, although it leads to negatively biased estimates.
Figure 4: Mean, standard deviation, and the MSE of different proportion estimators when applied to superuniform true null \(p\)-values, generated as described in Section 4.2. The \(x\)-axis represents \(r\), indicating the distance between the true and false null means. Top: \(\pi_{1}=0.05\). Bottom: \(\pi_{1}=0.25\).
## 5 Real Data Example
Copy Number Variations (CNVs) are genetic alterations characterized by changes in the number of copies of specific DNA segments within an individual's genome, including duplications or deletions of these segments. These variations play a crucial role in cancer development and progression, making their detection important for understanding the genetic causes of the disease. CNV data is typically obtained through aCGH (array comparative genomic hybridization), resulting in _log ratio data_. In this data, no variation corresponds to a value of 0, while deletions are represented as decreases in value, and duplications as increases in value.
We analyze the dataset from the R package neuroblastoma, which includes annotated log ratio data for 575 patients with neuroblastoma. This data is initially considered in Hocking et al. (2013). The dataset covers six chromosomes: 1, 2, 3, 4, 11, and 17. Within each chromosome, an interval of interest is examined. An expert annotates each profile as either "breakpoint" or "normal" to indicate the suspected presence of changes within that interval. CNV data for the first chromosome for the first ten patients is shown in Figure 5. The probe locations are not aligned across patients, and the \(x\)-axis shows the index of the probe for which we have available data. Deletions can be seen in profiles 2, 3, 6, and 8. We refer to these jumps in the piecewise constant mean of the profiles as breakpoints, not to be confused with the notion of change-points in the quantile function of the \(p\)-values used in the paper so far.
We explore the application of our method in the context of breakpoint inference. Our objective is to estimate the prevalence of copy number alterations (breakpoints) among these patients, for which we propose to use the DOS-Storey method. Copy number log ratio data is usually assumed to be independent and Gaussian, with a piecewise constant underlying mean (Zhang et al., 2010; Jeng et al., 2012). However, the available data contains some outliers (see profile 8 in Figure 5), so before the analysis we trimmed the data by excluding the data points below the 2.5th or above the 97.5th percentile. For each profile and each chromosome, a \(p\)-value for testing whether the profile contains a breakpoint is obtained using the method of Jewell et al. (2022). As a result we get a \(p\)-value for each patient in each of the six genomic regions. All \(p\)-values are shown in Figure 6. Note that the black bars correspond to the profiles that are annotated as "breakpoint" by the expert; however, the ground truth of whether a breakpoint is present is unknown. The critical component of this analysis is the computation of \(p\)-values using the method proposed in Jewell et al. (2022). The estimation and inference on the breakpoints is performed using the package ChangepointInference (Jewell, 2023), which enables estimation of the \(p\)-values using a post-selection inference approach. A single breakpoint is estimated in each profile using the CUSUM statistic. The fixed window parameter for testing the estimated breakpoint is set to \(h=5\).

Figure 5: Log ratio data for chromosome 1, for the first ten patients.
Using the DOS method with \(\alpha=1\), for each chromosome we estimate the number of profiles with breakpoints and compare it to the number of profiles annotated as having a breakpoint. We compare the estimated values to those obtained by Storey's method with \(\lambda=0.5\) (ST-1/2). The results showing the estimated number of affected profiles for each chromosome are presented in Table 3. The values in the rightmost column (ANNOT) are the reported numbers of profiles annotated as having a breakpoint. Although the ground truth is unknown, we observe that the DOS method produces estimates that are closer to the annotated values.
Furthermore, this example motivates using multiple testing methods for estimating the subset of profiles with breakpoints in high-dimensional breakpoint problems. There is limited literature on this topic (Chen et al., 2022; Jirak, 2015; Chen et al., 2023). The existing literature typically assumes that the jumps are large enough to support theoretical statements and enable accurate estimation of the subset of affected coordinates. However, this assumption may not hold for smaller jumps, when coordinates with breakpoints might be indistinguishable from those without breakpoints. By estimating the proportion of affected profiles, our method assesses the sparsity of the high-dimensional breakpoint problem. Moreover, if selecting the subset of affected coordinates is of interest, this can be achieved using the adaptive Benjamini-Hochberg procedure, which controls the false discovery rate.
## 6 Discussion
Figure 6: \(p\)-value plots for the neuroblastoma data. The black bars denote the \(p\)-values corresponding to the profiles that are annotated as “breakpoint” (with change-point in the copy number data), while the gray background corresponds to the profiles annotated as “normal” (no change in the copy number values).

Aside from precise proportion estimation, the DOS method can be used for adaptive FDR control. Adaptive FDR control is of interest in the modern multiple testing literature, where methods aim to incorporate prior knowledge of the \(p\)-values or additional assumptions about their structure. These methods often build on the classical BH procedure, and adaptiveness can be introduced by incorporating a proportion estimator. Some additional simulation results on adaptive FDR control can be found in Section B.2 of the Supplementary material. The DOS method outperforms the most commonly used proportion estimators by Benjamini and Hochberg (2000) and Storey and Tibshirani (2003), and it works well for adaptive FDR control under dependence.
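For concreteness, the adaptive step can be sketched as running the BH procedure at level \(q/\hat{\pi}_{0}\), with \(\hat{\pi}_{0}=1-\hat{\pi}_{1}\) taken from the DOS-Storey estimate. The code below reuses the illustrative `dos_storey()` from Section 2 and is a sketch rather than the implementation used in our experiments.

```r
# Sketch of adaptive BH with a plugged-in DOS-Storey proportion estimate.
adaptive_bh <- function(p, q = 0.1, alpha = 1) {
  pi0_hat <- max(1 - dos_storey(p, alpha), 1 / length(p))  # guard against pi0_hat <= 0
  q_adj <- min(1, q / pi0_hat)                             # adaptive BH level
  which(p.adjust(p, method = "BH") <= q_adj)               # indices of rejected hypotheses
}
```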
The idea behind estimating a single change-point is to estimate the sparsity of the problem and divide the \(p\)-values into two categories, mostly false null, and mostly true null. For future work, it may be of interest to consider different piecewise constant approximations to the \(p\)-value spacing sequence that allow multiple change-points and, in that way, categorize \(p\)-values into multiple groups based on the decreasing frequency of the false null \(p\)-values. Grouping \(p\)-values based on their significance would provide more options for estimating the proportion at different change-points.
## Appendix A Proofs
Before presenting the main theorems we state two lemmas that connect the quantile process of the \(p\)-value distribution to the uniform quantile process. This will allow us to later use some existing results on the almost sure behaviour of the weighted uniform quantile process in the proof of Theorem 1.
### Notation
We denote with \(\hat{E}_{n}\) the empirical CDF of a sample of size \(n\) from \(U[0,1]\) distribution. The corresponding empirical and quantile processes are respectively:
\[\alpha_{n}(y) =\sqrt{n}(\hat{E}_{n}(y)-y),\] \[u_{n}(y) =\sqrt{n}(\hat{E}_{n}^{-1}(y)-y).\]
For a sample of random variables \(X_{1},\ldots,X_{n}\) with CDF \(F\), we denote its empirical CDF as \(\hat{F}_{n}\), and the corresponding quantile process as
\[q_{n}(y)=\sqrt{n}(\hat{F}_{n}^{-1}(y)-F^{-1}(y)).\]
### Some Useful Lemmas
**Lemma 1**.: _Let \(X_{1},\ldots,X_{n}\) be the sample from a distribution with CDF given as_
\[F(x)=\pi_{1}F_{1}(x)+\pi_{0}x,\]
_where \(F_{1}\) is a continuous weakly concave function. It holds that_
\[q_{n}(y)\leq\frac{1}{\pi_{0}}u_{n}(y),\quad y\in(0,1). \tag{14}\]
Proof.: Let \(X_{k;n}\) be the \(k\)th order statistic of the sample and let \((k-1)/n<y\leq k/n\). It
holds that
\[q_{n}(y) =\sqrt{n}(\hat{F}_{n}^{-1}(y)-F^{-1}(y))\] \[=\sqrt{n}(X_{k;n}-F^{-1}(y))\] \[=\sqrt{n}(F^{-1}(F(X_{k;n}))-F^{-1}(y))\] \[=\sqrt{n}(F^{-1}(U_{k;n})-F^{-1}(y))\] \[=\sqrt{n}\frac{F^{-1}(U_{k;n})-F^{-1}(y)}{U_{k;n}-y}(U_{k;n}-y)\] \[=\frac{F^{-1}(U_{k;n})-F^{-1}(y)}{U_{k;n}-y}\sqrt{n}(\hat{E}_{n}^ {-1}(y)-y)\] \[\leq\sup_{x,y}\frac{F^{-1}(y)-F^{-1}(x)}{y-x}u_{n}(y)\] \[\leq\frac{1}{\pi_{0}}u_{n}(y).\]
The last inequality follows from the concavity assumption as follows. As \(F_{1}\) is a concave function on \([0,1]\), and \(F\) is a linear combination of \(F_{1}\) and a linear function, \(F\) is also a concave function on \([0,1]\). It holds that the inverse of a continuous, concave and increasing function on an interval is convex on the same interval, so it follows that \(F^{-1}\) is a convex function. Since \(F(x)\leq\pi_{1}+\pi_{0}x\) for \(x\in[0,1]\) it holds that for any \(0\leq x<y\leq 1\)
\[\frac{F^{-1}(y)-F^{-1}(x)}{y-x} \leq\frac{1-F^{-1}(x)}{1-x}\] \[\leq\frac{1}{\pi_{0}}.\]
**Lemma 2**.: _Under the assumptions of Lemma 1, it holds that_
\[P\left(\sup_{0<y<1}|q_{n}(y)|\geq x\right)\leq 2\exp\bigl(-2x^{2}/C^{2}\bigr),\quad\text{where }C=1/\pi_{0}.\]
Proof.: To prove this we use the result from Lemma 1 and the relationship between the uniform empirical and the uniform quantile process. By the change of variable argument we have \(\sup_{0<y<1}|u_{n}(y)|=\sup_{0<y<1}|\alpha_{n}(y)|\) (see Remark 1.4.1 in Csorgo (1983)). The result now follows from the Dvoretzky-Kiefer-Wolfowitz inequality, using the tight bound from Massart (1990):
\[P\left(\sup_{0<y<1}|q_{n}(y)|\geq x\right) \leq P\left(\sup_{0<y<1}|u_{n}(y)|C\geq x\right)\] \[=P\left(\sup_{0<y<1}|\alpha_{n}(y)|\geq x/C\right)\] \[\leq 2\exp\bigl{(}-2x^{2}/C^{2}\bigr{)}.\]
### Proof of Theorem 1
Proof.: Let
\[h_{n}(t):=\frac{F_{n}^{-1}(2t)-2F_{n}^{-1}(t)}{t^{\alpha}} \tag{15}\]
The empirical function \(h_{n}(t)\) approximates the ideal function \(h(t)\) defined in (6),
\[h(t):=\frac{F^{-1}(2t)-2F^{-1}(t)}{t^{\alpha}}\]
For notational simplicity, we suppress the dependence of \(h(t)\) and \(h_{n}(t)\) on \(\alpha\) and \(F\). For the DOS sequence it holds that \(d_{\alpha}(i)=n^{-\alpha}h_{n}(i/n)\), so the DOS sequence and \(h_{n}(i/n)\) share the same maximizer. The function \(h\) is positive on its domain because of the convexity assumption on \(F^{-1}\). We start with the following sequence of inequalities, aiming to upper bound the rate of the difference \(\left|h_{n}(t)-h(t)\right|\), uniformly for \(t\in(c_{n},1)\), using strong limit theorems for weighted uniform quantile processes.
\[\left|h_{n}(t)-h(t)\right| =\left|\frac{F_{n}^{-1}(2t)-2F_{n}^{-1}(t)}{t^{\alpha}}-\frac{F^ {-1}(2t)-2F^{-1}(t)}{t^{\alpha}}\right| \tag{16}\] \[\leq 2^{\alpha}\left|\frac{F_{n}^{-1}(2t)-F^{-1}(2t)}{(2t)^{ \alpha}}\right|+2\left|\frac{F_{n}^{-1}(t)-F^{-1}(t)}{t^{\alpha}}\right|\] (17) \[\leq\frac{2^{\alpha}}{\sqrt{n}}\frac{\left|q_{n}(2t)\right|}{(2t) ^{\alpha}}+\frac{2}{\sqrt{n}}\frac{\left|q_{n}(t)\right|}{t^{\alpha}}\] (18) \[\leq\frac{2^{\alpha}+2}{\sqrt{n}}\sup_{t\in(c_{n},1)}\frac{\left| q_{n}(t)\right|}{t^{\alpha}}\] (19) \[\leq\frac{2^{\alpha}+2}{\sqrt{n}}\frac{1}{\pi_{0}}\sup_{t\in(c_{ n},1)}\frac{\left|u_{n}(t)\right|}{t^{\alpha}} \tag{20}\]
In the last inequality we used Lemma 1. To bound the weighted uniform quantile process we use Theorem 2 case (III) from Einmahl and Mason (1988), setting \(\nu=0,a_{n}=\log(n)/n\) using the notation therein, that describes its almost sure behaviour
\[\limsup_{n\to\infty}\sup_{c_{n}\leq t\leq 1}\frac{c_{n}^{\alpha-1/2}|u_{n}(t)| }{t^{\alpha}\sqrt{\log\log n}}\stackrel{{ a.s.}}{{=}}\sqrt{2}. \tag{21}\]
meaning that, for any \(\varepsilon>0\) and large enough \(n\), on a set of probability \(1\), it holds that
\[\frac{1}{\sqrt{n}}\sup_{c_{n}\leq t\leq 1}\frac{|u_{n}(t)|}{t^{\alpha}}\leq \frac{\sqrt{\log\log n}}{\sqrt{n}c_{n}^{\alpha-1/2}}(\sqrt{2}+\varepsilon) \tag{22}\]
Finally, (22) and (20) give a uniform upper bound for \(\left|h_{n}(t)-h(t)\right|\) on \(t\in(c_{n},1]\)
\[\sup_{t\in(c_{n},1]}\left|h_{n}(t)-h(t)\right|\leq C\frac{\sqrt{\log\log n}}{ \sqrt{n}c_{n}^{\alpha-1/2}}, \tag{23}\]
where \(C\) is a constant that for large \(n\) approaches \(\sqrt{2}\). Denote
\[\hat{t}_{n}^{d}:=\frac{1}{n}\operatorname*{argmax}_{1\leq i\leq n}d(i),\qquad\hat{t}_{n}:=\operatorname*{argsup}_{t\in[0,1]}h_{n}(t),\]
\[\widetilde{\widetilde{t}}_{n}^{\,d}:=\frac{1}{n}\operatorname*{argmax}_{1\leq i\leq n}h(i/n),\qquad\widetilde{\widetilde{t}}:=\operatorname*{argmax}_{t\in[0,1]}h(t).\]
Since \(\widetilde{t}^{d}\) is the argmax of a continuous function on an increasingly dense grid, it holds that \(|\widetilde{\widetilde{t}}^{d}_{n}-\widetilde{\widetilde{t}}|\leq 1/n\). Since \(h\) is bounded and continuous on \([0,1]\), it follows that for some \(C^{\prime}>0\)\(|h(\widetilde{\widetilde{t}}^{d}_{n})-h(\widetilde{t})|\leq C^{\prime}/n\). For \(n\) large enough, \(\widetilde{\widetilde{t}}>c_{n}\), the following sequence of inequalities holds with probability 1.
\[h(\widetilde{\widetilde{t}}) \leq h(\widetilde{\widetilde{t}}^{d}_{n})+C^{\prime}/n\] \[\leq|h_{n}(\widetilde{\widetilde{t}}^{d}_{n})|+C\frac{\sqrt{\log \log n}}{\sqrt{n}c_{n}^{\alpha-1/2}}+C^{\prime}/n\] \[\leq|h_{n}(\hat{t}^{d})|+C_{1}\frac{\sqrt{\log\log n}}{\sqrt{n}c _{n}^{\alpha-1/2}}\] \[\leq h(\hat{t}^{d})+2C_{1}\frac{\sqrt{\log\log n}}{\sqrt{n}c_{n} ^{\alpha-1/2}},\]
where \(C_{1}\) gets arbitrarily close to \(\sqrt{2}\). It implies
\[h(\widetilde{\widetilde{t}})-h(\hat{t}^{d})\leq 2C_{1}\frac{\sqrt{\log\log n}}{ \sqrt{n}c_{n}^{\alpha-1/2}} \tag{24}\]
We prove the consistency of \(\hat{t}^{d}\) by contradiction. However, the rate of convergence depends on the differentiability of \(h\), and we separate three different cases.
**Case 1:**: \(h\) has a second derivative at \(\widetilde{\widetilde{t}}\), and \(h^{\prime\prime}(\widetilde{\widetilde{t}})\neq 0\).
**Case 2:**: \(h\) has a second derivative at \(\widetilde{\widetilde{t}}\), and \(h^{\prime\prime}(\widetilde{\widetilde{t}})=0\).
**Case 3:**: \(h\) does not have a second derivative at \(\widetilde{\widetilde{t}}\).
We start with Case 1, and note that a sufficient condition for \(h\) to be twice differentiable is that \(F\) is twice differentiable on \((0,1)\). Assuming that \(|\hat{t}^{d}-\widetilde{\widetilde{t}}|>\frac{\sqrt{\log\log n}}{\sqrt{n}c_{n }^{\alpha-1/2}}\), it holds that
\[\left|h(\widetilde{\widetilde{t}})-h(\hat{t}^{d})\right| =(\hat{t}^{d}-\widetilde{\widetilde{t}})^{2}|h^{\prime\prime}( \widetilde{\widetilde{t}})|+o((\hat{t}^{d}-\widetilde{\widetilde{t}})^{2})\] \[\geq C_{2}\frac{\log\log n}{\sqrt{n}c_{n}^{\alpha-1/2}}.\]
For large \(n\), the last inequality is in contradiction with (24), so it must hold that
\[\left|\hat{t}^{d}-\widetilde{\widetilde{t}}\right|\leq\frac{\sqrt{\log\log n }}{\sqrt{n}c_{n}^{\alpha-1/2}}, \tag{25}\]
which proves the consistency in (7). For Case 2, if \(h^{\prime\prime}(\widetilde{\widetilde{t}})=0\), the consistency still holds, since not all derivatives can be zero, but the rate of convergence is slower accordingly. In Case 3, when \(h\) is not differentiable at \(\widetilde{\widetilde{t}}\), such that left and right derivatives at \(\widetilde{\widetilde{t}}\) are not equal, but lower bounded by a constant larger than zero in an interval around \(\widetilde{\widetilde{t}}\), we can get a better convergence rate:
\[\left|\hat{t}^{d}-\widetilde{\widetilde{t}}\right|\leq C\frac{\log\log n}{nc_{ n}^{2\alpha-1}}.\]
This is the case, for example, when \(F\) is a mixture of uniform distributions (see Corollary 1). In other cases, (25) holds similarly. We proceed under Case 1, assuming that \(h^{\prime\prime}(\widetilde{\widetilde{t}})\neq 0\), while the results for the other cases can be obtained similarly. The following sequence of inequalities holds almost surely and proves the almost sure convergence in (8):
\[\begin{split}\left|p_{(\hat{k}_{\alpha})}-F^{-1}(\widetilde{\widetilde{t}})\right|&=\left|\hat{F}_{n}^{-1}(\hat{t}^{d})-F^{-1}(\widetilde{\widetilde{t}})\right|\\ &\leq\left|\hat{F}_{n}^{-1}(\hat{t}^{d})-F^{-1}(\hat{t}^{d})\right|+\left|F^{-1}(\hat{t}^{d})-F^{-1}(\widetilde{\widetilde{t}})\right|\\ &\leq C_{3}\sqrt{\frac{\log\log n}{n}}+C_{4}\sqrt{\frac{\log\log n}{nc_{n}^{2\alpha-1}}}\\ &\leq C_{5}\sqrt{\frac{\log\log n}{nc_{n}^{2\alpha-1}}}.\end{split} \tag{26}\]
For the first term in (26) we use Lemma 1 and then the Chung-Smirnov law of iterated logarithm, to get the inequality which holds almost surely. For the second term we use the fact that \(F^{-1}\) is Lipschitz continuous, and the obtained rate of convergence in (25). The convergence in (9) follows similarly:
\[\begin{split}\left|\hat{\pi}_{1}^{DOS}-\frac{\widetilde{\widetilde{t}}-F^{-1}(\widetilde{\widetilde{t}})}{1-F^{-1}(\widetilde{\widetilde{t}})}\right|&=\left|\frac{\hat{t}^{d}-F^{-1}(\hat{t}^{d})}{1-F^{-1}(\hat{t}^{d})}-\frac{\widetilde{\widetilde{t}}-F^{-1}(\widetilde{\widetilde{t}})}{1-F^{-1}(\widetilde{\widetilde{t}})}\right|\\ &\leq\left|\frac{(1-F^{-1}(\hat{t}^{d}))(\hat{t}^{d}-\widetilde{\widetilde{t}})+\hat{t}^{d}(F^{-1}(\hat{t}^{d})-F^{-1}(\widetilde{\widetilde{t}}))}{(1-F^{-1}(\widetilde{\widetilde{t}}))(1-F^{-1}(\hat{t}^{d}))}\right|\\ &\leq C_{6}\sqrt{\frac{\log\log n}{n^{\frac{1-\theta}{2}}}}.\end{split}\]
**Remark 1**.: The conditions on \(c_{n}\) in Theorem 1 come from the theorem in Einmahl and Mason (1988). Instead of \(c_{n}\to 0\), we can trivially take \(c_{n}=\varepsilon\in(0,1)\), in which case we can simply use the Chung-Smirnov law stated in the proof of Theorem 1, to get the rate of convergence of \(\frac{\sqrt{\log\log n}}{\sqrt{n}}\). From the work of Einmahl and Mason (1988) we see that the choice of \(c_{n}\) is very important in the theory of uniform quantile processes. As the false-null distribution \(F_{1}\) is unknown, using the result from Lemma 2 we bound the quantile process of distribution \(F\) by a uniform quantile process. This approximation is convenient as most of the results in the theory of quantile processes are given only for the uniform quantile process. However, the behaviour of the weighted uniform quantile process around \(0\) will be more variable than that of weighted quantile process of a distribution \(F\). \(F\) is more concentrated around zero and the sample quantiles will be closer to the true quantiles than in the case of uniform distribution, which reduces the boundary problem.
## References
* Benjamini and Hochberg (1995) Benjamini, Y. and Hochberg, Y. (1995). Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. _Journal of the Royal Statistical Society: Series B_, 57, 289-300.
* Benjamini and Hochberg (2000) Benjamini, Y. and Hochberg, Y. (2000). On the Adaptive Control of The False Discovery Rate in Multiple Testing With Independent Statistics. _Journal of Educational and Behavioral Statistics_, 25, 60-83.
* Benjamini et al. (2006) Benjamini, Y., Krieger, A. M., and Yekutieli, D. (2006). Adaptive Linear Step-Up Procedures That Control the False Discovery Rate. _Biometrika_, 93, 491-507.
* Blanchard et al. (2010) Blanchard, G., Lee, G., and Scott, C. (2010). Semi-Supervised Novelty Detection. _Journal of Machine Learning Research_, 11(99), 2973-3009.
* Blanchard and Roquain (2009) Blanchard, G. and Roquain, E. (2009). Adaptive False Discovery Rate Control Under Independence and Dependence. _Journal of Machine Learning Research_, 10, 2837-2871.
* Broberg (2005) Broberg, P. (2005). A Comparative Review of Estimates of the Proportion Unchanged Genes and the False Discovery Rate. _BMC Bioinformatics_, 6, 199.
* Broberg (2020) Broberg, P. (2020). _SAGx: Statistical Analysis of the GeneChip_. R package version 1.64.0.
* Brodsky and Darkhovsky (1993) Brodsky, B. E. and Darkhovsky, B. S. (1993). _Nonparametric Methods in Change-Point Problems_. Springer Dordrecht.
* Cai et al. (2007) Cai, T. T., Jin, J., and Low, M. G. (2007). Estimation and Confidence Sets for Sparse Normal Mixtures. _The Annals of Statistics_, 35, 2421-2449.
* Celisse and Robin (2010) Celisse, A. and Robin, S. (2010). A Cross-Validation Based Estimation of the Proportion of True Null Hypotheses. _Journal of Statistical Planning and Inference_, 140, 3132-3147.
* Chen et al. (2022) Chen, L., Wang, W., and Wu, W. B. (2022). Inference of Breakpoints in High-dimensional Time Series. _Journal of the American Statistical Association_, 117, 1951-1963.
* Chen et al. (2023) Chen, Y., Wang, T., and Samworth, R. J. (2023). Inference in High-Dimensional Online Changepoint Detection. _Journal of the American Statistical Association_, pages 1-12.
* Csorgo (1983) Csorgo, M. (1983). _Quantile Processes With Statistical Applications._ Society for Industrial and Applied Mathematics.
* Cuomo et al. (2020) Cuomo, A. S. E., Seaton, D. D., McCarthy, D. J., Martinez, I., Bonder, M. J., Garcia-Bernardo, J., Amatya, S., Madrigal, P., Isaacson, A., Buettner, F., Knights, A., Natarajan, K. N., et al. (2020). Single-Cell RNA-Sequencing of Differentiating iPS Cells Reveals Dynamic Genetic Effects on Gene Expression. _Nature Communications_, 11, 810.
* Dickhaus (2013) Dickhaus, T. (2013). Randomized p-Values for Multiple Testing of Composite Null Hypotheses. _Journal of Statistical Planning and Inference_, 143, 1968-1979.
* Einmahl and Mason (1988) Einmahl, J. H. J. and Mason, D. M. (1988). Strong Limit Theorems for Weighted Quantile Processes. _The Annals of Probability_, 16, 1623-1643.
* Genovese and Wasserman (2004) Genovese, C. and Wasserman, L. (2004). A Stochastic Process Approach to False Discovery Control. _The Annals of Statistics_, 32, 1035-1061.
* Gigante et al. (2022) Gigante, C. M., Korber, B., Seabolt, M. H., Wilkins, K., Davidson, W., Rao, A. K., Zhao, H., Smith, T. G., Hughes, C. M., Minhaj, F., Waltenburg, M. A., et al. (2022). Multiple Lineages of Monkeypox Virus Detected in the United States, 2021-2022. _Science_, 378, 560-565.
* Hoang and Dickhaus (2020) Hoang, A.-T. and Dickhaus, T. (2020). On the Usage of Randomized p-Values in the Schweder-Spjotvoll Estimator. _Annals of the Institute of Statistical Mathematics_, 74, 289-319.
* Hocking et al. (2013) Hocking, T. D., Schleiermacher, G., Janoueix-Lerosey, I., Boeva, V., Cappo, J., Delattre, O., Bach, F., and Vert, J.-P. (2013). Learning Smoothing Models of Copy Number Profiles Using Breakpoint Annotations. _BMC Bioinformatics_, 14, 164.
* Hwang et al. (2014) Hwang, Y. T., Kuo, H. C., Wang, C. C., and Lee, M. F. (2014). Estimating the Number of True Null Hypotheses in Multiple Hypothesis Testing. _Statistics and Computing_, 24, 399-416.
* Ignatiadis and Huber (2021) Ignatiadis, N. and Huber, W. (2021). Covariate Powered Cross-Weighted Multiple Testing. _Journal of the Royal Statistical Society: Series B_, 83, 720-751.
* Jain et al. (2016) Jain, S., White, M., and Radivojac, P. (2016). Estimating the class prior and posterior from noisy positives and unlabeled data. In _Advances in Neural Information Processing Systems_, volume 29.
* Jain et al. (2017) Jain, S., White, M., and Radivojac, P. (2017). Recovering true classifier performance in positive-unlabeled learning. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 31.
* Jeng et al. (2012) Jeng, X. J., Cai, T. T., and Li, H. (2012). Simultaneous Discovery of Rare and Common Segment Variants. _Biometrika_, 100, 157-172.
* Jewell (2023) Jewell, S. (2023). _ChangepointInference: Testing for a Change in Mean After Changepoint Detection_. R package version 0.9.
* Jewell et al. (2022) Jewell, S., Fearnhead, P., and Witten, D. (2022). Testing for a Change in Mean after Changepoint Detection. _Journal of the Royal Statistical Society: Series B_, 84, 1082-1104.
* Jiang and Doerge (2008) Jiang, H. and Doerge, R. (2008). Estimating the Proportion of True Null Hypotheses for Multiple Comparisons. _Cancer Informatics_, 6, 25-32.
* Jirak (2015) Jirak, M. (2015). Uniform Change Point Tests in High Dimension. _The Annals of Statistics_, 43, 2451-2483.
* Klunk et al. (2022) Klunk, J., Vilgalys, T. P., Demeure, C. E., Cheng, X., Shiratori, M., Madej, J., Beau, R., Elli, D., Patino, M. I., Redfern, R., DeWitte, S. N., et al. (2022). Evolution of Immune Genes is Associated With the Black Death. _Nature_, 611, 312-319.
* Langaas et al. (2005) Langaas, M., Lindqvist, B. H., and Ferkingstad, E. (2005). Estimating the Proportion of True Null Hypotheses, With Application to DNA Microarray Data. _Journal of the Royal Statistical Society: Series B_, 67, 555-572.
* Legut et al. (2022) Legut, M., Gajic, Z., Guarino, M., Daniloski, Z., Rahman, J. A., Xue, X., Lu, C., Lu, L., Mimitou, E. P., Hao, S., et al. (2022). A Genome-Scale Screen for Synthetic Drivers of T Cell Proliferation. _Nature_, 603, 728-735.
* Lei and Fithian (2016) Lei, L. and Fithian, W. (2016). Power of Ordered Hypothesis Testing. In _Proceedings of The 33rd International Conference on Machine Learning_, volume 48.
* Massart (1990) Massart, P. (1990). The Tight Constant in the Dvoretzky-Kiefer-Wolfowitz Inequality. _The Annals of Probability_, 18, 1269-1283.
* Meinshausen and Rice (2006) Meinshausen, N. and Rice, J. (2006). Estimating the Proportion of False Null Hypotheses Among a Large Number of Independently Tested Hypotheses. _Annals of Statistics_, 34, 373-393.
* Patra and Sen (2016) Patra, R. K. and Sen, B. (2016). Estimation of a Two-Component Mixture Model With Applications to Multiple Testing. _Journal of the Royal Statistical Society: Series B_, 78, 869-893.
* Pounds and Morris (2003) Pounds, S. and Morris, S. W. (2003). Estimating the Occurrence of False Positives and False Negatives in Microarray Studies by Approximating and Partitioning the Empirical Distribution of p-Values. _Bioinformatics_, 19, 1236-1242.
* Schweder and Spjotvoll (1982) Schweder, T. and Spjotvoll, E. (1982). Plots of p-values to Evaluate Many Tests Simultaneously. _Biometrika_, 69, 493-502.
* Storey (2002) Storey, J. D. (2002). A Direct Approach to False Discovery Rates. _Journal of the Royal Statistical Society: Series B_, 64, 479-498.
* Storey et al. (2020) Storey, J. D., Bass, A. J., Dabney, A., and Robinson, D. (2020). _qvalue: Q-value Estimation for False Discovery Rate Control._ R package version 2.22.0.
* Storey et al. (2004) Storey, J. D., Taylor, J. E., and Siegmund, D. (2004). Strong Control, Conservative Point Estimation and Simultaneous Conservative Consistency of False Discovery Rates: A Unified Approach. _Journal of the Royal Statistical Society: Series B_, 66, 187-205.
* Storey and Tibshirani (2003) Storey, J. D. and Tibshirani, R. (2003). Statistical Significance for Genomewide Studies. _Proceedings of the National Academy of Sciences_, 100, 9440-9445.
* Swanepoel (1999) Swanepoel, J. W. (1999). The Limiting Behavior of a Modified Maximal Symmetric \(2s\)-spacing With Applications. _The Annals of Statistics_, 27, 24-35.
* Taquet et al. (2021) Taquet, M., Smith, S. M., Prohl, A. K., Peters, J. M., Warfield, S. K., Scherrer, B., and Harrison, P. J. (2021). A Structural Brain Network of Genetic Vulnerability to Psychiatric Illness. _Molecular Psychiatry_, 26, 2089-2100.
* Turkheimer et al. (2001) Turkheimer, F. E., Smith, C. B., and Schmidt, K. (2001). Estimation of The Number of "True" Null Hypotheses in Multivariate Analysis of Neuroimaging Data. _NeuroImage_, 13, 920-930.
* Wittenbecher et al. (2022) Wittenbecher, C., Guasch-Ferre, M., Haslam, D. E., Dennis, C., Li, J., Bhupathiraju, S. N., Lee, C.-H., Qi, Q., Liang, L., Eliassen, A. H., Clish, C., Sun, Q., and Hu, F. B. (2022). Changes in Metabolomics Profiles Over Ten Years and Subsequent Risk of Developing Type 2 Diabetes: Results From The Nurses' Health Study. _eBioMedicine_, 75, 103799.
* Xu et al. (2017) Xu, H., Di Antonio, M., McKinney, S., Mathew, V., Ho, B., O'Neil, N. J., Santos, N. D., Silvester, J., Wei, V., Garcia, J., Kabeer, F., et al. (2017). CX-5461 is a DNA G-Quadruplex Stabilizer With Selective Lethality in BRCA1/2 Deficient Tumours. _Nature Communications_, 8, 14432.
* Zhang et al. (2010) Zhang, N. R., Siegmund, D. O., Ji, H., and Li, J. Z. (2010). Detecting Simultaneous Changepoints in Multiple Sequences. _Biometrika_, 97, 631-645.
Supplementary Material for "A Change-Point Approach to Estimating the Proportion of False Null Hypotheses in Multiple Testing"
Anica Kostic
Author for correspondence. [E-mail: [email protected], Address: Department of Statistics, London School of Economics and Political Science, Columbia House, Houghton Street, London, WC2A 2AE, UK]
Piotr Fryzlewicz,
London School of Economics and Political Science
This supplement provides further insights into the theoretical properties of the proposed method and presents additional simulation results. We outline the sections as follows:
* In Section A, we extend the discussion by stating a few remarks regarding Theorem 1.
* Section B contains various additional simulation results. Namely, it involves:
1. Additional simulation results under dependence in Section B.1.
2. Comparison of various proportion estimators for adaptive FDR control using the adaptive BH procedure in Section B.2.
3. Numerical results on the limiting behavior of the proposed statistics under the Gaussian model done in Mathematica in Section B.3.
4. Simulation results examining the sensitivity of the DOS method with respect to \(c_{n}\), the proportion of excluded values, in Section B.4.
* Section C contains additional discussion on the meaning of the DOS statistic.
## Appendix A Theoretical Remarks
**Remark 1**.: Examples can be constructed for which the assumption (A2) does not hold. First, if the signal is too weak, the quantile function \(Q=F^{-1}\) does not have a prominent change-point, which results in \(h\) increasing on \([0,0.5]\). An illustration of this can be seen in Figure 1. Another requirement of (A2) is that \(h\) achieves a single local (and global) maximum within the interval \((0,0.5)\). This requirement can be violated if \(h\) is constant on an interval where it achieves its maximum value. We illustrate this by taking the \(p\)-value distribution to be a uniform mixture distribution whose quantile function is piecewise linear with change-points in slope at \(0.1,0.2,0.3,0.4\) and with increasing slopes on the first four segments equal to \(0.1,0.2,0.4,0.9\). The corresponding function \(h(t)\) is constant on the interval \([0.3,0.4]\) where its value is maximal. This example is shown in Figure 2.
**Remark 2**.: Although Theorem 1 is formulated for the case when the true null \(p\)-value distribution is uniform, it can be applied to other mixtures \(F\). For example, let \(F(x)=\pi_{1}F_{1}(x)+\pi_{0}F_{0}(x)\), \(0\leq x\leq 1\), where \(F_{0}\) is a continuous superuniform distribution on \([0,1]\) and \(F_{1}\) is a concave CDF of a continuous distribution under the alternative that is stochastically smaller than \(U[0,1]\). Assumption (A1) might not hold in that case, which affects the result of Lemma 1. Specifically, the convexity assumption was used in Lemma 1 to bound \(\sup_{x,y}\frac{F^{-1}(y)-F^{-1}(x)}{y-x}\). This expression is still finite, which yields \(|q_{n}(y)|\leq C|u_{n}(y)|\), although the constant \(C\) may not be equal to \(1/\pi_{0}\) as in Lemma 1. Note that \(C\) can get very large if \(F_{0}\) and \(F_{1}\) are very well separated, implying that \(F^{-1}\) is very steep. In addition, if assumption (A2) holds, Theorem 1 can be applied to determine the asymptotic behavior of the uDOS estimator. Asymptotically, the uDOS proportion estimator then converges to the ideal change-point location, which is the proposed separation threshold:
\[\hat{\pi}^{\alpha}_{1,\text{uDOS}}\to\underset{0\leq t\leq 1/2}{\text{ argmax}}\frac{F^{-1}(2t)-2F^{-1}(t)}{t^{\alpha}},\quad n\to\infty. \tag{1}\]
Figure 1: An example where assumption (A2) from the main paper is violated as \(h\) increases over \([0,0.5]\). Left: Quantile function of \(p\)-values from the Gaussian model with \(\pi_{1}=0.2\) and \(\mu=1\). Right: Corresponding \(h\) function.
Figure 2: An example illustrating the violation of assumption (A2) from the main paper. The \(p\)-values distribution is modeled as a mixture of several uniforms. Left: Quantile function of the uniform mixture. Right: The function \(h\) is constant on \([0.3,0.4]\) where it reaches its maximum value.
## Appendix B Additional Simulations
### Dependent \(p\)-values
This section provides simulation results of the DOS-Storey and the uDOS estimators under dependence. We now describe the model used to generate the \(p\)-values, similar to the Gaussian mean testing described in Section 4 of the main paper, but incorporating dependence in the \(p\)-value sequence. The test statistics are \(T_{i}=\mu+\varepsilon_{i}\), where \(\mu\) is the mean parameter of the Gaussian distribution we are testing. Under the null hypothesis, \(\mu=0\), while under the alternative, \(\mu>0\). The random variables \(\varepsilon_{i}\) are defined as
\[\varepsilon_{i}=\sqrt{\rho}\,U+\sqrt{1-\rho}\,Z_{i}, \tag{2}\]
where \(U\) is a common \(N(0,1)\) factor shared by all test statistics, \(Z_{i}\) are iid \(N(0,1)\) random variables independent of \(U\), and \(\rho\in[0,1]\) is a correlation parameter. For \(\rho>0\) this introduces positive correlation in the test statistics, \(\text{corr}(T_{i},T_{j})=\rho\), for \(i\neq j\). The \(p\)-values are calculated as \(p_{i}=1-\Phi(T_{i})\). The simulation results are based on the sample size \(n=100\), various values of the mean under the alternative, and various \(\pi_{0}\) values. The number of repetitions is \(N=1000\).
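As a concrete illustration, the following is a minimal sketch of this data-generating process; the function name, the use of numpy/scipy, and the default parameter values are illustrative choices rather than the exact code used for the tables below.
```python
import numpy as np
from scipy.stats import norm

def simulate_pvalues(n=100, pi1=0.1, mu=2.0, rho=0.2, rng=None):
    """Simulate one-sided p-values from the equicorrelated Gaussian model (2)."""
    rng = np.random.default_rng(rng)
    u = rng.standard_normal()                 # common factor shared by all statistics
    z = rng.standard_normal(n)                # idiosyncratic noise
    eps = np.sqrt(rho) * u + np.sqrt(1.0 - rho) * z
    n1 = int(round(pi1 * n))                  # number of false null hypotheses
    means = np.concatenate([np.full(n1, mu), np.zeros(n - n1)])
    t = means + eps                           # corr(T_i, T_j) = rho for i != j
    return 1.0 - norm.cdf(t)                  # p_i = 1 - Phi(T_i)

pvals = simulate_pvalues(n=100, pi1=0.1, mu=2.0, rho=0.5, rng=0)
```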
The simulation results for \(\rho=0.2\) are shown in Table 1 and for \(\rho=0.5\) in Table 2. Both tables show that DOS-Storey methods perform well compared to other methods in the presence of dependence.
We also include simulations under dependence for superuniform \(p\)-values. We adopt the simulation setting used in Hoang and Dickhaus (2020). The mean under the null is \(\mu_{0}=-0.2r\) and under the alternative \(\mu_{1}=1+0.25r\) for \(r\in\{1,...,10\}\). The correlation is introduced as in (2) and the \(p\)-values are calculated as \(p_{i}=1-\Phi(T_{i})\). All the simulations are conducted for \(n=100\). In Figure 3, we illustrate the performance of the uncorrected DOS proportion estimator under dependence alongside the estimator proposed by Hoang and Dickhaus (2020) (HD). We show the estimators' mean, standard deviation, and RMSE based on \(N=10000\) repetitions. The results are similar to the independent case discussed in the main paper. Both DOS-Storey estimates have negative bias but uniformly smaller MSE than the HD estimates.
Note that although the RMSE of the DOS estimator is smaller than that of the HD estimator, the unbiasedness of the latter plays a more significant role in adaptive FDR control, where it results in more accurate control of the FDR.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline & DOS1 & DOS05 & ST-1/2 & ST-MED & JD & LLF & LSL & MGF & MR \\ \hline \multicolumn{8}{c}{\(\mu=3,\pi_{1}=0.05\)} \\ \hline BIAS & 4.60 & 7.90 & 12.90 & 7.30 & 15.10 & 24.50 & **-0.60** & 89.30 & 4.30 \\ SD & 12.60 & 13.00 & 22.30 & 13.70 & 22.70 & 32.90 & **8.20** & 1143.10 & 14.80 \\ RMSE & 13.40 & 15.30 & 25.80 & 15.60 & 27.30 & 41.00 & **8.20** & 1146.60 & 15.40 \\ \hline \multicolumn{8}{c}{\(\mu=2,\pi_{1}=0.1\)} \\ \hline BIAS & 1.50 & 3.90 & 8.60 & 3.00 & 11.40 & 20.70 & -4.80 & 4.50 & **-0.80** \\ SD & 13.40 & 13.20 & 22.40 & 14.00 & 23.30 & 32.70 & **9.50** & 16.70 & 14.70 \\ RMSE & 13.40 & 13.70 & 24.00 & 14.30 & 26.00 & 38.70 & **10.60** & 17.40 & 14.80 \\ \hline \multicolumn{8}{c}{\(\mu=3,\pi_{1}=0.1\)} \\ \hline BIAS & 2.40 & 4.10 & 7.60 & 2.20 & 9.30 & 17.50 & -2.40 & 3.70 & **0.10** \\ SD & 11.90 & 12.20 & 23.10 & 14.10 & 22.90 & 30.20 & **10.60** & 17.00 & 14.40 \\ RMSE & **12.10** & 12.90 & 24.40 & 14.20 & 24.70 & 34.90 & 10.80 & 17.40 & 14.40 \\ \hline \multicolumn{8}{c}{\(\mu=2,\pi_{1}=0.2\)} \\ \hline BIAS & -2.30 & -0.70 & 5.00 & -2.10 & 6.70 & 15.90 & -9.10 & **0.20** & -6.50 \\ SD & 14.20 & 13.90 & 24.80 & 15.20 & 25.00 & 32.20 & **12.60** & 18.80 & 16.20 \\ RMSE & 14.40 & **13.90** & 25.30 & 15.40 & 25.90 & 35.90 & 15.60 & 18.80 & 17.40 \\ \hline \multicolumn{8}{c}{\(\mu=3,\pi_{1}=0.2\)} \\ \hline BIAS & **0.60** & 2.20 & 4.50 & -1.40 & 6.30 & 17.30 & -4.30 & 1.60 & -3.50 \\ SD & 10.40 & 10.40 & 22.50 & 14.30 & 22.90 & 27.80 & **9.10** & 17.40 & 12.40 \\ RMSE & 10.40 & 10.60 & 22.90 & 14.40 & 23.80 & 32.70 & **10.00** & 17.40 & 12.90 \\ \hline \multicolumn{8}{c}{\(\mu=3,\pi_{1}=0.3\)} \\ \hline BIAS & -2.10 & -0.70 & **-0.40** & -6.00 & 1.80 & 13.30 & -5.20 & -1.70 & -6.50 \\ SD & 10.00 & **9.70** & 24.90 & 15.50 & 25.80 & 26.80 & 12.40 & 19.70 & 13.10 \\ RMSE & 10.20 & **9.80** & 24.90 & 16.60 & 25.90 & 29.90 & 13.40 & 19.70 & 14.60 \\ \hline \end{tabular}
\end{table}
Table 1: Simulations under dependence, where \(p\)-values follow the model described in Section B.1 with \(\rho=0.2\). Bias, standard deviation, and the RMSE of the estimated number of the false null hypotheses (\(n\times\hat{\pi}_{1}\)), given the proportion of false null \(p\)-values \(\pi_{1}\), and the non-zero mean \(\mu_{1}\), for a sample of size \(n=100\), based on 1000 repetitions. Bold and underlined values correspond to the smallest values in each row.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline & DOS1 & DOS05 & ST-1/2 & ST-MED & JD & LLF & LSL & MGF & MR \\ \hline \multicolumn{8}{c}{\(\mu=3,\pi_{1}=0.05\)} \\ \hline BIAS & 5.1 & 9.1 & 23.4 & 10.0 & 26.7 & 38.0 & **4.7** & 772.2 & 15.6 \\ SD & **16.3** & 17.0 & 34.0 & 16.8 & 34.4 & 44.0 & 25.5 & 3850.2 & 24.3 \\ RMSE & **17.0** & 19.3 & 41.2 & 19.5 & 43.5 & 58.2 & 25.9 & 3926.9 & 28.9 \\ \hline \multicolumn{8}{c}{\(\mu=2,\pi_{1}=0.1\)} \\ \hline BIAS & 1.2 & 3.8 & 18.0 & 4.9 & 21.8 & 34.6 & **0.1** & 516.0 & 10.4 \\ SD & 17.1 & 17.0 & 33.2 & **16.6** & 34.2 & 44.0 & 24.6 & 4331.4 & 24.0 \\ RMSE & **17.1** & 17.4 & 37.8 & 17.3 & 40.6 & 56.0 & 24.6 & 4362.0 & 26.2 \\ \hline \multicolumn{8}{c}{\(\mu=3,\pi_{1}=0.1\)} \\ \hline BIAS & **1.0** & 3.1 & 14.6 & 3.5 & 18.1 & 27.5 & **1.0** & 2184.0 & 7.7 \\ SD & **15.7** & 16.3 & 32.9 & 16.7 & 33.4 & 42.6 & 25.0 & 19327.1 & 23.8 \\ RMSE & **15.7** & 16.6 & 36.0 & 17.0 & 38.0 & 50.8 & 25.0 & 19450.1 & 25.0 \\ \hline \multicolumn{8}{c}{\(\mu=2,\pi_{1}=0.2\)} \\ \hline BIAS & -4.8 & -2.3 & 12.1 & **-2.0** & 16.0 & 28.1 & -5.3 & 2364.8 & **2.0** \\ SD & 18.2 & **17.7** & 34.8 & **17.7** & 35.1 & 43.4 & 27.4 & 21655.1 & 25.1 \\ RMSE & 18.8 & 17.9 & 36.8 & **17.8** & 38.5 & 51.7 & 27.9 & 21783.8 & 25.2 \\ \hline \multicolumn{8}{c}{\(\mu=3,\pi_{1}=0.2\)} \\ \hline BIAS & -2.1 & **-0.6** & 11.1 & -1.9 & 15.9 & 28.3 & -4.5 & 310.3 & 2.6 \\ SD & **15.1** & 15.3 & 32.6 & 17.0 & 33.6 & 41.3 & 21.3 & 2728.7 & 23.4 \\ RMSE & **15.3** & **15.3** & 34.4 & 17.1 & 37.2 & 50.0 & 21.8 & 2746.3 & 23.5 \\ \hline \multicolumn{8}{c}{\(\mu=3,\pi_{1}=0.3\)} \\ \hline BIAS & -6.8 & -5.7 & **4.2** & -9.2 & 8.3 & 19.6 & -7.1 & 32.6 & -4.6 \\ SD & 15.2 & **15.1** & 34.6 & 18.4 & 35.4 & 39.8 & 24.6 & 466.3 & 24.4 \\ RMSE & 16.7 & **16.2** & 34.9 & 20.6 & 36.3 & 44.4 & 25.6 & 467.4 & 24.8 \\ \hline \end{tabular}
\end{table}
Table 2: Simulations under dependence, where \(p\)-values follow the model described in Section B.1 with \(\rho=0.5\). Bias, standard deviation, and the RMSE of the estimated number of the false null hypotheses (\(n\times\hat{\pi}_{1}\)), given the proportion of false null \(p\)-values \(\pi_{1}\), and the non-zero mean \(\mu_{1}\), for a sample of size \(n=100\), based on 1000 repetitions. Bold and underlined values correspond to the smallest values in each row.
### Adaptive FDR Control
In this section, we illustrate the performance of the DOS-Storey proportion estimator for adaptive FDR control. We evaluate its performance under dependence and independence and compare it to adaptive procedures using different estimators.
Figure 3: Mean, standard deviation, and the MSE of different proportion estimators when applied to superuniform true null \(p\)-values, generated as described in Section B.1. The \(x\)-axis represents \(r\), indicating the distance between the true and false null means. From top to bottom: correlation coefficient \(\rho\in\{0.25,0.5,0.75\}\).
The FDR-controlling multiple testing procedure proposed by Benjamini and Hochberg (1995) (BH) rejects the hypotheses corresponding to the \(\hat{k}\) smallest \(p\)-values, where \(\hat{k}=\max\left\{k\geq 1:p_{(k)}\leq\alpha\frac{k}{n}\right\}\) and \(p_{(1)}\leq\cdots\leq p_{(n)}\). It achieves effective FDR control at level \(\pi_{0}\alpha\). As suggested by Benjamini and Hochberg (2000), incorporating a proportion estimator into the BH procedure increases its power by adapting to the unknown proportion. This adaptation is achieved by increasing the FDR level parameter from \(\alpha\) to \(\alpha^{\prime}=\alpha/\hat{\pi}_{0}\), which results in a higher number of rejections while maintaining approximate FDR control at level \(\alpha\). The simulation study in this section investigates how different proportion estimators affect FDR control and the power of the resulting adaptive procedures. In the context of an adaptive BH procedure, power is defined as the ratio of the number of true discoveries to the total number of false null hypotheses. Note that in this section, we use \(\alpha\) to refer to the level of the FDR control and not the parameter of the DOS sequence from the main paper.
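A minimal sketch of this (adaptive) BH step-up procedure is given below; `pi0_hat` stands for any of the proportion estimators compared in this section, and the function name is illustrative.
```python
import numpy as np

def adaptive_bh(pvals, alpha=0.05, pi0_hat=1.0):
    """BH step-up at the adapted level alpha' = alpha / pi0_hat; returns a rejection mask."""
    p = np.asarray(pvals)
    p_sorted = np.sort(p)
    n = p.size
    thresh = (alpha / pi0_hat) * np.arange(1, n + 1) / n
    below = np.nonzero(p_sorted <= thresh)[0]
    if below.size == 0:
        return np.zeros(n, dtype=bool)          # no hypothesis is rejected
    k_hat = below[-1] + 1                       # largest k with p_(k) <= alpha' * k / n
    return p <= p_sorted[k_hat - 1]             # reject the k_hat smallest p-values

# plain BH: adaptive_bh(pvals, 0.05); adaptive BH: adaptive_bh(pvals, 0.05, pi0_hat=1 - pi1_hat)
```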
Since the DOS method is unsuitable when the false null proportion is very large, we restrict our analysis to cases where \(\pi_{0}\geq 0.5\) (\(\pi_{1}\leq 0.5\)). We adopt the simulation settings used in Blanchard and Roquain (2009). We consider various values for the mean under the alternative and various \(\pi_{0}\) values. The desired level of the FDR control is set to \(\alpha=0.05\). Figures 4 and 5 show the simulation results of this analysis for sample size \(n=100\), based on \(N=10000\) repetitions. For various adaptive BH procedures, we report the FDR and the power relative to the oracle procedure, which is the adaptive BH that uses the unknown true value of \(\pi_{0}\). In Figure 4, we consider fixed nonzero mean \(\mu_{1}=3\) and changing true null proportion values on the \(x\)-axes. In Figure 5 we consider fixed \(\pi_{0}=0.75\) and changing nonzero mean \(\mu_{1}\) on the \(x\)-axes.
The methods included in simulations defined in the main paper are ST-MED, ST-1/2, and LSL. DOS-1 and DOS-1/2 correspond to our proposed DOS-Storey procedures. ST-alpha corresponds to Storey's proportion estimator with \(\lambda=\alpha\), the desired level of the FDR control, here set to \(0.05\). Simulations in Blanchard and Roquain (2009) show that ST-alpha works particularly well in the presence of dependence. Regarding independence, Blanchard and Roquain (2009) single out ST-1/2 as the best proportion estimator for adaptive FDR control. The approach proposed by Storey and Tibshirani (2003), as implemented in the function pi0est from the R package qvalue, requires a larger number of \(p\)-values near \(1\). This requirement makes the method unsuitable for situations involving smaller sample sizes or dependency.
In the top right plot of Figure 4, we see that under independence, and if the signal is sparse, the DOS methods have higher power than the ST-1/2 method. However, the FDR is slightly above \(\alpha=0.05\) in this small sample case. Under dependence, the DOS methods do not strictly control the FDR at the desired level \(\alpha=0.05\); the FDR is controlled at around \(0.06\). The power under dependence is comparable to that of ST-\(\alpha\), although with slightly weaker FDR control. Similar conclusions can be drawn from Figure 5. The DOS-1 method produces slightly more conservative estimates, which results in better FDR control. The DOS methods remain stable when dependence is introduced and yield meaningful adaptive BH procedures under both independence and dependence.
We also include simulation results for larger sample sizes \(n\in\{500,1000,5000\}\) under independence, based on \(N=1000\) repetitions. This sample size allows us to include the method by Storey and Tibshirani (2003) (STS), which is most commonly used in the applied literature. The FDR and relative power of various adaptive BH procedures are shown in Figure 6. It can be noted that the FDR control of the STS method is slightly above the desired level \(\alpha=0.05\), which is more prominent in small sample sizes. On the other hand, for large sample sizes, the DOS methods are more conservative and keep the FDR controlled below \(\alpha\).
Figure 4: FDR and power relative to oracle as a function of the true null proportion \(\pi_{0}\), with \(\mu_{1}=3\), for various adaptive BH procedures. From top to bottom: correlation coefficient \(\rho\in\{0,0.2,0.5\}\).
Figure 5: FDR and power relative to oracle as a function of the signal strength under the alternative, where \(\pi_{0}=0.75\), for various adaptive BH procedures. From top to bottom: correlation coefficient \(\rho\in\{0,0.2,0.5\}\).
Figure 6: FDR and power relative to oracle as a function of the true null proportion \(\pi_{0}\), with \(\mu_{1}=3\), for various adaptive BH procedures in case of large sample. From top to bottom: the size of the sample is \(n\in\{500,1000,10000\}\).
### Numerical Results for the Gaussian Mixture Model
In this section, we investigate the ideal asymptotic quantities from Theorem 1 to which the change-point location \(\hat{k}_{\alpha}/n\) and the proportion estimator \(\hat{\pi}_{1}^{\alpha}\) converge:
\[\hat{k}_{\alpha}/n\stackrel{a.s.}{\rightarrow}\operatorname*{argmax}_{0\leq t\leq 1/2}\frac{F^{-1}(2t)-2F^{-1}(t)}{t^{\alpha}} \tag{3}\] \[\hat{\pi}_{1}^{\alpha}\stackrel{a.s.}{\rightarrow}\frac{\widetilde{t}_{\alpha}-F^{-1}(\widetilde{t}_{\alpha})}{1-F^{-1}(\widetilde{t}_{\alpha})}. \tag{4}\]
We denote the asymptotic quantities on the right-hand side of equations (3) and (4) by \(\widetilde{t}_{\alpha}\) and \(\widetilde{\pi}_{1}^{\alpha}\), respectively. \(\widetilde{\pi}_{1}^{\alpha}\) is generally smaller than the true proportion, and we call this quantity the estimable proportion. To gain insight into the behavior of these asymptotic quantities, we compute \(\widetilde{t}_{\alpha}\) and \(\widetilde{\pi}_{1}^{\alpha}\) under the Gaussian model. The numerical results are obtained using Mathematica and can be found within the MTCP package. The test statistics have a distribution that is a mixture of Gaussians:
\[T\sim\pi_{1}N(\mu_{1},1)+\pi_{0}N(0,1) \tag{5}\]
where \(\mu_{1}>0\). Denote
\[\Psi_{\mu_{1}}(t) =P(N(\mu_{1},1)\geq t) \tag{6}\] \[\widetilde{F}(t) =P(T\geq t)=\pi_{1}\Psi_{\mu_{1}}(t)+(1-\pi_{1})\Psi_{0}(t). \tag{7}\]
One-sided \(p\)-values have the distribution with the CDF:
\[F(x)=P(p\leq x)=P(\Psi_{0}(T)\leq x) \tag{8}\] \[=P(T\geq\Psi_{0}^{-1}(x)) \tag{9}\] \[=\widetilde{F}(\Psi_{0}^{-1}(x)) \tag{10}\] \[=\pi_{1}\Psi_{\mu_{1}}(\Psi_{0}^{-1}(x))+(1-\pi_{1})x \tag{11}\]
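The numerical results in this section were produced in Mathematica; purely for illustration, a Python sketch of the same computation (the asymptotic change-point \(\widetilde{t}_{\alpha}\) from (3) and the estimable proportion \(\widetilde{\pi}_{1}^{\alpha}\) from (4), using the \(p\)-value CDF (11)) could look as follows, where the grid resolution and the use of scipy are arbitrary choices.
```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def asymptotic_dos(pi1=0.1, mu1=3.0, alpha=1.0, grid=2000):
    """Compute t_tilde from (3) and pi1_tilde from (4) for the p-value CDF (11)."""
    F = lambda x: pi1 * (1.0 - norm.cdf(norm.ppf(1.0 - x) - mu1)) + (1.0 - pi1) * x
    Finv = lambda t: brentq(lambda x: F(x) - t, 0.0, 1.0)    # numerical quantile function
    ts = np.linspace(1e-4, 0.5 - 1e-4, grid)
    h = np.array([(Finv(2 * t) - 2 * Finv(t)) / t**alpha for t in ts])
    t_tilde = ts[np.argmax(h)]                               # asymptotic change-point (3)
    q = Finv(t_tilde)
    return t_tilde, (t_tilde - q) / (1.0 - q)                # estimable proportion (4)

t_tilde, pi1_tilde = asymptotic_dos(pi1=0.2, mu1=3.0, alpha=1.0)
```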
Figure 7 shows \(\widetilde{\pi}_{1}^{\alpha}/\pi_{1}\) as a function of \(\pi_{1}\) and for different values of the nonzero mean \(\mu_{1}\). This quantity is shown for the two DOS-Storey estimators, for \(\alpha=1/2\) (left-hand plot) and \(\alpha=1\) (right-hand plot). \(\widetilde{\pi}_{1}^{\alpha}/\pi_{1}\) represents the ratio of the estimable proportion and the true proportion under this model. Similarly, Figure 8 shows \(\widetilde{t}_{\alpha}/\pi_{1}\) as a function of \(\pi_{1}\) and for different values of the nonzero mean \(\mu_{1}\). For larger signal values, the change-point location gets close to the false null proportion.
### Dependence on \(c_{n}\)
In this section, we examine the impact of \(c_{n}\) on the DOS-Storey estimates under the Gaussian model. \(c_{n}\) is the proportion of values excluded from the DOS sequence when looking for a maximum (a change-point). We investigate how varying \(c_{n}\) influences the change-point and proportion estimates for different sample sizes \(n\in\{100,1000,10000,100000\}\), as well as different values of the nonzero mean \(\mu_{1}\) and the false null proportion \(\pi_{1}\).
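The exact finite-sample definition of the DOS sequence and of the DOS-Storey estimator is given in the main paper; purely as a rough sketch of the role played by \(c_{n}\), one could proceed as below, assuming the DOS sequence is the empirical analogue of \(h\), \(D_{k}=(p_{(2k)}-2p_{(k)})/(k/n)^{\alpha}\), and that Storey's estimator is evaluated at \(\lambda=p_{(\hat{k})}\) (consistent with the limit in (4)). All names are illustrative.
```python
import numpy as np

def dos_storey(pvals, alpha=1.0, c=0.02):
    """Sketch: DOS change-point location and a Storey-type estimate of pi1 based on it."""
    p = np.sort(np.asarray(pvals))
    n = p.size
    ks = np.arange(max(int(np.ceil(c * n)), 1), n // 2 + 1)     # exclude the first c*n values
    dos = (p[2 * ks - 1] - 2 * p[ks - 1]) / (ks / n) ** alpha   # assumed empirical DOS sequence
    k_hat = ks[np.argmax(dos)]
    lam = p[k_hat - 1]                                          # lambda = p_(k_hat)
    pi0_hat = np.mean(p > lam) / (1.0 - lam)                    # Storey's estimator at lambda
    return k_hat / n, 1.0 - min(pi0_hat, 1.0)

# sensitivity to the excluded proportion c:
# [dos_storey(pvals, alpha=1.0, c=c)[1] for c in np.linspace(0.01, 0.2, 20)]
```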
The results are presented in Figures 9 and 10.
Figure 8: The ratio of the asymptotic change-point location \(\widetilde{t}_{\alpha}\) to the false null proportion \(\pi_{1}\) in the Gaussian model, for \(\alpha=1/2\) (left) and \(\alpha=1\) (right) and various values of \(\pi_{1}\) and \(\mu_{1}\).
Figure 7: The percentage of the false null proportion the DOS method is able to estimate asymptotically in the Gaussian model, for \(\alpha=1/2\) (left) and \(\alpha=1\) (right) and various values of \(\pi_{1}\) and \(\mu_{1}\).
Figure 9: Averaged DOS change-point estimates (left column) and the DOS-Storey \(\alpha=1\) proportion estimates (right column) as a function of the proportion of excluded values (_c_) from the DOS sequence. The data is simulated from the Gaussian model. Top row: \(\mu_{1}=4,\pi_{1}=0.01\), Middle row: \(\mu_{1}=3,\pi_{1}=0.05\), Bottom row: \(\mu_{1}=2,\pi_{1}=0.2\).
In the left-hand plots, we illustrate the mean estimated change-point location as a function of \(c\), representing the proportion of excluded values. These estimates are averaged over \(N=1000\) repetitions, with each curve in the legend corresponding to a different sample size, \(n\). The right-hand plots display the mean estimated proportion as a function of \(c\), again averaged over \(N=1000\) repetitions, for various sample sizes, \(n\). Furthermore, these plots effectively illustrate the convergence of the estimated change-point location and the estimated proportion towards the ideal values defined in Theorem 1 in the main paper, represented by the solid horizontal grey lines, as \(n\) increases.
Notably, in many of the scenarios we consider, the estimates remain stable. The estimates become sensitive when we exclude "too many" values which in general happens as \(c\) approaches \(\pi_{1}\). Additionally, sensitivity to the parameter \(c\) is more pronounced when the signal is weaker, as evidenced in Figure 10. Furthermore, these results reveal that the estimator performs well for small sample sizes. However, as the sample size grows, if the nonzero mean is not sufficiently large, the quantile function of the \(p\)-values becomes smoother, leading to increased variability in the change-point location.
### Comparing DOS to Other Change-Point Methods
In this section, we compare the performance of the DOS-Storey methods to the other change-point-based estimators in the literature. The simulation settings are as those in Table 1 from the main paper, and the change-point based methods included are by Hwang et al. (2014) (HKWL) and Turkheimer et al. (2001) (TSS). The results are shown in Table 3.
Figure 10: Averaged DOS change-point estimates (left column) and the DOS-Storey \(\alpha=1\) proportion estimates (right column) as a function of the proportion of excluded values (\(c\)) from the DOS sequence. The data is simulated from the Gaussian model with \(\mu=2\) and \(\pi_{1}=0.05\). This plot shows that for weaker effects the estimates are more variable.
The HKWL method operates similarly to the DOS-Storey method. It involves fitting a piecewise linear function with one change-point to the sequence of \(p\)-values and then using the \(p\)-value at the identified change-point location as the parameter \(\lambda\) for Storey's estimator. However, this estimator tends to produce overly conservative estimates. The DOS-Storey estimates consistently outperform the HKWL estimates in nearly all cases. The TSS method, on the other hand, yields more precise estimates compared to the HKWL method.
\begin{table}
\begin{tabular}{c c c c c} & DOS1 & DOS05 & HKWL & TSS \\ \hline \multicolumn{5}{c}{\(\mu_{1}=3.5,\pi_{1}=0.01,n_{1}=10\)} \\ \hline BIAS & **-0.4** & 9.9 & -7.7 & -0.4 \\ SD & 3.9 & 15.5 & **1.5** & 21.8 \\ RMSE & **4.0** & 18.4 & 7.8 & 21.8 \\ \hline \multicolumn{5}{c}{\(\mu_{1}=3.5,\pi_{1}=0.03,n_{1}=30\)} \\ \hline BIAS & -2.6 & 6.1 & -18.9 & **-1.6** \\ SD & 5.4 & 12.9 & **4.2** & 19.6 \\ RMSE & **6.0** & 14.2 & 19.3 & 19.7 \\ \hline \multicolumn{5}{c}{\(\mu_{1}=3,\pi_{1}=0.1,n_{1}=50\)} \\ \hline BIAS & -8.8 & 4.8 & -38.3 & **-2.7** \\ SD & 7.8 & 16.2 & **5.7** & 19.2 \\ RMSE & **11.8** & 16.9 & 38.7 & 19.4 \\ \hline \multicolumn{5}{c}{\(\mu_{1}=2,\pi_{1}=0.1,n_{1}=100\)} \\ \hline BIAS & -38.8 & **-5.7** & -97.2 & -27.1 \\ SD & 20.0 & 24.0 & **2.6** & 20.3 \\ RMSE & 43.7 & **24.7** & 97.3 & 33.9 \\ \hline \multicolumn{5}{c}{\(\mu_{1}=3,\pi_{1}=0.1,n_{1}=100\)} \\ \hline BIAS & -13.6 & **1.6** & -53.8 & -7.0 \\ SD & 10.4 & 16.7 & **9.2** & 19.1 \\ RMSE & 17.1 & **16.8** & 54.5 & 20.4 \\ \hline \multicolumn{5}{c}{\(\mu_{1}=2,\pi_{1}=0.2,n_{1}=200\)} \\ \hline BIAS & -47.1 & **-15.5** & -179.2 & -49.3 \\ SD & 26.5 & 23.5 & **19.6** & 20.3 \\ RMSE & 54.0 & **28.1** & 180.3 & 53.3 \\ \hline \multicolumn{5}{c}{\(\mu_{1}=3,\pi_{1}=0.2,n_{1}=200\)} \\ \hline BIAS & -20.2 & **-3.3** & -58.2 & -11.1 \\ SD & 12.7 & 16.5 & **12.4** & 18.5 \\ RMSE & 23.9 & **16.8** & 59.5 & 21.5 \\ \hline \multicolumn{5}{c}{\(\mu_{1}=3,\pi_{1}=0.3,n_{1}=300\)} \\ \hline BIAS & -23.5 & **-5.8** & -52.6 & -14.2 \\ SD & 14.1 & 15.5 & **13.7** & 17.5 \\ RMSE & 27.4 & **16.6** & 54.4 & 22.5 \\ \end{tabular}
\end{table}
Table 3: Bias, standard deviation and the RMSE of the estimated number of the false null hypotheses (\(n\times\hat{\pi}_{1}\)), given the proportion of false null \(p\)-values \(\pi_{1}\), and the non-zero mean \(\mu_{1}\), for a sample of size \(n=1000\), based on 200 repetitions. Bold and underlined values correspond to the smallest values in each row.
Nonetheless, for small \(\pi_{1}\), both DOS-Storey with \(\alpha=1/2\) and \(\alpha=1\) have smaller Root Mean Square Error (RMSE) values. In cases with larger \(\pi_{1}\), DOS-Storey with \(\alpha=1/2\) consistently outperforms the TSS method.
## Appendix C Interpretations
In this section we analyse how the DOS method relates to the methods available in the literature for estimating the knee/elbow point in a plot, by providing a short review of the papers on that topic. We proceed to give a short interpretation of the objective function \(h\) defined at the beginning of Section 3 of the main document.
### Knee/Elbow Estimation
The methods used for estimating the knee/elbow in a plot are often heuristic, and the mathematical definition of this point is sometimes avoided. While some methods concentrate on estimating the point with the largest second derivative, the most natural approach to defining the knee/elbow is as the point where the curvature is maximized. The DOS method shares similarities with existing knee/elbow estimation methods - more precisely for elbow estimation, as the function of interest is convex. However, from this perspective, our objective function \(h_{F}^{\alpha}\) presents a distinct definition for the elbow, better suited for the discrete nature of the data.
Detecting the knee/elbow in a graph is a problem of interest in model selection problems, such as estimating the number of factors in factor analysis or determining the optimal number of clusters. In their work, Salvador and Chan (2004) review several approaches and propose their method. Standard methods include finding the largest difference or the largest ratio between two consecutive points. The L-method proposed by Salvador and Chan (2004) consists of fitting two straight lines on each side of the candidate knee point and selecting the point with the smallest mean squared error as the estimated knee. In Antunes et al. (2018a) and Antunes et al. (2018b), the authors propose a thresholding-based method to identify the sharp angle in the plot, aiming to find the optimal transition point between the high and low values of the first derivative. Another widely used knee detection algorithm is proposed in Satopaa et al. (2011).
The DOS method can be seen as a method for estimating an elbow in a \(p\)-value plot. The DOS method can also be compared to the method for finding the sharpest angle in the graph of the quantile function. Simple analysis shows that the ideal change-point of the DOS method comes after the point where the sharpest angle is between points \((0,i/n)\) and \((i/n,2i/n)\) on the graph. In that sense, the DOS method may be better seen as a method for estimating the point where the curvature drops significantly rather than the method for finding the point of the maximum curvature.
### Scanning for the Largest Difference
We further analyse the objective function \(h\) defined at the beginning of Section 3 of the main document as:
\[h_{F}^{\alpha}(t)=\frac{F^{-1}(2t)-2F^{-1}(t)}{t^{\alpha}},\quad t\in(0,1/2). \tag{12}\]
Define
\[H^{\alpha}(t,a):=\frac{F^{-1}(t+a)-F^{-1}(t)}{a^{\alpha}}-\frac{F^{-1}(t)-F^{-1 }(t-a)}{a^{\alpha}}, \tag{13}\]
for \(a\in(0,t]\) and \(t\in[0,1]\). For \(a=t\), it holds that \(H^{\alpha}(t,t)=h_{F}^{\alpha}(t)\).
Under the assumption of the existence and continuity of the first two derivatives of \(F^{-1}\) on \([0,1]\), and considering that \((F^{-1})^{\prime}\) is an increasing function, it follows that \(H^{\alpha}(t,a)\) is increasing in \(a\) for any fixed \(t\). Let \(\widetilde{t}_{\alpha}=\operatorname{argmax}_{t}h_{F}^{\alpha}(t)\). If \(\widetilde{t}_{\alpha}\leq 1/2\), it holds that
\[(\widetilde{t}_{\alpha},\widetilde{t}_{\alpha})=\operatorname{argmax}_{(t,a)}H^{\alpha}(t,a)=\operatorname{argmax}_{(t,t)}H^{\alpha}(t,t),\]
suggesting that the DOS statistic essentially estimates the point of the largest scaled difference on symmetric intervals of arbitrary length in the quantile function. We note that this interpretation is possible under the concavity assumption and assumption (A1). Scanning through all possible window sizes \(a\) is unnecessary and would introduce additional noise to the estimator, however, this perspective provides an interpretation that resembles the scan statistic commonly used in change-point analysis.
Lastly, we note that for small fixed values of \(a\), the function \(H^{\alpha}(t,a)\) can be interpreted in terms of the second derivative \((F^{-1})^{\prime\prime}(t)\). By employing a second-order Taylor expansion, we obtain the following approximation:
\[H^{\alpha}(t,a)=a^{2-\alpha}(F^{-1})^{\prime\prime}(t)+o(a^{2-\alpha}),\]
which for \(\alpha=1\) reduces to \(H^{1}(t,a)=a(F^{-1})^{\prime\prime}(t)+o(a)\).
Thus, for small \(a\) and any \(t\in(0,1)\), maximising \(H(t,a)\) is approximately equivalent to maximising the second derivative of \(F^{-1}\). However, for larger \(a\) this approximation does not hold anymore, and the change point depends on the behaviour of the quantile function on the whole interval.
|
2309.04062 | 3D Denoisers are Good 2D Teachers: Molecular Pretraining via Denoising
and Cross-Modal Distillation | Pretraining molecular representations from large unlabeled data is essential
for molecular property prediction due to the high cost of obtaining
ground-truth labels. While there exist various 2D graph-based molecular
pretraining approaches, these methods struggle to show statistically
significant gains in predictive performance. Recent work have thus instead
proposed 3D conformer-based pretraining under the task of denoising, which led
to promising results. During downstream finetuning, however, models trained
with 3D conformers require accurate atom-coordinates of previously unseen
molecules, which are computationally expensive to acquire at scale. In light of
this limitation, we propose D&D, a self-supervised molecular representation
learning framework that pretrains a 2D graph encoder by distilling
representations from a 3D denoiser. With denoising followed by cross-modal
knowledge distillation, our approach enjoys use of knowledge obtained from
denoising as well as painless application to downstream tasks with no access to
accurate conformers. Experiments on real-world molecular property prediction
datasets show that the graph encoder trained via D&D can infer 3D information
based on the 2D graph and shows superior performance and label-efficiency
against other baselines. | Sungjun Cho, Dae-Woong Jeong, Sung Moon Ko, Jinwoo Kim, Sehui Han, Seunghoon Hong, Honglak Lee, Moontae Lee | 2023-09-08T01:36:58Z | http://arxiv.org/abs/2309.04062v1 | # 3D Denoisers are Good 2D Teachers: Molecular Pretraining via Denoising and Cross-Modal Distillation
###### Abstract
Pretraining molecular representations from large unlabeled data is essential for molecular property prediction due to the high cost of obtaining ground-truth labels. While there exist various 2D graph-based molecular pretraining approaches, these methods struggle to show statistically significant gains in predictive performance. Recent work has thus instead proposed 3D conformer-based pretraining under the task of denoising, which led to promising results. During downstream finetuning, however, models trained with 3D conformers require accurate atom coordinates of previously unseen molecules, which are computationally expensive to acquire at scale. In light of this limitation, we propose D&D, a self-supervised molecular representation learning framework that pretrains a 2D graph encoder by distilling representations from a 3D denoiser. With denoising followed by cross-modal knowledge distillation, our approach enjoys the use of knowledge obtained from denoising as well as painless application to downstream tasks with no access to accurate conformers. Experiments on real-world molecular property prediction datasets show that the graph encoder trained via D&D can infer 3D information based on the 2D graph and shows superior performance and label-efficiency against other baselines.
## 1 Introduction
Molecular property prediction has gained much interest across the machine learning community, leading to breakthroughs in various applications such as drug discovery [19; 29] and material design [52; 42; 50; 43]. As molecules can be represented as a _2D graph_ with nodes and edges representing atoms and covalent bonds, many graph neural networks have been developed with promising results [13; 11; 7; 9; 49; 27]. However, achieving high precision requires accurate ground-truth property labels which are very expensive to obtain. This limitation has motivated adaptation of self-supervised pretraining widely used in natural language processing [12; 6] and computer vision [22; 2] onto molecular graphs with proxy objectives developed to instill useful knowledge into neural networks with unlabeled data. But existing 2D graph-based pretraining frameworks face a fundamental challenge: while the model is trained to learn representations that are invariant under various data augmentations, augmenting 2D graphs can catastrophically disrupt its topology, which renders the model unable to fully recover labels from augmented samples [57]. As a result of this limitation, recent work has shown that existing 2D pretraining approaches do not show statistically meaningful performance improvements in downstream tasks [54].
As an alternative, recent work has proposed incorporating 3D information into the pretraining objective, leveraging large unlabeled datasets of _3D conformers_, or point clouds of atoms floating in the physical space. While a natural task would be to reconstruct the input conformer, this may not induce generalizable knowledge as each conformer only represents a single local minimum in a distribution of 3D configurations. On the other hand, the force field that controls the overall stabilization process provides significant chemical information that can be used across many different molecular
properties [39]. This naturally translates to pretraining via denoising conformers under perturbations, an approach that has shown state-of-the-art performance in diverse molecular property prediction benchmarks [62, 36].
Despite great performance, a model trained with denoising requires conformers downstream as well, and obtaining accurate conformers requires expensive quantum mechanical computations. While there exist many rule-based [45, 33] as well as deep learning-based approaches [15, 59, 28] for generating conformers, previous work has shown that existing methods fail to generate conformers quickly and accurately enough to be used at a large scale [51].
In light of such limitations, we propose D&D (Denoise and Distill), a self-supervised molecular representation learning framework that enjoys the best of both worlds. Figure 1 shows the overall pipeline of our work. D&D sequentially performs two steps: 1) we pretrain a 3D teacher model that denoises conformers artificially perturbed with Gaussian noise and 2) freeze the 3D teacher encoder and distill representations from the 3D teacher onto the 2D student. When given a downstream task with access to 2D molecular graphs only, the 3D teacher is discarded and the 2D student is finetuned towards the given task. As a result of distillation, D&D encourages the 2D graph encoder to exploit the topology of the molecular graph towards encoding the input molecule similarly to the 3D conformer encoder without any explicit supervision from property labels. Surprisingly, experiments on various molecular property prediction datasets indicate that the 2D graph representations from D&D can generalize to unseen molecules. To the best of our knowledge, our method is the first self-supervised molecular representation learning framework that adopts cross-modal knowledge distillation to transfer knowledge from a 3D denoiser onto a 2D graph encoder. We summarize our main contributions as follows:
* We propose D&D, a two-step self-supervised molecular representation pretraining framework that performs 3D-to-2D cross-modal distillation.
* Pretraining results show that under D&D, the 2D student model can closely mimic representations from the 3D teacher model using graph features and topology. Further analysis shows that the intermediate representations of the 2D student also aligns well with 3D geometry.
* Experiments on the OGB benchmark and manually curated physical property datasets show that D&D leads to significant knowledge transfer, and also performs well in downstream scenarios where the number of labeled training data points is limited.
Figure 1: Comparison between D&D and existing molecular pretraining frameworks. **Top:** 2D graph-based pretraining methods fail to bring significant benefit to downstream molecular property prediction. **Middle:** 3D denoising is effective in predicting molecular properties by approximately learning the force field in the physical space, but cannot be easily applied to downstream tasks where only 2D graphs are available. **Bottom:** Our method D&D allows practitioners to leverage knowledge from 3D denoising in downstream scenarios where only 2D molecular graphs are available without the need to generate 3D conformer via expensive computations or machine learning approaches.
## 2 Related Work
In this section, we first discuss previous work on knowledge distillation that inspired our approach. We also cover existing self-supervised pretraining approaches for molecular representation learning.
Knowledge Distillation.Knowledge distillation (KD) was developed under the motivation of transferring knowledge learned by a large _teacher_ model to a much more compact _student_ model, thereby reducing the computational burden while preserving the predictive performance [23]. Example approaches in computer vision include distilling class probabilities as a soft target for classification models [1] or transferring intermediate representations of input images [56]. For dense prediction tasks such as semantic segmentation, it has been shown that a _structured_ KD approach that distills pixel-level features instead leads to improvements in performance [38]. Another extension that is more closely related to our approach is _cross-modal_ KD on unlabeled modality-paired data (e.g. RGB and Depth images), which was proposed to cope with modalities with limited data [18]. Inspired by this work, D&D performs 3D-to-2D cross-modal KD to allow downstream finetuning on 2D molecular graphs while utilizing the feature space refined by 3D conformer denoising. Further information on KD can be found in a recent survey by [17].
Pretraining for Molecular Property Prediction.Inspired by previous work in the NLP domain, there exist many self-supervised pretraining approaches for learning representations of molecular graphs. Similar to masked token prediction in BERT [12], [25] proposed node-attribute masking and context prediction to reconstruct topological structures or predict attributes of masked nodes. GROVER [46] proposed predicting what motifs exist in the molecular graph, under the insight that functional groups determine molecular properties. Contrastive approaches were also proposed, in which the task is to align representations from two augmentations of the same molecule and repel representations of different ones [21; 61]. Despite promising results, it has been shown that obtaining significant gains in performance with existing 2D pretraining methods is non-trivial, as empirical improvements rely heavily on the choice of hyperparameters and other experimental setups [54].
As molecules lie in the 3D physical space, some work has deviated from the 2D graph setting and instead proposed 3D pretraining via denoising conformers perturbed with Gaussian noise, and empirical results have shown significant knowledge transfer to diverse molecular property prediction tasks [62; 36]. Despite great downstream performance, such 3D approaches necessitate access to accurate 3D conformers of the molecules of interest, which are difficult to obtain as they require expensive quantum mechanical computations such as density functional theory (DFT) [41].
There exist solutions to avoid these drawbacks. 3DInfomax [51] proposed cross-modal contrastive pretraining frameworks that align 2D and 3D representations. In addition to contrastive pretraining, GraphMVP [35] also incorporates generative pretraining which trains the model to reconstruct the 2D graph representation from its 3D counterpart, and vice versa. While these methods use 3D conformers during pretraining only, they do not capture information from molecular force fields, which we conjecture to be helpful for forward knowledge transfer. Our D&D framework, on the other hand, enjoys the same advantages while also leveraging generalizable knowledge obtained through conformer denoising.
## 3 Preliminaries
We first introduce preliminary information on learning representations of 2D molecular graphs and 3D molecular conformers alongside notations that we use in later sections.
2D Molecular Graphs.Let \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) denote the 2D molecular graph with \(N\) atoms represented as nodes in \(\mathcal{V}\), and \(M\) bonds represented as edges in \(\mathcal{E}\). In addition to the graph-connectivity, each node is assigned features based on chemical attributes such as atomic number and aromaticity, and similarly for each edge with features based on bond type and stereo configurations. Given the graph \(\mathcal{G}\), a 2D graph encoder \(f^{\text{2D}}\) typically first returns representations for each node:
\[f^{\text{2D}}(\mathcal{G})=\mathbf{Z}^{\text{2D}}\text{ where }\mathbf{Z}^{\text{2D}} \in\mathbb{R}^{N\times d}. \tag{1}\]
In molecular property prediction settings we need a single representation for each molecular graph. Typical operators used to extract graph-level representations include mean-pooling all node representations or adding a virtual node to the input graph and treating its representation as the graph representation [20].
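As a small illustration of these two readout operators, assuming the per-node representations \(\mathbf{Z}^{\text{2D}}\) have already been computed (shapes and names below are placeholders):
```python
import torch

def graph_readout(z_nodes, virtual_idx=None):
    """Collapse per-node representations (N x d) into one graph-level vector."""
    if virtual_idx is not None:
        return z_nodes[virtual_idx]      # virtual-node readout
    return z_nodes.mean(dim=0)           # mean-pooling readout

z = torch.randn(12, 256)                 # e.g. a molecule with 12 atoms, d = 256
g = graph_readout(z)                     # graph-level representation of shape (256,)
```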
3D Molecular Conformers.Each molecule can also be represented as a 3D conformer \(\mathcal{C}=(\mathcal{V},\mathbf{R})\) with 3D spatial coordinates of each atom stored in \(\mathbf{R}\in\mathbb{R}^{N\times 3}\). Note that unlike 2D graphs, 3D conformers carry no information on the graph connectivity or the covalent bonds in \(\mathcal{E}\), and are instead treated as point cloud data. As with 2D graphs, let \(f^{\text{3D}}\) denote the 3D conformer encoder that takes the conformer \(\mathcal{C}\) and returns representations of each atom:
\[f^{\text{3D}}(\mathcal{C})=\mathbf{Z}^{\text{3D}}\text{ where }\mathbf{Z}^{\text{3D}} \in\mathbb{R}^{N\times d}. \tag{2}\]
Note that how the molecule is oriented in the 3D space naturally does not affect its chemical properties. Thus, the encoder \(f^{\text{3D}}\) must return representations that are invariant under rotations and translations on \(\mathbf{R}\) (i.e. \(f^{\text{3D}}((\mathcal{V},\mathbf{R}))=f^{\text{3D}}((\mathcal{V},g(\mathbf{R})))\) for \(g\in SE(3)\)) for efficient weight-tying across SE(3) roto-translations. Since molecular properties are not invariant to chiral orientations, we only respect rotations and translations, but not reflections. There exist many architectures that respect SE(3) symmetry as an inductive bias [14; 5; 16; 48; 55], and any such architecture can be used for \(f^{\text{3D}}\).
## 4 D&D: Denoise and Distill
Here we describe D&D, a molecular pretraining framework that transfers generalizable knowledge from 3D conformer denoising to a 2D graph encoder via cross-modal distillation, thereby allowing painless downstream applications without computing accurate conformers of unseen graphs. The two major steps are as follows: 1) Denoising perturbed conformers with a 3D conformer encoder \(f^{\text{3D}}\), and 2) Distilling representations from the 3D teacher to the 2D graph encoder \(f^{\text{2D}}\). An illustration of the overall pipeline can be found in Figure 2. As our first step of D&D is based upon previous work on conformer denoising [62; 37], we provide a brief outline of the task and refer readers to corresponding papers for further details and theoretical implications.
Step 1: Pretraining via denoising.Given a stabilized ground-truth conformer \(\mathcal{C}=(\mathcal{V},\mathbf{R})\), \(f^{\text{3D}}\) is given as input a perturbed version of the same conformer \(\tilde{\mathcal{C}}=(\mathcal{V},\tilde{\mathbf{R}})\), produced by slightly perturbing the coordinates of each atom with Gaussian noise as
\[\tilde{\mathbf{R}}_{i}=\mathbf{R}_{i}+\sigma\mathbf{\epsilon}_{i}\text{ where }\mathbf{ \epsilon}_{i}\sim\mathcal{N}(0,\mathbf{I}_{3}) \tag{3}\]
with noise scale \(\sigma\) as hyperparameter. Then, we attach a prediction head \(h^{\text{3D}}:\mathbb{R}^{N\times d}\rightarrow\mathbb{R}^{N\times 3}\) to the \(f^{\text{3D}}\) such that the combined model outputs 3-dimensional vectors per atom.
\[h^{\text{3D}}(f^{\text{3D}}(\tilde{\mathcal{C}}))=(\hat{\mathbf{\epsilon}}_{1}, \dots,\hat{\mathbf{\epsilon}}_{N}) \tag{4}\]
Figure 2: Illustration of our D&D framework. First we pretrain a 3D conformer encoding module by denoising perturbed conformers. Next we pretrain a 2D graph encoder by distilling representations from the 3D teacher. We propose two variants: D&D-Graph distills mean-pooled graph representations while D&D-Node distills node representations in a more fine-grained manner. During finetuning, we tune the 2D graph encoder only with the given downstream data.
Lastly, the model is trained to predict the noise that has been injected to create \(\tilde{\mathcal{C}}\) from \(\mathcal{C}\). The denoising loss minimized during training is as follows:
\[\mathcal{L}_{\text{denoise}}=\mathbb{E}_{p(\tilde{\mathcal{C}},\mathcal{C})} \left[\left\|h^{\text{3D}}(f^{\text{3D}}(\tilde{\mathcal{C}}))-(\mathbf{\epsilon} _{1},\dots,\mathbf{\epsilon}_{N})\right\|_{2}^{2}\right] \tag{5}\]
where \(p(\tilde{\mathcal{C}},\mathcal{C})\) denotes the probability distribution induced by the dataset distribution and the noise sampling procedure to create \(\tilde{\mathcal{C}}\). Surprisingly, the denoising objective is equivalent to learning an approximation of the actual force field in the physical space derived by replacing the true distribution of conformers with a mixture of Gaussians [62]. The Gaussian mixture potential corresponds to the classical harmonic oscillator potential in physics, which is a great approximation scheme for linearized equations such as denoising.
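Putting Eqs. (3)-(5) together, a single pretraining step could be sketched as follows; `encoder` and `head` stand for \(f^{\text{3D}}\) and \(h^{\text{3D}}\) (e.g. an SE(3)-equivariant network with a per-atom vector output), and the interface and the noise scale shown here are placeholders rather than the exact training code.
```python
import torch

def denoising_step(encoder, head, atom_types, coords, sigma=0.04):
    """Perturb a conformer (3), predict the injected noise (4), and return the loss (5)."""
    eps = torch.randn_like(coords)               # epsilon_i ~ N(0, I_3)
    noisy_coords = coords + sigma * eps          # R_tilde = R + sigma * epsilon
    z3d = encoder(atom_types, noisy_coords)      # per-atom representations, shape (N, d)
    eps_hat = head(z3d)                          # predicted noise, shape (N, 3)
    return ((eps_hat - eps) ** 2).sum(dim=-1).mean()

# loss = denoising_step(f3d, h3d, atom_types, coords); loss.backward(); optimizer.step()
```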
For our experiments, we use the TorchMD-NET [55] architecture for \(f^{\text{3D}}\) following Zaidi et al. [62] due to its equivariance to SE(3) roto-translations and high performance on quantum mechanical property prediction. Note that D&D is architecture-agnostic, and any other SE(3)-equivariant architecture can be used as well.
Step 2: Cross-modal distillation.After pretraining via denoising is done, we distill representations from the pretrained \(f^{\text{3D}}\) to a 2D graph encoder model \(f^{\text{2D}}\). We consider two different variants of cross-modal KD, leading to two respective variants of our approach. For the first variant D&D-Graph, we minimize the difference between graph representations from 2D and 3D encoders:
\[\mathcal{L}_{\text{distill-graph}}=\left\|\text{pool}(f^{\text{2D}}(\mathcal{G }))-\text{pool}(f^{\text{3D}}(\mathcal{C}))\right\|_{2}^{2} \tag{6}\]
During training, we freeze the teacher model \(f^{\text{3D}}\) and flow gradients only through the student model \(f^{\text{2D}}\). This effectively trains the 2D encoder to leverage the bond features and graph topology to imitate representations from 3D conformers. To obtain graph representations, we average all node representations inferred by each encoder.
Inspired by structured KD [38], we propose another variant, D&D-Node, that distills node-level representations without any pooling:
\[\mathcal{L}_{\text{distill-node}}=\left\|f^{\text{2D}}(\mathcal{G})-f^{\text{ 3D}}(\mathcal{C})\right\|_{2}^{2} \tag{7}\]
Unlike D&D-Graph, D&D-Node makes full use of the one-to-one correspondence between atoms in the molecular graph and atoms in the conformer. Hence \(f^{\text{2D}}\) is trained to align towards representations from \(f^{\text{3D}}\) in a more fine-grained manner.
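A sketch of the two distillation objectives is given below, with the teacher frozen so that gradients flow only through the 2D student; module and argument names are placeholders.
```python
import torch

def distill_loss(f2d, f3d, graph, conformer, variant="node"):
    """Cross-modal distillation losses of Eqs. (6) (variant='graph') and (7) (variant='node')."""
    with torch.no_grad():                            # 3D teacher is frozen
        z3d = f3d(conformer)                         # (N, d) from the conformer
    z2d = f2d(graph)                                 # (N, d) from the 2D graph
    if variant == "graph":                           # D&D-Graph: match mean-pooled representations
        return ((z2d.mean(dim=0) - z3d.mean(dim=0)) ** 2).sum()
    return ((z2d - z3d) ** 2).sum(dim=-1).mean()     # D&D-Node: match per-atom representations
```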
For \(f^{\text{2D}}\), we use the TokenGT architecture that theoretically enjoys maximal expressiveness across all possible permutation-equivariant operators on 2D graphs [31]. Due to this flexibility, we expect \(f^{\text{2D}}\) to be trained to align representations from \(f^{\text{3D}}\) as closely as possible, by which we hope to see the effect of distillation to the fullest extent. Furthermore, using an attention-based architecture also allows analysis of the relationship between the attention scores of atom-pairs and their physical distance in the 3D space; results from this analysis are discussed in Section 5. Note that, as with \(f^{\text{3D}}\), any other permutation-equivariant graph neural network architecture can be adopted seamlessly.
Downstream finetuning.Assuming the downstream task does not provide accurate conformers as input, we discard \(f^{\text{3D}}\) after the distillation step and finetune \(f^{\text{2D}}\) with molecular graphs only. We use L1 loss and BCE loss for regression and binary-classification tasks, respectively, following previous work [51].
Note that we finetune the entire \(f^{\text{2D}}\) model instead of just the newly attached prediction head on the downstream data. Given that the force fields induced by electron clouds provide knowledge that is generalizable to various molecular properties, we conjecture that D&D provides a good initial point in the parameter space from which finetuning \(f^{\text{2D}}\) entirely leads to a better local optima. This also aligns with previous observations in NLP that pretrained language models outperform models trained from scratch only when the entire model is finetuned [47].
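Downstream, the teacher is discarded and the whole 2D encoder (not just a fresh prediction head) is optimized; a minimal sketch, with placeholder module names and an illustrative learning rate:
```python
import torch
import torch.nn.functional as F

def finetune_step(f2d, pred_head, graph, target, task="regression"):
    """One downstream step: graph-level readout, prediction, and L1 or BCE loss."""
    g = f2d(graph).mean(dim=0)                       # mean-pooled graph representation
    y_hat = pred_head(g)
    if task == "regression":
        return F.l1_loss(y_hat, target)
    return F.binary_cross_entropy_with_logits(y_hat, target)

# the optimizer covers *all* parameters of the pretrained encoder and the new head:
# optimizer = torch.optim.Adam(list(f2d.parameters()) + list(pred_head.parameters()), lr=1e-4)
```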
## 5 Experiments
For empirical evaluation, we test our D&D pipeline on various molecular property prediction tasks using open-source benchmarks as well as four manually curated datasets. We also stress-test D&D under a downstream scenario where the number of labeled data points is extremely limited. All experiments are run in a remote GCP server equipped with 16 NVIDIA A100 Tensor Core GPUs.
### Experimental Setup
Datasets.For pretraining, we use PCQM4Mv2 [40], a large molecular dataset consisting of 3.7M molecules. Each molecule is paired with a single 3D conformer at the lowest-energy state computed via DFT. In the case of D&D, we use the same PCQM4Mv2 dataset for both denoising and distillation steps. Note that even though PCQM4Mv2 provides the HOMO-LUMO energy gap of each molecule as labels, we do not use any supervision from such labels during training, and instead treat the dataset as a collection of unlabeled molecular graph-conformer pairs.
For finetuning, we use ten benchmark datasets published in OGB [26], three of which are regression tasks and the rest are binary classification tasks. We use the same scaffold split provided by the OGB library. As shown in Appendix A, the OGB datasets exhibit different molecule distributions from PCQM4Mv2: some tasks involve atom types that the encoder has never observed during pretraining. We also manually curate four datasets on physical molecular properties: Melting point (MP) is a phase transition temperature from solid to liquid [4]. Boiling point (BP) is also a phase transition temperature from liquid to gas [32]. Refractive index (RI) is the ratio of the speed of light in a vacuum to its speed in the medium [60]. LogP is the partition coefficient, which indicates the ratio of concentrations of a compound in a mixture of two immiscible solvents, water and octanol, at equilibrium [32]. These four datasets are normalized by mean and standard deviation before training and evaluation. We randomly split each dataset 8:1:1 for training, validation and testing, respectively. The detailed dataset statistics can be found in Appendix A.
Baselines.We compare D&D-Graph and D&D-Node against two baselines. RandInit is a naive baseline that trains \(f^{\text{2D}}\) on each downstream task starting from randomly initialized model weights. 3DInfomax [51] is a contrastive pretraining approach that considers two representations, one from \(f^{\text{2D}}\) and another from \(f^{\text{3D}}\), to be a positive pair if they result from the same molecule, or negative pair otherwise. Given a batch of molecules, it minimizes the NTXent loss introduced in SimCLR [8] to align the positive pairs together and repel the negative pairs within the feature space. For 3DInfomax, we use cosine similarity for the similarity function \(sim(\cdot,\cdot)\) and temperature \(\tau=0.01\) for weighting negative pairs as suggested in the original paper [51].
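For reference, a sketch of an NT-Xent objective of this kind over a batch of 2D/3D representation pairs, written here in a symmetric cross-entropy form with cosine similarity and \(\tau=0.01\); the exact batching and normalization used by 3DInfomax may differ.
```python
import torch
import torch.nn.functional as F

def ntxent_2d3d(z2d, z3d, tau=0.01):
    """NT-Xent over a batch: (z2d_i, z3d_i) are positives, all other in-batch pairs negatives."""
    z2d = F.normalize(z2d, dim=-1)                   # cosine similarity via unit vectors
    z3d = F.normalize(z3d, dim=-1)
    sim = z2d @ z3d.t() / tau                        # (B, B) similarity matrix
    labels = torch.arange(z2d.size(0), device=z2d.device)
    return 0.5 * (F.cross_entropy(sim, labels) + F.cross_entropy(sim.t(), labels))
```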
When finetuning, we consider two pooling operators for extracting graph representations: 1) mean-pooling all node representations (+MP) and 2) using the virtual-node representation as the graph representation (+VN). For consistency, we follow the same featurization step provided by the OGB library across all experiments, which produces a 9- and 3-dimensional feature vector for each atom and bond, respectively. For reproducibility, we provide the list of hyperparameters in Appendix B.
### Pretraining Results
Prior to downstream evaluation, we discuss interesting findings from pretraining with D&D.
Figure 3: Training and validation loss curves during distillation of D&D-Graph and D&D-Node on PCQM4Mv2. All plots are in log-scale. The 2D student is able to closely distill representations from the 3D teacher with small generalization gap.
The 2D graph encoder can closely mimic representations from 3D conformers using only the molecular graph.Figure 3 shows the training and validation loss curves of the distillation step of D&D. When pretraining with D&D-Node, the distillation loss converges to slightly over \(10^{-2}\) with a very small generalization gap between validation and training. This shows that the 2D molecular graph contains enough information to closely imitate representations from the 3D teacher \(f^{\text{3D}}\). The small gap between training and validation also reflects that the guidance provided via D&D-Node generalizes well to unseen molecules. When pretraining with D&D-Graph, we find that the training loss converges to a much lower optimum of \(10^{-3}\), but with a much larger generalization gap of approximately \(10^{-3}\). This implies that while the task of distilling mean-pooled representations is easier than distilling node-wise representations, it leads to less generalizable knowledge due to not considering the graph topological structure.
The intermediate encoding procedure of the 2D encoder trained via D&D aligns with 3D geometry.As we use an attention-based architecture for \(f^{\text{2D}}\), we qualitatively assess whether the encoder processes molecular graphs as if it had access to the 3D geometry, even without ground-truth conformers, by evaluating how it performs attention across atoms during inference (e.g. do atoms nearby in the 3D space tend to attend to each other?). Specifically, we compute the absolute Pearson correlation between the 3D pairwise distances of atoms and the inner product of their features prior to the softmax layer in each attention head, averaged across all molecules in the PCQM4Mv2 validation set. Note that a larger inner product implies a relatively larger exchange of information between the two atoms. The first two figures in Figure 4 show histograms depicting distributions of averaged absolute Pearson correlation values from all attention heads for each layer in the 2D encoder after pretraining by each method. We find that 3DInfomax only leads to a slight increase in correlation compared to RandInit: most correlation values are distributed under 0.3. When pretrained with our D&D-variants, however, many attention heads show correlations that exceed 0.3, a value that is never reached with randomly initialized weights. This implies that our approach provides guidance to the 2D graph encoder towards processing molecular graphs while respecting their 3D geometry.
For further investigation, we also measure the average pairwise distances weighted by the attention scores from D&D-Node with results shown in the rightmost plot in Figure 4. A higher value indicates that the attention head tends to exchange information across atoms that are far apart. Interestingly, the first layer exhibits a diverse range of distances, but the layer that immediately follows uses attention mostly to exchange information across atoms that are geometrically nearby each other, similar to a SE(3)-convolutional layer. Considering that a carbon-carbon single bond has an average length of 1.5 angstroms, this result indicates that \(f^{\text{2D}}\) pretrained with D&D-Node can reason about 3D geometry to exchange information across atoms that are nearby in the 3D conformer, even though they may be far apart in the 2D graph.
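A sketch of these two diagnostics for a single molecule and a single attention head, assuming the pre-softmax attention logits, the post-softmax attention matrix, and the ground-truth coordinates are available as arrays; names are illustrative.
```python
import numpy as np

def attention_distance_corr(logits, coords):
    """|Pearson correlation| between pre-softmax attention logits and 3D pairwise distances."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)   # (N, N) distances
    iu = np.triu_indices(d.shape[0], k=1)                                  # distinct atom pairs
    return abs(np.corrcoef(logits[iu], d[iu])[0, 1])

def attention_weighted_distance(attn, coords):
    """Average 3D pairwise distance weighted by the attention scores of one head."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return float((attn * d).sum() / attn.sum())
```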
### Finetuning Results
Here we first provide empirical observations from downstream evaluation on OGB and Curated datasets. We also experiment on the QM9 dataset using a different 2D GNN model, to show that our pretraining approach is effective with other architectures as well.
Figure 4: Histograms of Pearson correlation values between pre-softmax attention scores vs. 3D pairwise distance during inference on the PCQM4Mv2 validation set for (Left) Contrastive, (Middle) D&D-Graph and D&D-Node. (Right) Average attention score-weighted 3D distances according to network depth from D&D-Node. Each colored dot represents an attention head in the corresponding layer.
D&D transfers knowledge that is generalizable across diverse tasks.Table 1 shows finetuning results on OGB, in which each experiment is averaged across 3 runs with random seeds. Comparing best results from D&D and RandInit, our method shows superior performance in 11 out of 12 tasks, with an average performance increase of 8.8% and 36.9% across OGB and Curated datasets, respectively. Surprisingly, when categorizing all tasks into 3 groups (OGB-regression, OGB-classification, and Curated), the tasks on which D&D shows the largest performance increase against RandInit from each group coincide with properties known to align well with 3D geometrical properties (28.1% for LIPO, 6.58% for HIV, and 55.99% for logP). For instance, solving the HIV task requires assessment of the 3D shape to estimate its binding affinity with protein pockets. Both LIPO and logP tasks are tightly associated with the overall polarity of electron clouds in the molecule, which is highly associated with the spatial positioning of the atoms. This implies that denoising followed by distillation effectively transfers 3D knowledge.
When compared against 3DInfomax, D&D shows 15.8% better performance for OGB datasets and 37.2% in Curated datasets on average. This suggests that transferring molecular-specific knowledge of force fields is much more effective than contrasting representations as a proxy task; attracting and repelling molecular representations fail to fully capture generalizable similarities and discrepancies in the chemical space. Another limitation of the contrastive approach is that the gain in performance becomes limited when only a single conformer is provided per molecule [35; 51]. This aligns well with our intuition that each conformer can be seen as a single sample from a distribution of 3D configurations, and that learning a single local optimum within the distribution does not provide much information. Meanwhile, D&D can learn and transfer knowledge of the overall distribution with denoising and distilling,
\begin{table}
\begin{tabular}{r|c|c c c c|c c c} \hline \hline & & \multicolumn{4}{c|}{OGB} & \multicolumn{2}{c}{Curated} \\ \hline & Dataset & ESOL & FREESOLV & LIPO & BACE & BBBP & BP & MP \\ & Metric & RMSE(\(\downarrow\)) & RMSE(\(\downarrow\)) & RMSE(\(\downarrow\)) & ROC-AUC(\(\uparrow\)) & ROC-AUC(\(\uparrow\)) & MAE(\(\downarrow\)) & MAE(\(\downarrow\)) \\ \hline RandInit & +MP & 1.0491\(\pm\)0.0033 & 3.6445\(\pm\)0.0074 & 0.9702\(\pm\)0.0018 & 0.6360\(\pm\)0.1058 & 0.6453\(\pm\)0.0153 & 0.3000\(\pm\)0.0027 & 0.3672\(\pm\)0.0009 \\ & +VN & 1.1074\(\pm\)0.0075 & 3.5325\(\pm\)0.0043 & 0.9703\(\pm\)0.0045 & 0.7078\(\pm\)0.0264 & 0.6305\(\pm\)0.0066 & 0.3014\(\pm\)0.0052 & 0.3662\(\pm\)0.0033 \\
3DInfomax & +MP & 2.0037\(\pm\)0.0106 & 4.6854\(\pm\)0.0454 & 1.0167\(\pm\)0.0124 & 0.5701\(\pm\)0.0034 & 0.6238\(\pm\)0.0020 & 0.3257\(\pm\)0.0077 & 0.3862\(\pm\)0.0016 \\ & +VN & 1.5266\(\pm\)0.0007 & 2.4503\(\pm\)0.0072 & 0.9600\(\pm\)0.0026 & 0.6016\(\pm\)0.0060 & 0.6535\(\pm\)0.0088 & 0.3160\(\pm\)0.0055 & 0.371\(\pm\)0.002 \\ \hline D\&D-Graph & +MP & **0.9276\(\pm\)0.0032** & **2.6841\(\pm\)0.0076** & 0.7646\(\pm\)0.0125 & 0.6906\(\pm\)0.139 & 0.6775\(\pm\)0.0117 & 0.2476\(\pm\)0.0035 & 0.356\(\pm\)0.007 \\ & +VN & 0.9934\(\pm\)0.0033 & 2.7004\(\pm\)0.0046 & 0.7533\(\pm\)0.0014 & **0.7271\(\pm\)0.0027** & 0.6714\(\pm\)0.0044 & 0.2389\(\pm\)0.0018 & 0.3096\(\pm\)0.0032 \\ D\&D-Node & +WP & 1.0079\(\pm\)0.0031 & 2.3953\(\pm\)0.0027 & **0.6979\(\pm\)0.0037** & 0.5828\(\pm\)0.027 & 0.6754\(\pm\)0.0028 & **0.2220\(\pm\)0.002** & **0.2937\(\pm\)0.0003** \\ & +VN & 1.0862\(\pm\)0.00387 & 3.4056\(\pm\)0.0221 & 0.7425\(\pm\)0.0113 & 0.6649\(\pm\)0.1460 & **0.6788\(\pm\)0.008** & 0.2296\(\pm\)0.0028 & 0.2965\(\pm\)0.0006 \\ \hline & Dataset & CLINTOK & HIV & SIDER & TOX21 & TOX4ST & RI & logP \\ & Metric & ROC-AUC(\(\uparrow\)) & ROC-AUC(\(\uparrow\)) & ROC-AUC(\(\uparrow\)) & ROC-AUC(\(\uparrow\)) & MAE(\(\downarrow\)) & MAE(\(\downarrow\)) \\ \hline RandInit & +MP & 0.6411\(\pm\)0.00230 & 0.7291\(\pm\)0.0014 & 0.6005\(\pm\)0.0058 & 0.7219\(\pm\)0.0053 & 0.6275\(\pm\)0.0046 & 0.1243\(\pm\)0.0022 & 0.0501\(\pm\)0.0030 \\ & +MN & 0.6834\(\pm\)0.0048 & 0.6758\(\pm\)0.0202 & 0.5930\(\pm\)0.0046 & 0.7190\(\pm\)0.0051 & 0.6330\(\pm\)0.0020 & 0.1250\(\pm\)0.0034 & 0.0503\(\pm\)0.0034 \\
3DInfomax & +MP & **0.6919\(\pm\)0.0049** & 0.7295\(\pm\)0.0064 & 0.5979\(\pm\)0.0066 & 0.6925\(\pm\)0.0114 & 0.5824\(\pm\)0.0009 & 0.1465\(\pm\)0.0066 & 0.0458\(\pm\)0.0030 \\ & +VN & 0.6815\(\pm\)0.0361 & 0.7165\(\pm\)0.0002 & 0.6004\(\pm\)0.0126 & 0.6979\(\pm\)0.0006 & 0.5879\(\pm\)0.0033 & 0.1344\(\pm\)0.0022 & 0.0413\(\pm\)0.0030 \\ \hline D\&D-Graph & +MP & 0.6716\(\pm\)0.0031 & 0.7766\(\pm\)0.0033 & 0.5989\(\pm\)0.0006 & 0.7541\(\pm\)0.0034 & **0.6478\(\pm\)0.0036** & 0.0852\(\pm\)0.0033 & 0.0274\(\pm\)0.0041 \\ & +VN & 0.6710\(\pm\)0.00493 & **0.7771\(\pm\)0.0048** & 0.6117\(\pm\)0.0029 & 0.7554\(\pm\)0.0030 & 0.6363\(\pm\)0.0035 & 0.0806\(\pm\)0.0034 & 0.0215\(\pm\)0.0049 \\ D\&D-Node & +MP & 0.5825\(\pm\)0.00257 & 0.7672\(\pm\)0.0042 & 0.6017\(\pm\)0.0113 & 0.7549\(\pm\)0.0055 & 0.6432\(\pm\)0.0033 & 0.0688\(\pm\)0.0017 & **0.0220\(\pm\)0.0006** \\ & +VN & 0.6594\(\pm\)0.0052 & 0.7645\(\pm\)0.0113 & **0.6257\(\pm\)0.0046** & **0.7556\(\pm\)0.0008** & 0.6421\(\pm\)0.0011 & **0.666\(\pm\)0.0044** & 0.0226\(\pm\)0.0000 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Average performance and standard deviations for OGB and Curated datasets: 12 tasks on the left are from OGB, and 4 tasks on the right are from manually curated data. Best results are in **bold**.
Figure 5: Experimental results demonstrating the label-efficiency of D&D on each OGB (upper two rows) and Curated (bottom-row) task. X-axes indicate the percentage of training data used for finetuning, and Y-axes show performance measurements on the respective test sets.
**D&D enables label-efficient finetuning.** To evaluate how well D&D performs in downstream scenarios with a limited number of labels, we also perform finetuning on a smaller randomly sampled subset of the original training data for each task. Figure 5 shows quantitative results for RandInit and D&D-Node. In 4 out of 10 OGB tasks and all 4 Curated datasets, D&D-Node trained with only 10% of the training data shows accuracy comparable to that of RandInit trained on the full dataset. This demonstrates the utility of our approach in preventing overfitting to small data by leveraging generalizable knowledge from denoising. For BACE and SIDER, we find that the gain from pretraining with D&D is relatively limited compared to other tasks. While results from other OGB tasks imply that our approach can generalize to molecules with sizes that deviate from those in PCQM4Mv2 (an average of 14.1 nodes per molecule for PCQM4Mv2 vs. up to 27.0 for other OGB tasks), we conjecture that too large a difference in molecule size (vs. 34.1 for BACE and 33.6 for SIDER) may hamper generalization.
**D&D is also effective on other datasets and GNN architectures.** In addition to the experiments above, we also run D&D on the QM9 benchmark [44] with 134K molecules, using a different GNN architecture for the 2D student model to test whether our approach works in a model-agnostic fashion. Specifically, we pretrain TorchMD-NET [55] on the QM9 dataset via denoising, and distill its representations from QM9 onto PNA [10], a message-passing GNN model previously used in [51]. We then finetune the pretrained 2D student on each of the properties in QM9. Note that we do not perform hyperparameter tuning for this task, but instead use the same default setting for PNA chosen by [51], including the same set of random seeds for fair comparison. More details on the experimental setup can be found in Appendix C.
In Table 2, we observe that D&D performs competitively against 3DInfomax on 7 out of 8 targets. More interestingly, D&D-Node is notably more effective than D&D-Graph on QM9, significantly outperforming 3DInfomax on the \(\alpha\) and gap properties. Considering that molecules in QM9 are labeled with energy-related properties through the same DFT computation used to solve their 3D atom coordinates, we conjecture that the structured-distillation approach is better suited to such properties because it transfers atom representations in a fine-grained manner. As we propose two variants of D&D, it would be interesting to investigate whether the empirical comparison between D&D-Node and D&D-Graph aligns with domain-specific interpretations of each chemical property, which we leave as future work.
## 6 Conclusion
In this paper, we propose D&D, a novel self-supervised molecular representation learning framework that allows models pretrained via 3D conformer denoising to be used for downstream tasks that only provide 2D molecular graphs as input. As molecular force fields provide chemically generalizable information across various tasks, D&D demonstrates significant knowledge transfer to diverse molecular property prediction tasks. Additional analyses show that D&D is also highly label-efficient, yielding significant performance boosts in downstream settings where the amount of labeled training data is limited. As future work, we hope to extend D&D towards a multitask setting [38] to see if we can finetune a single model that performs well simultaneously across multiple molecular properties. Another exciting direction is to explore the use of generative models [15; 59; 28] for molecular property prediction. Inspired by the use of image diffusion models for semantic segmentation [3], it would be interesting to test whether intermediate representations inferred by diffusion-based generative models can be leveraged downstream for generalizable knowledge transfer.
\begin{table}
\begin{tabular}{l|c c c c c c c c} \hline \hline Target & \(\mu\) & \(\alpha\) & HOMO & LUMO & GAP & R2 & ZPVE & \(c_{e}\) \\ Metric & MAE(\(\downarrow\)) & MAE(\(\downarrow\)) & MAE(\(\downarrow\)) & MAE(\(\downarrow\)) & MAE(\(\downarrow\)) & MAE(\(\downarrow\)) & MAE(\(\downarrow\)) & MAE(\(\downarrow\)) \\ \hline RandInit & 0.4133\(\pm\)0.003 & 0.3972\(\pm\)0.014 & 82.10\(\pm\)0.38 & 85.72\(\pm\)1.62 & 123.08\(\pm\)3.98 & 22.14\(\pm\)0.21 & 15.08\(\pm\)2.83 & 0.1670\(\pm\)0.004 \\
3DInfomax & **0.3507\(\pm\)0.005** & 0.3268\(\pm\)0.006 & **68.96\(\pm\)0.32** & **69.51\(\pm\)0.54** & 101.71\(\pm\)2.03 & **17.39\(\pm\)0.54** & **7.966\(\pm\)1.87** & 0.1306\(\pm\)0.009 \\ \hline D&D-Graph & **0.3512\(\pm\)0.005** & 0.2903\(\pm\)0.029 & 70.36\(\pm\)2.70 & 71.72\(\pm\)2.17 & 98.82\(\pm\)1.09 & **17.61\(\pm\)0.68** & 12.88\(\pm\)3.42 & **0.1248\(\pm\)0.006** \\ D&D-Node & 0.3552\(\pm\)0.004 & **0.2807\(\pm\)0.045** & **69.32\(\pm\)1.81** & **69.63\(\pm\)0.62** & **98.79\(\pm\)0.59** & **17.75\(\pm\)0.43** & 10.19\(\pm\)1.77 & 0.1429\(\pm\)0.015 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Average performance and standard deviations for QM9 quantum property prediction. Baseline results are those reported by [51]. Best results within one standard deviation are in **bold**. |
2310.00367 | AutomaTikZ: Text-Guided Synthesis of Scientific Vector Graphics with
TikZ | Generating bitmap graphics from text has gained considerable attention, yet
for scientific figures, vector graphics are often preferred. Given that vector
graphics are typically encoded using low-level graphics primitives, generating
them directly is difficult. To address this, we propose the use of TikZ, a
well-known abstract graphics language that can be compiled to vector graphics,
as an intermediate representation of scientific figures. TikZ offers
human-oriented, high-level commands, thereby facilitating conditional language
modeling with any large language model. To this end, we introduce DaTikZ, the
first large-scale TikZ dataset consisting of 120k TikZ drawings aligned with
captions. We fine-tune LLaMA on DaTikZ, as well as our new model CLiMA, which
augments LLaMA with multimodal CLIP embeddings. In both human and automatic
evaluation, CLiMA and LLaMA outperform commercial GPT-4 and Claude 2 in terms
of similarity to human-created figures, with CLiMA additionally improving
text-image alignment. Our detailed analysis shows that all models generalize
well and are not susceptible to memorization. GPT-4 and Claude 2, however, tend
to generate more simplistic figures compared to both humans and our models. We
make our framework, AutomaTikZ, along with model weights and datasets, publicly
available. | Jonas Belouadi, Anne Lauscher, Steffen Eger | 2023-09-30T13:15:49Z | http://arxiv.org/abs/2310.00367v2 | # AutomataTikZ: Text-Guided Synthesis of Scientific Vector Graphics with TikZ
###### Abstract
Generating bitmap graphics from text has gained considerable attention, yet for scientific figures, vector graphics are often preferred. Given that vector graphics are typically encoded using low-level graphics primitives, generating them directly is difficult. To address this, we propose the use of TikZ, a well-known abstract graphics language that can be compiled to vector graphics, as an intermediate representation of scientific figures. TikZ offers human-oriented, high-level commands, thereby facilitating conditional language modeling with any large language model. To this end, we introduce DaTikZ, the first large-scale TikZ dataset, consisting of 120k TikZ drawings aligned with captions. We fine-tune LLaMA on DaTikZ, as well as our new model CLiMA, which augments LLaMA with multimodal CLIP embeddings. In both human and automatic evaluation, CLiMA and LLaMA outperform commercial GPT-4 and Claude 2 in terms of similarity to human-created figures, with CLiMA additionally improving text-image alignment. Our detailed analysis shows that all models generalize well and are not susceptible to memorization. GPT-4 and Claude 2, however, tend to generate more simplistic figures compared to both humans and our models. We make our framework, AutomaTikZ, along with model weights and datasets, publicly available.1
Footnote 1: [https://github.com/potamides/AutomaTikZ](https://github.com/potamides/AutomaTikZ)
## 1 Introduction
Recent advancements in text-to-image generation have facilitated the generation of detailed images from simple natural language descriptions (Esser et al., 2021; Ramesh et al., 2021, 2022; Saharia et al., 2022; Rombach et al., 2022; Zhang et al., 2023a). Models like Stable Diffusion (Rombach et al., 2022) and DALL-E (Ramesh et al., 2021, 2022) often yield results comparable to real photographs or human-created artworks. However, these models primarily generate _raster graphics_, typically at low resolutions, which are not ideal for _scientific figures_. Researchers use scientific figures to convey complex ideas or present critical findings, making them central to scientific research (Tufte, 1992; Hsu et al., 2021). Consequently, they demand a high degree of geometric precision and legible text, even at small font sizes, areas where raster graphics fall short. As a result, many research conferences advocate the use of _vector graphics_,2 which decompose information into geometric shapes, allow searchable text, and usually have smaller file sizes.
Footnote 2: [https://acl-org.github.io/ACLPUB/formatting.html](https://acl-org.github.io/ACLPUB/formatting.html)
Automated vector graphics generation is a growing research area as well (Lopes et al., 2019; Carlier et al., 2020; Aoki & Aizawa, 2022; Ma et al., 2022; Frans et al., 2022; Jain et al., 2023; Wu et al., 2023), but current methods have their own share of limitations. Specifically, they mainly generate low-level path elements of the Scalable Vector Graphics (SVG) format, either failing to maintain
accurate geometric relations (Ma et al., 2022; Frans et al., 2022; Jain et al., 2023) or only generating outputs of limited complexity such as single icons or font characters (Lopes et al., 2019; Carlier et al., 2020; Aoki and Aizawa, 2022; Wu et al., 2023).
To address these limitations, we explore the use of _graphics languages_, which abstract from lower-level vector graphics formats by providing high-level constructs that can be compiled to such formats (Van Zandt, 2007; Hobby, 2014; Tantau, 2023). Language models show potential in learning these languages to solve simple tasks (Bubeck et al., 2023; Zhang et al., 2023b), but the depth of this capability, i.e., whether they can produce scientific figures, remains unexplored. Due to its expressiveness and emphasis on science, which enables the creation of complex figures with only a few commands, we focus on the graphics language _TikZ_ in this work (Tantau, 2023). We aim to understand whether language models can capture the nuances of TikZ and automatically generate scientific figures based on image captions, analogous to text-to-image generation. This could not only enhance productivity and foster inclusiveness (aiding researchers less versed in programming-like languages, such as social scientists), but also aid education by creating tailored TikZ examples. The use case for this is demonstrated by the TeX Stack Exchange3, where nearly 10% of the asked questions pertain to TikZ, making it the most frequently discussed topic on the platform. Our key contributions are as follows:
Footnote 3: [https://tex.stackexchange.com](https://tex.stackexchange.com)
1. As part of our AutomaTikZ project, we create DaTikZ, the first large-scale TikZ dataset to our knowledge, featuring approximately 120k paired TikZ drawings and captions.
2. We fine-tune the large language model (LLM) LLaMA (Touvron et al., 2023a) on DaTikZ and compare its performance to general-purpose LLMs, specifically GPT-4 (OpenAI, 2023) and Claude 2 (Anthropic, 2023). Both automatic and human evaluation agree that scientific figures generated by fine-tuned LLaMA resemble human-created figures more closely.
3. We further develop CLiMA, a variant of LLaMA augmented with multimodal CLIP embeddings (cf. Figure 1). This enhancement allows CLiMA to visually interpret input captions, thereby improving text-image alignment. It also enables the use of images as supplementary inputs, leading to a further boost in performance.
4. In addition, we demonstrate that all models exhibit few memorization problems and generate novel outputs. However, GPT-4 and Claude 2 tend to generate simpler outputs than LLaMA and CLiMA, sometimes producing degenerate solutions that maximize text-image similarity by visibly copying the input caption into the output image.
## 2 Related Work
Our work connects to several distinct but interrelated fields, namely scientific figure understanding, text-to-image generation, vector graphics generation, and code generation. For each field, we provide a comprehensive review of the most relevant prior work.
Figure 1: Exemplary scientific figures generated with CLiMA. CLiMA takes the captions as input, processes them with CLIP and LLaMA, and generates TikZ drawings that compile to vector graphics.
**Scientific Figure Understanding.** Despite the surprisingly small number of approaches dedicated to generating scientific figures, scientific figure _understanding_ is a subject of extensive research. Arguably, the task that is inverse to ours is the captioning of scientific figures. Expanding on prior work in scientific visual question-answering (Siegel et al., 2016; Kahou et al., 2018; Kafle et al., 2018), Chen et al. (2019, 2020) train a captioning model using a corpus of synthesized figure-caption pairs. Hsu et al. (2021) extend this approach to real scientific figures, noting substantial challenges. To improve performance, Yang et al. (2023) reformulate the task, augmenting captions with OCR tokens and paragraphs referencing the figures. Singh et al. (2023) take a different approach and utilize reinforcement learning to consider the quality of the captions during training. In addition to such task-oriented models, recent advancements in multimodal large language modeling (MLLM; Liu et al., 2023; Dai et al., 2023; Yin et al., 2023) allow for generalized visual reasoning about scientific figures (Ye et al., 2023; Zhang et al., 2023; Horawalavithana et al., 2023).
**Text-to-Image & Vector Graphics Generation.** The evolution of text-to-image generation can be characterized by three development stages: Generative Adversarial Networks (Reed et al., 2016; Zhang et al., 2017; Brock et al., 2019; Kang et al., 2023), auto-regressive models (Ramesh et al., 2021; Esser et al., 2021; Ding et al., 2021; Chang et al., 2023), and diffusion models (Rombach et al., 2022; Ramesh et al., 2022; Saharia et al., 2022; Zhang et al., 2023). Despite their differences, these approaches share a common limitation in that they can only generate raster graphics.
Vector graphics generation has evolved as a parallel field. Building upon innovative work in sketch generation (Ha and Eck, 2018), Lopes et al. (2019) generate single SVG font characters made up of straight lines and Bezier curves. Carlier et al. (2020) extend this approach to include SVG _icons_. However, none of these models support text conditioning. Another branch of research focuses on vectorizing text-to-image models (Ma et al., 2022; Frans et al., 2022; Jain et al., 2023). Although these methods enable text conditioning, text-to-image models typically have difficulties producing flat-colored SVG-style images, and the vectorized results tend to have imprecise geometric relations and jagged paths (Wu et al., 2023). Addressing these challenges, Wu et al. (2023) sequentialize and tokenize the path elements of SVG icons, allowing for auto-regressive language modeling on SVG representations with text conditioning. While this approach successfully captures the aesthetic of vector graphics, it is limited to generating monochrome icons of limited complexity.
**Code Generation.** As graphics languages are a subset of programming languages, our work is closely related to code generation (Xu et al., 2022). At its core is the ongoing research on pre-training or fine-tuning LLMs on code (Roziere et al., 2023; Li et al., 2023; Fried et al., 2023; Li et al., 2022; Chen et al., 2021), commonly with a multitask objective (Fried et al., 2023) consisting of causal language modeling and infilling prediction (Bavarian et al., 2022). Despite the significant amount of recent progress, the primary focus of code generation remains on high-resource programming languages such as Python, Java, or JavaScript (Zan et al., 2023). TikZ commands, in comparison, are invoked as TeX macros, and the TeX programming language is considered low-resource and typically overlooked in model evaluations. Yet, TeX may still exist in training corpora, as evidenced by GPT-4's ability to comprehend TeX and TikZ (Bubeck et al., 2023; Zhang et al., 2023). As far as we know, there has been no comprehensive evaluation of this capability, which we also address in this work.
## 3 The DaTikZ Dataset
DaTikZ is, to the best of our knowledge, the first large-scale dataset of TikZ drawings with corresponding captions. TikZ is well-known within the TeX community, but its resources are scattered across the internet, making the creation of a large dataset a fundamental challenge of our work. Consequently, we gather TikZ drawings and their captions from a variety of online resources, as detailed below.
### Data Acquisition
We collect the data from dedicated websites, online repositories, the TeX Stack Exchange, arXiv papers, and also artificial examples. A comprehensive overview of our data collection is provided in Table 1. We gather TikZ drawings created between November 2006 and June 2023 that can be successfully compiled with TeX Live 2023.4
**Curated Examples.** Several websites and online repositories5 focus on collecting and sharing TikZ drawings for educational purposes and to showcase high-quality examples. Through web scraping, we retrieve any TikZ drawings from these sites that have associated captions.
Footnote 5: [https://example.net](https://example.net), [https://tikz.net](https://tikz.net), [https://pgfplots.net](https://pgfplots.net) & [https://github.com](https://github.com) projects
**TeX Stack Exchange.** We also source TikZ drawings from the TeX Stack Exchange (cf. §1). We examine the quarterly data dump and extract questions tagged with TikZ and relevant answers with a minimum score of 1. To convert textual questions into image captions, we utilize WizardLM (Xu et al., 2023), an LLM trained to follow arbitrary instructions. Using the title and body of a question as context, we task WizardLM with creating a descriptive caption for the figure provided in the answer. More details on the caption generation procedure can be found in Appendix B.
**ArXiv Papers.** ArXiv6 is a widely-used open-access archive for scientific articles. As arXiv encourages researchers to upload their papers alongside their source files, it serves as a valuable resource for obtaining TikZ drawings. Initially, we retrieve all papers with TeX source files and retain those that use the TikZ package. Subsequently, we expand any include directives and extract all TikZ environments using regular expressions. To ensure compilability, we additionally preserve all required preamble elements. For that, we first establish a set of rules, derived by analyzing documents obtained from other sources, that determine which package imports and configuration options should be retained. We then parse all macro definitions and keep, for each TikZ drawing, the macros it uses. Finally, we exclude any TikZ drawings that fail to compile after this extraction process (around 120k).
Footnote 6: [https://www.org/open2i.org](https://www.org/open2i.org)
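The environment extraction step described in the ArXiv Papers paragraph above can be illustrated with a short Python sketch. This is a deliberately simplified, hypothetical version of the procedure (it ignores nested environments, \input expansion, and preamble reconstruction); the regular expression and the helper name are ours.

```python
import re

# Non-greedy match of complete tikzpicture environments; DOTALL lets '.' span newlines.
TIKZ_ENV = re.compile(r"\\begin\{tikzpicture\}.*?\\end\{tikzpicture\}", re.DOTALL)

def extract_tikz(tex_source: str) -> list[str]:
    """Return every (non-nested) tikzpicture environment in a TeX source string."""
    return TIKZ_ENV.findall(tex_source)

example = r"""
\documentclass{article}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
  \draw[->] (0,0) -- (1,1) node[above] {$x$};
\end{tikzpicture}
\end{document}
"""
print(extract_tikz(example))  # one environment found
```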
**Artificial Examples.** GPT-4 has demonstrated the emergent ability to generate simple, tangible objects (e.g., unicorns) in TikZ (Bubeck et al., 2023). While not the primary focus of this work, we seek to transfer this ability to our models through knowledge distillation (Bucila et al., 2006). To this end, we compile a diverse set of object descriptions derived from the object categories in the MS COCO, LVIS, and VISOR datasets (Lin et al., 2015; Gupta et al., 2019; Gokhale et al., 2023). Moreover, we sample emoji descriptions from the OpenMoji database.7 Following this, we instruct GPT-4 to generate a TikZ drawing for each description, using a chain-of-thought prompt (Wei et al., 2023) that we adopt from Zhang et al. (2023b), as detailed in Appendix B.
Footnote 8: In this work, we use the Moses tokenizer Koehn et al. (2007) to count tokens.
### Data Augmentation
Prior research indicates a correlation between caption length and caption quality (Gelman et al., 2002; Hartley, 2003; Huang et al., 2023). Notably, Huang et al. (2023) propose a heuristic that judges scientific captions with fewer than 30 tokens as being of poor quality. Given the recent advancements in MLLMs, and notably in MLLMs with a focus on science (cf. §2), we propose the automatic augmentation of such captions (Belouadi & Eger, 2023a;b).
Specifically, we leverage LLaVAR (Zhang et al., 2023), instructing it to generate short descriptions for TikZ drawings whose captions contain fewer than 30 tokens (cf. Appendix B).9 Inspired by the CapFilt method (Li et al., 2022), we generate five candidate descriptions and rank them based on their text-image similarity using CLIPScore (Hessel et al., 2021). The final augmented caption is then formed by concatenating the original caption with the top-ranked description. For GPT-4, we cannot rely on the heuristic, as the captions used are not scientific. Instead, we augment all captions to increase diversity in our dataset while retaining the original captions as well. Table 1 displays the percentage of augmented captions in our dataset. On average, this method increases the CLIPScore of captions with originally fewer than 30 tokens from 24.76 to 29.12, a substantial improvement in text-image similarity, especially considering that CLIPScore typically ranges between zero and 40 (Hessel et al., 2021). The CLIPScore for original captions exceeding 30 tokens is 27.06.
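A minimal Python sketch of this ranking-based augmentation is given below; the two callables stand in for the MLLM and the CLIP similarity model described above, the whitespace token count is a crude stand-in for the Moses tokenizer, and all names are illustrative rather than part of the released pipeline.

```python
def augment_caption(caption, image, generate_candidates, clip_score,
                    n_candidates=5, min_tokens=30):
    """CapFilt-style augmentation: keep long captions as-is, otherwise append
    the candidate description that best matches the rendered figure.

    generate_candidates(image, n) -> list[str]  # e.g. an MLLM such as LLaVAR
    clip_score(image, text)       -> float      # CLIP text-image similarity
    Both callables are placeholders for the models described in the text.
    """
    if len(caption.split()) >= min_tokens:   # heuristic: long captions kept
        return caption
    candidates = generate_candidates(image, n_candidates)
    best = max(candidates, key=lambda c: clip_score(image, c))
    return f"{caption} {best}"
```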
| **Source** | **Size** | **Augmented** |
| --- | ---: | ---: |
| Curated Examples | 981 | 63.20% |
| TeX Stack Exchange | 29,238 | 51.31% |
| ArXiv Papers | 85,656 | 67.75% |
| Artificial Examples | 3,914 | 50.00% |
| All | 119,789 | 62.71% |

Table 1: Detailed breakdown of DaTikZ showing size and percentage of augmented data for the whole dataset and each source individually.
## 4 Methods
We leverage LLaMA (Touvron et al., 2023a) as the base model in most experiments, using captions from DaTikZ as model input and TikZ code as ground truth. Since TeX source files from arXiv were included in LLaMA's pre-training data, it may have prior knowledge beneficial for this task. We choose the original LLaMA release over its updated revisions, LLaMA 2 (Touvron et al., 2023b) and CodeLLaMA (Roziere et al., 2023), as their training data is not as clearly specified. This uncertainty and their more recent release would make it difficult to create a test set without training-to-test data leakage. We also experiment with GPT-4 and Claude 2, as earlier research hints at their inherent potential for our task (cf. §3.1 and §2), and employ the same chain-of-thought approach outlined in §3.1. However, as they are proprietary, we can only address data leakage for LLaMA (Aiyappa et al., 2023).
### CLiMA
A potential drawback of vanilla LLaMA is that it may not understand visual concepts, given that it was not designed to process image data. However, this ability could significantly enhance the creation of scientific figures. Therefore, we modify LLaMA by combining it with a CLIP ViT-H/14 model (Cherti et al., 2023; Radford et al., 2021). CLIP is frequently employed to establish a bridge between vision and natural language, thereby facilitating the creation of MLLMs (Yin et al., 2023).
Unlike most MLLM methods, however, we utilize the _multimodal_ projection layer of CLIP, enabling us to extract visual information from both text and images within a common embedding space (cf. Figure 1). This approach is akin to text-to-image models like DALL-E and CLIP-GEN (Wang et al., 2022) that make use of this duality to generate raster graphics. In our case, our primary objective is to provide LLaMA with a visual interpretation of the input caption, anticipating that this adjustment will boost the alignment with generated TikZ drawings. In addition, it also enables us to experiment with supplying rasterized scientific figures as an additional input (cf. §5). As this new model can be described as using **CLIP** inside LLaMA, we refer to it as _CLiMA_.
We accomplish this integration by connecting CLIP's output with LLaMA's input via soft prompting (Lester et al., 2021); i.e., we prepend CLIP's embedding vector to LLaMA's input embeddings of the caption. This requires adding a feed-forward layer with dimensions \(\delta_{\text{ViT-H/14}}\times\delta_{\text{LLaMA}}\) to connect image features of dimension \(\delta_{\text{ViT-H/14}}\) with LLaMA's word embedding space of dimension \(\delta_{\text{LLaMA}}\). Following insights from Liu et al. (2023), we pre-train this adaption layer on a dataset of 595K generic text-image pairs for one epoch while keeping both CLIP and LLaMA frozen during the process.
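The following PyTorch sketch illustrates the soft-prompting mechanism just described: a single trainable linear layer projects a CLIP embedding into the LLM's word-embedding space, and the result is prepended to the caption's input embeddings. Module names and the dimensions (1024 for a ViT-H/14-style CLIP embedding, 4096 for the LLM) are illustrative assumptions, not the exact implementation.

```python
import torch
import torch.nn as nn

class ClipSoftPrompt(nn.Module):
    """Map a CLIP embedding to one virtual token in the LLM embedding space."""

    def __init__(self, d_clip=1024, d_llm=4096):
        super().__init__()
        self.proj = nn.Linear(d_clip, d_llm)   # the trainable adaption layer

    def forward(self, clip_emb, caption_embs):
        # clip_emb:     (batch, d_clip) multimodal CLIP embedding of caption or image
        # caption_embs: (batch, seq_len, d_llm) LLM input embeddings of the caption
        prefix = self.proj(clip_emb).unsqueeze(1)          # (batch, 1, d_llm)
        return torch.cat([prefix, caption_embs], dim=1)    # soft prompt first

# Illustrative shapes only: a 1024-d CLIP embedding and a 4096-d LLM embedding space.
adapter = ClipSoftPrompt()
fused = adapter(torch.randn(2, 1024), torch.randn(2, 16, 4096))
print(fused.shape)  # torch.Size([2, 17, 4096])
```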
### Error Handling & Correction
A general issue with our language modeling approach to TikZ generation is that outputs may violate the syntactic and semantic rules of TeX, potentially leading to errors and uncompilable documents. While there are constrained decoding algorithms that can force models to form valid programs (Poesia et al., 2022; Scholak et al., 2021), they depend on parse trees and are only useful for languages with a context-free grammar. TeX, however, has a flat, unstructured syntax that is generally impossible to parse (Erdweg and Ostermann, 2010), rendering these methods unsuitable for our approach.
As an alternative, we propose an _iterative resampling_ method, leveraging the diagnostic data produced during compilation. If an error arises during compilation, we analyze the logfile to identify its source. Rather than resampling from the start, we then reverse the generation process to just before the error line and continue sampling from there. If the error persists, we infer that the origin of the problem lies earlier in the code and reverse further back, specifically \(4^{(i-1)}\) lines above the error, with \(i\) denoting the current iteration. While this method does not guarantee error-free results, it provides a more efficient and targeted strategy than simply reinitiating sampling from the beginning.
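A compact Python sketch of this iterative resampling loop is shown below, assuming hypothetical helpers for the language model and the TeX compiler; their names and signatures are ours and not part of the actual system.

```python
def iterative_resampling(generate, resume, compile_tikz, max_iters=5):
    """Regenerate TikZ code from above the reported error line, backing off
    4**(i-1) lines on each failed attempt.

    generate()          -> str          initial model sample
    resume(code, line)  -> str          re-sample the code from `line` onward
    compile_tikz(code)  -> int | None   error line number, or None on success
    """
    code = generate()
    for i in range(1, max_iters + 1):
        err_line = compile_tikz(code)
        if err_line is None:
            return code                        # compiled successfully
        cut = max(1, err_line - 4 ** (i - 1))  # reverse further back each iteration
        code = resume(code, cut)
    return code                                # may still contain errors
```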
## 5 Experiments
Before fine-tuning our models on DaTikZ, we extract a sample of 1k _human-created_ items to serve as our test set. As LLaMA's training started in December 2022, we only sample items introduced after this date to avoid data leakage. We conduct both automatic (§5.1) and human evaluations (§5.2). Additional instances of generated TikZ drawings are available in Appendix E.
**Model Sizes.** In terms of model size, we fine-tune LLaMA\({}_{\text{7b}}\) and CLiMA\({}_{\text{7b}}\), each with 7 billion parameters (7b), as well as LLaMA\({}_{\text{13b}}\) and CLiMA\({}_{\text{13b}}\) with 13 billion parameters (13b), respectively. During inference, we additionally evaluate CLiMA\({}_{\text{13b}}\) with CLIP receiving compiled human-created TikZ drawings as input instead of captions, which we refer to as CLiMA\({}_{\text{img}}\) for clarity (cf. §4.1).
**Training.** Given the size of these models, we introduce trainable low-rank adaptation weights (LoRA; Hu et al., 2022) while keeping the base model weights frozen and in half precision (Micikevicius et al., 2017). Following Dettmers et al. (2023), we apply LoRA to all linear layers. In addition, we find that training the embedding layer and language modeling head is crucial for successful fine-tuning. Since we are not aware of any studies applying LoRA to these layers, we make them fully trainable and leave this investigation to future work. In line with Liu et al. (2023), we train for 12 epochs with AdamW (Loshchilov and Hutter, 2017) and a batch size of 128, but increase the learning rate to 5e\(-4\) as this leads to faster convergence. As a form of data augmentation only possible for CLiMA, we randomly replace the captions forwarded to CLIP with the reference image in 50% of the cases.
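As a rough illustration, the parameter-efficient setup described above could be expressed with the HuggingFace PEFT library along the following lines; the LoRA rank, alpha, dropout, and the model identifier are assumptions made for this sketch, not reported values.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Frozen half-precision base model; the checkpoint name is only a placeholder.
base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-13b", torch_dtype="auto")

config = LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.05,               # assumed hyperparameters
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # all linear layers
    modules_to_save=["embed_tokens", "lm_head"],           # fully trainable, as in the text
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()
```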
### Automatic Evaluation
We use a variety of automatic evaluation metrics to evaluate the performance of our models on our test set in terms of code, image, and caption-image similarity. In particular, we use the following metrics:
* **CLIPScore**: calculates the similarity between image and text, as outlined in §3.2. We utilize it to evaluate the correlation between a rasterized TikZ drawing and its corresponding caption (a minimal sketch of this metric is given after this list).
* **CLIPScore\({}_{\text{img}}\)**: is technically the same metric as CLIPScore, but with human-made TikZ drawings as a reference input. Therefore, it assesses the similarity of two images rather than an image and a caption.
* **Kernel Inception Distance (KID)**: assesses the quality of generated TikZ drawings by comparing their distribution with the distribution of real images in the test set (Binkowski et al., 2018). This comparison helps determine how realistic the generated images appear in general. We extract image features using the same CLIP model utilized in CLIPScore.
* **CrystalBLEU**: is an n-gram-based metric designed to measure textual similarity (Eghbali and Pradel, 2023). As a variant of BLEU (Papineni et al., 2002), optimized for evaluating code, we employ it to assess the similarity between human-created and machine-produced TikZ code.
* **Extended Edit Distance (EED)**: is a metric dedicated to assessing string similarity (Stanchev et al., 2019), much like CrystalBLEU. We utilize it to determine the minimum number of operations needed to convert the machine-generated code into the reference code.
* **CSR**: measures how frequently we need to sample from a model to yield compilable TikZ code that outputs an image. This is crucial as some metrics depend on images. With LLaMA and CLiMA, we use iterative resampling (cf. §4.2) and account for partial samples. This is not feasible with GPT-4 and Claude 2 due to their chain-of-thought prompt, which generates code across multiple steps. We take a relaxed stance, counting a sample as successful if it results in an image, even if there are errors.
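For reference, the sketch below shows the standard CLIPScore computation (Hessel et al., 2021) applied to pre-computed CLIP embeddings; the rescaling constant and any normalization behind the 0-40 range reported above are conventions that may differ from the exact setup used here.

```python
import numpy as np

def clipscore(image_emb, text_emb, w=2.5):
    """CLIPScore (Hessel et al., 2021): w * max(cosine similarity, 0).

    image_emb : CLIP embedding of the rasterized TikZ drawing.
    text_emb  : CLIP embedding of the caption (or of a reference image
                for the image-image variant, CLIPScore_img).
    """
    cos = float(np.dot(image_emb, text_emb) /
                (np.linalg.norm(image_emb) * np.linalg.norm(text_emb)))
    return w * max(cos, 0.0)

# Toy usage with random vectors in place of real CLIP ViT-H/14 embeddings.
rng = np.random.default_rng(0)
print(clipscore(rng.normal(size=1024), rng.normal(size=1024)))
```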
**Results.** We compute the above metrics for each model and present the system-level scores in Figure 2. The radar chart on the left illustrates that there are small but noticeable score differences between the LLaMA and CLiMA models, revealing some intriguing trends. Aligning with the intuitive expectation that larger models yield better performance (Kaplan et al., 2020), the 13b models clearly outperform the 7b models on all string-similarity and image-based metrics by 0.2\(-\)0.5pp (percentage points). A notable exception is CSR, where all models perform comparably. This shows that all models require approximately 1.2 samples per caption to generate a compilable TikZ drawing.
Within model sizes, CLiMA\({}_{\text{7b}}\) outperforms LLaMA\({}_{\text{7b}}\) on CrystalBLEU, EED, CLIPScore, and CLIPScore\({}_{\text{img}}\) by up to 0.4pp, suggesting that, even when only text inputs are involved, integrating CLIP into the model has a predominantly positive effect. CLiMA\({}_{\text{13b}}\) continues this trend, showing a 0.1pp higher CLIPScore than LLaMA\({}_{\text{13b}}\). However, we also see that this does not necessarily increase the similarity with a reference image as well, as LLaMA\({}_{\text{13b}}\) has a 0.1pp higher CLIPScore\({}_{\text{img}}\). On CrystalBLEU and EED, CLiMA\({}_{\text{13b}}\) again fares better, although, with 0.1pp, the gap is not as pronounced as for the 7b models, possibly due to diminishing returns (Hong et al., 2023).
The right radar chart compares our best text-only model, CLiMA\({}_{\text{13b}}\), with CLiMA\({}_{\text{img}}\), GPT-4, and Claude 2. As before, all models perform roughly the same on CSR, except for Claude 2, which needs noticeably more samples. As expected, CLiMA\({}_{\text{img}}\), having access to reference images, improves upon CLiMA\({}_{\text{13b}}\) in CLIPScore\({}_{\text{img}}\) by 1.2pp. However, this does not lead to an improvement in CLIPScore, echoing our earlier observation that image and caption-image similarity do not always correlate. It also does not improve KID, demonstrating that the overall quality of the images remains constant. Nevertheless, the string-based metrics are 0.1\(-\)0.4pp higher, indicating that conditioning on reference images positively impacts code similarity.
We also observe that Claude 2 performs much worse than GPT-4, and both perform noticeably worse than both CLiMA\({}_{\text{13b}}\) and CLiMA\({}_{\text{img}}\) on most metrics. The drastically lower CrystalBLEU and EED (up to 3.9pp) suggest that GPT-4 and Claude 2 generate fundamentally different code (in Appendix A we show that it exhibits a lower level of complexity). The up to 6.6pp lower CLIPScore\({}_{\text{img}}\) and over six times as large KID indicate that not only do the generated images look different from human ones, but also that the general quality of images is much different from the human distribution. However, most strikingly, both models achieve an up to 2.1pp higher CLIPScore. Upon investigation, we find that both models tend to produce degenerate images, which visibly copy the input caption into the output image. Since the outputs of CLIP (and by extension CLIPScore) can be controlled with _images of text_ (Ramesh et al., 2022), Claude 2, and in particular GPT-4, essentially employ such typographic attacks to achieve exceptional caption-image similarities. We further explore this phenomenon in §6.
Overall, CLiMA\({}_{\text{7b}}\) and CLiMA\({}_{\text{13b}}\) outperform their respective LLaMA models in five out of seven metrics each, with Claude 2 and GPT-4 substantially underperforming all of them. While CLiMA\({}_{\text{img}}\) unsurprisingly improves upon CLiMA\({}_{\text{13b}}\), CLiMA\({}_{\text{13b}}\) is the best model with only textual inputs.
### Human Evaluation
Figure 2: Automatic evaluation results for LLaMA\({}_{7/13\text{b}}\), CLiMA\({}_{7/13\text{b}/\text{img}}\), GPT-4, and Claude 2. Axes representing metrics where lower values are better (CSR, EED, and KID) have been inverted. Detailed scores are provided in Appendix C for further reference.
To further evaluate the effectiveness of our models, we conduct a human annotation campaign using _best-worst scaling_ (BWS; Louviere et al., 2015). As a form of comparative annotation, BWS yields high-quality results even when the number of annotations is low (Kiritchenko & Mohammad, 2016; 2017). Within this method, annotators are tasked to compare tuples of \(n=4\) items, identifying the best and the worst item based on a given property. Real-valued scores, ranging from -1 (poor) to 1 (excellent), are then computed by subtracting the fraction of times an item is chosen as the worst from the fraction of times it is chosen as the best (Orme, 2009).
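A small Python sketch of this scoring rule follows; since every model appears in each annotated 4-tuple here, the denominator is simply the number of annotated tuples (the function and variable names are ours).

```python
from collections import Counter

def bws_scores(annotations, models):
    """Best-worst scaling: score = fraction chosen best - fraction chosen worst.

    annotations : list of (best_model, worst_model) pairs, one per 4-tuple.
    models      : all systems being compared; each appears in every tuple.
    Scores lie in [-1, 1].
    """
    best = Counter(b for b, _ in annotations)
    worst = Counter(w for _, w in annotations)
    n = len(annotations)
    return {m: (best[m] - worst[m]) / n for m in models}

# Toy example with three annotated tuples.
print(bws_scores([("Human", "GPT-4"), ("CLiMA", "GPT-4"), ("Human", "LLaMA")],
                 ["Human", "CLiMA", "LLaMA", "GPT-4"]))
```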
In this work, we focus on _caption similarity_ (CS) and _reference similarity_ (RS). In CS, annotators evaluate image tuples based on text-image similarity with captions (similar to CLIPScore). We construct 4-tuples consisting of our two leading text-only models from automatic evaluation (CLiMA\({}_{\text{13b}}\) and LLaMA\({}_{\text{13b}}\)), GPT-4, and human reference images. In RS, the human reference images are used as the standard of comparison instead (similar to CLIPScore\({}_{\text{img}}\)), so we replace them in the tuples with CLiMA\({}_{\text{img}}\), while leaving the other models unchanged. Each property is then annotated by four unique expert annotators with relevant research experience (cf. Appendix D).9 To ensure a manageable workload for the annotators, we create our tuples from a subset of 100 items sampled from our test set. We assess the consistency of the annotators using _split-half reliability_ (SHR; Kiritchenko & Mohammad, 2017). This method involves randomly splitting all annotations into two sets, calculating scores for each set, and then determining the correlation between them using Spearman's \(\rho\).
Footnote 9: We tried crowdsourcing as well, but due to low agreement with experts, we concluded that crowdworkers lack the necessary expertise for our tasks (cf. Appendix D).
**Results.** For CS, the SHR is \(\rho=0.6\), indicating a moderate but adequate consensus among annotators. Figure 3 (left) exhibits kernel density estimates for the computed scores, with marked modes and expected values. Unsurprisingly, humans perform best, with a mode near 1. CLiMA\({}_{\text{13b}}\) is the only other model with a mode above 0, followed by LLaMA\({}_{\text{13b}}\), while GPT-4 lags behind. This indicates that when sampling once with a given caption, CLiMA\({}_{\text{13b}}\) is most likely to generate the best image. Since CLiMA\({}_{\text{13b}}\) and LLaMA\({}_{\text{13b}}\) retain their earlier CLIPScore ranking, but GPT-4 drops substantially, we hypothesize that human annotators are not as prone to typographic attacks as metrics. However, we still observe a slight bias towards images of text. In 75% of cases where GPT-4 is selected as the best model, it copies more n-grams from the caption into the image than the worst-ranked image, potentially leading to outliers and thus a slightly higher expected value than CLiMA\({}_{\text{13b}}\) or LLaMA\({}_{\text{13b}}\).
Regarding RS, we record a similar SHR, with \(\rho=0.58\). For LLaMA\({}_{\text{13b}}\), CLiMA\({}_{\text{13b}}\), and CLiMA\({}_{\text{img}}\), the distributions in Figure 3 (right) are approximately normal, with the mode and expected value being almost identical. As with CLIPScore\({}_{\text{img}}\), LLaMA\({}_{\text{13b}}\) is ranked marginally higher than CLiMA\({}_{\text{13b}}\), indicating that CLIPScore\({}_{\text{img}}\) correlates well with human rankings. On a similar scale, CLiMA\({}_{\text{img}}\) outperforms LLaMA\({}_{\text{13b}}\). In contrast, GPT-4 follows a nearly uniform distribution, with a slight downward trend for better scores. Therefore, its mode is noticeably lower than for the other models. The expected value, albeit only slightly, is also the lowest.
In summary, our human evaluation aligns well with our automatic metrics, with the added benefit of lower susceptibility to typographic attacks. CLiMA\({}_{\text{13b}}\) outperforms LLaMA\({}_{\text{13b}}\) on CS, while CLiMA\({}_{\text{img}}\) surpasses LLaMA\({}_{\text{13b}}\) on RS. GPT-4 shows peculiar distributions, with the mode (and also the expected value for RS) lagging behind, highlighting the effectiveness of our models.
Figure 3: Distributions of BWS scores per model for caption and reference similarity. Scores span from -1 (poor) to 1 (excellent). Distinct markers denote the expected value and the mode of each distribution.
## 6 Analysis
The issue of language models memorizing and copying training data is a prevalent concern (McCoy et al., 2023; Carlini et al., 2023; Raunak and Menezes, 2022; Meehan et al., 2020). Similarly, we discovered in §5.1 that GPT-4 and Claude 2 tend to perform typographic attacks by memorizing and copying input captions. In this section, we analyze the extent of these issues on our test set using the concept of _n-gram novelty_ (McCoy et al., 2023). Specifically, to measure _code novelty_, we determine the proportion of n-grams in the model-generated TikZ code that are _not_ found in the training data. To measure _caption copying_, we calculate the proportion of n-grams from the caption that were copied verbatim into the output code. For comparison, we also calculate both metrics for human references.
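A simple Python sketch of the two n-gram measures defined above, operating on already-tokenized sequences; the tokenization details and the aggregation over the test set are simplified, and the function names are ours.

```python
def ngrams(tokens, n):
    """Set of n-grams (as tuples) in a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def code_novelty(generated, training, n):
    """Fraction of unique n-grams in generated code that never occur in training data."""
    gen, train = ngrams(generated, n), ngrams(training, n)
    return len(gen - train) / max(len(gen), 1)

def caption_copying(caption, generated, n):
    """Fraction of caption n-grams copied verbatim into the generated code."""
    cap, gen = ngrams(caption, n), ngrams(generated, n)
    return len(cap & gen) / max(len(cap), 1)

# Toy usage with whitespace-tokenized strings.
cap = "a red arrow from a to b".split()
out = r"\draw[red,->] (a) -- (b);".split()
print(caption_copying(cap, out, 1))
```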
Figure 4 displays the results for both metrics (\(n\in[1,10]\)) after filtering code comments. In terms of code novelty, models tend to generate less novel code than humans for smaller n-grams (\(n<4\)). However, for \(n>6\), models become more novel, with more than 80% of all model n-grams being novel for \(n>8\). McCoy et al. (2023) observe the same phenomenon in all datasets investigated and conclude that this ratio is the normal case when a model is not affected by memorization of the training data. Regarding caption copying, GPT-4 and Claude 2 copy considerably more n-grams from the caption than our models. For 1-grams (i.e., \(n=1\)), CLiMA\({}_{\text{13b}}\) and LLaMA\({}_{\text{13b}}\) copy around 6.5% of n-grams, while GPT-4 and Claude 2 copy more than 10%. For \(n>5\), CLiMA\({}_{\text{13b}}\), LLaMA\({}_{\text{13b}}\), and humans practically stop copying, but Claude 2 and especially GPT-4 continue with an almost linear trend, hinting at instances where they might copy the entire caption (cf. Appendix E for examples). This reinforces our hypothesis from §5.1 and points towards CLIPScore\({}_{\text{img}}\) as a more robust metric for assessing the visuals of text-rich images, since it seems less susceptible to typographic attacks.
## 7 Conclusion & Future Work
In this work, we present AutomaTikZ, a project for automatically generating TikZ drawings based on natural language descriptions. As part of AutomaTikZ, we release DaTikZ, a pioneering dataset of aligned TikZ drawings and captions, and CLiMA, a novel model architecture which integrates multimodal CLIP embeddings into LLaMA. By fine-tuning CLiMA on DaTikZ, we demonstrate that it outperforms LLaMA on several metrics and also surpasses proprietary GPT-4 and Claude 2. In addition, CLiMA can also process images, potentially extending its application to vectorization and conditioning on sketches. Important findings are that (i) integrating CLIP can lead to improvements, even when only text inputs are involved, provided the task relates to visual concepts, and that (ii) attention should be paid to typographic attacks when evaluating models that generate text-rich images.
In future research, we aim to incorporate insights from the caption generation community and enrich our input texts with other figure-mentioning sections of the source documents (cf. §2). We also plan to enhance our extraction pipeline, especially since we had to exclude over 120k TikZ images from arXiv that failed to compile. We hope that these modifications will bring us a step closer to bridging the gap to human performance.
Figure 4: Proportion of unique code n-grams (\(n\in[1,10]\)) that do not appear in the training data (left), and proportion of caption n-grams that were copied into the output image (right).
## 8 Ethics Statement
We ensure that the TikZ drawings we gather from online sources are licensed in a manner that permits us to copy and redistribute them. Most sources license their content under a Creative Commons attribution license,10 the GNU Free Documentation License,11 or the MIT license.12 ArXiv is an exception in that, even though it allows licensing under a Creative Commons license, the majority of papers are published under a non-exclusive license, which does not grant us permission to redistribute.13 As a result, we exclude any TikZ drawings from arXiv that use this license in the public release of DaTikZ. Nevertheless, we do release AutomaTikZ in conjunction with the dataset generation code, enabling anyone to recreate the full version of DaTikZ themselves. As for auto-generated samples, OpenAI prohibits the use of GPT-4 for creating competing services, restricting this part of our dataset to non-commercial applications.14
Footnote 10: [https://creativecommons.org/licenses](https://creativecommons.org/licenses)
Footnote 11: [https://www.gnu.org/licenses/fdl-1.3.en.html](https://www.gnu.org/licenses/fdl-1.3.en.html)
Footnote 12: [https://opensource.org/licenses/mil](https://opensource.org/licenses/mil)
Footnote 13: [http://arxiv.org/licenses/nonexclusive-distrib/1.0](http://arxiv.org/licenses/nonexclusive-distrib/1.0)
Footnote 14: [https://openai.com/policies/terms-of-use](https://openai.com/policies/terms-of-use)
Apart from our dataset, our models should not be used as a substitute for human judgment and critical thinking. They may carry any biases, flaws, or gaps that exist in the base models and training data and could potentially misinterpret the input, fabricate non-existent details, or overlook crucial information. Users should be aware of potential differences between the results they expect and the output the model generates.
Furthermore, while our models are designed to aid in the production of legitimate scientific figures, they could potentially be used to generate disinformation and fake science in the hands of malicious actors.
## Acknowledgments
We gratefully thank, in no particular order, Timm Dill, Yanran Chen, Daniil Larionov, Jiwoo Kim, Vivian Fresen, Martin Kerscher, Christoph Leiter, and Ran Zhang for their help with our human evaluation campaign, proofreading, discussions, and comments on our work. We further thank the BMBF for its support via the grant Metrics4NLG. The last author is supported by DFG grant EG 375/5-1. Any icons used in this work were designed by OpenMoji, the open-source emoji and icon project.
|
2309.12600 | Multiply Robust Federated Estimation of Targeted Average Treatment
Effects | Federated or multi-site studies have distinct advantages over single-site
studies, including increased generalizability, the ability to study
underrepresented populations, and the opportunity to study rare exposures and
outcomes. However, these studies are challenging due to the need to preserve
the privacy of each individual's data and the heterogeneity in their covariate
distributions. We propose a novel federated approach to derive valid causal
inferences for a target population using multi-site data. We adjust for
covariate shift and covariate mismatch between sites by developing
multiply-robust and privacy-preserving nuisance function estimation. Our
methodology incorporates transfer learning to estimate ensemble weights to
combine information from source sites. We show that these learned weights are
efficient and optimal under different scenarios. We showcase the finite sample
advantages of our approach in terms of efficiency and robustness compared to
existing approaches. | Larry Han, Zhu Shen, Jose Zubizarreta | 2023-09-22T03:15:08Z | http://arxiv.org/abs/2309.12600v1 | # Multiply Robust Federated Estimation of Targeted Average Treatment Effects
###### Abstract
Federated or multi-site studies have distinct advantages over single-site studies, including increased generalizability, the ability to study underrepresented populations, and the opportunity to study rare exposures and outcomes. However, these studies are challenging due to the need to preserve the privacy of each individual's data and the heterogeneity in their covariate distributions. We propose a novel federated approach to derive valid causal inferences for a target population using multi-site data. We adjust for covariate shift and covariate mismatch between sites by developing multiply-robust and privacy-preserving nuisance function estimation. Our methodology incorporates transfer learning to estimate ensemble weights to combine information from source sites. We show that these learned weights are efficient and optimal under different scenarios. We showcase the finite sample advantages of our approach in terms of efficiency and robustness compared to existing approaches.
## 1 Introduction
Compared to single-site studies, federated or multi-site studies confer distinct advantages, such as the potential for increased generalizability of findings, the opportunity to learn about underrepresented populations, and the ability to study rare exposures and outcomes. However, deriving valid causal inferences using multi-site data is difficult due to numerous real-world challenges, including _heterogeneity of site populations_, _different data structures_, and _privacy-preserving constraints_ stemming from policies such as the General Data Protection Regulation (GDPR) and Health Insurance Portability and Accountability Act (HIPAA) that prohibit direct data pooling.
Recent methodological developments have focused on privacy-preserving estimation strategies. These strategies typically involve sharing summary-level information from multiple data sources [26; 27; 12; 13; 18]. However, they often require restrictive assumptions such as homogeneous data structures and model specifications (e.g., a common set of observed covariates measured using a common data model), which are not realistic in practice.
To address these methodological gaps, we propose a _multiply robust_ and _privacy-preserving_ estimator that leverages multi-site information to estimate causal effects in a _target population of interest_.
Compared to existing approaches, our method allows investigators from different sites to incorporate _site-specific covariate information and domain knowledge_ and provides _increased protection against model misspecification_. Our method allows for flexible identification under different settings, including systematically missing covariates and different site-specific covariates (termed _covariate mismatch_). Our proposed method adopts an _adaptive ensembling approach that optimally combines estimates from source sites_ and serves as a data-driven metric for the transportability of source sites. Moreover, the proposed method relaxes the assumption of homogeneous model specifications by adopting a class of multiply robust estimators for estimating the nuisance functions.
### Related Work and Contributions
The current literature on multi-site causal inference typically assumes that a common set of confounders is observed in all sites [7; 6; 12; 13]. However, this assumption is rarely met due to variations in local practices, e.g., differing data collection standards and coding practices. In particular, the target site often lacks data on certain covariates available in the source sites, and ignoring them can result in biased and inefficient inference [31]. Recently, [28] proposed a method to address covariate mismatch by integrating source samples with unmeasured confounders and a target sample containing information about these confounders. However, they assume that the target sample is either a simple random sample or is sampled according to a known procedure from the source population. [11] extended the method to the setting where the average treatment effect (ATE) is not identifiable in some sites by constructing control variates. However, their approach is limited to addressing selection biases in case-control studies and cannot be easily extended to other outcome types. [31] extended the framework by [7; 6] to handle covariate mismatch by regressing predicted conditional outcomes on effect modifiers and then taking the means of these regression models evaluated on target site samples. Our work extends [31] to the multi-site and federated data setting by utilizing an adaptive weighting approach that optimally combines estimates from source sites.
Most existing approaches in the generalizability and transportability literature deal with heterogeneous covariate distributions by modeling site selection processes [5; 1; 25; 7; 6]. For instance, covariate shift can be accounted for via inverse probability of selection weights [5; 1], stratification [25], or augmentation [7; 6], which require pooling individual-level information across sites. Our work differs from those in that we preserve individual data privacy, sharing only summary-level information about the target site. Specifically, we adopt density ratio models [22; 8] that only share covariate moments of the target samples. Under certain specifications, these density ratio models are equivalent to logistic regression-based selection models for adjusting heterogeneity between target and source populations. Our approach shares similarities with calibration weighting methods, but we employ semi-parametric efficiency theory to enable a closed-form approximation of variance rather than relying on the bootstrap.
Further, when data sources are heterogeneous, it would be beneficial for investigators at different sites to incorporate site-specific knowledge when specifying candidate models. However, to the best of our knowledge, existing methods require common models to be specified across sites, which may not be realistic or flexible enough [26; 27; 12; 13; 18]. We relax this requirement by adopting a multiply robust estimator, allowing investigators in each site to propose multiple, different outcome and treatment models. The estimator is consistent if any one of the multiple outcome or treatment models is correctly specified. Our work builds on [20], which established an equivalence between doubly robust and multiply robust estimators using mixing weights determined by predictive risks of candidate models [16; 14; 15; 2; 3; 4]. Further, to avoid negative transfer due to non-transportable source estimates, we adopt a data-adaptive ensembling approach [12; 13] that guarantees that the federated estimator achieves improved precision compared to an estimator using target site data alone when at least one source estimate is sufficiently similar to the target estimate.
## 2 Preliminaries
We consider data from \(K\) sites, where each site has access to its individual-level data but is prohibited from sharing this data with any other site. The set of sites will be denoted by \(\mathcal{K}=\{1,2,...,K\}\). Without loss of generality, we define the target site to be the first site, i.e., \(T=\{1\}\) and the source sites as the remaining sites, i.e., \(\mathcal{S}=\mathcal{K}\setminus T=\{2,...,K\}\).
For individual \(i\), let \(Y_{i}\) denote an observed outcome, which can be continuous or discrete. \(X_{i}\in\mathbb{R}^{p}\) represents the \(p\)-dimensional baseline covariates in source site \(k\in\mathcal{S}\). \(V_{i}\in\mathbb{R}^{q}\) represents the (partial) baseline covariates in the target site \(T\) such that \(V_{i}\subseteq X_{i}\). To simplify the presentation, we assume an identical set of covariates across all source sites, although our method can accommodate scenarios where distinct covariate sets are present among the source sites. Let \(A_{i}\) represent a binary treatment indicator, with \(A_{i}=1\) denoting treatment and \(A_{i}=0\) denoting control. \(R_{i}\) is a site indicator with \(R_{i}=k\) if patient \(i\) is from the site \(k\). We observe \(n_{T}\) target samples, \(D_{T}=\{Y_{i},V_{i},A_{i},R_{i}=T,1\leq i\leq n_{T}\}\) and \(n_{k}\) source samples, \(D_{k}=\{Y_{i},X_{i},A_{i},R_{i}=k,1\leq i\leq n_{k}\}\) for each \(k\in\mathcal{S}\). The total sample size is \(N=\sum_{k\in\mathcal{K}}n_{k}\). Under the potential outcomes framework [21, 23], we denote the counterfactual outcomes under treatment and control as \(\{Y_{i}(1),Y_{i}(0)\}\), and only one of them is observed: \(Y_{i}=A_{i}Y_{i}(1)+(1-A_{i})Y_{i}(0)\)[24]. The data structure is illustrated in Figure 1.
Our goal is to estimate the average treatment effect in the target population,
\[\Delta_{T}=\mu_{1,T}-\mu_{0,T}\quad\text{ where }\quad\mu_{a,T}=E\left\{Y_{i}(a )\mid R_{i}=T\right\}\text{ for }a\in\{0,1\}, \tag{1}\]
where \(\mu_{a,T}\) is the mean potential outcome under treatment \(a\) in the target population. To identify this quantity, we consider the following assumptions:
* (A1) (Consistency): For every individual \(i\), if \(A_{i}=a\), then \(Y_{i}=Y_{i}(a)\).
* (A2) (Mean exchangeability over treatment assignment in the target population): \(E\{Y_{i}(a)\mid V_{i}=v,A_{i},R_{i}=T\}=E\left\{Y_{i}(a)\mid V_{i}=v,R_{i}=T\right\}\).
* (A3) (Positivity of treatment assignment in the target population): \(0<P(A_{i}=1\mid V_{i}=v,R_{i}=T)<1\) for any \(v\) s.t. \(P(V_{i}=v\mid R_{i}=T)>0\).
* (A4) (Mean exchangeability over treatment assignment in the source populations): \(E\{Y_{i}(a)\mid X_{i}=x,A_{i},R_{i}=k\}=E\left\{Y_{i}(a)\mid X_{i}=x,R_{i}=k\right\}\), \(k\in\mathcal{S}\).
* (A5) (Positivity of treatment assignment in the source populations): \(0<P(A_{i}=1\mid X_{i}=x,R_{i}=k)<1\) for any \(x\) s.t. \(P(X_{i}=x\mid R_{i}=k)>0\), \(k\in\mathcal{S}\).
* (A6) (Mean exchangeability over site selection): \(E\{Y_{i}(a)\mid V_{i}=v,R_{i}=k\}=E\left\{Y_{i}(a)\mid V_{i}=v\right\}\), \(k\in\mathcal{K}\).
* (A7) (Positivity of site selection): \(0<P(R_{i}=k\mid V_{i}=v)<1\) for \(k\in\mathcal{S}\) and any \(v\) s.t. \(P(V_{i}=v)>0\).
Assumption (A1) is the stable unit treatment value assumption (SUTVA), requiring no interference between individuals. Assumption (A2) (Assumption (A4)) states that the mean counterfactual outcome under treatment \(a\) is independent of treatment assignment, conditional on baseline covariates in the target (source) populations. For Assumptions (A2) and (A4) to hold, we require all effect modifiers to be measured in \(V\). Assumption (A3) (Assumption (A5)) states that each individual in the target (source) populations has a positive probability of receiving each treatment. Assumption (A6) states that the mean counterfactual outcome is independent of site selection, conditional on covariates in the target population. For Assumption (A6) to hold, we require all covariates that are distributed differently between target and source populations (shifted covariates) to be measured in \(V\). Thus, if all effect modifiers and shifted covariates are measured in \(V\), Assumptions (A2), (A4) and (A6) hold. Assumption (A7) requires that in each stratum defined by \(V\), the probability of being in a source population is positive for each individual. Theorem 1 shows that under Assumptions (A1), (A4) - (A7), the mean counterfactual outcome for the target can be identified in the sources. Since these assumptions may not hold in practice, we devise a data-adaptive ensembling procedure in Section 4 to screen out sites that significantly violate these assumptions.

Figure 1: Schematic of the data structure in the multi-site setting.
**Theorem 1**.: _If Assumptions (A1) - (A3) hold, the mean counterfactual outcomes in the target population can be identified using the target sample._
\[\mu_{a,T}=E\left\{Y_{i}(a)\mid R_{i}=T\right\}=E\left\{E\left\{Y_{i}\mid V_{i} =v,A_{i}=a,R_{i}=T\right\}\mid R_{i}=T\right\}. \tag{2}\]
_If Assumptions (A1), (A4) - (A7) hold, the mean counterfactual outcomes in the target population can be identified using the source samples._
\[\mu_{a,T} =E\left\{Y_{i}(a)\mid R_{i}=T\right\}\] \[=E\left\{E\left\{E\left\{Y_{i}\mid X_{i}=x,A_{i}=a,R_{i}=k\right\} \mid V_{i}=v,R_{i}=k\right\}\mid R_{i}=T\right\}. \tag{3}\]
## 3 Site-specific Estimators
For the target site \(k=\{T\}\), a standard AIPW estimator is used for \(\mu_{a,T}\) as follows
\[\widehat{\mu}_{a,T}=\frac{1}{n_{T}}\sum_{i=1}^{n_{T}}\biggl{[}\frac{I(A_{i}=a,R_{i}=T)}{\widehat{\pi}_{a,T}(V_{i})}\Bigl{\{}Y_{i}-\widehat{m}_{a,T}(V_{i})\Bigr{\}}+\widehat{m}_{a,T}(V_{i})\biggr{]}, \tag{4}\]
where \(\widehat{m}_{a,T}(V_{i})\) is an estimator for \(E\left\{Y_{i}\mid V_{i}=v,A_{i}=a,R_{i}=T\right\}\), the outcome model in the target population, and \(\widehat{\pi}_{a,T}(V_{i})\) is an estimator for \(P(A_{i}=a\mid V_{i}=v,R_{i}=T)\), the probability of receiving treatment \(a\) in the target population.
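To make the construction concrete, the following minimal sketch assembles the target-site AIPW estimate of Eq. (4) from generic plug-in learners; the choice of linear and logistic regression, as well as all variable names and the synthetic data, are illustrative assumptions rather than part of the estimator's specification.

```python
# Minimal sketch of the target-site AIPW estimator in Eq. (4).
# Inputs are NumPy arrays restricted to target-site units; any learners could
# be plugged in for the outcome and propensity models -- plain linear and
# logistic regression are used here purely for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def aipw_target(Y, A, V, a=1):
    """AIPW estimate of E{Y(a) | R = T} using target-site data only."""
    out = LinearRegression().fit(V[A == a], Y[A == a])       # m_{a,T}(V)
    ps = LogisticRegression().fit(V, A)                       # P(A = 1 | V, R = T)
    m_hat = out.predict(V)
    p_a = ps.predict_proba(V)[:, 1] if a == 1 else ps.predict_proba(V)[:, 0]
    ind = (A == a).astype(float)
    return np.mean(ind / p_a * (Y - m_hat) + m_hat)

# Hypothetical target-site data, only to show the calling convention:
rng = np.random.default_rng(0)
V = rng.normal(size=(300, 2))
A = rng.binomial(1, 1.0 / (1.0 + np.exp(-V[:, 0])))
Y = 2.0 + V @ np.array([1.0, 0.5]) + 0.3 * A + rng.normal(size=300)
print(aipw_target(Y, A, V, a=1) - aipw_target(Y, A, V, a=0))  # approximate ATE
```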
For each source site \(k\in\mathcal{S}\), we propose an estimator for \(\mu_{a,T}\) as follows
\[\widehat{\mu}_{a,k}= \frac{1}{n_{k}}\sum_{i=1}^{N}\biggl{[}\frac{I(A_{i}=a,R_{i}=k)}{\widehat{\pi}_{a,k}(X_{i})}\widehat{\zeta}_{k}(V_{i})\Bigl{\{}Y_{i}-\widehat{m}_{a,k}(X_{i})\Bigr{\}}\biggr{]}\] \[+ \frac{1}{n_{k}}\sum_{i=1}^{N}\biggl{[}I(R_{i}=k)\widehat{\zeta}_{k}(V_{i})\Bigl{\{}\widehat{m}_{a,k}(X_{i})-\widehat{\tau}_{a,k}(V_{i})\Bigr{\}}\biggr{]}+\frac{1}{n_{T}}\sum_{i=1}^{N}I(R_{i}=T)\widehat{\tau}_{a,k}(V_{i}), \tag{5}\]
where \(\widehat{\tau}_{a,k}(V_{i})\) is an estimator for \(E\left\{m_{a,k}(x)\mid V_{i}=v,R_{i}=k\right\}\) and \(\widehat{m}_{a,k}(X_{i})\) is an estimator for \(E\left\{Y_{i}\mid X_{i}=x,A_{i}=a,R_{i}=k\right\}\). \(\widehat{\zeta}_{k}(V_{i})\) estimates \(f(V_{i}\mid R_{i}=T)/f(V_{i}\mid R_{i}=k)\), the density ratio of the covariate distributions in the target population \(T\) and source population \(k\in\mathcal{S}\). \(\widehat{\pi}_{a,k}(X_{i})\) estimates \(P(A_{i}=a\mid X_{i}=x,R_{i}=k)\), the probability of receiving treatment \(a\) in source \(k\in\mathcal{S}\).
Compared to the transportation estimators in [7; 6], we introduce two additional nuisance functions, \(\zeta_{k}(V_{i})\) and \(\tau_{a,k}(V_{i})\). Specifically, \(\zeta_{k}(V_{i})\) accounts for covariate shift across sites, while \(\tau_{a,k}(V_{i})\) is introduced to address covariate mismatch across sites. We provide estimation procedures for these nuisance functions in the following subsections, and the theoretical guarantees of the estimator are presented in Section 5.
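For intuition, the estimator in Eq. (5) can be assembled as in the following sketch once the four nuisance estimates are available; the arrays are assumed to be aligned with the pooled sample, and all names and shapes are our own illustrative conventions rather than part of the method itself.

```python
# Minimal sketch of the source-site estimator in Eq. (5). pi_hat, m_hat,
# zeta_hat and tau_hat are assumed to be (N,) arrays holding fitted nuisance
# values for every pooled unit (their estimation is sketched in the following
# subsections); R holds site labels and A the treatment indicator.
import numpy as np

def mu_source_k(Y, A, R, pi_hat, m_hat, zeta_hat, tau_hat, k, target, a=1):
    n_k = np.sum(R == k)
    n_T = np.sum(R == target)
    in_k_a = ((R == k) & (A == a)).astype(float)
    in_k = (R == k).astype(float)
    in_T = (R == target).astype(float)
    term1 = np.sum(in_k_a / pi_hat * zeta_hat * (Y - m_hat)) / n_k
    term2 = np.sum(in_k * zeta_hat * (m_hat - tau_hat)) / n_k
    term3 = np.sum(in_T * tau_hat) / n_T
    return term1 + term2 + term3
```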
### Density Ratio Weighting
Most existing methods for adjusting for heterogeneity of site populations rely on inverse probability of selection weighting, which requires pooling target and source samples. However, such pooling is often restricted to protect individuals' data privacy. We propose a density ratio weighting approach, which offers equivalent estimation without the need for direct data pooling (see Appendix A).
Formally, we model the density ratios of covariate distributions in the target \(T\) and source \(k\in\mathcal{S}\) by specifying an exponential tilt model [22; 8]; \(\zeta_{k}(V_{i};\gamma_{k})=f(V_{i}\mid R_{i}=T)/f(V_{i}\mid R_{i}=k)=\exp\left\{-\gamma_{k}^{\top}\psi(V_{i})\right\}\) where \(f(V_{i}\mid R_{i}=T)\) and \(f(V_{i}\mid R_{i}=k)\) are density functions of covariates \(V_{i}\) in the target \(T\) and source \(k\in\mathcal{S}\), respectively, and \(\psi(V_{i})\) is some \(d\)-dimensional basis with \(1\) as its first element. With this formulation, \(\zeta_{k}(V_{i};\gamma_{k})=1\) for \(\gamma_{k}=0\) and \(\int\zeta_{k}(V_{i};\gamma_{k})f(V_{i}\mid R_{i}=k)dv=1\). If we choose \(\psi(V_{i})=V_{i}\), we can recover the entire class of natural exponential family distributions.
If we include higher-order terms, the exponential tilt model has greater flexibility in characterizing the heterogeneity between two populations [9]. We solve for \(\widehat{\gamma}_{k}\) with the following estimating equation
\[\frac{1}{n_{T}}\sum_{i=1}^{N}I\left(R_{i}=T\right)\psi\left(V_{i}\right)=\frac{ 1}{n_{k}}\sum_{i=1}^{N}I\left(R_{i}=k\right)\psi\left(V_{i}\right)\exp\left\{- \gamma_{k}^{\top}\psi(V_{i})\right\}. \tag{6}\]
This procedure preserves individual privacy; choosing \(\psi(V_{i})=V_{i}\), the target site only needs to share its covariate means with the source sites; each source site then solves (6) with its own data to obtain the density ratios.
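The following sketch illustrates how a source site could solve the moment equation (6) locally with \(\psi(V_{i})=(1,V_{i}^{\top})^{\top}\) after receiving only the target-site mean of \(\psi(V)\); the use of `scipy.optimize.root` and all names are illustrative assumptions.

```python
# Minimal sketch of the exponential-tilt density-ratio step in Eq. (6).
import numpy as np
from scipy.optimize import root

def fit_density_ratio(V_source, psi_bar_target):
    """Solve (1/n_k) sum_i psi(V_i) exp(-gamma' psi(V_i)) = psi_bar_target."""
    psi = np.column_stack([np.ones(len(V_source)), V_source])   # (n_k, d)

    def moment(gamma):
        w = np.exp(-psi @ gamma)                                 # zeta_k(V_i; gamma)
        return psi.T @ w / len(V_source) - psi_bar_target

    gamma_hat = root(moment, x0=np.zeros(psi.shape[1])).x
    return np.exp(-psi @ gamma_hat), gamma_hat                   # ratios, gamma_k

# The target site only communicates this summary (the privacy-preserving step):
# psi_bar_target = np.column_stack([np.ones(n_T), V_target]).mean(axis=0)
```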
### Multiply Robust Estimator
We relax the assumption of homogeneous model specifications across sites and allow each site to propose multiple models for nuisance functions. Our proposal follows the construction of multiply robust estimators for nuisance functions via a model-mixing approach [20].
Formally, for each site \(k\in\mathcal{K}\), we consider a set of \(J\) candidate treatment models for the propensity scores \(\{\pi_{a,k}^{j}\left(x\right):j\in\mathcal{J}=\{1,...,J\}\}\). Let \(\widehat{\pi}_{a,k}^{j}(x)\) be the estimator of \(\pi_{a,k}^{j}(x)\) obtained by fitting the corresponding candidate models on the data, which can be parametric, semiparametric, or nonparametric machine learning models. \(\widehat{\pi}_{a,k}(X_{i})=\sum_{j=1}^{J}\widehat{\Lambda}_{j}\widehat{\pi}_{ a,k}^{j}(X_{i})\) denotes the weighted predictions of propensity scores, with weights \(\widehat{\Lambda}_{j}\) assigned to predictions by each candidate model \(j\). To calculate the weights \(\widehat{\Lambda}_{j}\), we adapt a model-mixing algorithm developed in [29] and [20] based on the cumulative predictive risks of candidate models.
First, we randomly partition the data within each site into a training set \(D_{k}^{\text{train}}\) of units indexed by \(\{1,...,n_{k}^{\text{train}}\}\) and a validation set \(D_{k}^{\text{val}}\) of units indexed by \(\{n_{k}^{\text{train}}+1,...,n_{k}\}\). Then, each candidate treatment model is fit on \(D_{k}^{\text{train}}\) to obtain \(\widehat{\pi}_{a,n_{k}^{\text{train}}}^{j}\) for \(j\in\mathcal{J}\). The model-mixing weights are determined by the models' predictive risks assessed on \(D_{k}^{\text{val}}\) according to the Bernoulli likelihood. Specifically,
\[\widehat{\Lambda}_{j} =\left(n_{k}-n_{k}^{\text{train}}\right)^{-1}\sum_{i=n_{k}^{ \text{train}}+1}^{n_{k}}\widehat{\Lambda}_{j,i}\quad\text{and}\] \[\widehat{\Lambda}_{j,i} =\frac{\Pi_{q=n_{k}^{\text{train}}+1}^{i-1}\widehat{\pi}_{a,n_{k }^{\text{train}}}^{j}\left(X_{q}\right)^{A_{q}}\left\{1-\widehat{\pi}_{a,n_{k }^{\text{train}}}^{j}\left(X_{q}\right)\right\}^{1-A_{q}}}{\sum_{j^{\prime}=1 }^{J}\Pi_{q=n_{k}^{\text{train}}+1}^{i-1}\widehat{\pi}_{a,n_{k}^{\text{train }}}^{j^{\prime}}\left(X_{q}\right)^{A_{q}}\left\{1-\widehat{\pi}_{a,n_{k}^{ \text{train}}}^{j^{\prime}}\left(X_{q}\right)\right\}^{1-A_{q}}}\quad\text{ for}\quad n_{k}^{\text{train}}+2\leq i\leq n_{k}, \tag{7}\]
where \(\widehat{\Lambda}_{j,n_{k}^{\text{train}}+1}=1/J\). The model mixing estimators are consistent if one of the \(j\in\mathcal{J}\) candidate models is correctly specified [20]. A similar strategy extends for conditional outcomes \(m_{a,k}(X_{i})\) by combining a set of \(L\) candidate outcome models \(\{m_{a,k}^{l}\left(x\right):l\in\mathcal{L}=\{1,...,L\}\}\). We obtain \(\widehat{m}_{a,k}(X_{i})=\sum_{l=1}^{L}\widehat{\Omega}_{l}\widehat{m}_{a,k}^ {l}(X_{i})\) as the predicted outcomes with weights \(\widehat{\Omega}_{l}\) of candidate outcomes models under treatment \(a\) in site \(k\). Further details are provided in Appendix B.
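A minimal sketch of the sequential weight computation in Eq. (7) is given below; it assumes the \(J\) candidate propensity models have already been fit on the training split and evaluated on the validation split, and the array layout is an illustrative convention.

```python
# Minimal sketch of the model-mixing weights in Eq. (7). preds is a (J, n_val)
# array of validation-set predictions from the J candidate treatment models,
# A_val the observed treatments on the validation split.
import numpy as np

def mixing_weights(preds, A_val, eps=1e-12):
    J, n_val = preds.shape
    loglik = A_val * np.log(preds + eps) + (1 - A_val) * np.log(1 - preds + eps)
    cum = np.cumsum(loglik, axis=1)                      # cumulative likelihoods
    lam = np.full((J, n_val), 1.0 / J)                   # Lambda_{j,i}, first = 1/J
    for i in range(1, n_val):                            # point i uses points < i
        w = np.exp(cum[:, i - 1] - cum[:, i - 1].max())  # numerically stabilised
        lam[:, i] = w / w.sum()
    return lam.mean(axis=1)                              # Lambda_j

# pi_hat_mixed = mixing_weights(preds, A_val) @ preds_on_full_sample  # (n,) array
```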
### Handling Covariate Mismatch
To account for covariate mismatch, we adapt the approach in [31], introducing the nuisance function \(\tau_{a,k}(V_{i})=E\{m_{a,k}(x)\mid V_{i}=v,R_{i}=k\}\), where \(m_{a,k}(x)\) is the outcome regression for treatment \(a\) in site \(k\). First, we estimate \(m_{a,k}(X_{i})\) by regressing the outcome \(Y_{i}\) on covariates \(X_{i}\) among units receiving treatment \(a\) in site \(k\). We then regress \(\widehat{m}_{a,k}(X_{i})\), the estimates from the previous step, on \(V_{i}\) in the source site \(k\) to obtain \(\widehat{\tau}_{a,k}(v)\). By doing so, we project all site-specific estimates of conditional outcomes to a common hyperplane defined by \(V_{i}\). If all effect modifiers that are distributed differently between target and source populations are measured in \(V_{i}\), then the information contained in the projected site-specific estimates can be transported to the target site. Finally, we take the mean of \(\widehat{\tau}_{a,k}(V_{i})\) over the target sample, which gives the transported estimate of the mean counterfactual outcome under treatment \(a\) in the target population.
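The two-stage construction can be sketched as follows; the off-the-shelf linear learners are placeholders for whatever outcome models a site chooses, and the function name is ours.

```python
# Minimal sketch of the covariate-mismatch projection: fit m_{a,k}(X) in the
# source site, regress its fitted values on the shared covariates V within the
# same site to obtain tau_{a,k}(V), then average tau_{a,k} over target samples.
import numpy as np
from sklearn.linear_model import LinearRegression

def transported_mean(X_k, V_k, Y_k, A_k, V_target, a=1):
    m_ak = LinearRegression().fit(X_k[A_k == a], Y_k[A_k == a])   # m_{a,k}(X)
    m_hat_k = m_ak.predict(X_k)
    tau_ak = LinearRegression().fit(V_k, m_hat_k)                 # tau_{a,k}(V)
    return tau_ak.predict(V_target).mean(), m_ak, tau_ak
```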
## 4 Federated Global Estimator
Let \(\widehat{\mu}_{a,T}\) denote the estimate of \(\mu_{a,T}\) based on target data only and \(\widehat{\mu}_{a,k}\) be the estimates of \(\mu_{a,T}\) using source data \(k\in\mathcal{S}\). We propose a general form of the federated global estimator as follows
\[\widehat{\mu}_{a,G}=\widehat{\mu}_{a,T}+\sum_{k\in\mathcal{K}} \widehat{\eta}_{k}\left\{\widehat{\mu}_{a,k}-\widehat{\mu}_{a,T}\right\}, \tag{8}\]
where \(\widehat{\eta}_{k}\geq 0\) is a non-negative weight assigned to site-specific estimates and \(\sum_{k\in\mathcal{K}}\widehat{\eta}_{k}=1\). The role of \(\eta_{k}\) is to determine the ensemble weight given to the site-specific estimates. We can employ diverse weighting methods by selecting appropriate values of \(\eta_{k}\). For example, if \(\eta_{k}=0\) for all \(k\in\mathcal{S}\), the global estimator is simply the estimator based on target data only; if \(\eta_{k}=n_{k}/N\), the global estimator combines site-specific estimates by their sample sizes; if \(\eta_{k}=(1/\sigma_{k}^{2})/\sum_{j\in\mathcal{K}}(1/\sigma_{j}^{2})\) where \(\sigma_{k}^{2}=\text{Var}(\widehat{\mu}_{a,k})\), the global estimator is the inverse variance weighting estimator, which is known to be appropriate when working models are homogeneous across sites [27]. To control for bias due to non-transportable site estimates while achieving optimal efficiency, we estimate \(\eta_{k}\) data-adaptively by a penalized regression of site-specific influence functions [12; 13]. This strategy ensembles the site-specific estimates for higher efficiency if they are sufficiently similar to the target estimates; if source estimates are significantly different, their weights will be shrunk toward zero with high probability.
We denote the data-adaptive weights as \(\eta_{k,L_{1}}\), obtained as the solutions to a penalized regression of the site-specific influence functions as follows
\[\widehat{\eta}_{k,L_{1}}=\arg\min_{\eta_{k}\geq 0}\sum_{i=1}^{N} \left[\widehat{\xi}_{T,i}(a)-\sum_{k\in\mathcal{K}}\eta_{k}\left(\widehat{ \xi}_{T,i}(a)-\widehat{\xi}_{k,i}(a)-\widehat{\delta}_{k}\right)\right]^{2}+ \lambda\sum_{k\in\mathcal{K}}|\eta_{k}|\,\widehat{\delta}_{k}^{2}, \tag{9}\]
where \(\widehat{\xi}_{T,i}(a)\) and \(\widehat{\xi}_{k,i}(a)\) are the estimated influence functions for the target and source site estimators (see Appendix D.3 for the exact form of the influence functions). The estimated difference \(\widehat{\delta}_{k}=\widehat{\mu}_{a,k}-\widehat{\mu}_{a,T}\) quantifies the bias between the estimate from source \(k\in\mathcal{S}\) and the estimate from the target \(T\). The tuning parameter \(\lambda\) determines the penalty imposed on source site estimates and in practice, is chosen via cross-validation. Specifically, we create a grid of values of \(\lambda\) and iteratively train and evaluate the model using different \(\lambda\) values, selecting the one with the lowest average validation error after multiple sample splits.
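A minimal numerical sketch of solving Eq. (9) is shown below; the non-negativity constraint is imposed through box bounds, and the final rescaling of the weights onto the simplex is an assumption we make here for illustration rather than part of Eq. (9) itself.

```python
# Minimal sketch of the data-adaptive ensembling weights in Eq. (9). xi_T is an
# (N,) array of target influence-function values, xi_sites a (K, N) array of
# site-specific influence-function values (target site in row 0), and delta a
# (K,) array of differences mu_hat_{a,k} - mu_hat_{a,T} (so delta[0] = 0).
import numpy as np
from scipy.optimize import minimize

def adaptive_weights(xi_T, xi_sites, delta, lam):
    K = xi_sites.shape[0]

    def objective(eta):
        resid = xi_T - eta @ (xi_T[None, :] - xi_sites - delta[:, None])
        return np.sum(resid ** 2) + lam * np.sum(np.abs(eta) * delta ** 2)

    res = minimize(objective, x0=np.full(K, 1.0 / K),
                   bounds=[(0.0, None)] * K, method="L-BFGS-B")
    eta = res.x
    return eta / eta.sum() if eta.sum() > 0 else np.eye(K)[0]   # simplex rescaling
```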
We estimate the variance of \(\widehat{\mu}_{a,G}\) using the estimated influence functions for \(\widehat{\mu}_{a,T}\) and \(\widehat{\mu}_{a,k}\). By the central limit theorem, \(\sqrt{N}(\widehat{\mu}_{a,G}-\bar{\mu}_{a,G})\overset{d}{\to}\mathcal{N}(0,\Sigma)\), where \(\Sigma=E\{\sum_{k\in\mathcal{K}}\bar{\eta}_{k}\xi_{k,i}(a)\}^{2}\), and \(\bar{\mu}_{a,G}\) and \(\bar{\eta}_{k}\) denote the limiting values of \(\widehat{\mu}_{a,G}\) and \(\widehat{\eta}_{k}\), respectively. The standard error of \(\widehat{\mu}_{a,G}\) is estimated as \(\sqrt{\widehat{\Sigma}/N}\) where \(\widehat{\Sigma}=N^{-1}\sum_{k\in\mathcal{K}}\sum_{i=1}^{n_{k}}\{\widehat{\eta}_{k}\widehat{\xi}_{k,i}(a)\}^{2}\). A two-sided \((1-\alpha)\times\)100% confidence interval for \(\bar{\mu}_{a,G}\) is
\[\widehat{\mathcal{C}}_{\alpha}=\left[\widehat{\mu}_{a,G}-\sqrt{ \widehat{\Sigma}/N}\mathcal{Z}_{\alpha/2},\quad\widehat{\mu}_{a,G}+\sqrt{ \widehat{\Sigma}/N}\mathcal{Z}_{\alpha/2}\right], \tag{10}\]
where \(\mathcal{Z}_{\alpha/2}\) is the \(1-\alpha/2\) quantile for a standard normal distribution.
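The variance estimate and the Wald interval in Eq. (10) can then be assembled directly from the weighted influence functions, as in the following short sketch (the zero-padding convention for units outside a given site is an assumption of this illustration).

```python
# Minimal sketch of the influence-function based variance and interval (10).
# xi_sites is assumed to be (K, N) with zeros for units outside site k, so the
# double sum over sites and units reduces to a single array operation.
import numpy as np
from scipy.stats import norm

def wald_ci(mu_G, eta, xi_sites, alpha=0.05):
    N = xi_sites.shape[1]
    sigma2 = np.sum((eta[:, None] * xi_sites) ** 2) / N     # Sigma_hat
    half = norm.ppf(1 - alpha / 2) * np.sqrt(sigma2 / N)
    return mu_G - half, mu_G + half
```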
## 5 Theoretical Guarantees
### Site-specific Estimator
We first establish the theoretical properties of the site-specific estimators constructed with the multiply robust model-mixing approach. Define \(\overline{\pi}_{a,k}^{j}\), \(\overline{m}_{a,k}^{l}\), \(\overline{\tau}_{a,k}\) and \(\overline{\zeta}_{k}\) as non-stochastic functionals that the corresponding estimators \(\widehat{\pi}_{a,k}^{j}\), \(\widehat{m}_{a,k}^{l}\), \(\widehat{\tau}_{a,k}\) and \(\widehat{\zeta}_{k}\) converge to. That is,
\[\|\widehat{\pi}_{a,k}^{j}-\overline{\pi}_{a,k}^{j}\|=o_{p}(1),\quad\|\widehat{m}_{a,k}^{l}-\overline{m}_{a,k}^{l}\|=o_{p}(1),\quad\|\widehat{\tau}_{a,k}-\overline{\tau}_{a,k}\|=o_{p}(1),\quad\|\widehat{\zeta}_{k}-\overline{\zeta}_{k}\|=o_{p}(1).\]
As shown in Lemmas D.1 and D.2 in Appendix D.2, the \(L_{2}\) risks of the model mixing estimators \(\widehat{\pi}_{a,k}\) and \(\widehat{m}_{a,k}\) are bounded by the smallest risks of all candidate models plus a remainder term that vanishes at a faster rate than the risks themselves.
**Theorem 2**.: _Suppose that the conditions in Lemmas D.1 and D.2 hold, and that \(\widehat{\pi}^{j}_{a,k}\), \(\widehat{m}^{l}_{a,k}\), \(\widehat{\zeta}_{k}\), \(\widehat{\tau}_{a,k}\), \(\bar{\pi}^{j}_{a,k}\), \(\bar{m}^{l}_{a,k}\), \(\bar{\zeta}_{k}\) and \(\bar{\tau}_{a,k}\) are uniformly bounded for all treatment models \(j\in\mathcal{J}\) and for all outcome models \(l\in\mathcal{L}\). Consider the following conditions:_
(B1) \(\overline{\pi}^{j}_{a,k}=\pi_{a,k}\) _for some_ \(j\in\mathcal{J}\)_;_ (B2) \(\overline{m}^{l}_{a,k}=m_{a,k}\) _for some_ \(l\in\mathcal{L}\)_;_ (C1) \(\overline{\zeta}_{k}=\zeta_{k}\)_;_ (C2) \(\overline{\tau}_{a,k}=\tau_{a,k}\)_._
_Then, under Assumptions (A1) - (A7), and if one of (B1) or (B2) and one of (C1) or (C2) hold,_
\[\left\|\widehat{\mu}_{a,k}-\mu_{a,T}\right\|=O_{p}\left(n^{-1/2}+\|\widehat{ \pi}_{a,k}-\pi_{a,k}\|\|\widehat{m}_{a,k}-m_{a,k}\|+\|\widehat{\zeta}_{k}- \zeta_{k}\|\|\widehat{\tau}_{a,k}-\tau_{a,k}\|\right). \tag{11}\]
_Further, if the nuisance estimators satisfy the following convergence rate_
\[\|\widehat{m}_{a,k}-m_{a,k}\|\left\|\widehat{\pi}_{a,k}-\pi_{a,k}\right\|=o_{ p}(1/\sqrt{n}),\quad\|\widehat{\zeta}_{k}-\zeta_{k}\|\left\|\widehat{\tau}_{a,k}- \tau_{a,k}\right\|=o_{p}(1/\sqrt{n}),\]
_then \(\sqrt{n}(\widehat{\mu}_{a,k}-\mu_{a,T})\) asymptotically converges to a normal distribution with mean zero and asymptotic variance equal to the semiparametric efficiency bound. The derivation of the result is provided in the Appendix._
### Federated Global Estimator
**Theorem 3**.: _Under Assumptions (A1) - (A7) and the regularity conditions specified in the Appendix, the federated global estimator of \(\Delta_{T}\), given by \(\widehat{\Delta}_{G}=\widehat{\mu}_{1,G}-\widehat{\mu}_{0,G}\), is consistent and asymptotically normal,_
\[\sqrt{N/\widehat{\mathcal{V}}}\left(\widehat{\Delta}_{G}-\Delta_{T}\right) \overset{d}{\rightarrow}\mathcal{N}(0,1), \tag{12}\]
_with the variance estimated consistently as \(\widehat{\mathcal{V}}\). The variance of \(\widehat{\Delta}_{G}\) is no larger than that of the estimator based on target data only, \(\widehat{\Delta}_{T}=\widehat{\mu}_{1,T}-\widehat{\mu}_{0,T}\). Further, if there exist some source sites with consistent estimators of \(\Delta_{T}\) that satisfy the conditions specified in the Appendix, the variance of \(\widehat{\Delta}_{G}\) is strictly smaller than that of \(\widehat{\Delta}_{T}\)._
## 6 Experiments
We evaluate the finite sample properties of five different estimators: (i) an augmented inverse probability weighted (AIPW) estimator using data from the target site only (Target), (ii) an AIPW estimator that weights each site proportionally to its sample size (SS), (iii) an AIPW estimator that weights each site inverse-proportionally to its variance (IVW), (iv) an AIPW estimator that weights each site with the \(L_{1}\) weights defined in (9) (AIPW-\(L_{1}\)), and (v) a multiply robust estimator with the \(L_{1}\) weights defined in (9) (MR-\(L_{1}\)).
Across different settings, we examine the performance of each estimator in terms of bias, root mean square error, and coverage and length of 95% confidence intervals (CI) across \(500\) simulations.
We consider a total of five sites and fix the first site as the target site with a relatively small sample size of \(300\). The source sites have larger sample sizes of \(\{500,500,1000,1000\}\). We model heterogeneity in the covariate distributions across sites with skewed normal distributions and varying levels of skewness in each site, \(X_{kp}\sim\mathcal{SN}\left(x;\Xi_{kp},\Omega_{kp}^{2},\mathrm{A}_{kp}\right)\), where \(k\in\{1,...,5\}\) indexes each site and \(p\in\{1,...,4\}\) indexes the covariates; \(\Xi_{kp}\), \(\Omega_{kp}^{2}\) and \(\mathrm{A}_{kp}\) are the location, scale, and skewness parameters, respectively. Following [17], we also generate covariates \(Z_{kp}\) as non-linear transformations of \(X_{kp}\) such that \(Z_{k1}=\exp(X_{k1}/2)\), \(Z_{k2}=X_{k2}/\{1+\exp(X_{k1})\}+10\), \(Z_{k3}=(X_{k1}X_{k3}/25+0.6)^{3}\) and \(Z_{k4}=(X_{k2}+X_{k4}+20)^{2}\).
For the MR-\(L_{1}\), we adaptively mix two outcome models and two treatment models. We specify the first model with the covariates \(X_{kp}\), and the second model with the covariates \(Z_{kp}\). The AIPW-\(L_{1}\) estimator requires a common model to be specified across sites, so we specify the outcome and treatment models using covariates \(X_{kp}\).
The tuning parameter \(\lambda\) is selected through cross-validation using a grid of values \(\{0,10^{-3},10^{-2},0.1,0.5,1,2,5,10\}\). To perform cross-validation, the simulated datasets in each site are split into two equally sized training and validation datasets.
### No Covariate Mismatch
We first consider the setting where there is no covariate mismatch, i.e. \(p=4\) for both target and source sites. For each unit, we generate potential outcomes as
\[Y_{k}(a)=210+X_{k}\beta_{x}+Z_{k}\beta_{z}+\varepsilon_{k} \tag{13}\]
where \(\beta_{x}=\beta_{z}=(27.4,13.7,13.7,13.7)\). For units in the target site, we generate outcomes with \(X_{k}\) only by setting \(\beta_{z}=0\); for units in the source sites, either \(X_{k}\) or \(Z_{k}\) is used to generate outcomes. If \(\beta_{x}\neq 0\), then \(\beta_{z}=0\) and vice versa. Similarly, the treatment is generated as
\[A_{k}\sim\mathrm{Bernoulli}\left(\pi_{k}\right)\quad\pi_{k}=\mathrm{expit}(X_ {k}\alpha_{x}+Z_{k}\alpha_{z}) \tag{14}\]
where \(\alpha_{x}=\alpha_{z}=(-1,0.5,-0.25,-0.1)\). For units in the target site, we generate treatments with \(X_{k}\) only by setting \(\alpha_{z}=0\); for units in the source sites, either \(X_{k}\) or \(Z_{k}\) is used to generate treatments. If \(\alpha_{x}\neq 0\), then \(\alpha_{z}=0\) and vice versa. With this data generation scheme, the true ATE is \(\Delta_{T}=0\).
We compare the performance of the five estimators described above under the following settings:
**Setting 1** (\(C=0\)): outcomes and treatments in all source sites are generated with \(Z_{k}\). However, all source sites misspecify both models with \(X_{k}\). The target site correctly specifies both models.
**Setting 2** (\(C=1/2\)): outcomes and treatments are generated with \(X_{k}\) in Sites 2 and 4, but with \(Z_{k}\) in Sites 3 and 5; thus, the outcome and treatment models are misspecified in Sites 3 and 5, and only half of the source sites correctly specify the models.
**Setting 3** (\(C=1\)): outcomes and treatments in all source sites are generated with \(X_{k}\), so all source sites correctly specify outcome and treatment models with \(X_{k}\).
The results in Table 1 indicate that the MR-\(L_{1}\) estimator has lower RMSE than the Target estimator when some source sites have correctly specified models (\(C=1/2\) and \(C=1\)). Relative to the MR-\(L_{1}\) estimator, the SS and IVW estimators demonstrate larger biases and RMSE, and lower coverage when some source sites have misspecified models (\(C=0\) and \(C=1/2\)). The MR-\(L_{1}\) estimator shows reduced biases and RMSE compared to the AIPW-\(L_{1}\) estimator, while maintaining similar coverage; this improvement can be attributed to the inclusion of an additional model that closely resembles the true model. When all source sites correctly specify working models (\(C=1\)), the IVW estimator performs optimally with the shortest confidence interval as expected.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & Target & SS & IVW & AIPW-\(L_{1}\) & MR-\(L_{1}\) \\ \cline{2-6} \(C=0\) & & & & \\ MAE & 0.109 & 1.933 & 0.177 & 0.110 & 0.050 \\ RMSE & 0.141 & 1.987 & 0.219 & 0.144 & 0.061 \\ Cov. & 0.950 & 0.998 & 0.826 & 0.936 & 0.960 \\ Len. & 0.551 & 7.035 & 0.567 & 0.547 & 0.234 \\ \hline \(C=1/2\) & & & & & \\ MAE & 0.109 & 1.111 & 0.107 & 0.109 & 0.050 \\ RMSE & 0.141 & 1.189 & 0.139 & 0.140 & 0.062 \\ Cov. & 0.950 & 1.000 & 0.942 & 0.950 & 0.962 \\ Len. & 0.551 & 6.010 & 0.540 & 0.547 & 0.242 \\ \hline \hline \(C=1\) & & & & & \\ MAE & 0.109 & 0.036 & 0.035 & 0.050 & 0.049 \\ RMSE & 0.141 & 0.045 & 0.044 & 0.064 & 0.063 \\ Cov. & 0.950 & 0.968 & 0.956 & 0.958 & 0.960 \\ Len. & 0.551 & 0.195 & 0.191 & 0.260 & 0.253 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Mean absolute error (MAE), root mean squared error (RMSE), coverage (Cov.), and length (Len.) of \(95\%\) CIs based on 500 simulated data sets in three (mis)specification settings.
### Covariate Mismatch
To demonstrate that our proposed MR-\(L_{1}\) estimator can handle covariate mismatch across sites, we modify the data-generating process in the following way: only two covariates are used in the outcome and treatment generation processes in the target site. Specifically, the generating models remain the same as in (13) and (14), using covariates \(X_{k}\). However, for units in the target site, we set \(\beta_{x}=(27.4,13.7,0,0)\) for outcome generation and \(\alpha_{x}=(-1,0.5,0,0)\) for treatment generation.
The AIPW-\(L_{1}\) estimator, which requires common models across sites, only uses the shared covariates (\(X_{k1}\) and \(X_{k2}\)) to specify outcome and treatment models for all sites. On the other hand, our MR-\(L_{1}\) estimator allows for different covariates in different sites, so we utilize both shared covariates with the target site and unique covariates to specify the outcome and treatment models in the source sites.
In Table 2, we observe that the AIPW-\(L_{1}\) estimator exhibits similar bias, RMSE, coverage, and length of confidence intervals as the Target estimator while outperforming the SS and IVW estimators. This is because relying solely on shared covariates leads to significant biases in all source sites (Figure 2, left panel), and the AIPW-\(L_{1}\) estimator assigns nearly all of the ensemble weight to the target site so as to reduce bias.
In contrast, the MR-\(L_{1}\) estimator outperforms the AIPW-\(L_{1}\) estimator by exhibiting substantially smaller bias, lower RMSE, and better coverage. This improvement can be attributed to the inclusion of unique covariates from the source sites, which allows for the recovery of true models in those sites and contributes to a more accurate estimation of \(\Delta_{T}\) (Figure 2, right panel). These findings suggest that neglecting covariate mismatch by solely relying on shared covariates can lead to highly biased results.
## 7 Conclusion
We have proposed a novel federated approach for _privacy-preserving_, _multiply robust_, and _flexible_ estimation of causal effects. Compared to existing federated methods, our proposed approach accommodates covariate shift and covariate mismatch across sites, while guaranteeing efficient estimation and preserving privacy in the sense that only covariate means of the target samples are shared in a single round of communication. Our proposal allows investigators in each site to have greater flexibility in specifying candidate models by utilizing site-specific information.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & Target & SS & IVW & AIPW-\(L_{1}\) & MR-\(L_{1}\) \\ \cline{2-6} MAE & 0.108 & 4.331 & 0.150 & 0.107 & 0.053 \\ RMSE & 0.136 & 4.401 & 0.186 & 0.134 & 0.067 \\ Cov. & 0.946 & 1.000 & 0.882 & 0.950 & 0.944 \\ Len. & 0.538 & 26.024 & 0.553 & 0.536 & 0.253 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Mean absolute error (MAE), root mean squared error (RMSE), coverage (Cov.), and length (Len.) of \(95\%\) CIs based on 500 simulated data sets in covariate mismatch settings.
Figure 2: Estimates of the TATE based on \(500\) simulated data sets with covariate mismatch comparing the site-specific estimators with nuisance functions estimated by AIPW (left) and by multiply robust model-mixing (right).
Moreover, our method utilizes adaptive ensemble weights to avoid negative transfer in the federation process. The limitations of the current proposal provide opportunities for further research. To handle high-dimensional covariates, future research can explore ways to jointly model the propensity score and density ratio to reduce the dimension of parameters for population balancing.
|
2309.12166 | Dark Sector Effective Field Theory | We introduce the effective field theory of two different light dark particles
interacting with the standard model (SM) light states in a single vertex,
termed dark sector effective field theory (DSEFT). We focus on the new light
particles with spin up to 1 and being real in essence, namely, new real scalars
$\phi$ and $S$, Majorana fermions $\chi$ and $\psi$, and real vectors $X_\mu$
and $V_\mu$. In the framework of low energy effective field theory with QED and
QCD symmetry, the DSEFT can be classified into six categories, including the
scalar-scalar-SM ($\phi S$-SM), fermion-fermion-SM ($\chi\psi$-SM),
vector-vector-SM ($X V$-SM), scalar-fermion-SM ($\phi \chi$-SM),
scalar-vector-SM ($\phi X$-SM), and fermion-vector-SM ($\chi X$-SM) cases. For
each case, we construct the effective operator basis up to canonical dimension
7, which will cover most interesting phenomenology at low energy. As a
phenomenological example, we investigate the longstanding neutron lifetime
anomaly through the neutron dark decay modes $n \to \chi \phi \text{ or } \chi
X$ from the effective interactions in the fermion-scalar-SM or
fermion-vector-SM case. When treating the light fermion as a dark matter
candidate, we also explore the constraints from DM-neutron annihilation signal
at Super-Kamiokande. We find the neutron dark decay in each scenario can
accommodate the anomaly while, at the same time, not contradicting the
Super-Kamiokande limit. | Jin-Han Liang, Yi Liao, Xiao-Dong Ma, Hao-Lin Wang | 2023-09-21T15:24:53Z | http://arxiv.org/abs/2309.12166v2 | # Dark Sector Effective Field Theory
###### Abstract
We introduce the effective field theory of two different light dark particles interacting with the standard model (SM) light states in a single vertex, termed dark sector effective field theory (DSEFT). We focus on the new light particles with spin up to 1 and being real in essence, namely, new real scalars \(\phi\) and \(S\), Majorana fermions \(\chi\) and \(\psi\), and real vectors \(X_{\mu}\) and \(V_{\mu}\). In the framework of low energy effective field theory with QED and QCD symmetry, the DSEFT can be classified into six categories, including the scalar-scalar-SM (\(\phi S\)-SM), fermion-fermion-SM (\(\chi\psi\)-SM), vector-vector-SM (\(XV\)-SM), scalar-fermion-SM (\(\phi\chi\)-SM), scalar-vector-SM (\(\phi X\)-SM), and fermion-vector-SM (\(\chi X\)-SM) cases. For each case, we construct the effective operator basis up to canonical dimension 7, which will cover most interesting phenomenology at low energy. As a phenomenological example, we investigate the longstanding neutron lifetime anomaly through the neutron dark decay modes \(n\rightarrow\chi\phi\) or \(\chi X\) from the effective interactions in the fermion-scalar-SM or fermion-vector-SM case. When treating the light fermion as a dark matter candidate, we also explore the constraints from DM-neutron annihilation signal at Super-Kamiokande. We find the neutron dark decay in each scenario can accommodate the anomaly, at the same time, without contradicting with the Super-Kamiokande limit. |
2310.00401 | Learning High-level Semantic-Relational Concepts for SLAM | Recent works on SLAM extend their pose graphs with higher-level semantic
concepts like Rooms exploiting relationships between them, to provide, not only
a richer representation of the situation/environment but also to improve the
accuracy of its estimation. Concretely, our previous work, Situational Graphs
(S-Graphs+), a pioneer in jointly leveraging semantic relationships in the
factor optimization process, relies on semantic entities such as Planes and
Rooms, whose relationship is mathematically defined. Nevertheless, there is no
unique approach to finding all the hidden patterns in lower-level factor-graphs
that correspond to high-level concepts of different natures. It is currently
tackled with ad-hoc algorithms, which limits its graph expressiveness.
To overcome this limitation, in this work, we propose an algorithm based on
Graph Neural Networks for learning high-level semantic-relational concepts that
can be inferred from the low-level factor graph. Given a set of mapped Planes
our algorithm is capable of inferring Room entities relating to the Planes.
Additionally, to demonstrate the versatility of our method, our algorithm can
infer an additional semantic-relational concept, i.e. Wall, and its
relationship with its Planes. We validate our method in both simulated and real
datasets demonstrating improved performance over two baseline approaches.
Furthermore, we integrate our method into the S-Graphs+ algorithm providing
improved pose and map accuracy compared to the baseline while further enhancing
the scene representation. | Jose Andres Millan-Romera, Hriday Bavle, Muhammad Shaheer, Martin R. Oswald, Holger Voos, Jose Luis Sanchez-Lopez | 2023-09-30T14:54:31Z | http://arxiv.org/abs/2310.00401v2 | # Better Situational Graphs by Inferring High-level
###### Abstract
Recent works on SLAM extend their pose graphs with higher-level semantic concepts exploiting relationships between them, not only to provide a richer representation of the situation/environment but also to improve the accuracy of its estimation. Concretely, our previous work, Situational Graphs (_S-Graphs_), a pioneer in jointly leveraging semantic relationships in the factor optimization process, relies on semantic entities such as _wall surfaces_ and _rooms_, whose relationship is mathematically defined. Nevertheless, extracting these high-level concepts relying exclusively on the lower-level factor graph remains a challenge, and it is currently done with ad-hoc algorithms, which limits the capability to include new semantic-relational concepts.
To overcome this limitation, in this work, we propose a Graph Neural Network (GNN) for learning high-level semantic-relational concepts that can be inferred from the low-level factor graph. We have demonstrated that we can infer _room_ entities and their relationship to the mapped _wall surfaces_ more accurately and in a more computationally efficient manner than the baseline algorithm. Additionally, to demonstrate the versatility of our method, we provide a new semantic concept, i.e. _wall_, and its relationship with its _wall surfaces_. Our proposed method has been integrated into _S-Graphs+_, and it has been validated in both simulated and real datasets. A docker container with our software will be made available to the scientific community.
## I Introduction
Incorporating higher-level semantic-relational entities enhances the situational awareness [2] of a robot and hence enriches the built model of the world. Furthermore, it provides advantageous information for successive tasks such as planning [3].
Graph-based SLAM optimizes a graph of real-world objects observed from measurements, using only geometric and temporal information. During recent years, 3D Semantic Scene Graphs [4, 5, 6] emerged as a promising framework to associate the SLAM graph structure with higher-level semantic-relational concepts. [7] goes further and optimizes the combined graph, leveraging loop closures.
Our previous work _S-Graphs_[1] fully integrates the SLAM graph with the scene graph, optimizing them as a unified entity. However, _S-Graphs_ only extracts a few semantic-relational entities, and these entities are extracted with ad-hoc algorithms per entity type, limiting the capability of _S-Graphs_ to generalize to different and complex types of semantic entities.
To address these limitations, we present a framework to enhance _S-Graphs_ with relational and generalization capabilities over semantic entities based on GNNs [8, 9]. Our framework can infer pairwise relationships among _wall surfaces_ belonging to the same higher-level entities either _walls_ or _rooms_. These relationships are subsequently processed and clustered in the set of nodes relating to the new entity.
Furthermore, these newly generated entities along with their relationships are incorporated as nodes and edges into the appropriate layers of the four-layered optimizable _S-Graph_. The results over several simulated and real structured indoor environments demonstrate that our method improves the baseline in detection time, expressiveness, and the number of entities detected. Refer to Fig. 1 for a visual representation of our system.

Fig. 1: **System Overview.** We augment the original _S-Graph_ from our SLAM system [1] with additional constraints derived from higher-level information like relations between walls and rooms. Taking low-level concepts like _wall surfaces_ as graph nodes, a trained graph neural network (GNN) classifies which relations belong to the _same room_ or _same wall_ and decides which constraints are then added to the _S-Graph_ to improve the quality of the map and the estimated camera trajectory.
To summarize, the primary contributions of our paper are:
* A GNN-based framework to generate high-level semantic entities (concretely, _rooms_ and _walls_) and their relationships from the low-level entities (i.e. _wall surfaces_) in a computationally efficient and versatile manner.
* Integration of the algorithm within the four-layered optimizable _S-Graphs_ framework [1] along with validation in simulated and real datasets with ablation studies.
## II Related work
### _Semantic Scene Graphs for SLAM_
Scene graphs serve as graph models that encapsulate the environment as structured representations. This graph comprises entities, their associated attributes, and the interrelationships among them. In the context of 3D scene graphs, [4] has pioneered the development of an offline, semi-autonomous framework. This framework relies upon object detections derived from RGB images, creating a multilayered hierarchical representation of the environment and its constituent elements such as cameras, objects, rooms, and buildings. Additionally, [5] employs a sequence of images for visual questioning and answering and planning.
GNNs [8, 9] have been proposed to deduce scene graphs from images [10, 11], where the entities within the scene constitute the nodes of the graph, specifically object instances. Scene graph prediction necessitates the incorporation of relationships between these instances. [12] presented an initial large-scale 2D scene graph dataset, while [13, 14, 15], have extended the concept to generate 3D semantic scene graph annotations within indoor and outdoor environments, including object semantics, rooms, cameras, and the relationships interconnecting these entities. [16] extends this model to dynamic entities as humans. Furthermore, [6] does not need any prior scene knowledge and segments instances, their semantic attributes, and the concurrent inference of relationships, all in real-time.
While all these frameworks run SLAM in the background, they do not utilize the scene graph to enhance the SLAM process. Hydra [7] focuses on real-time 3D scene graph generation and optimization using loop closure constraints. [17] introduces the concept of Neural Trees, _H-Tree_, as an evolution from GNNs where message-passing nodes within the graph correspond to subgraphs within the input graph, and they are organized hierarchically, effectively enhancing the expressive capacity of GNNs. The extension of Hydra in [18] introduces _H-Tree_ to enhance the characterization of specific building areas, like kitchens. However, they do not integrate the SLAM state with the scene graph for simultaneous optimization.
Our prior works _S-Graphs_[1, 19], successfully bridged the gap by demonstrating the potential of tightly integrating SLAM graphs and scene graphs. _S-Graphs_ creates a four-layered hierarchical optimizable graph while concurrently representing the environment as a 3D scene graph, achieving remarkable performance even in complex settings. Furthermore, we have expanded its capabilities by hierarchically selecting regions of the graph to optimize [20], incorporating prior architectural information [21], visual fiducial markers [22], or collaborative data from multiple robots [23]. It has also been employed to formulate a navigation problem [3]. In this work, our objective is to harness the capabilities of GNNs to enhance all these research directions by generating more reliable, versatile, and comprehensive higher-level representations within _S-Graphs_.
### _Room and Wall Detection_
The first step in the generation of higher-level concepts resides in comprehending the interrelations among fundamental geometric entities. The identification of structural configurations of _wall surfaces_ which collectively form _rooms_ and _walls_ is crucial. Various methods have been explored to address this challenge, encompassing the utilization of pre-existing 2D LiDAR maps [24, 25, 26], the utilization of 2D occupancy maps within complex indoor environments [27], and pre-established 3D maps [28, 29, 30]. It should be noted, however, that these approaches exhibit inherent performance constraints and lack real-time operational capabilities. [7] introduces a real-time room segmentation approach designed to classify different places into _rooms_. In [1], we leverage the _wall surfaces_ contained in _S-Graphs_+ to instantaneously define _rooms_ in real-time, while concurrently incorporating these findings into the optimizable graph. To the best of our knowledge, no analogous methodologies exist for the automated identification of _wall_ entities.
## III Methodology
The pipeline of our method is illustrated in Fig. 2. First, the low-level layer of the _S-Graph_, i.e. the mapped _keyframes_ and _wall surfaces_, is received, and only the _wall surface_ nodes are extracted. Then, the features of those nodes are preprocessed to build a proximity graph and define the initial embedding of nodes and edges, as described in Sec. III-A. Next, node and edge embeddings are updated to infer a classification for every edge, as presented in Sec. III-B. Later, the inferred edges are clustered, and new nodes for the new _wall_ and _room_ entities are generated, introducing along with them a link to the _wall surface_ nodes, following the method proposed in Sec. III-C. Finally, the new nodes and relationships are integrated as the high-level layers of _S-Graphs_.
### _Initial Graph_
Initially, raw _wall surfaces_ in _S-Graphs_+ are defined as a point cloud and a normal, which describes the observation side. To simplify this representation, points are flattened and assimilated to a 2D line. Subsequently, overlapping lines are filtered out and intersecting lines are split to overcome the issue of a unique long _wall surface_ belonging to various rooms. Finally, the initial node embedding \(v_{i}^{0}\) is defined as
\([w_{i},n_{i}]\) where \(w_{i}\) is the width of the _wall surface_ and \(n_{i}\) is the normal from the observed side.
At this point, we have a set of clean nodes without a graph structure. Hence, new directed edges are artificially created in the message-passing graph by node proximity. See Fig. 2.B. The initial embedding of those new edges, \(e_{ij}^{0}\), is defined as \([c_{j}-c_{i},cd_{ij}]\), where \(cd_{ij}\) is the closest distance between the two _wall surfaces_ and \(c_{i}\) is the centroid of the \(i\)-th node.
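A minimal sketch of this construction is given below; it connects each node to its \(M\) closest neighbours and, as a simplification, uses the distance between centroids as a proxy for the closest distance between the two segments. All names and the value of \(M\) are illustrative.

```python
# Minimal sketch of the initial proximity graph: node features v_i^0 = [w_i, n_i]
# and directed edges to the M nearest nodes with features [c_j - c_i, cd_ij].
import numpy as np

def build_proximity_graph(centroids, widths, normals, M=15):
    n = len(centroids)
    d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    src, dst, edge_attr = [], [], []
    for i in range(n):
        for j in np.argsort(d[i])[:M]:
            src.append(i)
            dst.append(j)
            # centroid distance used as a proxy for the closest segment distance
            edge_attr.append(np.r_[centroids[j] - centroids[i], d[i, j]])
    x = np.column_stack([widths, normals])       # v_i^0 = [w_i, n_i]
    edge_index = np.array([src, dst])            # (2, E) message-passing edges
    return x, edge_index, np.array(edge_attr)
```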
### _Embedding Update and Classification_
Inspired by [6], the classification process follows an encoder-decoder fashion. The encoder updates the node and edge embeddings separately but interleaved using the latest updates as in Eq. (1) and Eq. (2).
\[v_{i}^{l+1} =g_{v}\big{(}[v_{i}^{l},\max_{j\in\mathcal{N}(i)}(\mathrm{GAT}(v _{i}^{l},e_{ij}^{l},v_{j}^{l}))]\big{)} \tag{1}\] \[e_{ij}^{l+1} =g_{e}([v_{i}^{l},e_{ij}^{l},v_{j}^{l}]) \tag{2}\]
where \(g_{v}(\cdot)\) and \(g_{e}(\cdot)\) are linear layers [31], \(\mathcal{N}(i)\) are the neighbors of \(i_{th}\) node and \(\mathrm{GAT}(\cdot)\) is a Graph Attention Network [8, 9]. Encoder hyperparameters are maintained across the classification of both classes. Two layers of Eq. (1) and Eq. (2) are used.
As opposed to encoder, decoder hyperparameters are specific to the target class. The outcome of the Encoder is passed through a multi-layer perceptron as in Eq. (3) before the final binary classification of a specific edge.
\[c_{ij}=g_{d}([v_{i}^{L},e_{ij}^{L},v_{j}^{L}]) \tag{3}\]
where \(g_{d}(\cdot)\) are three linear layers and \(L\) is the last layer of the encoder.
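For concreteness, a sketch of Eqs. (1)-(3) using PyTorch Geometric is shown below. It assumes a `GATConv` layer with `edge_dim` support; note that `GATConv` performs its own attention-weighted aggregation, so the max-aggregation in Eq. (1) is only approximated, and the hidden sizes and activations are illustrative choices rather than the exact hyperparameters used in our experiments.

```python
# Minimal sketch of the encoder-decoder edge classifier of Eqs. (1)-(3),
# assuming PyTorch Geometric's GATConv with edge features (edge_dim).
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv

class EdgeClassifier(nn.Module):
    def __init__(self, node_dim, edge_dim, hidden=64):
        super().__init__()
        self.gat1 = GATConv(node_dim, hidden, edge_dim=edge_dim)
        self.g_v1 = nn.Linear(node_dim + hidden, hidden)           # Eq. (1), layer 1
        self.g_e1 = nn.Linear(2 * node_dim + edge_dim, hidden)     # Eq. (2), layer 1
        self.gat2 = GATConv(hidden, hidden, edge_dim=hidden)
        self.g_v2 = nn.Linear(2 * hidden, hidden)                  # Eq. (1), layer 2
        self.g_e2 = nn.Linear(3 * hidden, hidden)                  # Eq. (2), layer 2
        self.g_d = nn.Sequential(nn.Linear(3 * hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))             # Eq. (3): three linear layers

    def forward(self, v, edge_index, e):
        row, col = edge_index
        msg = self.gat1(v, edge_index, edge_attr=e)                # aggregated messages
        v1 = torch.relu(self.g_v1(torch.cat([v, msg], dim=-1)))
        e1 = torch.relu(self.g_e1(torch.cat([v[row], e, v[col]], dim=-1)))
        msg = self.gat2(v1, edge_index, edge_attr=e1)
        v2 = torch.relu(self.g_v2(torch.cat([v1, msg], dim=-1)))
        e2 = torch.relu(self.g_e2(torch.cat([v1[row], e1, v1[col]], dim=-1)))
        return self.g_d(torch.cat([v2[row], e2, v2[col]], dim=-1)).squeeze(-1)
```

The returned per-edge logits would be trained with a binary cross-entropy loss, with a separate decoder per relation type (_same room_ and _same wall_), while the encoder is shared.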
```
Input: wall surface nodes and same room edges graph \(\mathcal{G}_{WS}\)
Output: Clustering as list of node lists \(\boldsymbol{\mathcal{V}_{R}}\)
\(\boldsymbol{\mathcal{V}_{R}}\leftarrow[]\)
for \(l_{i}\in\boldsymbol{N_{WS}}\) do
    \(\boldsymbol{\mathcal{E}_{c}}\leftarrow find\_cycles\_of\_size\_l(\mathcal{G}_{WS},l_{i})\)
    \(\boldsymbol{\mathcal{V}_{ini}}\leftarrow\mathcal{E}_{c,j}.nodes\ \forall\ \mathcal{E}_{c,j}\in\boldsymbol{\mathcal{E}_{c}}\)
    \(\mathcal{I}_{count}\leftarrow count\_set\_repetitions(\boldsymbol{\mathcal{V}_{ini}})\)
    \(\boldsymbol{\mathcal{V}_{sort}}\leftarrow sort\_descendent(\boldsymbol{\mathcal{V}_{ini}},\mathcal{I}_{count})\)
    for \(\mathcal{V}_{sort,j}\in\boldsymbol{\mathcal{V}_{sort}}\) do
        if \(all(\mathcal{V}_{sort,j}\notin\boldsymbol{\mathcal{V}_{R}})\) then
            \(\boldsymbol{\mathcal{V}_{R}}.append(\mathcal{V}_{sort,j})\)
            \(\mathcal{G}_{WS}.remove(\mathcal{V}_{sort,j})\)
        end if
    end for
end for
```
**Algorithm 1**_same room_ edge clustering
### _Subgraph Generation_
Each set of predicted relations, _same room_ and _same wall_, follows a different clustering process to generate _rooms_ and _walls_ nodes. On one hand, since _wall_ is composed of two _wall surfaces_, only one inferred _same wall_ relation is required to obtain a cluster of two _wall surface_ nodes.
On the other hand, for the _same room_ relationship, the existence of cycles is leveraged to select _wall surface_ nodes that appear in the highest number of cycles, as explained in Algorithm 1. We assume all _wall surface_ nodes forming a _same room_ relationship are connected through at least one cycle. As rooms can vary in the number of _wall surfaces_ they contain, cycles of each candidate length are searched for in descending order of length, prioritizing bigger rooms. For each length, the set of _wall surface_ nodes defined by the largest cycle is prioritized. However, even though any number of _wall surfaces_ could be associated with a _room_, only sets linked to two and four are generated, since currently _S-Graphs_+ only contains those factor types. Since the same node cannot be part of two different rooms, every selected set of nodes is removed from the initial graph and the process is repeated for smaller rooms. This overcomes the issue of false positives that may lead to the merging of _wall surfaces_ of two different _rooms_.

Fig. 2: **System Architecture.** The entire process from geometric entities reception to the inclusion of new higher-level entities to the _S-Graph_. **a)** The low-level layer of S-Graph is received. **b)** Only nodes of interest are selected, i.e. the wall surfaces. **c)** A proximity graph for message passing is created amongst the nodes and the initial features of edges and nodes are computed from incoming node definitions. **d)** A GNN for nodes and linear layers for edges are employed to update embeddings via message sharing. **e)** Both embedding types are used to classify edges through linear layers. **f)** Edges are consistently grouped. **g)** For each group, a new node is created and linked with the clustered nodes. **h)** The new semantic nodes and links are added to the original _S-Graph_.
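A simplified sketch of this clustering is given below, assuming networkx \(\geq\) 3.1 (where `simple_cycles` supports undirected graphs and a length bound). Four-wall rooms are selected through cycle counting as in Algorithm 1; treating the remaining positively classified pairs as two-wall rooms is a simplification of this sketch.

```python
# Minimal sketch of the cycle-based room clustering following Algorithm 1.
import networkx as nx

def cluster_rooms(pos_edges, cycle_len=4):
    """pos_edges: (i, j) wall-surface pairs classified as 'same room'."""
    g = nx.Graph(pos_edges)
    rooms = []
    # 4-wall rooms: prefer node sets covered by the largest number of 4-cycles
    cycles = [frozenset(c) for c in nx.simple_cycles(g, length_bound=cycle_len)
              if len(c) == cycle_len]
    counts = {s: cycles.count(s) for s in set(cycles)}
    for nodes in sorted(counts, key=counts.get, reverse=True):
        if all(g.has_node(n) for n in nodes):       # no node already assigned
            rooms.append(sorted(nodes))
            g.remove_nodes_from(nodes)
    # remaining positive pairs are treated as 2-wall rooms (simplification)
    for i, j in list(g.edges):
        if g.has_node(i) and g.has_node(j):
            rooms.append(sorted((i, j)))
            g.remove_nodes_from((i, j))
    return rooms
```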
Finally, the newly generated nodes (_rooms_, _walls_) along with the respective _wall surface_ nodes are incorporated into the _Rooms layer_ of the optimizable _S-Graph_. For _room_ nodes, the factor already present in _S-Graphs_ as described in [1] is utilized. Whereas for _wall_ nodes, the factor employed is presented in [21].
## IV Training with synthetic dataset
Due to the lack of an existing tagged dataset in the literature that provides the targeted entities and relationships, we have developed a synthetic one focused on replicating the common _wall surface_ structure of usual indoor environments. To create replicate data as similar as possible to the observed _S-Graphs_+, the data is augmented with several layers of randomization in size, position, and orientation when creating rooms and wall surfaces. Ground truth edges are automatically tagged as _same room_ or _same wall_ and included along with negative tagged message passing edges with the 15 closest neighbor nodes for each node. During the training process, 800 different layouts are used for backpropagation during each one of the 35 epochs. Xavier uniform initialization [32] is used for the learnable parameters. Fig. 3 shows an example of inference in this dataset after training. After the initial training with the synthetic dataset, the model is used on real data without further training, with no additional tuning of parameters and the same normalization applied.
## V Experimental Results
### _Methodology_
Our work is validated across multiple construction sites and office spaces, both simulated and real-world scenarios, as detailed in Tab. I and [1]. We compare it with the ad-hoc room detection algorithm presented in _S-Graphs_+ [1] which we name _Free Space_. Furthermore, we ablate our method tagged as _Ours (rooms only)_ standing for inference without _walls_. VLP-16 LiDAR data is utilized across all datasets. The example scenarios are tagged with either \(s\) or \(r\) to differentiate simulation and real respectively.
In all the experiments, no fine-tuning of the specified network hyper-parameters is applied, as the empirically chosen ones suffice for all cases.
**Simulated Data.** We performed a total of five experiments on simulated datasets denoted as \(C1F1s\), \(C1F2s\), \(SE1s\), \(SE23s\), and \(SE3s\). \(C1F1s\) and \(C1F2s\) are generated from the 3D meshes of two floors of actual architectural plans, while \(SE1s\), \(SE23s\) and \(SE3s\) simulate typical indoor environments with varying room configurations. In order to assess the tracking and mapping performance of our pipeline with and without our _S-Graph_ augmentation, our reported metrics include Average Trajectory Error (ATE) and Map Matching Accuracy (MMA), and the evaluations are conducted against the ground truth provided by the simulator. Due to the absence of odometry from robot encoders, the odometry is solely estimated from LiDAR measurements in all simulated experiments.
**Real Dataset.** We conducted four experiments across three different construction sites. \(C1F1r\) and \(C1F2r\) are conducted on two floors of a construction site whose plan is used in the simulated dataset. \(C2F2r\) and \(C3F2r\) are conducted in two other construction sites. To validate the accuracy of each method in all real-world experiments, we report the MMA of the estimated 3D maps in comparison to the actual 3D map generated from the architectural plan. We employ robot encoders for odometry estimation.
\begin{table}
\begin{tabular}{c|c|c} \hline \hline
**Scene** & World & Description \\ \hline C1F1s & Simulated & 1 L-shaped, 1 squared rooms and 2 corridors. \\ C1F2s & Simulated & 5 squared, 2 elongated rooms and 1 corridor. \\ SE1s & Simulated & 6 squared rooms and 3 corridors. \\ SE23s & Simulated & 5 squared and 2 elongated rooms. \\ SE3s & Simulated & 22 squared rooms and 4 corridors. \\ C1F1r & Real & 1 L-shaped, 1 squared rooms and 2 corridors. \\ C1F2r & Real & 5 squared, 2 elongated rooms and 1 corridor. \\ C2F2r & Real & 7 squared, 2 L-shaped and 3 elongated rooms. \\ C3F2r & Real & 9 squared, 3 L-shaped rooms and 2 corridors. \\ \hline \hline \end{tabular}
\end{table} TABLE I: **Scenes description.** Enumeration of all simulated and real scenes included in the validation. _Room_ shapes can be squared, L-shaped, elongated or corridors.
Fig. 3: **Training performance of our approach on our synthetic dataset.** Green lines represent edges inferred as positive for the respective relation. Black lines represent wall surfaces not included in a cluster. For the rest of the entities, those with the same color are grouped in the same cluster. Note that some _wall surfaces_ are omitted to simulate occlusions.
### _Results and Discussion_
**Average Trajectory Error.** The ATE for the simulated experiments is presented in Tab. II. _S-Graphs_+ with our approach for _room_ and _wall_ detection demonstrates an improvement of 6.8% with respect to _S-Graphs_+ with _Free Space_ for _rooms_. The ablation of _walls_ in _Ours (rooms only)_ shows an improvement of 5.2% even though no new entity types are leveraged. Note that in the simulated scenes that are more complex and similar to real data, _C1F1s_, _C1F2s_ and _SE3s_, our approach is capable of dealing better with edge cases. See examples in Fig. 5. However, when complexity and size decrease, _Free Space_ presents better performance.
**Map Matching Accuracy.** Tab. III presents the point cloud RMSE for simulated and real experiments. _S-Graphs_ with our approach for _room_ and _wall_ detection presents an improvement of 1.8% with respect to _S-Graphs_ with _Free Space_ for _rooms_. In this case, the ablation of _walls_ in _Ours (rooms only)_ still represents an improvement of 0.3%, even though only the same entity type as the baseline is used. The results demonstrate that, even though the MMA was already low in the baseline, the inclusion of better and new factors in _S-Graphs_+ still represents a notable improvement.
**Precision/Recall.** Fig. 4 showcases the performance on the detection of _rooms_ for simulated and real scenarios. _Free Space_ is compared with our ablated module _(rooms only)_. On average, precision is maintained slightly above that of _Free Space_. Note that it is over 80% in all simulated sites and in all real ones but one. The average recall is also maintained, although both methods experience a drop in performance. This is because detections are heavily dependent on the layout of the building and the GNN could not generalize well enough from the training dataset.
_Wall_ detection, on the other hand, cannot be compared against an analogous baseline, as none exists. Its performance is presented in Tab. IV. It is worth mentioning that precision stays at 1.00 across every simulated and real scene, and recall is over 75% in all scenes except one simulated scene. In contrast to _rooms_, the shapes of _walls_ present low variability, which reduces the complexity of generalization and increases performance.
**First Detection Time.** Tab. V compares _Free Space_ with our ablated module _(rooms only)_, which demonstrates a drastic average improvement of 84.3% in the simulated dataset and 62.7% in the real dataset. Note that the detection time decreases drastically in all cases. This is because _Free Space_ needs to observe many points before a cluster can be inferred, whereas our method succeeds on the first observation half of the time.
**Limitations.** Although our experiments demonstrate that augmenting the _S-Graph_ with semantic relations improves both map and trajectory estimation performance, the training for identifying related walls and rooms is carried out separately from these objectives. The exact dependency between a better labeling performance of the GNN and the final mapping and tracking scores is therefore not fully explored.
## VI Conclusion
We presented a novel approach for the detection of _rooms_ and _walls_ using Graph Neural Networks to enrich the scene graph utilized by _S-Graphs+_ in the context of SLAM. Our method unfolds in several steps: (a) Edge Inference: Initially, we infer _same room_ and _same wall_ edges among the _wall surface_ nodes already present in _S-Graphs+_. (b) Clustering: Subsequently, we process these inferred edges to cluster nodes corresponding to each higher-level concept. (c) Subgraph Creation: Finally, we represent these clusters in the form of a subgraph, incorporating it into the existing factors employed in _S-Graphs+_, and seamlessly integrate it into the scene graph for optimization.
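A minimal sketch of the clustering step (b) is shown below: wall-surface nodes connected by positively classified _same room_ edges are grouped into candidate _room_ clusters via connected components. The edge scores, node names and the 0.5 threshold are illustrative assumptions, and the clustering actually used in our pipeline may differ in details.

```python
import networkx as nx

# "Same room" edge probabilities predicted by the GNN for wall-surface nodes
# (illustrative values, not taken from the experiments).
predicted_edges = [("ws1", "ws2", 0.93), ("ws2", "ws3", 0.88), ("ws4", "ws5", 0.17)]

g = nx.Graph()
g.add_nodes_from([f"ws{i}" for i in range(1, 6)])
g.add_edges_from((u, v) for u, v, p in predicted_edges if p > 0.5)  # keep positive edges only

# Every connected component of positive edges becomes one candidate room cluster,
# which would then be added to the scene graph as a higher-level room node.
rooms = [sorted(component) for component in nx.connected_components(g) if len(component) > 1]
print(rooms)  # [['ws1', 'ws2', 'ws3']]
```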
In comparison to the current algorithm used for _rooms_, and given that _walls_ are not yet automatically detected, our approach exhibits notable enhancements in terms of detection time, expressiveness, and generalization attributes. Importantly, these improvements do not compromise the performance of trajectory estimation and mapping.
In future research, our primary objective is to learn the optimal new graph in an end-to-end manner, leveraging the actual performance of the SLAM optimization as feedback. This approach would generate the entire new subgraph, factors included, as the outcome of the GNN process, thereby obviating the necessity for manually crafted rules. We also aim to expand the range of relations we can infer and hence augment the node and edge types integrated into _S-Graphs_.
\begin{table}
\begin{tabular}{l|c c c c c|c} \hline \hline
 & \multicolumn{3}{c|}{**Dataset**} \\ \hline
 & \multicolumn{3}{c|}{**Computation Time** (mean) [ms]} \\ \hline
**Module** & \(C1F1s\) & \(C1F2s\) & \(SE1s\) & \(SE2s\) & \(SE3s\) & Avg \\ \hline
Free Space [1] (baseline) & 76.0 & 19.0 & 19.0 & 115.0 & 160.0 & 77.8 \\
_Ours_ & **2.7** & **2.8** & **10.2** & **8.2** & **37.1** & **12.2** \\ \hline
 & \(C1F1r\) & \(C1F2r\) & \(C2F2r\) & \(C3F2r\) & Avg \\ \hline
Free Space [1] (baseline) & 19.0 & 25.5 & 367.0 & 308.0 & 179.9 \\
_Ours_ & **1.7** & **2.6** & **54.0** & **210.2** & **67.1** \\ \hline \hline \end{tabular}
\end{table} TABLE V: **First Detection Time (FDT)** [s] of rooms employing _S-Graphs+_ with different detection modules on simulated and real data. Our method is substantially faster than the baseline. Best results are boldfaced.
Fig. 5: **Complete final _S-Graph_ after _Free Space_[1] and _Ours_ higher-level entity detections in real and simulated environments. In both of the generations of _our_ method, the _room_ (pink, orange and green cubes) density is higher. Furthermore, only in _our_ approach, _wall_ nodes (blue cubes) are generated. The first two examples show the performance in simulated scenes and the last two, in real scenes. Note the difference from _our_ approach (right column) compared to _free space_ (left column) in the density of inferred _rooms_. Also, note that _walls_ are only inferred by _our_ method.** |
2309.04704 | Analysis of Disinformation and Fake News Detection Using Fine-Tuned
Large Language Model | The paper considers the possibility of fine-tuning Llama 2 large language
model (LLM) for the disinformation analysis and fake news detection. For
fine-tuning, the PEFT/LoRA based approach was used. In the study, the model was
fine-tuned for the following tasks: analysing a text on revealing
disinformation and propaganda narratives, fact checking, fake news detection,
manipulation analytics, extracting named entities with their sentiments. The
obtained results show that the fine-tuned Llama 2 model can perform a deep
analysis of texts and reveal complex styles and narratives. Extracted
sentiments for named entities can be considered as predictive features in
supervised machine learning models. | Bohdan M. Pavlyshenko | 2023-09-09T07:10:19Z | http://arxiv.org/abs/2309.04704v1 | # Analysis of Disinformation and Fake News Detection Using Fine-Tuned Large Language Model
###### Abstract
The paper considers the possibility of fine-tuning Llama 2 large language model (LLM) for the disinformation analysis and fake news detection. For fine-tuning, the PEFT/LoRA based approach was used. In the study, the model was fine-tuned for the following tasks: analysing a text on revealing disinformation and propaganda narratives, fact checking, fake news detection, manipulation analytics, extracting named entities with their sentiments. The obtained results show that the fine-tuned Llama 2 model can perform a deep analysis of texts and reveal complex styles and narratives. Extracted sentiments for named entities can be considered as predictive features in supervised machine learning models.
Keywords: fake news detection, Twitter, news trends, frequent itemsets, transformers, deep learning, users' communities, Large Language Model, Llama 2, LLM fine-tuning, PEFT, LoRA.
###### Contents
* 1 Introduction
* 2 Methods of Fake News Detection on Twitter
* 3 Parameter-efficient Fine-Tuning Llama 2 LLM
* 4 Testing Fine-Tuned Llama 2 Model
* 5 Conclusion
* 6 Disclaimer
* A Analysis of Fake News And Extracting Named Entities
* A.1 Analysis of News
* A.2 Extracting Named Entities
## 1 Introduction
Disinformation and fake news are among the most pressing problems nowadays. There are different approaches to producing disinformation, e.g. using false or non-essential facts that lead to predetermined conclusions, or using an offensive, emotional style with incorrect and biased emphasis on facts, among other techniques. News has an essential impact on many areas of society, politics and business. That is why one can see many attempts to produce manipulative and fake news in order to provoke a specific response in society. One of the most horrific recent world events is the Russian invasion of Ukraine on February 24, 2022. It has caused a large flow of news on social networks, including manipulative and fake news produced to shape a specified explanation and justification of the invasion.
Large language models (LLMs), due to their transformer structure with an attention mechanism, can help analyse complex texts and reveal text styles. LLMs such as ChatGPT show high efficiency in the analysis of complex texts. Nowadays, we can observe the emergence of many new, smaller open-source LLMs, e.g. Llama, Falcon, GPT4All, GPT-J, etc. Open-source LLMs can be fine-tuned for specific custom problems and deployed on custom servers, e.g. in cloud computing services such as AWS or GCP. LLMs have some new features compared to conventional transformer-based language models. One of them is zero-shot and few-shot learning, which consists in good model performance when it is shown only a few training examples, or even no examples at all but only instructions describing what should be done. Another important feature is reasoning, whereby a model can generate new patterns and conclusions based on an input prompt and facts known by the model that were not included directly during the training process. So, the model can generate analytical texts with unexpected but useful chains of thought. One of the approaches to using LLMs is based on retrieval augmented generation (RAG), in which the results of other services, e.g. a relational database, semantic search or a graph database, are included in the input prompt for the LLM. In this case, the response can be treated as a combination of external results and LLM knowledge. In [1], different approaches for the analysis of news trends on Twitter have been considered. The obtained results show that an effective system for detecting fake and manipulative news can be developed using a combined neural network which consists of three concatenated subnetworks. Discussions on social networks about companies' behavior have some impact on their business and their stock prices on the stock market. To analyze such an impact and make a risk assessment, Bayesian regression can be used. Using the theory of frequent itemsets and association rules along with thematic fields of keywords makes it possible to reveal the semantic structure of entities in news messages. LLMs are also being effectively used for analysing financial data and news. In [2], we use the fine-tuned Llama 2 LLM model for financial news analytics and study the possibility of fine-tuning Llama 2 for the multitask analysis of financial news. For fine-tuning, the PEFT/LoRA based approach was used. The obtained results show that the fine-tuned Llama 2 model can perform a multitask financial news analysis with a specified structure of response, where part of the response can be structured text and another part of the data can have JSON format for further processing.
In this study, we consider and test the fine-tuned Llama 2 LLM model [3] on news datasets and propaganda narratives: analysing a text, highlighting its main points, summarizing it and extracting named entities with appropriate sentiments. The main goal of this study is to consider the possibility of using a fine-tuned Large Language Model (LLM) for detecting and analysing disinformation, fake news, propaganda narratives and manipulations in news messages.
## 2 Methods of Fake News Detection on Twitter
Let us consider some of our previous results on methods for informational trend analytics and fake news detection in tweets. In the paper [1], we consider different approaches to the analysis of news trends and the detection of fake news on Twitter. For the analysis and case study, informational trends on Twitter caused by the Russian invasion of Ukraine in 2022 have been studied. One of the goals claimed by Russia was the 'denazification' of Ukraine. Another allegation of Russian propaganda was that Ukraine was developing biological weapons in special laboratories. A deep learning approach to fake news detection has been analyzed, and the use of the theory of frequent itemsets, association rules and graph theory for news trend analytics has been considered.
Tweets, the messages of Twitter microblogs, have a high density of semantically important keywords. Different studies on fake news detection on Twitter are considered in the papers [4, 5, 6]. In [7, 8, 9], we study different approaches to the analysis of messages on Twitter, as well as the use of tweet features for forecasting different kinds of events. The work [10] considers a number of approaches to forming different predictive features from tweet data sets and using them in predictive analysis for decision-making support.
As fake news, we consider news information which is not true, as well as information which may contain real facts but with incorrectly placed accents and focuses that lead to distorted conclusions and an incorrect understanding of the underlying processes. For our analysis, we considered informational trends caused by the Russian invasion of Ukraine in 2022. In the study, we also consider the possible impact of informational trends on different companies working in Russia during this conflict.
Figures 1-3 show the time series of tweet counts for different queries. As the results show, for the 'ukraine nazi' thematic field, the discussion of the underlying problems rose dramatically after February 24, the date of the Russian invasion of Ukraine. The number of tweets related to this theme before that date was at a minimum level, which itself leads to the conclusion that the alleged Nazi problem in Ukraine was just a formal pretext to justify the invasion. Another claim of Russian propaganda was about biological weapons that were allegedly being developed in Ukrainian laboratories (Figure 3). For instance, it was claimed that a special virus was being developed that was meant to be distributed through bats and migratory birds.
Let us consider the deep learning approach we used for fake news detection in tweets. Fake news can be detected and revealed by analyzing facts and comparing them with reality and other news sources. Manipulative news, however, is typically amplified artificially in different ways, e.g. by retweeting manipulative tweets many times using different user accounts. Some accounts can be artificially created bots, while others belong to real users. This makes it possible to detect fake news with an approach that analyzes the patterns of user behavior. Fake news also exhibits specific patterns in the message text. Both user behavior and text patterns can be captured by deep machine learning algorithms. As the features for a predictive model, we used tweet texts and the lists of usernames of users who retweeted those tweets. The ML model consists of several concatenated neural subnetworks: a subnetwork with a DistilBERT transformer which ingests the tokens of tweet texts, a subnetwork with an embedding layer with averaging which ingests the mixture of encoded words of tweet texts and the lists of usernames of retweeters, as well as a subnetwork for the components of the truncated singular value decomposition of the TF-IDF matrix of the lists of usernames of retweeters. Figure 4 shows the structure of the deep learning model for fake and manipulative news detection. For our case study, the loaded tweets with the thematic fields 'ukraine nazi' and 'ukraine biological weapon' were used.
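A compact sketch of such a three-branch network is given below (in PyTorch). The hidden sizes, vocabulary size and dimensionality of the truncated SVD features are assumptions made for illustration and are not the exact values used in [1].

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class FakeNewsClassifier(nn.Module):
    """Three concatenated subnetworks: DistilBERT over tweet tokens, an averaging
    embedding bag over mixed text/retweeter tokens, and a dense branch over
    truncated-SVD components of the retweeter TF-IDF matrix."""

    def __init__(self, mixed_vocab_size=50_000, svd_dim=64):
        super().__init__()
        self.text_encoder = AutoModel.from_pretrained("distilbert-base-uncased")
        self.mixed_embedding = nn.EmbeddingBag(mixed_vocab_size, 64, mode="mean")
        self.svd_branch = nn.Sequential(nn.Linear(svd_dim, 32), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(768 + 64 + 32, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, input_ids, attention_mask, mixed_tokens, svd_features):
        # [CLS]-position representation of the tweet text from DistilBERT.
        text = self.text_encoder(input_ids=input_ids,
                                 attention_mask=attention_mask).last_hidden_state[:, 0]
        mixed = self.mixed_embedding(mixed_tokens)   # averaged text + retweeter tokens
        svd = self.svd_branch(svd_features)          # SVD components of the TF-IDF matrix
        return self.head(torch.cat([text, mixed, svd], dim=-1))  # fake/manipulative logit
```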
Figure 3: Time series of tweets for the thematic field ’ukraine biological weapon’.
Figure 2: Time series of tweets for the thematic field ’ukraine nazi’.
Figure 1: Time series of tweets for the query ’ukraine’.
For model training and evaluation, tweet datasets with the lists of users who retweeted the tweets were created. For the analysis, only tweets above a specified retweet-count threshold were included. The dataset was labeled using the tweet ids, usernames and hashtags of tweets which can be treated as fake or manipulative. When testing this model, the F1 score on the validation dataset was 0.95 [1].
In the paper [1], we also consider the use of frequent itemsets and association rules for analysing tweets. Frequent itemsets and association rules can be used in text data analysis to identify and analyze sets of objects which often occur together in large data arrays and are characterized by certain features. Figure 5 shows the graph of frequent itemsets which describes the semantic structure of entities for a specified thematic field. Figure 6 shows a similar calculation for the thematic field 'ukraine biological weapon'.
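To illustrate how such frequent itemsets and association rules can be extracted, the following sketch applies the apriori algorithm from the mlxtend package to a toy set of keyword 'transactions'; the keyword sets and thresholds are invented for illustration only.

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Each tweet is reduced to a "transaction" of semantically important keywords.
transactions = [
    ["ukraine", "nazi", "denazification"],
    ["ukraine", "biological", "weapon", "laboratories"],
    ["ukraine", "nazi", "invasion"],
]

encoder = TransactionEncoder()
onehot = pd.DataFrame(encoder.fit(transactions).transform(transactions),
                      columns=encoder.columns_)

itemsets = apriori(onehot, min_support=0.3, use_colnames=True)   # frequent itemsets
rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```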
The relationships among users can be considered as a graph, where vertices denote users and edges denote their connections. Using graph mining algorithms, one can detect user communities and rank users by various characteristics, such as _Hub, Authority, PageRank, Betweenness_. To identify user communities, we used the _Community Walktrap_ algorithm, and to visualize them we used the _Fruchterman-Reingold_ algorithm, both implemented in the package _'igraph'_[11] for the \(R\) programming language environment. The _Community Walktrap_ algorithm searches for densely connected subgraphs, also called communities, via random walks [12]. A graph showing the relationships between users can be laid out with the Fruchterman-Reingold algorithm [13]. The qualitative structure of user connections can be used for aggregating different quantitative time series and, in such a way, creating new features for predictive models which can be used, for example, for predicting target variables. Figure 7 shows user connections and the revealed communities for the subset of tweets related to the trends under consideration. The results show that some communities marked by different colors are highly isolated and have only a few connections outside. Such communities can be treated as suspicious, since artificially created communities for amplifying manipulative news are also highly isolated and their activity is often concentrated on amplification by retweeting tweets from a limited set of users. Therefore, the numerical characteristics of user communities can have a predictive potential.
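The same community and centrality analysis can be sketched with the Python bindings of igraph (the study itself used the R implementation); the retweet edges below are placeholders rather than real data.

```python
import igraph as ig

# Directed retweet interactions: (retweeter, author of the original tweet).
edges = [("user_a", "user_b"), ("user_c", "user_b"),
         ("user_a", "user_c"), ("user_d", "user_e")]
g = ig.Graph.TupleList(edges, directed=True)

# Community detection via random walks and a layout for visualization.
communities = g.as_undirected().community_walktrap(steps=4).as_clustering()
layout = g.layout_fruchterman_reingold()

# Centrality characteristics that can serve as user-level predictive features.
features = {
    "pagerank": g.pagerank(),
    "betweenness": g.betweenness(),
    "hub": g.hub_score(),
    "authority": g.authority_score(),
}
for name, values in features.items():
    print(name, dict(zip(g.vs["name"], values)))
```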
Figure 4: Deep learning model structure.
Figure 5: Graph of semantic frequent itemsets
Figure 6: Graph of semantic frequent itemsets.
Figure 7: Graph of users’ connections.
## 3 Parameter-efficient Fine-Tuning Llama 2 LLM
The Llama 2 model is presented in the work [3] as a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. The fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. These models outperform open-source chat models on most benchmarks on which they have been tested and, based on human evaluations of helpfulness and safety, may be a suitable substitute for closed-source models. The paper [3] provides a detailed description of the approach to fine-tuning and safety improvements of Llama 2-Chat. Full fine-tuning is applicable when we need to ingest millions of documents into an LLM. For much smaller data, we can use the PEFT/LoRA approach, which consists in fine-tuning a much smaller number of model parameters. These parameters are saved in a model adapter, which is applied to the full model before it is used for text response generation. To optimize GPU usage, 4-bit or 8-bit quantization of the LLM can be chosen for model fine-tuning. State-of-the-art Parameter-Efficient Fine-Tuning (PEFT) methods enable efficient adaptation of pre-trained language models (PLMs) to various downstream applications without fine-tuning all of the model's parameters, since fine-tuning large-scale PLMs is often prohibitively costly. PEFT methods fine-tune only a small number of (extra) model parameters, thereby greatly decreasing the computational and storage costs, and recent state-of-the-art PEFT techniques achieve performance comparable to that of full fine-tuning [14]. The paper [15] considers Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. The PEFT/LoRA approach makes it possible to fine-tune LLMs with sizes near 7B parameters using Google Colab. Along with the text data for fine-tuning, it is important to use prompt instructions which show how to process input prompts. Instructions can be created by human experts and augmented by other LLMs. An LLM generates complex output texts from input prompts, and these outputs can be optimized in different ways. One possible way is selecting appropriate instructions for fine-tuning the model. Another way is using a method called Reinforcement Learning from Human Feedback (RLHF) [16], in which human experts rate the LLM output and then, using these ratings as target labels, the LLM is fine-tuned by supervised training.
Let us consider fine-tuning Llama 2 using the PEFT/LoRA approach. The instructions for fine-tuning can be created in different ways, e.g. by experts or by using LLMs like ChatGPT, GPT-4 or Llama with appropriate prompts which specify the desired LLM response. The dataset of fake news was taken from a Kaggle repository [17, 18]. To the training dataset, we also added Russian propaganda narratives taken from Vox Ukraine [19]. For testing the fine-tuning, we used a dataset with prompt instructions for analysing a text, highlighting the main points of a text, summarizing a text and extracting named entities with appropriate sentiments. The dataset was split into training and validation parts, with the validation part making up 0.25 of the data. The approaches to using PEFT/LoRA for fine-tuning Llama 2 are described in [20, 21]. For the model fine-tuning, the SFTTrainer from the trl package [21] was used. The following training arguments were set up:
'model_name': 'meta-llama/Llama-2-7b-chat-hf', 'learning_rate': 5e-4,
'num_train_epochs': 10, 'max_seq_length': 2048,
'gradient_accumulation_steps': 2, 'load_in_4bit': True,
'bnb_4bit_quant_type': 'nf4', 'lr_scheduler_type': 'linear'
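Putting these settings together, a minimal fine-tuning script could look like the sketch below. The LoRA rank, batch size, dataset file name and output directory are illustrative assumptions, and the exact SFTTrainer arguments may differ between trl versions.

```python
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from peft import LoraConfig
from trl import SFTTrainer

model_name = "meta-llama/Llama-2-7b-chat-hf"
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4",
                                bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=bnb_config,
                                             device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# Instruction dataset with one "[INST] ... [/INST] ..." example per row (file name assumed).
dataset = load_dataset("json", data_files="news_instructions.json", split="train")

peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=2048,
    tokenizer=tokenizer,
    args=TrainingArguments(output_dir="llama2-news-lora", num_train_epochs=10,
                           learning_rate=5e-4, gradient_accumulation_steps=2,
                           per_device_train_batch_size=1, lr_scheduler_type="linear"),
)
trainer.train()
trainer.model.save_pretrained("llama2-news-lora")  # stores only the LoRA adapter weights
```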
## 4 Testing Fine-Tuned Llama 2 Model
Let us consider the results of testing the fine-tuned Llama 2 model. The prompt for analysing a text using the fine-tuned Llama 2 LLM has the following structure:
<s>[INST] <<SYS>> You are an expert in fake news and disinformation revealing and analytics. <</SYS>>
{ prompt } [/INST]
where { prompt } is the prompt, which consists of a question and the text to be analysed. For example, the following question can be used for text analytics: 'Please analyse the following text from the perspective of revealing disinformation and manipulation'. We tested the fine-tuned Llama 2 LLM on the analysis of input texts with the following tasks: analysing a text on revealing disinformation and propaganda narratives, fact checking, fake news detection, manipulation analytics, and extracting named entities with their sentiments. For the analysis, we chose several recent statements, claims and theses of well-known people concerning the Russian war in Ukraine.
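For completeness, the sketch below shows how the fine-tuned adapter can be loaded on top of the base model and queried with a prompt of the structure above; the adapter path and generation parameters are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "meta-llama/Llama-2-7b-chat-hf"
adapter_path = "llama2-news-lora"  # path to the fine-tuned LoRA adapter (assumed)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.float16,
                                             device_map="auto")
model = PeftModel.from_pretrained(model, adapter_path)

system = "You are an expert in fake news and disinformation revealing and analytics."
question = ("Please analyse the following text from the perspective of revealing "
            "disinformation and manipulation.")
news_text = "..."  # the news message to analyse
prompt = f"<s>[INST] <<SYS>> {system} <</SYS>>\n{question} Input text: {news_text} [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```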
Hungarian Prime Minister Viktor Orban gave an interview to Tucker Carlson [22]. The following are the prompt, based on theses and statements from this interview, and the response of the fine-tuned Llama 2 model:
Prompt:
<s>[INST] <<SYS>> You are an expert in fake news and disinformation revealing and analytics. <</SYS>>
Please analyse input text from perspective of revealing disinformation narratives. Input text: Hoping Ukraine will beat Russia "is not just a misunderstanding. It is a lie. It's impossible. Ukraine has no chance of winning the war against Russia. We missed the historic opportunity" to admit Ukraine to NATO, Russia is getting "stronger and stronger." Ukraine's admission to NATO "is not a realistic proposal at this moment, so forget about it". "Call back Trump! Because you know, you can criticize him for many reasons... but ... the best foreign policy of the recent several decades belongs to him. He did not initiate any new war, he treated nicely the North Koreans, and Russia and even the Chinese... and if he would have been the president at the moment of the Russian invasion [of Ukraine], it would be not possible to do that by the Russians." "Trump is the man who can save the Western world" and all of humanity. [/INST]
Response:
The first narrative in the input text is that Ukraine cannot win the war against Russia. This narrative is manipulative as it suggests that Ukraine is incapable of success, which is a subjective claim not based on factual evidence. The second narrative suggests that Russia missed a historic opportunity to admit Ukraine to NATO. This narrative is misleading as it oversimplifies the complex geographical situation and ignores the fact that Ukraine's admission to NATO requires a consensus from all member states, which is difficult to achieve due to various reasons. The third narrative suggests that the US President, Donald Trump, is the only savior who can save the Western world and humanity from Russia. This is manipulative as it uses an emotional appeal to sway readers' opinion, ignoring the fact that Trump's policies are controversial and have been criticized by many. The offensive lexicon used in the text includes terms like 'lie','missed','stronger', 'foreign policy', 'initiate', 'treated','man who can save the Western world', 'humanity'. These words are used to create a negative sentiment towards Ukraine, NATO and the US, while glorifying Russia. The named entities associated with negative sentiment in the text include Ukraine, NATO, and Trump, while Russia is associated with a positive sentiment. The overall tone of the text is manipulative, using emotional language and subjective claims to shape readers' opinion. It manipulates the readers' perceptions by presenting a one-sided view of the situation, which is not based on factual evidence.
The following are the prompt and the fine-tuned Llama 2 model response for Tucker Carlson's claims and statements on the support for Ukraine [23]:
Prompt:
<s>[INST] <<SYS>> You are an expert in fake news and disinformation revealing and analytics. <</SYS>>
Please analyse input text from perspective of revealing disinformation narratives and sarcasm. Input text: See, it's very easy to understand. It is vitally important for you to support Ukraine because it's necessary for Ukraine to be supported by you. Your support is mandatory until it's finished, whatever it is and whatever that means. So shut up and support Ukraine or else you're in trouble. Back when they still taught logic, statements like this were known as tautologies. Something is true because it is. The more you repeat it, the truer it becomes. It's a self-reinforcing reality. There was a time when tautologies were considered illegitimate arguments, not to mention hilariously stupid. [/INST]
Response:
1. The lexicon used is offensive and sarcastic, targeting those who support Ukraine. Phrases like'shut up and support Ukraine or else you're in trouble' and 'tautologies were considered illegitimate arguments' are clearly intended to belittle and mock those who disagree with the author's stance. 2. The narrative presented is manipulative, suggesting that supporting Ukraine is a compulsory action, without providing any factual basis for this claim. This is
misleading and manipulative, as it implies that not supporting Ukraine could lead to negative consequences without providing any evidence.
3. The statement 'your support is mandatory until it's finished, whatever it is and whatever that means' is misleading and ambiguous. It implies that there is no end-point for supporting Ukraine, which is not a fact-based statement.
4. The language used is emotional manipulation, aiming to provoke an emotional response from the reader.
5. The claim that tautologies were 'considered illegitimate arguments' is misleading. While tautologies are not considered legitimate arguments, they are not necessarily hilariously stupid. This statement is offensive and intended to belittle the idea of supporting Ukraine.
6. The essay uses sarcasm and mocking language to undermine the idea of supporting Ukraine. It mocks the idea of tautologies as a basis for argument, which is a clear sign of disinformation and manipulative rhetoric. In conclusion, the essay is heavily loaded with manipulative and offensive language, sarcasm, and disinformation. It contains several narratives that are not supported by factual evidence, and uses emotional manipulation to provoke an emotional response from the reader.
To the training dataset, we also added information from Marc Thiessen's article "America First" Case for Supporting Ukraine [24]. The following is the response of the fine-tuned Llama 2 model to the question: "What are the critical reasons for the United States to maintain its support for Ukraine?":
The United States has been a crucial supporter of Ukraine since the early 2010s, providing financial, military, and diplomatic support. Here are some critical reasons why the United States should maintain its support for Ukraine:
1. Geopolitical Stability: Ukraine is a key player in maintaining regional stability. The conflict in Ukraine has already led to a significant destabilization of the European region, and a failure to support Ukraine could lead to further escalations. The United States should continue to support Ukraine to prevent a wider conflict and maintain the stability of the region.
2. Countering Russian Aggression: The conflict in Ukraine is a direct result of Russian aggression. By supporting Ukraine, the United States is directly countering Russian aggression. Continued support will not only deter Russia from further aggressive actions but also prevent the spread of its influence in the region.
3. Protecting NATO Interests: Ukraine is a crucial buffer zone between Russia and NATO member states. The conflict in Ukraine is not only a matter of national security for Ukraine but also a matter of national security for NATO countries. Continued support will help maintain the security of NATO member states.
4. Supporting Democratic Values: The conflict in Ukraine is not only about geopolitics but also about the fight against authoritarianism and the defense of democratic values. The United States should continue to support Ukraine to uphold democratic principles and counter the influence of authoritarian regimes.
5. Promoting Economic Growth: The conflict in Ukraine has led to significant economic losses, particularly in the eastern regions. The United States should continue to support Ukraine to rebuild and restore its economy.
6. Supporting the Rule of Law: The conflict in Ukraine has also led to significant challenges to the rule of law. The United States should continue to support Ukraine to strengthen its judicial system and fight against corruption.
7. Strengthening International Order: The conflict in Ukraine has challenged the international order, particularly the principles of sovereignty and territorial integrity. The United States should continue to support Ukraine to uphold these principles and strengthen the international order.
8. Protecting Humanitarian Interests: The conflict in Ukraine has led to significant humanitarian crises, particularly in the eastern regions. The United States should continue to support Ukraine to address these crises and protect the human rights of its citizens. In conclusion, the United States has significant reasons to continue its support for Ukraine. It is essential to maintain stability in the region, counter Russian aggression, protect NATO interests, support democratic values, promote economic growth, strengthen the rule of law, and uphold the international order. Moreover, supporting Ukraine is crucial to protecting humanitarian interests and addressing significant humanitarian crises.
The following are the prompt and the fine-tuned Llama 2 response for fact-checking the claims and statements about the destruction of the Kakhovka Dam made in Tucker Carlson's Twitter video episode [25, 26].
Prompt: <s>[INST] <<SYS>> You are an expert in fake news and disinformation revealing and analytics. <</SYS>>
Please analyse input text from perspective of revealing disinformation narratives. Input text: The question is, who did it? Well, let's see. The Kakhovka Dam was effectively Russian. It was built by the Russian government. It currently sits in Russian-controlled territory. The dam's reservoir supplies water to Crimea, which has been for the last 240 years home of the Russian Black Sea Fleet. Blowing up the dam may be bad for Ukraine, but it hurts Russia more. And for precisely that reason, the Ukrainian government has considered destroying it. Write finish conclusion if we can treat this text as fake news. [/INST]
Response:
Analyzing the input text, several narratives can be identified. The first narrative is that the Kakhovka Dam is Russian, a claim that is false. The dam was built by the Soviet government, not the Russian government, and currently sits in Ukrainian territory. The second narrative is that the dam supplies water to Crimea, which is a misleading statement as Crimea is landlocked. The third narrative is that the Ukrainian government considered destroying the dam, which is not supported by any credible sources. The text also uses manipulative and emotional language, such as 'bad for Ukraine, but it hurts Russia more,' which is subjective and designed to elicit an emotional response from the reader. The text also uses a rhetorical device, 'the Ukrainian government has considered destroying it,' which is not supported by any credible sources. In conclusion, this text can be treated as fake news. It contains misinformation and manipulative content, designed to promote a narrative that is not supported by factual
evidence.
Bulgarian President Rumen Radev told reporters on Friday that he wanted "to make it clear that Ukraine insists on fighting this war," the Associated Press reported. "But it should also be clear that the bill is paid by the whole of Europe," he said [27]. The following are the prompt and the fine-tuned Llama 2 response for the analysis of this statement.
Prompt:
<s>[INST] <<SYS>> You are an expert in fake news and disinformation revealing and analytics. <</SYS>>
Please analyse the following text from perspective of revealing disinformation narratives: Ukraine insists on carrying on with this war, but the whole of Europe is paying the bills. [/INST]
Response:
1. The narrative that Ukraine is responsible for the war. This narrative is manipulative as it implies that Ukraine is the aggressor, without providing any evidence to support this claim. 2. The narrative that Europe is paying for Ukraine's war. This is misleading. While it is true that the conflict has affected European countries, the claim that Europe is directly financing the war is not supported by any credible evidence. 3. The narrative that the West is profiting from the war. This is manipulative and lacks factual basis. It implies that the West is exploiting the conflict for its own gains, which is not supported by any reliable sources. 4. The lexicon used in the text is offensive and emotional, aiming to provoke an emotional response from the reader. The named entities involved in this narrative include 'Ukraine' and 'Europe'. The entities are presented in a negative light, suggesting that Ukraine is the aggressor and Europe is being burdened financially. The claim that Ukraine is responsible for the war is a subjective and controversial claim that lacks a factual basis. The claim that Europe is paying for the war is misleading and not supported by any credible evidence. The claim that the West is profiting from the war is manipulative and lacks factual basis. The text uses emotional language and manipulative content to create a negative perception of Ukraine and Europe, while the West is portrayed in a positive light. However, these narratives are not based on factual information and are instead designed to manipulate public opinion.
According to the Hungarian official, to achieve lasting peace, "the Western world, which supports Ukraine, must provide security guarantees to Russia, but not NATO membership for Ukrainians." [28]. The following is the analysis of this thesis by the fine-tuned Llama 2 model.
Prompt:
<s>[INST] <<SYS>> You are an expert in fake news and disinformation revealing and analytics. <</SYS>> Please analyse input text from perspective of revealing disinformation narratives. Input text: According to the Hungarian official, to achieve lasting peace, "the Western world, which supports Ukraine, must provide security guarantees to Russia, but not NATO membership for Ukrainians [/INST]
Response:
Analysis: The input text contains several narratives that are often associated with disinformation and fake news. 1. "The Western world, which supports Ukraine, must provide security guarantees to Russia": This is a manipulative narrative. It suggests that the West's support for Ukraine is a threat to Russia. This is a classic example of a 'chicken game', where the West is portrayed as threatening Russia, which is not supported by factual evidence. 2. "but not NATO membership for Ukrainians": This is a controversial claim. Ukraine has been a member of NATO since 2014, which is not a factual statement. It is also misleading as it implies that Ukraine joining NATO is a threat to Russia, which is not a valid argument. 3. "the Western world, which supports Ukraine, must provide security guarantees to Russia": This statement is manipulative and contains a lack of factual basis. It is presented as if it is a universally accepted truth, which it is not. 4. Contains offensive and emotional language: The use of terms such as "Hungarian official" and "Western world" can be seen as offensive and manipulative. 5. Contains a personal attack: The text implies that the 'Hungarian official' is manipulating the situation. Conclusion: The input text contains disinformation narratives, manipulative language, and lacks factual basis. It is not a reliable source of information. The narratives presented are designed to mislead the reader and create a negative perception of the West and Ukraine.
We also tested the fine-tuned Llama 2 model on analysing and extracting named entities with appropriate sentiments and explanations, using arbitrary news items from the dataset [17, 18]. The results of testing and of extracting entities with corresponding sentiments are given in the Appendix.
## 5 Conclusion
The main goal of the study is to see if the analytical capability of the fine-tuned LLM has a predictive potential and can generate a response with a specified structure which can be useful in disinformation analytics and fake news detection.
For fine-tuning, the PEFT/LoRA based approach was used. The model was fine-tuned for the following tasks: analysing a text on revealing disinformation and propaganda narratives, fact checking, fake news detection, manipulation analytics, and extracting named entities with their sentiments. The obtained results show that the fine-tuned Llama 2 model can perform multitask news analyses with a specified structure of response, where part of the response can be structured text and another part of the data can have a JSON format that is convenient for further processing of the LLM response. The PEFT/LoRA approach makes it possible to use cheap GPU resources for model fine-tuning, available e.g. on Google Colab in the case of small input text sizes and 4-bit quantization. Taking into account that the LLM can generate output in a specified JSON format, the sentiments for named entities can be used as features in predictive models and can be loaded directly into such models via an appropriate API. These features can have predictive potential for different target variables, including quantitative characteristics of companies' behavior on financial markets.
The obtained results of fine-tuning the Llama 2 LLM using the PEFT/LoRA approach can be assessed qualitatively by experts to see whether the news analytics, summarization, highlighting of main points and entity extraction provide essential information to news experts. The considered approach can show high efficiency using small sets of instructions, due to the LLM's ability of few-shot learning, which is not inherent to conventional transformer-based models. To improve the LLM performance, one needs a more precisely created training dataset and the RLHF method to obtain a more optimized LLM. The considered approach can be applied to using extracted entities and sentiments in supervised models with quantitative target variables, e.g. for the analysis of companies' behavior on financial and business markets.
The fine-tuned Llama 2 model was tested on different news data, including theses, statements and claims of well-known persons. The results show that a fine-tuned model can conduct inferences and give chains of thought regarding the text under analysis. The quality and usefulness of the analysis by the LLM depend on the quality of the training datasets. In the test results, one can notice some inaccuracies. This problem can be fixed using a more accurate and specialized training dataset with instructions corrected by an expert. In the LLM responses, we can also find that some generated analytical points are not precise or informative. For testing, we used a very small training dataset. To make the responses more precise and informative, one can use a larger dataset selected by experts using the methods of active learning, with iterative addition of new data that describe problematic topics and points. For testing, we used a small dataset with 4-bit model quantization for the small Llama 2 model with 7B parameters. The results suggest that with a larger dataset, a larger Llama 2 model (e.g. with 13B or 70B parameters) and 8-bit or 16-bit parameter quantization, one can expect an essential improvement in the quality of the analysis conducted by the fine-tuned Llama 2 model.
The obtained results show that the fine-tuned Llama 2 LLM model can be used as a part of a complex deep learning approach for the analysis of news from the perspective of detecting disinformation, propaganda, fake news and manipulations.
## 6 Disclaimer
We are sharing the considered approach, ideas and results for academic purposes only, not as any real conclusions or recommendations.
2306.17843 | Magic123: One Image to High-Quality 3D Object Generation Using Both 2D
and 3D Diffusion Priors | We present Magic123, a two-stage coarse-to-fine approach for high-quality,
textured 3D meshes generation from a single unposed image in the wild using
both2D and 3D priors. In the first stage, we optimize a neural radiance field
to produce a coarse geometry. In the second stage, we adopt a memory-efficient
differentiable mesh representation to yield a high-resolution mesh with a
visually appealing texture. In both stages, the 3D content is learned through
reference view supervision and novel views guided by a combination of 2D and 3D
diffusion priors. We introduce a single trade-off parameter between the 2D and
3D priors to control exploration (more imaginative) and exploitation (more
precise) of the generated geometry. Additionally, we employ textual inversion
and monocular depth regularization to encourage consistent appearances across
views and to prevent degenerate solutions, respectively. Magic123 demonstrates
a significant improvement over previous image-to-3D techniques, as validated
through extensive experiments on synthetic benchmarks and diverse real-world
images. Our code, models, and generated 3D assets are available at
https://github.com/guochengqian/Magic123. | Guocheng Qian, Jinjie Mai, Abdullah Hamdi, Jian Ren, Aliaksandr Siarohin, Bing Li, Hsin-Ying Lee, Ivan Skorokhodov, Peter Wonka, Sergey Tulyakov, Bernard Ghanem | 2023-06-30T17:59:08Z | http://arxiv.org/abs/2306.17843v2 | # Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors
###### Abstract
We present "_Magic123_", a two-stage coarse-to-fine approach for high-quality, textured 3D meshes generation from a _single unposed_ image in the wild using _both 2D and 3D priors_. In the first stage, we optimize a neural radiance field to produce a coarse geometry. In the second stage, we adopt a memory-efficient differentiable mesh representation to yield a high-resolution mesh with a visually appealing texture. In both stages, the 3D content is learned through reference view supervision and novel views guided by a combination of 2D and 3D diffusion priors. We introduce a single trade-off parameter between the 2D and 3D priors to control exploration (more imaginative) and exploitation (more precise) of the generated geometry. Additionally, we employ textual inversion and monocular depth regularization to encourage consistent appearances across views and to prevent degenerate solutions, respectively. Magic123 demonstrates a significant improvement over previous image-to-3D techniques, as validated through extensive experiments on synthetic benchmarks and diverse real-world images. Our code, models, and generated 3D assets are available at [https://github.com/guochengqian/Magic123](https://github.com/guochengqian/Magic123).
Introduction
Despite observing the world in 2D, human beings have a remarkable capability to navigate, reason, and engage with their 3D surroundings. This points towards a deep-seated cognitive understanding of the characteristics and behaviors of the 3D world - a truly impressive facet of human nature. This ability is taken to another level by artists who can produce detailed 3D replicas from a single image. Contrarily, from the perspective of computer vision, the task of 3D reconstruction from an unposed image - which encompasses the creation of geometry and textures - remains an unresolved, ill-posed problem, despite decades of exploration and development [89; 59; 69; 25].
The recent advances in deep learning [23; 37; 55; 71] have allowed an increasing number of 3D generation tasks to become learning-based. Even though deep learning has accomplished significant strides in image recognition [27; 15] and generation [23; 37; 71], the particular task of single-image 3D reconstruction in the wild is still lagging. We attribute this considerable discrepancy in 3D reconstruction abilities between humans and machines to two primary factors: (i) a deficiency in large-scale 3D datasets that impedes large-scale learning of 3D geometry, and (ii) the trade-off between the level of detail and computational resources when working on 3D data.
One possible approach to tackle the problem is to employ 2D priors. The pool of realistic 2D image data available online is voluminous. LAION [75], one of the most extensive text-image pair datasets, aids in training modern image understanding and generation models like CLIP [63] and Stable Diffusion [71]. With the increasing generalization capabilities of 2D generation models, there has been a notable rise in approaches that use 2D models as priors for generating 3D content. DreamFusion [62] serves as a trailblazer for this 2D prior-based methodology for text-to-3D generation. The technique demonstrates an exceptional capacity to guide novel views and optimize a neural radiance field (NeRF) [55] in a zero-shot setting. Drawing upon DreamFusion, recent work such as RealFusion [51] and NeuralLift [94], have endeavored to adapt these 2D priors for single image 3D reconstructions.
Figure 2: **Trade-off between 2D and 3D priors in Magic123**. We compare single image reconstructions for three cases: a teddy bear (common object), two stacked donuts (less common object), and a dragon statue (uncommon object). As shown on the right, Magic123 with only a 2D prior favors geometry exploration, generating 3D content with more imagination while potentially lacking 3D consistency. Magic123 with only 3D prior (on the left) prioritizes geometry exploitation, resulting in precise yet potentially simplified geometry with reduced details. Magic123 thus proposes to use both 2D and 3D prior and introduces a trade-off parameter \(\lambda_{2D/3D}\) to control the geometry exploration and exploitation (see Fig. 8). We provide a balanced point \(\lambda_{2D/3D}\)=1, with which Magic123 consistently offers identity-preserving 3D content with fine-grained geometry and visually appealing texture.
Another approach is to employ 3D priors. Earlier attempts at 3D reconstruction leveraged 3D priors like topology constraints to assist in 3D generation [89; 53; 59; 24]. However, these manually-crafted 3D priors fall short of generating high-quality 3D content. Recently, approaches like Zero-1-to-3 [46] and 3Dim [92] adapted a 2D diffusion model [71] to become view-dependent and utilized this view-dependent diffusion as a 3D prior.
We analyzed the behavior of both 2D and 3D priors and found that they both have advantages and disadvantages. 2D priors exhibit impressive generalization for 3D generation that is unattainable with 3D priors (_e.g._, the dragon statue example in Fig.2). However, methods relying on 2D priors alone inevitably compromise on 3D fidelity and consistency due to their restricted 3D knowledge. This leads to unrealistic geometry like multiple faces (Janus problems), mismatched sizes, inconsistent texture, and so on. An instance of a failure case can be observed in the teddy bear example in Fig.2. On the other hand, a strict reliance on 3D priors alone is unsuitable for in-the-wild reconstruction due to the limited 3D training data. Consequently, as illustrated in Fig.2, while 3D prior-based solution effectively processes common objects (for instance, the teddy bear example in the top row), it struggles with less common ones, yielding oversimplified, sometimes even flat 3D geometries (_e.g._, dragon statue at bottom left).
In this paper, rather than solely relying on a 2D or 3D prior, we advocate for the simultaneous use of both priors to guide novel views in image-to-3D generation. By modulating the simple yet effective tradeoff parameter between the potency of the 2D and 3D priors, we can manage the balance between exploration and exploitation in the generated 3D geometry. Prioritizing the 2D prior can enhance imaginative 3D capabilities to compensate for the incomplete 3D information inherent in a single 2D image, but this may result in less accurate 3D geometry due to a lack of 3D knowledge. In contrast, prioritizing the 3D prior can lead to more 3D-constrained solutions, generating more accurate 3D geometry, albeit with reduced imaginative capabilities and diminished ability to discover plausible solutions for challenging and uncommon cases. We introduce Magic123, a novel image-to-3D pipeline that yields high-quality 3D outputs through a two-stage coarse-to-fine optimization process utilizing both 2D and 3D priors. In the coarse stage, we optimize a neural radiance field (NeRF) [55]. NeRF learns an implicit volume representation, which is highly effective for complex geometry learning. However, NeRF demands significant memory, resulting in low-resolution rendered images passed to the diffusion models, making the output for the image-to-3D task low-quality. Even the more resource-efficient NeRF alternative, Instant-NGP [57], can only reach a resolution of \(128\times 128\) in the image-to-3D pipeline on a 16GB memory GPU. Hence, to improve the quality of the 3D content, we introduce a second stage, employing a memory-efficient and texture-decomposed SDF-Mesh hybrid representation known as Deep Marching Tetrahedra (DMTet) [78]. This approach enables us to increase the resolution up to 1K and refine the geometry and texture of the NeRF separately. In both stages, we leverage a combination of 2D and 3D priors to guide the novel views.
We summarize our contributions as follows:
* We introduce Magic123, a novel image-to-3D pipeline that uses a _two-stage coarse-to-fine_ optimization process to produce _high-quality high-resolution_ 3D geometry and textures.
* We propose to use _2D and 3D priors simultaneously_ to generate faithful 3D content from any given image. The strength parameter of priors allows for the trade-off between geometry exploration and exploitation. Users therefore can play with this trade-off parameter to generate desired 3D content.
* Moreover, we find a balanced trade-off between 2D and 3D priors, leading to reasonably realistic and detailed 3D reconstructions. Using the exact _same_ set of parameters for all examples without any additional reconfiguration, Magic123 achieves state-of-the-art results in 3D reconstruction from single unposed images in both real-world and synthetic scenarios.
## 2 Methodology
We propose a two-stage framework, Magic123, that generates 3D content from a single reference image in a coarse to fine fashion, as shown in Fig. 3. In the coarse stage, Magic123 learns a coarse geometry and texture by optimizing a NeRF. In the fine stage, Magic123 improves the quality of 3D content by directly optimizing a memory-efficient differentiable mesh representation with high-resolution renderings. In both stages, Magic123 uses joint 2D and 3D diffusion priors to trade off geometry exploration and geometry exploitation, yielding reliable 3D content with high generalizability.
### Magic123 pipeline
**Image preprocessing.** Magic123 is a pipeline for object-level image-to-3D generation. Given an image with a background, Magic123 requires a preprocessing step to extract the foreground object. We leverage an off-the-shelf segmentation model, the Dense Prediction Transformer [67], to segment the object. The extracted mask, denoted as \(\mathbf{M}\), is a binary segmentation mask that will be used in the optimization. To prevent flat geometry collapse, _i.e._, the model generating textures that only appear on the surface without capturing the actual geometric details, we further extract the depth map of the reference view with the pretrained MiDaS [68]. The foreground image is used as the input, while the mask and the depth map are used in the optimization as regularization priors. The reference image is assigned a fixed camera pose, assumed to be the front view. More details on the camera settings can be found in Sec. 3.2.
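A rough sketch of this preprocessing step is given below. It uses the rembg matting package as a stand-in for the segmentation model used in the paper and MiDaS (DPT) from torch.hub for the depth prior; the file name and binarization threshold are placeholders.

```python
import numpy as np
import torch
from PIL import Image
from rembg import remove  # stand-in for the paper's segmentation model

# Foreground extraction: the alpha channel of the matted image gives the binary mask M.
rgba = remove(Image.open("reference.png").convert("RGB"))
mask = (np.array(rgba)[..., 3] > 127).astype(np.float32)

# Monocular depth prior for the reference view from a pretrained MiDaS (DPT) model.
midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large").eval()
dpt_transform = torch.hub.load("intel-isl/MiDaS", "transforms").dpt_transform
with torch.no_grad():
    depth = midas(dpt_transform(np.array(rgba.convert("RGB"))))  # relative depth, (1, H', W')
depth = torch.nn.functional.interpolate(depth.unsqueeze(1), size=mask.shape,
                                        mode="bicubic", align_corners=False)[0, 0]
```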
#### 2.1.1 Coarse stage
The coarse stage of our Magic123 is aimed at learning underlying geometry that respects the reference image. Due to its strong ability in handling complex topological changes in a smooth and continuous fashion, we adopt NeRF in this stage.
**Instant-NGP and its optimization.** We leverage Instant-NGP [57] as our NeRF implementation because of its fast inference and ability to recover complex geometry. To reconstruct 3D faithfully from a single image, the optimization of NeRF requires at least two loss functions: (i) the reference view reconstruction supervision; and (ii) the novel view guidance.
**Reference view reconstruction loss \(\mathcal{L}_{rec}\)** is imposed in our pipeline as one of the major loss functions to ensure the rendered image from the reference viewpoint (\(\mathbf{v}^{r}\), assumed to be front view) is as close to the reference image \(\mathbf{I}^{r}\) as possible. We adopt the mean squared error (MSE) loss on both the reference image and its mask as follows:
\[\mathcal{L}_{rec}=\lambda_{rgb}\|\mathbf{M}\odot(\mathbf{I}^{r}-G_{\theta}(\mathbf{v}^{r}))\|_{2}^{2}+\lambda_{mask}\|\mathbf{M}-M(G_{\theta}(\mathbf{v}^{r}))\|_{2}^{2}, \tag{1}\]
where \(\theta\) is the NeRF parameters to be optimized, \(\odot\) is Hadamard product, \(G_{\theta}(\mathbf{v}^{r})\) is NeRF rendered view from \(\mathbf{v}^{r}\) viewpoint, \(M()\) is the foreground mask acquired by integrating the volume density along the ray of each pixel. Since the foreground object is extracted as input, we do not model any background and simply use pure white for the background rendering for all experiments. \(\lambda_{rgb},\lambda_{mask}\) are the weights for the foreground RGB and the mask.
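For concreteness, Eq. 1 amounts to two masked MSE terms, as in the sketch below; the loss-weight values shown are placeholders rather than the ones used in our experiments.

```python
import torch.nn.functional as F

def reference_view_loss(rgb_render, mask_render, rgb_ref, mask_ref,
                        lambda_rgb=5.0, lambda_mask=0.5):
    """Eq. 1: masked RGB reconstruction plus mask (silhouette) reconstruction.
    rgb_* are (B, H, W, 3) tensors and mask_* are (B, H, W) tensors for the front view."""
    loss_rgb = F.mse_loss(mask_ref[..., None] * rgb_render, mask_ref[..., None] * rgb_ref)
    loss_mask = F.mse_loss(mask_render, mask_ref)
    return lambda_rgb * loss_rgb + lambda_mask * loss_mask
```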
**Novel view guidance \(\mathcal{L}_{g}\)** is necessary since multiple views are required to train a NeRF. We follow the pioneering work in text/image-to-3D [62; 94] and use diffusion priors to guide the novel view generation. As a significant difference from previous works, we do not rely solely on a 2D prior or a 3D prior, but we use both of them to guide the optimization of the NeRF. See §2.2 for details.
Figure 3: **The pipeline of Magic123**. Magic123 is a two-stage coarse-to-fine framework for high-quality 3D generation from a reference image. Magic123 is guided by the reference image, constrained by the monocular depth estimation from the image, and driven by a joint 2D and 3D diffusion prior to dream up novel views. At the coarse stage, we optimize an Instant-NGP neural radiance field (NeRF) to reconstruct a coarse geometry. At the fine stage, we initialize a DMTet mesh from the NeRF output and optimize a high-resolution mesh and texture. Textual inversion is used in both stages to generate object-preserving geometry and view-consistent textures.
**Depth prior**\(\mathcal{L}_{d}\) is exploited to avoid overly-flat or caved-in 3D content. Using only the appearance reconstruction losses might yield poor geometry due to the inherent ambiguity of reconstructing 3D content from 2D images: the content of 3D may lie at any distance and still be rendered as the same image. This ambiguity might result in flat or curved-in geometry as noted in previous works [94]. We alleviate this issue by leveraging a depth regularization. A pretrained monocular depth estimator [68] is leveraged to acquire the pseudo depth \(d^{r}\) on the reference image. The depth output \(d\) from the NeRF model from the reference viewpoint should be close to the depth prior. However, due to the value mismatch of two different sources of depth estimation, an MSE loss is not an ideal loss function. We use the normalized negative Pearson correlation as the depth regularization:
\[\mathcal{L}_{d}=\frac{1}{2}\left[1-\frac{\text{cov}(\mathbf{M}\odot d^{r}, \mathbf{M}\odot d)}{\sigma(\mathbf{M}\odot d^{r})\sigma(\mathbf{M}\odot d)} \right], \tag{2}\]
where \(\text{cov}(\cdot)\) denotes covariance and \(\sigma(\cdot)\) measures standard deviation.
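A short PyTorch-style sketch of Eq. (2) (variable names are illustrative):

```python
import torch

def depth_loss(d_pred, d_ref, mask, eps=1e-8):
    """Negative Pearson correlation between the rendered depth and the
    monocular depth prior, both restricted to the foreground mask (Eq. 2)."""
    x = (mask * d_pred).flatten()
    y = (mask * d_ref).flatten()
    x = x - x.mean()
    y = y - y.mean()
    corr = (x * y).mean() / (x.pow(2).mean().sqrt() * y.pow(2).mean().sqrt() + eps)
    return 0.5 * (1.0 - corr)
```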
**Normal smoothness \(\mathcal{L}_{n}\).** One limitation of NeRF is its tendency to produce high-frequency artifacts on the surface of the object. To mitigate this, we enforce the smoothness of the normal maps of the generated 3D model following [51]. We use finite differences of the depth to estimate the normal vector at each point, render a 2D normal map \(\mathbf{n}\) from these normal vectors, and impose a loss as follows:
\[\mathcal{L}_{n}=\|\mathbf{n}-\tau(g(\mathbf{n},k))\|, \tag{3}\]
where \(\tau(\cdot)\) denotes the stopgradient operation and \(g(\cdot)\) is a Gaussian blur. The kernel size of the blurring \(k\) is set to \(9\times 9\).
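A hedged PyTorch sketch of this smoothness term follows; the blur sigma is an assumption (the paper only fixes the \(9\times 9\) kernel), and the stop-gradient \(\tau(\cdot)\) is realized with `detach`:

```python
from torchvision.transforms import GaussianBlur

blur = GaussianBlur(kernel_size=9, sigma=2.0)  # 9x9 kernel as in the paper; sigma assumed

def normal_smoothness_loss(normal_map):
    """Penalize high-frequency structure in a rendered (3, H, W) normal map
    (Eq. 3); the blurred target is detached, i.e. tau(.) is a stop-gradient."""
    target = blur(normal_map).detach()
    return (normal_map - target).abs().mean()
```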
Overall, the coarse stage is optimized by a combination of losses:
\[\mathcal{L}_{c}=\mathcal{L}_{rec}+\mathcal{L}_{g}+\lambda_{d}\mathcal{L}_{d}+ \lambda_{n}\mathcal{L}_{n}, \tag{4}\]
where \(\lambda_{d},\lambda_{n}\) are the weights of depth and normal regularizations.
#### 2.1.2 Fine stage
The coarse stage offers a low-resolution 3D model, possibly with noise due to the tendency of NeRF to create high-frequency artifacts. Our fine stage aims to refine the 3D model and obtain a high-resolution and disentangled geometry and texture. To this end, we adopt DMTet [78], which is a hybrid SDF-Mesh representation and is capable of generating high-resolution 3D shapes due to its high memory efficiency. Note the fine stage is identical to the coarse stage except for the 3D representation and rendering.
DMTet represents the 3D shape in terms of a deformable tetrahedral grid \((V_{T},T)\), where \(T\) denotes the tetrahedral grid and \(V_{T}\) are its vertices. Given a vertex \(v_{i}\in V_{T}\), a signed distance function (SDF) value \(s_{i}\in\mathbb{R}\) and a deformation vector \(\triangle v_{i}\in\mathbb{R}^{3}\) are the parameters learned during optimization to extract a differentiable mesh [78]. The SDF is initialized by converting the density field of the coarse stage, while the deformation is initialized as zero. For the textures, we follow Magic3D [43] and use a neural color field initialized from the color field of the coarse stage. Since differentiable rasterization can be performed efficiently at very high resolution, we always use \(8\times\) the rendering resolution of the coarse stage, which we find has a similar memory consumption to the coarse stage.
### Joint 2D and 3D priors for image-to-3D generation
**2D priors.** Using a single reference image is insufficient to train a complete NeRF model without any priors [100; 45]. To address this issue, DreamFusion [62] proposes to use a 2D diffusion model as the prior to guide the novel views via the proposed score distillation sampling (SDS) loss. SDS exploits a 2D text-to-image diffusion model [72], encodes the rendered view as latent, adds noise to it, and guesses the clean novel view guided by the input text prompt. Roughly speaking, SDS translates the rendered view into an image that respects both the content from the rendered view and the prompt. The SDS loss is illustrated in the upper part of Fig. 4 and is formulated as:
\[\mathcal{L}_{2D}=\mathbb{E}_{t,\epsilon}\left[w(t)(\epsilon_{\phi}(\mathbf{z} _{t};\mathbf{e},t)-\epsilon)\frac{\partial\mathbf{z}}{\partial\mathbf{I}} \frac{\partial\mathbf{I}}{\partial\theta}\right], \tag{5}\]
where \(\mathbf{I}\) is a rendered view, and \(\mathbf{z}_{t}\) is the noisy latent by adding a random Gaussian noise of a time step \(t\) to the latent of \(\mathbf{I}\). \(\epsilon,\epsilon_{\phi}\), \(\phi\), \(\theta\) are the added noise, predicted noise, parameters of the 2D diffusion prior, and the parameters of the 3D model. \(\theta\) can be MLPs of NeRF for the coarse stage, or SDF, triangular deformations, and color field for the fine stage. DreamFusion [62] further points out that the Jacobian term of the image encoder \(\frac{\partial\mathbf{z}}{\partial\mathbf{I}}\) in Eq. (5) can be further eliminated, making the SDS loss much more efficient in terms of both speed and memory. In our experiments, we utilize the SDS loss with Stable Diffusion [71] v1.5 as our 2D prior. The rendered images are interpolated to \(512\times 512\) as required by the image encoder in [71].
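The following is a hedged sketch of how this SDS gradient is commonly realized in code, assuming a diffusers-style latent-diffusion UNet and noise scheduler (the helper and its names are illustrative, not the exact Magic123 implementation; the choice of \(w(t)\) shown is one common option):

```python
import torch

def sds_loss_2d(latents, text_emb, unet, scheduler):
    """Score distillation with a 2D text-to-image prior (Eq. 5).

    latents:  encoded rendering of a novel view, e.g. shape (1, 4, 64, 64)
    text_emb: prompt embedding used as guidance
    """
    n_train = scheduler.config.num_train_timesteps
    t = torch.randint(20, n_train, (1,), device=latents.device)
    noise = torch.randn_like(latents)
    noisy = scheduler.add_noise(latents, noise, t)
    with torch.no_grad():
        eps_pred = unet(noisy, t, encoder_hidden_states=text_emb).sample
    w = (1.0 - scheduler.alphas_cumprod.to(latents.device)[t]).view(-1, 1, 1, 1)
    grad = w * (eps_pred - noise)
    # Surrogate objective: its gradient w.r.t. `latents` equals `grad`,
    # so the diffusion model itself is never backpropagated through.
    return (grad.detach() * latents).sum()
```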
**Textual inversion.** Note that the prompt \(\mathbf{e}\) we use for each reference image is not pure text chosen through tedious prompt engineering. Using pure text for image-to-3D generation most likely results in inconsistent texture and geometry due to the limited expressiveness of human language. For example, using "A high-resolution DSLR image of a colorful teapot" will generate geometry and colors that do not respect the reference image. We thus follow RealFusion [51] and leverage the same textual inversion [20] technique to acquire a special token <_e_> that represents the object in the reference image. We use the same prompt for all examples: "A high-resolution DSLR image of <_e_>". We find that, with the textual inversion technique, Stable Diffusion generates the teapot with a texture and style much closer to the reference image than without it.
Overall, the 2D diffusion priors [62, 52, 43] exhibit a remarkable capacity for exploring the space of geometry, thereby facilitating the generation of diverse geometric representations with a heightened sense of imagination. This exceptional imaginative capability compensates for the inherent limitations associated with the availability of incomplete 3D information in a single 2D image. Moreover, the utilization of 2D prior-based techniques for 3D reconstruction reduces the likelihood of overfitting in certain scenarios, owing to their training on an extensive dataset comprising over a billion images. However, it is crucial to acknowledge that the reliance on 2D priors may introduce inaccuracies in the generated 3D representations, thereby potentially deviating from true fidelity. This low-fidelity generation happens because 2D priors lack 3D knowledge. For instance, the utilization of 2D priors may yield imprecise geometries, such as Janus problems and mismatched sizes as depicted in Fig. 2 and Fig. 8.
#### 2.2.1 3D prior
Using only the 2D prior is not sufficient to capture detailed and consistent 3D geometry. Zero-1-to-3 [46] thus proposes a 3D prior solution. Zero-1-to-3 finetunes Stable Diffusion into a view-dependent version on Objaverse [14], the largest open-source 3D dataset, which consists of 818K models. Zero-1-to-3 takes a reference image and a viewpoint as input and can generate a novel view from the given viewpoint. Zero-1-to-3 can thereby be used as a strong 3D prior for 3D reconstruction. Its usage in an image-to-3D generation pipeline with the SDS loss [62] is formulated as:
\[\mathcal{L}_{3D}=\mathbb{E}_{t,\epsilon}\left[w(t)(\epsilon_{\phi}(\mathbf{z}_{t};\mathbf{I}^{r},t,R,T)-\epsilon)\frac{\partial\mathbf{I}}{\partial\theta}\right], \tag{6}\]
where \(R,T\) are the camera poses passed to Zero-1-to-3, the view-dependent diffusion model. The difference between using the 3D prior and the 2D prior is illustrated in Fig. 4, where we show that the 2D prior uses text embedding as guidance while the 3D prior uses the reference view \(\mathbf{I}^{r}\) with
Figure 4: **2D _vs._ 3D Diffusion priors. Magic123 uses Stable Diffusion [71] as the 2D prior and viewpoint-conditioned diffusion model Zero-1-to-3 [46] as the 3D prior. Stable Diffusion takes the noisy rendered view and a text prompt as input, while Zero-1-to-3 uses additionally the novel view camera pose as input, creating a 3D-aware prior for Magic123.**
the novel view camera poses as guidance. The 3D prior utilizes camera poses to encourage 3D consistency and enable the usage of more 3D information compared to the 2D prior.
Overall, the utilization of 3D priors demonstrates a commendable capacity for effectively harnessing the expansive realm of geometry, resulting in the generation of significantly more accurate geometric representations compared to their 2D counterparts. This heightened precision particularly applies when dealing with objects that are commonly encountered within the pre-trained 3D dataset. However, it is essential to acknowledge that the generalization capability of 3D priors is comparatively lower than that of 2D priors, thereby potentially leading to the production of geometric structures that may appear implausible. This low generalization results from the limited scale of available 3D datasets, especially in the case of high-quality real-scanned objects. For instance, in the case of uncommon objects, the employment of Zero-1-to-3 often tends to yield overly simplified geometries, _e.g_. flat surfaces without details in the back view (see Fig. 2 and Fig. 8).
#### 2.2.2 Joint 2D and 3D priors
We find that the 2D and 3D priors are complementary to each other. Instead of relying solely on 2D or 3D prior, we propose to use both priors in 3D generation. The 2D prior is used to _explore_ the geometry space, favoring high imagination but might lead to inaccurate geometry. We name this characteristic of the 2D prior as _geometry exploration_. On the other hand, the 3D prior is used to _exploit_ the geometry space, constraining the generated 3D content to fulfill the implicit requirement of the underlying geometry, favoring precise geometry but with less generalizability. In the case of uncommon objects, the 3D prior might result in over-simplified geometry. We name this feature of using the 3D prior as _geometry exploitation_. In our image-to-3D pipeline, we propose a new prior loss for the novel view supervision to combine both 2D and 3D priors:
\[\mathcal{L}_{g}=\mathbb{E}_{t_{1},t_{2},\epsilon_{1},\epsilon_{2}}\left[w(t)\left[\lambda_{2D/3D}(\epsilon_{\phi_{2D}}(\mathbf{z}_{t_{1}};\mathbf{e},t_{1})-\epsilon_{1})+\lambda_{3D}(\epsilon_{\phi_{3D}}(\mathbf{z}_{t_{2}};\mathbf{I}^{r},t_{2},R,T)-\epsilon_{2})\right]\frac{\partial\mathbf{I}}{\partial\theta}\right], \tag{7}\]
where \(\lambda_{2D/3D}\) and \(\lambda_{3D}\) determine the strength of the 2D and the 3D prior, respectively. Weighting \(\lambda_{2D/3D}\) more heavily leads to more geometry exploration, while weighting \(\lambda_{3D}\) more heavily results in more geometry exploitation. However, tuning two parameters at the same time is not user-friendly. Interestingly, through both qualitative and quantitative experiments, we find that Zero-1-to-3, the 3D prior we use, is much more tolerant to \(\lambda_{3D}\) than Stable Diffusion is to \(\lambda_{2D}\). When only the 3D prior is used, _i.e_. \(\lambda_{2D}=0\), Zero-1-to-3 generates consistent results for \(\lambda_{3D}\) ranging from 10 to 60. On the contrary, Stable Diffusion is rather sensitive to \(\lambda_{2D}\): when setting \(\lambda_{3D}\) to \(0\) and using the 2D prior only, the generated geometry varies a lot when \(\lambda_{2D}\) is changed from \(1\) to \(2\). This observation leads us to fix \(\lambda_{3D}=40\) and to rely on tuning \(\lambda_{2D}\) to trade off geometry exploration against exploitation. We set \(\lambda_{2D/3D}=1.0\) for all results throughout the paper, but this value can be tuned according to the user's preference. More details and discussion on the choice of the 2D and 3D prior weights are available in Sec. 3.4.
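A compact sketch of how the two guidance terms are combined per optimization step, using the default weights above (the `sd_prior` and `zero123_prior` callables are hypothetical wrappers around the Stable Diffusion and Zero-1-to-3 priors, respectively):

```python
def joint_guidance_loss(latents, text_emb, ref_image, camera_pose,
                        sd_prior, zero123_prior, lam_2d=1.0, lam_3d=40.0):
    """Joint 2D + 3D novel-view guidance (Eq. 7) for one rendered view."""
    loss_2d = sd_prior(latents, text_emb)                      # geometry exploration
    loss_3d = zero123_prior(latents, ref_image, camera_pose)   # geometry exploitation
    return lam_2d * loss_2d + lam_3d * loss_3d
```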
## 3 Experiments
### Datasets
**NeRF4.** We introduce a NeRF4 dataset that we collect from 4 scenarios, chair, drums, ficus, and microphone, out of the 8 test examples from the synthetic NeRF dataset [55]. These four scenarios cover complex objects (drums and ficus), a hard case (the back view of the chair), and a simple case (the microphone). The other four examples are removed since they are not subject to the front view assumption, requiring further camera pose estimation or a manual tuning of the camera pose, which is out of the scope of this work.
**RealFusion15.** We further use the dataset collected and released by RealFusion [51], consisting of 15 natural images that include bananas, birds, cacti, barbie cakes, cat statues, teapots, microphones, dragon statues, fishes, cherries, and watercolor paintings _etc_.
Figure 5: **Qualitative comparisons on image-to-3D generation.** We compare Magic123 to recent methods (Point-E [58], ShapeE [34], 3DFuse [77], RealFusion [51], and Zero-1-to-3 [46]) for generating 3D objects from a single unposed image (the leftmost column). On top, we show results on the RealFusion15 dataset, and on the bottom, we show results on the NeRF4 dataset.
### Implementation details
**Optimizing the pipeline.** We use _exactly the same_ set of hyperparameters for all experiments and do not perform any per-object hyperparameter optimization. Both the coarse and fine stages are optimized using Adam with a \(0.001\) learning rate and no weight decay for \(5,000\) iterations. \(\lambda_{rgb},\lambda_{mask},\lambda_{d}\) are set to \(5,0.5,0.001\) for both stages. \(\lambda_{2D}\) and \(\lambda_{3D}\) are set to \(1\) and \(40\) for the first stage and are lowered to \(0.001\) and \(0.01\) in the second stage to alleviate oversaturated textures during refinement. We adopt the Stable Diffusion [80] model v1.5 as the 2D prior. The guidance scale of the 2D prior is set to \(100\) following [62]. For the 3D prior, Zero-1-to-3 [46] (the version finetuned for \(105,000\) iterations) is leveraged. The guidance scale of Zero-1-to-3 is set to \(5\) following [46]. The NeRF backbone is implemented by three layers of multi-layer perceptrons with \(64\) hidden dimensions. Regarding lighting and shading, we keep nearly the same setup as [62]; the difference is that we use normal shading for the first \(3,000\) iterations of the first stage to focus on learning geometry. For the remaining iterations, as well as the fine stage, we use diffuse shading with probability \(0.75\) and textureless shading with probability \(0.25\). The rendering resolutions are set to \(128\times 128\) and \(1024\times 1024\) for the coarse and the fine stage, respectively.
**Camera setting.** Since the reference image is unposed, we assume its camera parameters are as follows. First, the reference image is assumed to be shot from the front view, _i.e_. polar angle \(90^{\circ}\), azimuth angle \(0^{\circ}\). Second, the camera is placed \(1.8\) meters from the coordinate origin, _i.e_. the radial distance is \(1.8\). Third, the field of view (FOV) of the camera is 40\({}^{\circ}\). We highlight that the 3D reconstruction performance is not sensitive to camera parameters, as long as they are reasonable, _e.g_. FOV between \(20\) and \(60\), and radial distance between \(1\) to \(4\) meters. Note this camera setting works for images subject to the front-view assumption. For images taken deviating from the front view, a manual change of polar angle or a camera estimation is required.
### Results
**Evaluation metrics.** For a comprehensive evaluation, we adhere to the metrics employed in prior studies [94; 51], namely PSNR, LPIPS [104], and CLIP-similarity [63]. PSNR and LPIPS are measured in the reference view to gauge reconstruction quality and perceptual similarity. CLIP-similarity computes the average CLIP embedding similarity between each rendered novel view and the reference image, measuring 3D consistency through appearance similarity across novel views and the reference view.
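A hedged sketch of the CLIP-similarity metric using the Hugging Face CLIP interface (the exact CLIP backbone used by prior work may differ; the checkpoint name below is an assumption):

```python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_similarity(ref_image, rendered_views):
    """Average cosine similarity between the CLIP embedding of the reference
    image and those of novel-view renderings (all PIL images)."""
    inputs = processor(images=[ref_image] + list(rendered_views),
                       return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    feats = feats / feats.norm(dim=-1, keepdim=True)
    return (feats[1:] @ feats[0]).mean().item()
```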
**Quantitative and qualitative comparisons.** We compare Magic123 against the state-of-the-art Point-E [58], Shap-E [34], 3DFuse [77], NeuralLift [94], RealFusion [51] and Zero-1-to-3 [46] on both the NeRF4 and RealFusion15 datasets. For Zero-1-to-3, we adopt the implementation from [84], which yields better performance than the original implementation. For the other works, we use their officially released code. As shown in Table 1, Magic123 achieves Top-1 performance across all metrics in both datasets when compared to previous approaches. It is worth noting that the PSNR and LPIPS results demonstrate significant improvements over the baselines, highlighting the exceptional reconstruction performance of Magic123. The improvement in CLIP-Similarity reflects strong 3D coherence with respect to the reference view. Qualitative comparisons are available in Fig. 5. Magic123 achieves the best results in terms of both geometry and texture. Note how Magic123 greatly outperforms the 3D-prior-based Zero-1-to-3 [46], especially on complex objects like the dragon statue and the colorful teapot in the first two rows, while at the same time greatly outperforming the 2D-prior-based RealFusion [51] on all examples. This performance demonstrates the superiority of Magic123 over the state-of-the-art and its ability to generate high-quality 3D content.
\begin{table}
\begin{tabular}{c|c c c c c c c c} \hline \hline
**Dataset** & **Metrics / Methods** & Point-E [58] & Shap-E [34] & 3DFuse [77] & NeuralLift [94] & RealFusion [51] & Zero-1-to-3 [46] & **Magic123 (Ours)** \\ \hline \multirow{3}{*}{**NeRF4**} & CLIP-Similarity\(\uparrow\) & 0.48 & 0.60 & 0.60 & 0.52 & 0.38 & 0.62 & **0.90** \\ & PSNR\(\uparrow\) & 0.70 & 0.99 & 5.86 & 12.55 & 15.37 & 23.96 & **2.42** \\ & LPIPS\(\downarrow\) & 0.80 & 0.76 & 0.76 & 0.50 & 0.20 & 0.05 & **0.03** \\ \hline \multirow{3}{*}{**RealFusion15**} & CLIP-Similarity\(\uparrow\) & 0.53 & 0.59 & 6.28 & 0.65 & 0.67 & 0.75 & **0.82** \\ & PSNR\(\uparrow\) & 0.98 & 1.23 & 18.87 & 11.08 & 0.67 & 19.49 & **19.50** \\ & LPIPS\(\downarrow\) & 0.78 & 0.74 & 0.80 & 0.53 & 0.14 & 0.11 & **0.10** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Magic123 results.** We show quantitative results in terms of CLIP-Similarity\(\uparrow\) / PSNR\(\uparrow\) / LPIPS\(\downarrow\). The results are shown on the NeRF4 and Realfusion datasets, while **bold** reflects the best.
### Ablation and analysis
Magic123 introduces a coarse-to-fine pipeline for single image reconstruction and a joint 2D and 3D prior for novel view guidance. We provide analysis and ablation studies to show their effectiveness.
**The effect of two stages.** We study in Fig. 6 and Fig. 7 the effect of using the fine stage of our pipeline on the performance of Magic123. We note that a consistent improvement in terms of both qualitative and quantitative performance is observed throughout different setups when the fine stage is combined with the coarse stage. The use of a textured mesh DMTet representation enables higher quality 3D content that fits the objective and produces more compelling and higher resolution 3D consistent visuals.
**3D priors only.** We first turn off the guidance of the 2D prior by setting \(\lambda_{2D}=0\), such that we only use the 3D prior Zero-1-to-3 [46] as the guidance. We study the effects of \(\lambda_{3D}\) by setting it to \(10,20,40,60,80\). Interestingly, we find that Zero-1-to-3 is very robust to the change of \(\lambda_{3D}\): Tab. 2 demonstrates that different \(\lambda_{3D}\) values lead to consistent quantitative results. We thus simply set \(\lambda_{3D}=40\) throughout the experiments since it achieves a slightly better CLIP-similarity score than the other values.
**2D priors only.** We then turn off the 3D prior and study the effect of \(\lambda_{2D}\) in the image-to-3D task. As shown in Tab. 2, with the increase of \(\lambda_{2D}\), an increase in CLIP-similarity is observed. This is due to the fact that a larger 2D prior weight leads to more imagination but unfortunately might result in the Janus problem.
**Combining both 2D and 3D priors and the trade off factor \(\lambda_{2D/3D}\).** In Magic123, we propose to use both 2D and 3D priors. Fig. 6 demonstrates the effectiveness of combining the 2D and 3D priors on the quantitative performance of image-to-3D generation. In Fig. 8, we further analyze the tradeoff hyperparameter \(\lambda_{2D/3D}\) from Eq. (7). We start from \(\lambda_{2D/3D}\)=0 to use only the 3D prior and gradually increase \(\lambda_{2D/3D}\) to \(0.1,0.5,1.0,2,5\), and finally \(\infty\) to use only the 2D prior with \(\lambda_{2D}\)=1 and \(\lambda_{3D}\)=0. The key observations include: (1) Relying solely on the 3D prior results in precise geometry (as observed in the teddy bear) but falters in generating complex and uncommon objects, often rendering oversimplified geometry with minimal details (as seen in the dragon statue); (2) Relying solely on the 2D prior significantly improves performance in conjuring complex scenes like the dragon statue but simultaneously triggers the Janus problem in simple examples such as the bear; (3) As \(\lambda_{2D/3D}\) escalates, the imaginative prowess of Magic123 is enhanced and more details become evident, but there is a tendency to compromise 3D consistency. We assign \(\lambda_{2D/3D}\)=1 as the default
\begin{table}
\begin{tabular}{c|c c c c c|c c c} \hline \hline & \multicolumn{4}{c|}{varying \(\lambda_{3D}\) when \(\lambda_{2D}\)=0} & \multicolumn{4}{c}{varying \(\lambda_{2D}\) when \(\lambda_{3D}\)=0} \\ & _10_ & _20_ & _40_ & _60_ & _80_ & _0.1_ & \(1\) & \(2\) \\ \hline CLIP-similarity\(\uparrow\) & 0.58 & 0.61 & 0.62 & 0.61 & 0.58 & 0.54 & 0.60 & 0.72 \\ PSNR\(\uparrow\) & 23.96 & 24.05 & 23.96 & 23.75 & 23.34 & 23.62 & 24.11 & 22.42 \\ LPIPS\(\downarrow\) & 0.04 & 0.04 & 0.05 & 0.06 & 0.08 & 0.04 & 0.04 & 0.07 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Effects of \(\lambda_{3D}\) and \(\lambda_{2D}\) in Magic123 using only 2D or 3D prior on NeRF4 dataset.**
Figure 6: **Ablation study (quantitative).** We quantitatively compare using the coarse and fine stages in Magic123. In both setups, we ablate utilizing only 2D prior (\(\lambda_{2D}\)=1,\(\lambda_{3D}\)=0), utilizing only 3D prior (\(\lambda_{2D}\)=0,\(\lambda_{3D}\)=40), and utilizing both 2D and 3D priors (\(\lambda_{2D}\)=1,\(\lambda_{3D}\)=40).
value for all examples. However, this parameter could also be fine-tuned for even better results on certain inputs.
## 4 Related work
**Multi-view 3D reconstruction.** Multi-view 3D reconstruction aims to recover the 3D structure of a scene from its 2D RGB images captured from different camera positions [18; 1]. Classical approaches usually recover a scene's geometry as a point cloud using SIFT-based [49] point matching [73; 74]. More recent methods enhance them by relying on neural networks for feature extraction (_e.g_. [96; 30; 97; 101]). The development of Neural Radiance Fields (NeRF) [55; 47] has prompted a shift towards reconstructing 3D as volume radiance [83], enabling the synthesis of photo-realistic novel views [86; 3; 4]. Subsequent works have also explored the optimization of NeRF in few-shot (_e.g_. [32; 39; 16]) and one-shot (_e.g_. [100; 9]) settings. NeRF does not store any 3D geometry explicitly (only the density field), and several works propose to use a signed distance function to recover a scene's surface [99; 90; 98; 91; 13], including in the few-shot setting as well (_e.g_. [102; 103]).
Figure 8: **Setting \(\lambda_{2D/3D}\). We study the effects of \(\lambda_{2D/3D}\) on Magic123. Increasing \(\lambda_{2D/3D}\) leads to a 3D geometry with higher imagination and less precision and vice versa. \(\lambda_{2D/3D}\)=1 provides a good balance and thus is used as the default.**
Figure 7: **Ablation study (qualitative). We qualitatively compare the novel view renderings from the coarse and fine stages in Magic123. We ablate utilizing only 2D prior (\(\lambda_{2D}\)=1,\(\lambda_{3D}\)=0), utilizing only 3D prior (\(\lambda_{2D}\)=0,\(\lambda_{3D}\)=40), and utilizing both 2D and 3D priors (\(\lambda_{2D}\)=1,\(\lambda_{3D}\)=40).**
**In-domain single-view 3D reconstruction.** 3D reconstruction from a single view requires strong priors on the object geometry since even epipolar constraints [26] cannot be imposed in such a setup. Direct supervision in the form of 3D shapes or keypoints is a robust way to impose such constraints for a particular domain, like human faces [5; 6], heads [42; 88], hands [61] or full bodies [48; 50]. Such supervision requires expensive 3D annotations and manual 3D prior creation. Thus, several works explore unsupervised learning of 3D geometry from object-centric datasets (_e.g_. [35; 17; 38; 82; 22; 41; 79]). These methods are typically structured as auto-encoders [93; 36; 44] or generators [7; 81] with explicit 3D decomposition under the hood. Due to the lack of large-scale 3D data, these methods are limited to simple shapes (_e.g_. chairs, cars) and cannot generalize to more complex or uncommon objects (_e.g_. dragons, statues).
**Zero-shot single-view 3D reconstruction.** Foundational multi-modal networks [63; 8; 71] have enabled various zero-shot 3D synthesis tasks. Earlier works employed CLIP [63] guidance for 3D generation [31; 29; 56; 95] and manipulation [60; 40; 21] from text prompts. Modern zero-shot text-to-image generators [66; 71; 65; 72; 2; 19] made it possible to improve these results by providing stronger synthesis priors [62; 87; 52; 10; 12]. DreamFusion [62] is a seminal work that proposed to distill an off-the-shelf diffusion model [72] into a NeRF [55; 4] for a given text query. It sparked numerous follow-up approaches for text-to-3D synthesis (_e.g_. [43; 11]) and image-to-3D reconstruction (_e.g_. [76; 51; 46; 85]). The latter is achieved via additional reconstruction losses on the frontal camera position [46] and/or subject-driven diffusion guidance [64; 43]. The developed methods improved the underlying 3D representation [43; 11; 84] and 3D consistency of the supervision [46; 77]; explored task-specific priors [28; 33; 70] and additional controls [54]. Similar to the recent image-to-3D generators [46; 51], we also follow the DreamFusion [62] pipeline, but focus on reconstructing a high-resolution, textured 3D mesh using joint 2D and 3D priors.
## 5 Conclusion and discussion
This work presents Magic123, a two-stage coarse-to-fine solution for generating high-quality, textured 3D meshes from a _single_ unposed image. By leveraging both 2D and 3D priors, our approach overcomes the limitations of existing studies and achieves state-of-the-art results in image-to-3D reconstruction. The trade-off parameter between the 2D and 3D priors allows for control over the balance between exploration and exploitation of the generated geometry. Our method outperforms previous techniques in terms of both realism and level of detail, as demonstrated through extensive experiments on real-world images and synthetic benchmarks. Our findings contribute to narrowing the gap between human abilities in 3D reasoning and those of machines, and pave the way for future advancements in single image 3D reconstruction. The availability of our code, models, and generated 3D assets will further facilitate research and applications in this field.
**Limitation.** One limitation is that we assume the reference image is taken from the front view. This assumption leads to poor geometry when the reference image does not conform to the front-view assumption, _e.g_. a photo of a dish on a table taken from above: our method will then focus on generating the bottom of the dish and the table instead of the dish geometry itself. This limitation can be alleviated by manually tuning the reference camera pose or by camera estimation. Another limitation of our work is the dependency on the preprocessed segmentation [67] and the monocular depth estimation model [68]; any error in these modules will propagate into the later stages and affect the overall generation quality. Similar to previous work, Magic123 also tends to generate over-saturated textures due to the usage of the SDS loss. The over-saturation issue becomes more severe in the second stage because of the higher resolution.
**Acknowledgement.** The authors would like to thank Xiaoyu Xiang for the insightful discussion and Dai-Jie Wu for sharing Point-E and Shap-E results. This work was supported by the KAUST Office of Sponsored Research through the Visual Computing Center funding, as well as, the SDAIA-KAUST Center of Excellence in Data Science and Artificial Intelligence (SDAIA-KAUST AI). Part of the support is also coming from KAUST Ibn Rushd Postdoc Fellowship program. |
2309.09220 | Improving Speech Inversion Through Self-Supervised Embeddings and
Enhanced Tract Variables | The performance of deep learning models depends significantly on their
capacity to encode input features efficiently and decode them into meaningful
outputs. Better input and output representation has the potential to boost
models' performance and generalization. In the context of
acoustic-to-articulatory speech inversion (SI) systems, we study the impact of
utilizing speech representations acquired via self-supervised learning (SSL)
models, such as HuBERT compared to conventional acoustic features.
Additionally, we investigate the incorporation of novel tract variables (TVs)
through an improved geometric transformation model. By combining these two
approaches, we improve the Pearson product-moment correlation (PPMC) scores
which evaluate the accuracy of TV estimation of the SI system from 0.7452 to
0.8141, a 6.9% increase. Our findings underscore the profound influence of rich
feature representations from SSL models and improved geometric transformations
with target TVs on the enhanced functionality of SI systems. | Ahmed Adel Attia, Yashish M. Siriwardena, Carol Espy-Wilson | 2023-09-17T09:18:04Z | http://arxiv.org/abs/2309.09220v2 | # Improving Speech Inversion Through Self-Supervised Embeddings and Enhanced Tract Variables
###### Abstract
The performance of deep learning models depends significantly on their capacity to encode input features efficiently and decode them into meaningful outputs. Better input and output representation has the potential to boost models' performance and generalization. In the context of acoustic-to-articulatory speech inversion (SI) systems, we study the impact of utilizing speech representations acquired via self-supervised learning (SSL) models, such as HuBERT compared to conventional acoustic features. Additionally, we investigate the incorporation of novel tract variables (TVs) through an improved geometric transformation model. By combining these two approaches, we improve the Pearson product moment correlation (PPMC) scores which evaluate the accuracy of TV estimation of the SI system from 0.7452 to 0.8141, a 6.9% increase. Our findings underscore the profound influence of rich feature representations from SSL models, and improved geometric transformations with target TVs on the enhanced functionality of SI systems.
Ahmed Adel Attia*, Yashish M. Siriwardena*, Carol Espy-Wilson Institute for Systems Research, Electrical and Computer Engineering,
University of Maryland, College Park
Maryland, USA
**Index Terms:** self-supervised learning, speech inversion, HuBERT, tract variables, XRMB
## 1 Introduction
Articulatory data refers to the positions and motion of different articulators in the vocal tract during speech. This data has been shown to be critical in a number of speech applications, such as speech therapy [1] and mental health assessment [2]. Articulatory data is obtained using different imaging techniques, like X-ray Microbeam (XRMB) [3], Electromagnetic Articulography (EMA) [4] and real-time Magnetic Resonance Imaging (rt-MRI) [5]. However, these methods are invasive, expensive, and can be dangerous under prolonged exposure [6]. Acoustic-to-articulatory Speech Inversion (SI) provides an alternative method of estimating the articulatory parameters from the acoustic signal.
Deep Neural Networks (DNNs) have been shown to be effective SI systems [7, 8]. The performance of DNNs can be improved through better input and output feature space representation, and SI systems are no exception. In our previous works, we have shown that SI DNN models' performance can improve through better input representation through audio data augmentation [7], or incorporating source features [8].
Self-Supervised Learning (SSL) has been shown to be an effective method of improving DNN performance through the utilization of unlabeled data in learning speech representations [9, 10]. These representations have been shown to be effective in Automatic Speech Recognition (ASR) systems [11], and in speech separation and enhancement [12]. Recent works have also shown that SSL speech representations have the capacity to improve the performance of SI models for EMA data [13], outperforming conventional acoustic features like Mel-frequency Cepstral Coefficients (MFCCs). Cho et al. [13] extensively evaluated the existing SSL speech representations for the SI task and found that HuBERT-based SSL speech representations [14] work the best over both other SSL features (e.g., wav2vec2, TERA [15]) and conventional acoustic features like MFCCs.
Additionally, the analysis and prediction of raw articulatory data can be challenging. Raw articulatory data is represented in the absolute X-Y coordinates of different articulators, which are closely linked to the speaker's anatomy, leading to inter-speaker variability in pellet positions for the same sound. For that reason, quantifying vocal tract shape is best achieved by measuring the location and degree of the constrictions formed along the vocal tract. These measurements are called Tract Variables (TVs) and can be obtained through geometric transformations of the raw articulatory parameters [16]. In a recent previous work, we presented a novel geometric transformation which improved the performance of SI systems through better output feature space representation [17].
In this work, we combine both approaches, by using HuBERT [9] SSL speech representation to improve the input representation. We also continue our previous work presented in [17] by proposing a new geometric transformation that enhances the performance of SI systems further. We show that using better input and output feature representations lead to
better SI performance and more robust estimated TVs.
We begin with a description of the XRMB dataset in Section 2. We describe our novel TV transformation model in Section 3 and our experiments with SSL speech representations in Section 4. Section 5 outlines the results of our experiments. We end with a conclusion and a discussion of proposed future work in Section 6.
## 2 Articulatory Dataset
The original University of Wisconsin XRMB database [3] consists of naturally spoken isolated sentences and short paragraphs gathered from 32 male and 25 female participants. These speech recordings were accompanied by trajectory data obtained through X-ray microbeam cinematography of the midsagittal plane of the vocal tract. This cinematography tracked the movement of pellets placed on various articulators, including the upper lip (UL), lower lip (LL), tongue tip (T1), tongue blade (T2), tongue dorsum (T3), tongue root (T4), mandible incisor (MANi), and the parasagittally placed mandible molar (MANm).
However, it's worth noting that some of the articulatory recordings in the database were flagged as mistracked. After removing these problematic samples, we were left with a total of 46 speakers (21 males and 25 females) and approximately 4 hours of speech data. In our recent work [18], we reconstructed a large portion of the corrupted articulatory recordings. After adding the aforementioned reconstructed recordings to the original uncorrupted dataset, we were left with approximately 5.3 hours of speech data.
## 3 Novel Tract Variable Transformations
As mentioned above, absolute X-Y coordinate representations of articulatory data is closely linked to speaker anatomy and leads to inter-speaker variability. To remedy this, the raw articulatory features are transformed into TVs using a geometric transformation. In this section, we outline a novel geometric transformation to extract TVs that are more closely related to the acoustic signal, which is a continuation of our work presented in [17].
### Articulatory Model
#### 3.1.1 Lips
The lips are modeled using the UL and LL pellets. To describe the degree and location of lip constriction, we define two TVs, Lip Aperture (LA) and Lip Protrusion (LP), respectively. LA is defined as the Euclidean distance between UL and LL. Unlike [17], LP is defined as the horizontal offset of LL from the Y-axis instead of UL, which we empirically show leads to better SI performance. The origin of the X-Y plane is located at the tip of the maxillary incisors, and the X-axis is defined as the maxillary occlusal plane.
\[LA[n]=||UL[n]-LL[n]|| \tag{1}\]
\[LP[n]=LL_{x}[n] \tag{2}\]
#### 3.1.2 Tongue Body
The tongue body is modeled using a circle fitted through T2, T3 and T4. Its constriction can be described using two TVs, namely Tongue Body Constriction Location (TBCL) and Tongue Body Constriction Degree (TBCD). The constriction is measured relative to the extended palatal trace we introduced in [17], which models the hard palate as well as the soft palate and the anterior pharyngeal wall. Figure 2 shows the extended palatal trace.
TBCD is measured as the minimum Euclidean distance between the tongue body circle and the extended palatal trace. We update the definition of TBCL from the one introduced in [17] to be similar to the definition of LP: TBCL is defined as the horizontal offset, from the Y-axis, of the point on the tongue body circle closest to the extended palatal trace, i.e. the point used in the TBCD calculation.
\[TBCD=min_{p\in epal}[min_{x\in TB_{circle}}||p-x||] \tag{3}\]
\[TBCL=-TB[\operatorname{argmin}[TBCD]]_{x} \tag{4}\]
where \(epal\) is the extended palatal trace, and \(TB[\operatorname{argmin}[TBCD]]\) is the point on the tongue body circle closest to the palatal trace.
Figure 1: Pellet placement and TV definition in the XRMB dataset
Figure 2: Extended Palateal Trace With the Anterior Pharyngeal Wall For Speaker JW33
#### 3.1.3 Tongue Tip
The tongue tip is modeled by the T1 pellet. Its constriction can be described by two TVs, Tongue Tip Constriction Location (TTCL) and Tongue Tip Constriction Degree (TTCD). Similar to TBCD and TBCL, TTCD is defined as the minimum Euclidean distance between T1 and the extended palatal trace, and TTCL is the horizontal offset of T1 from the Y-axis.
\[TTCD=min_{p\in{cap}al}[||p-T1||] \tag{5}\]
\[TTCL=-T1_{x} \tag{6}\]
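As an illustration, a small NumPy sketch of the lip and tongue-tip TVs defined in Eqs. (1)-(2) and (5)-(6) is given below; the pellet arrays and the sampled extended palatal trace are hypothetical inputs, and the tongue-body TVs would additionally require the fitted circle of Sec. 3.1.2:

```python
import numpy as np

def lip_tvs(ul, ll):
    """LA and LP from the upper/lower lip pellets; ul and ll are (T, 2)
    arrays of (x, y) positions, origin at the tip of the maxillary incisors."""
    la = np.linalg.norm(ul - ll, axis=1)   # Eq. (1)
    lp = ll[:, 0]                          # Eq. (2): horizontal offset of LL
    return la, lp

def tongue_tip_tvs(t1, epal):
    """TTCD and TTCL from the T1 pellet (T, 2) and the extended palatal
    trace epal, given as a (K, 2) array of sampled trace points."""
    d = np.linalg.norm(epal[None, :, :] - t1[:, None, :], axis=2)
    ttcd = d.min(axis=1)                   # Eq. (5)
    ttcl = -t1[:, 0]                       # Eq. (6)
    return ttcd, ttcl
```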
## 4 Speech Inversion Model Architectures
This section describes the experimented SI model architectures and details on model training.
### SI Architecture with HuBERT features
SSL speech representations, when used in the SI task with EMA data, have been shown to outperform conventional acoustic features (e.g., Mel-spectrograms, Mel-frequency Cepstral Coefficients (MFCCs)) [13]. Here the SSL representations only need to be fine-tuned for the downstream task of speech inversion and can be expected to generalize better even with limited ground-truth articulatory data. Based on the previous work in [13] on using SSL features for the SI task with EMA data, we explored the idea of using HuBERT SSL features [14] as the input acoustic representation to train our best performing Bidirectional Gated Recurrent Unit (BiGRNN) SI architecture.
We used the HuBERT-large model pre-trained on the Libri-Light dataset (60,000 h) to extract the HuBERT speech embeddings. All the audio files (sampled at 16 kHz) are first segmented into 2 second long segments and the shorter ones are zero-padded at the end. The HuBERT embeddings are then extracted from the 2 second long segments using the speechbrain open-source AI toolkit [19]. The HuBERT embeddings are sampled at 50 Hz and have a dimensionality of 1024.
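For reference, a hedged sketch of this extraction step using the Hugging Face HuBERT-large checkpoint pre-trained on Libri-Light (the paper uses the speechbrain toolkit; the interface below is an equivalent alternative and the names are assumptions):

```python
import torch
from transformers import HubertModel, Wav2Vec2FeatureExtractor

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/hubert-large-ll60k")
hubert = HubertModel.from_pretrained("facebook/hubert-large-ll60k").eval()

def hubert_embeddings(wav_segment, sr=16000):
    """wav_segment: 1-D float array holding a 2 s chunk (zero-padded if shorter).
    Returns a (frames, 1024) array of 50 Hz HuBERT-large embeddings."""
    inputs = extractor(wav_segment, sampling_rate=sr, return_tensors="pt")
    with torch.no_grad():
        out = hubert(inputs.input_values)
    return out.last_hidden_state.squeeze(0).numpy()
```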
We used the BiGRNN SI system proposed in [7], and adapted the input layer to match the input dimensionality of the HuBERT embeddings.
### SI Architecture with MFCC features
We trained the same SI system architecture used in [17] which is identical to that discussed in section 4.1 with the only difference being 13 MFCCs used as the input acoustic feature. The MFCCs were extracted using a 20ms Hamming analysis window with a 10ms frame shift. The MFCCs are also utterance wise normalized (z-normalized) prior to model training.
### Model Training
Both SI architectures described above were trained in a similar fashion. The input XRMB dataset was first divided into training, development, and testing sets, such that the training set has utterances from 36 speakers and the development and testing sets have 5 speakers each (3 males, 2 females). None of the training, development and testing sets have overlapping speakers, and hence all the models were trained in a 'speaker-independent' fashion. All the models were implemented with the TensorFlow-Keras machine learning framework. The ADAM optimizer with a starting learning rate of 1e-3 and an exponential learning rate scheduler was used. Both the models with HuBERT and MFCCs were trained with an early stopping criterion (patience=5) monitoring the validation loss on the development set. To choose the best starting learning rate, we did a grid search over [1e-3, 3e-4, 1e-4], whereas to choose the training batch size, we did a similar grid search over [16, 32, 64, 128]. Based on the validation loss, 1e-3 and 32 were chosen as the learning rate and batch size, respectively, for the model with HuBERT features, and 1e-3 and 64 for the model with MFCCs.
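A minimal TensorFlow-Keras sketch of this training setup (the BiGRNN architecture itself follows [7] and is passed in as `model`; the exact decay schedule constants and epoch budget are assumptions, since the paper only states that an exponential scheduler and early stopping were used):

```python
import tensorflow as tf

def compile_and_train(model, x_tr, y_tr, x_dev, y_dev, lr=1e-3, batch_size=32):
    """Adam + exponential LR decay, early stopping on development-set loss."""
    schedule = tf.keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate=lr, decay_steps=1000, decay_rate=0.9)
    model.compile(optimizer=tf.keras.optimizers.Adam(schedule), loss="mse")
    stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                            restore_best_weights=True)
    return model.fit(x_tr, y_tr, validation_data=(x_dev, y_dev),
                     epochs=100, batch_size=batch_size, callbacks=[stop])
```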
## 5 Results
### TV Transformations
In this subsection, we evaluate our new transformation model and compare it to the baseline, which is our previous model introduced in [17]. We evaluate the transformation models by training the same DNN SI model on the same data, excluding any reconstructed files, transformed according to each respective geometric transformation; we call this the 'small dataset'. We also evaluate the SI model when trained on all available training data including reconstructed files, which we call the 'extended dataset'. We argue that the better the SI performance, the more closely related the resulting TVs are to the input acoustic signal. We evaluate the SI model based on the Pearson Product-Moment Correlation (PPMC) between the predicted and ground-truth TVs.
The first part of Table 1 shows the performance of the SI with MFCC input features. Our proposed model outperforms the baseline on average, by 4.3% on the small dataset, with noticeable improvement in LP and TTCL. Training on the extended dataset also improves performance across the board over the small dataset. Overall, the combination of better transformation and more data has improved the performance of the SI system by 5.03%.
However, it is worth noting that improving the TV geometric transformation model was more effective than increasing the size of the training data. This highlights the importance of having better output feature space representation.
### SSL features with new TV transformations
In this subsection, we discuss the effect of using HuBERT speech representations as the input to the SI system, juxtaposed with MFCCs. The two model architectures are discussed in section 4.1 and section 4.2.
Training on the small dataset, the HuBERT representation leads to a tangible improvement in the tongue TVs, namely TBCL, TBCD, TTCL and TTCD, with slight improvements in LA and LP. On average, using HuBERT representations leads to a 2.3% improvement in PPMC scores.
Training on the extended dataset leads to some improvement, although not a significant one when compared to improving the input representation. On average, adding more training data increases the PPMC by 0.32%. This again highlights the effect of the input representation, which was more effective than increasing the training data size. Overall, by combining better input and output representations, along with including more data, we were able to improve the PPMC score from 0.7452 to 0.8141, a 6.9% improvement.
### Estimated TVs with best performing SI systems
Figure 3 shows the estimated LA and constriction degree TVs for an utterance in the test set, by the two SI systems trained with HuBERT and MFCC features. Both the systems have been trained with the 'extended dataset'. As seen in the figure, the differences between the TV estimates by the two models are subtle. But consistent with the PPMC scores in Table 1, it can be seen that both LA and TTCD TVs are estimated better with the HuBERT based model compared to the model trained with MFCCs. It can also be seen that the TBCD is better estimated by the model trained with MFCCs compared to the HuBERT based model.
## 6 Conclusion and Future Work
In this paper, we propose a new geometric transformation to obtain TVs from raw XRMB pellets. We show that our novel TV transformation improves the performance of the SI model over the baseline, which can be attributed to a closer relation between the resulting TVs and the acoustic signal. We further improve the performance of the SI system by using HuBERT speech representations as the input to the SI model. Our findings highlight the importance of efficient input and output feature space representations.
In [17], we highlighted some of the limitations of the TV transformation model we proposed in that paper. In this paper, we tackled a majority of these limitations. However, we still lag behind the transformation proposed in [16] with respect to TBCL, even though we achieve a better PPMC on average. This can be attributed to the fact that, even though we extended the palatal trace towards the anterior pharyngeal wall, we did not extend the tongue body circle beyond T4. Even though this model improved the representation of TBCD, giving a good estimate for the degree of the constriction, not extending the tongue body circle might lead to an inaccurate estimate of the location of the constriction, which in turn would lead to a lower correlation between TBCL and the acoustic signal. We intend to tackle this problem in future work.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline
**Transformation** & **Training Dataset** & **LA** & **LP** & **TBCL** & **TBCD** & **TTCL** & **TTCD** & **Average** \\ \hline \multicolumn{10}{|c|}{**MFCC Input Features**} \\ \hline
**Baseline** & Small Dataset & 0.8679 & 0.5902 & 0.7424 & 0.7801 & 0.5971 & 0.8934 & 0.7452 \\
**Proposed** & Small Dataset & 0.8603 & 0.7104 & 0.7426 & 0.7754 & **0.7422** & 0.8981 & 0.7881 \\
**Proposed** & Extended Dataset & **0.8697** & **0.7250** & **0.7508** & **0.7847** & 0.7407 & **0.9019** & **0.7955** \\ \hline \multicolumn{10}{|c|}{**HuBERT Input Features**} \\ \hline
**Proposed** & Small Dataset & 0.8779 & **0.7243** & **0.7430** & 0.8089 & 0.7865 & **0.9248** & 0.8109 \\
**Proposed** & Extended Dataset & **0.8902** & 0.7142 & 0.7361 & **0.8180** & **0.8032** & 0.9229 & **0.8141** \\ \hline \end{tabular}
\end{table}
Table 1: PPMC between predicted and ground truth TVs for SI systems trained on datasets according to each geometric transformation model, with the MFCCs and HuBERT input features.
Figure 3: LA and constriction degree TVs for the utterance ‘The dormitory is between the house and the school’ estimated by the model trained with HuBERT embeddings (estimated_hubert) and the model trained with MFCCs (estimated_mfcc). Solid blue Line - ground truth, black dotted line - predictions by the HuBERT based model, yellow dotted Line - predictions by MFCC based model. |
2307.16860 | Weak type 1-1 bound of multi-parameter maximal function | We define the multi-parameter maximal function $\mathcal{M}$ as $$
\mathcal{M} f(x)=\sup _{0<h_1,h_2,\cdots,h_n<1} \frac{1}{h_1h_2\cdots
h_n}\left|\int_0^{h_1}\cdots \int_0^{h_n} f(x-P(t_1,\cdots,t_n))
\mathrm{d}t_1\cdots \mathrm{d} t_n\right| $$ where $P(t_1,t_2,\cdots,t_n)$ is a
real-valued multi-parameter polynomial of real variables $t_1,t_2,\cdots,t_n$.
Then, we prove that $\mathcal{M}$ is of weak-type 1-1 with a bound that depends
only on the coefficients of $P(t_1,t_2,\cdots,t_n)$. | Hoyoung Song | 2023-07-31T17:19:39Z | http://arxiv.org/abs/2307.16860v1 | # Weak type 1-1 bound of multi-parameter maximal function
###### Abstract.
We define the multi-parameter maximal function \(\mathcal{M}\) as
\[\mathcal{M}f(x)=\sup_{0<h_{1},h_{2},\cdots,h_{n}<1}\frac{1}{h_{1}h_{2}\cdots h _{n}}\left|\int_{0}^{h_{1}}\cdots\int_{0}^{h_{n}}f(x-P(t_{1},\cdots,t_{n})) \mathrm{d}t_{1}\cdots\mathrm{d}t_{n}\right|\]
where \(P(t_{1},t_{2},\cdots,t_{n})\) is a real-valued multi-parameter polynomial of real variables \(t_{1},t_{2},\cdots,t_{n}\). Then, we prove that \(\mathcal{M}\) is of weak-type 1-1 with a bound that depends only on the coefficients of \(P(t_{1},t_{2},\cdots,t_{n})\).
Key words and phrases:multi parameter maximal function, weak type 1-1, maximal function
###### Contents
* 1 Introduction
* 2 Preliminaries
* 3 Decomposition And The Calderon-Zygmund Singular Integral Theory
* 4 Proof Of Lemma 3.1
* 5 Proof Of Lemma 3.2
## 1. Introduction
If \(T\) is a mapping from \(L^{1}\left(\mathbb{R}^{n}\right)\) to \(L^{1,\infty}\left(\mathbb{R}^{n}\right)\), then we say that \(T\) is a weak-type 1-1 operator if
\[|\{x:Tf(x)>\alpha\}|\leq\frac{C\|f\|_{L^{1}}}{\alpha}\]
where \(C\) is a constant independent of \(f\) and \(\alpha\). In [1], Carbery, Ricci and Wright obtained the uniform weak-type 1-1 bounds of
\[M_{0}f(x)=\sup_{0<h<1}\frac{1}{h}\int_{0}^{h}|f(x-P(s))|\,\mathrm{d}s,\] \[H_{0}f(x)=\mathrm{p.v.}\int_{|s|<1}f(x-P(s))\frac{\mathrm{d}s}{s}\]
where \(P\) is a real-valued polynomial. In the same spirit as the one-parameter maximal operator, we shall consider the multi-parameter, one-dimensional operators
\[Mf(x)=\sup_{0<h_{1},h_{2}<1}\frac{1}{h_{1}h_{2}}\left|\int_{0}^{h_{1}}\int_{0 }^{h_{2}}f(x-P(t_{1},t_{2}))\mathrm{d}t_{1}\mathrm{d}t_{2}\right|,\]
\[\mathcal{M}f(x)=\sup_{0<h_{1},h_{2},\cdots,h_{n}<1}\frac{1}{h_{1}h_{2}\cdots h _{n}}\left|\int_{0}^{h_{1}}\cdots\int_{0}^{h_{n}}f(x-P(t_{1},\cdots,t_{n})) \mathrm{d}t_{1}\cdots\mathrm{d}t_{n}\right|.\]
The \(L^{p}(\mathbb{R})\), \(p>1\), boundedness of \(\mathcal{M}\) is obtained from Ricci and Stein's Theorem 7.1 of [10] combined with de Leeuw's theorem. In [9], Patel showed that \(M\) is of weak-type 1-1 with a bound that depends
only on the coefficients of \(P\). There are also other multi-parameter singular integral studies. In [2], Carbery, Wainger and Wright obtained the necessary and sufficient condition for \(H\) to be bounded on \(L^{p}\), \(1<p<\infty\), using the Newton diagram of \(P\), where
\[Hf(x,y,z)=p\cdot v\cdot\int_{|s|<1}\int_{|t|<1}f(x-s,y-t,z-P(s,t))\frac{\mathrm{ d}s\mathrm{d}t}{st}.\]
In [7], the necessary and sufficient condition for the \(L^{p}\) boundedness of the global version of \(H\) was obtained by Patel. More recently, the multiple Hilbert transforms were studied in [3], [4] and [6], where
\[\mathcal{H}f(x_{1},x_{2},\cdots,x_{n})=\int f(x_{1}-P_{1}(t_{1},\cdots,t_{m}), \cdots,x_{n}-P_{n}(t_{1},\cdots,t_{m}))\frac{\mathrm{d}t_{1}\cdots\mathrm{d}t _{m}}{t_{1}\cdots t_{m}}.\]
Unlike the \(L^{p}\) boundedness, we do not know much about the weak-type 1-1 estimate. We define \(M_{1}f(x,z)\) and \(H_{1}f(x,z)\) as
\[M_{1}f(x,z) =\sup_{0<h<1}\frac{1}{h}\int_{0}^{h}|f(x-s,z-P(s))|\mathrm{d}s,\] \[H_{1}f(x,z) =p.v\cdot\int_{|s|<1}f(x-s,z-P(s))\frac{\mathrm{d}s}{s}.\]
Although we have \(L^{p}\) boundedness of \(M_{1}\) and \(H_{1}\) (independent of the coefficients of \(P\)) for \(1<p<\infty\), weak-type 1-1 bounds are not known for \(M_{1}\) and \(H_{1}\). We know the necessary and sufficient condition for \(L^{p}\) boundedness of \(H_{2}\)[8] where
\[H_{2}f(x)=p\cdot v\cdot\int_{|s|<1}\int_{|t|<1}f(x-P(s,t))\frac{\mathrm{d}s \mathrm{d}t}{st}.\]
However, there is no result about the exact necessary and sufficient condition for weak-type 1-1 boundedness of \(H_{2}\). In this paper, we obtain the weak-type 1-1 bound of \(\mathcal{M}\) with a bound depending only on the coefficients of \(P\).
**Theorem 1.1**.: \(\mathcal{M}\) _is of weak-type 1-1 with a bound that is dependent only on the coefficients of \(P\)._
### Organization
In Section 2, we introduce the Newton diagram of \(P\) and its properties and treat the monomial case. In Section 3, we decompose \(\mathcal{M}f(x)\) using the vertices of the Newton diagram of \(P\). Then, we treat the non-zero coordinate case and the zero coordinate case of the vertex separately. For the two cases, we introduce two important Lemmas and show the weak-type 1-1 boundedness of \(\mathcal{M}f(x)\) by the Calderon-Zygmund singular integral theory combined with the two Lemmas. In Section 4, we prove one of the two Lemmas, for the non-zero coordinate case, using the properties of the Newton diagram and some techniques of harmonic analysis. In Section 5, we show the other Lemma, for the zero coordinate case. For this, we apply a sublevel set estimate and some estimates of harmonic analysis together with the properties of the Newton diagram.
## 2. Preliminaries
Let \(\mathfrak{t}=(t_{1},t_{2},\cdots,t_{n})\) and \(\mathfrak{m}=(m_{1},m_{2},\cdots,m_{n})\). Then, we can rewrite the polynomial \(P(t_{1},\cdots,t_{n})\) as \(P(\mathfrak{t})=\sum_{\mathfrak{m}\in\Lambda}a_{\mathfrak{m}}\mathfrak{t}^{\mathfrak{m}}\), where \(\Lambda\) is the set of lattice points \(\mathfrak{m}\in\mathbb{Z}^{n}\) such that \(a_{\mathfrak{m}}\neq 0\). Let \(\mathbb{N}_{\star}\) denote the set of non-negative integers. For a set \(A=\{a_{1},a_{2},\cdots,a_{n}\}\subset\{1,2,\cdots,n\}\), we set \(\int f(t)dt_{i\in A}=\int f(t)dt_{a_{1}}dt_{a_{2}}\cdots dt_{a_{n}}\). \(A\sim B\) will be used to indicate that \(A\) and \(B\) are essentially similar in size; unimportant error terms are neglected. For vectors \(\bar{v}\) and \(\bar{w}\), let \(\bar{v}\geq\bar{w}\) denote that every component of \(\bar{v}\) is greater than or equal to the corresponding component of \(\bar{w}\).
### Newton Diagram
We shall introduce the Newton Diagram. For the details, see [2], [6] and [9]. For each \(\mathfrak{m}\in\Lambda\), set
\[Q_{\mathfrak{m}}=\left\{(x_{1},x_{2},\cdots,x_{n})\in\mathbb{R}^{n}:x_{1}\geq m _{1},x_{2}\geq m_{2},\cdots,x_{n}\geq m_{n}\right\}.\]
Then the smallest closed convex set containing \(Q=\bigcup_{\mathfrak{m}\in\Lambda}Q_{\mathfrak{m}}\) is called the Newton diagram of \(P\). We denote it by \(\Pi\); it is an unbounded polygon with a finite number of corners. Let \(\mathcal{D}\) denote the set of its corner points. Furthermore, suppose \(\mathcal{D}\) consists of \(r\) corner points \(v_{1},v_{2},\ldots,v_{r}\) with \(v_{j}=(m_{1}^{j},\cdots,m_{n}^{j})\) for \(1\leq j\leq r\). From now on, a non-negative vector denotes a vector all of whose components are non-negative. By elementary geometry, one knows that for a corner point \(v_{j}\) of \(\Pi\), there are exactly \(n\) faces of \(\Pi\) intersecting at \(v_{j}\). Hence, we can choose \(n\) non-negative vectors \(\bar{n}_{j}^{(1)},\bar{n}_{j}^{(2)},\cdots,\bar{n}_{j}^{(n)}\) that are normal to the faces intersecting at the corner point \(v_{j}\). By the convexity of the Newton diagram, one has
\[\bar{n}_{j}^{(1)}\cdot(v-v_{j})\geq 0,\bar{n}_{j}^{(2)}\cdot(v-v_{j})\geq 0 \cdots,\bar{n}_{j}^{(n)}\cdot(v-v_{j})\geq 0 \tag{2.1}\]
for all \(v\in\Lambda\) and \(1\leq j\leq r\). Now for \(1\leq j\leq r\), define
\[T(j) =\left\{(q_{1},\cdots,q_{n})\in\mathbb{N}^{n}:(q_{1},\cdots,q_{n} )\cdot(v-v_{j})>0\text{ for all }v\in\Lambda\backslash\left\{v_{j}\right\}\right\}\] \[=\bigcap_{v\in\Lambda\backslash\left\{v_{j}\right\}}\left\{(q_{1 },\cdots,q_{n})\in\mathbb{N}^{n}:(q_{1},\cdots,q_{n})\cdot(v-v_{j})>0\right\}.\]
Then, we have the following Lemma. We omit its proof. For the details, see [2], [6] and [9].
**Lemma 2.1**.: _For \(1\leq j\leq r\), (i)_
\[T(j)=\left\{(q_{1},\cdots,q_{n})\in\mathbb{N}^{n}:(q_{1},\cdots,q_{n})=\alpha_{1}\bar{n}_{j}^{(1)}+\cdots+\alpha_{n}\bar{n}_{j}^{(n)}\text{ for some positive reals }\alpha_{1},\cdots,\alpha_{n}\right\}.\]
_(ii) For \(1\leq j<k\leq r\),_
\[T(j)\cap T(k)=\varnothing.\]
Now, we define \(S(j)\) as
\[S(j):=\left\{(q_{1},\cdots,q_{n})\in\mathbb{N}^{n}_{\star}:\exists_{\alpha_{1 }\geq 0,\cdots,\alpha_{n}\geq 0}(q_{1},\cdots,q_{n})=\alpha_{1}\bar{n}_{j}^{(1) }+\cdots+\alpha_{n}\bar{n}_{j}^{(n)}\right\}.\]
Then, by the definition of \(S(j)\), one can obtain that
\[\bigcup_{j=1}^{r}S(j)=\mathbb{N}^{n}_{\star}.\]
Moreover, with the elemetary calculation, one can know that there exists \(d_{j}>0\) such that
\[S(j):=\left\{(q_{1},\cdots,q_{n})\in\mathbb{N}^{n}_{\star}:\exists_{(n_{1}, \cdots,n_{n})\in\mathbb{N}^{n}_{\star}}(q_{1},\cdots,q_{n})=\frac{n_{1}}{d_{j} }\bar{n}_{j}^{(1)}+\cdots+\frac{n_{n}}{d_{j}}\bar{n}_{j}^{(n)}\right\}.\]
Let \(\mathfrak{q}:=(q_{1},\cdots,q_{n})\). For \(1<m<n\), we also set \(\bar{k}_{m}:=(k_{1},\cdots,k_{m-1},k_{m+1},\cdots,k_{n})\). Especially, let \(\bar{k}_{1}:=(k_{2},\cdots,k_{n})\) and \(\bar{k}_{n}:=(k_{1},\cdots,k_{n-1})\). We define \(S_{1}(j),\cdots,S_{n}(j)\) as, for \(1\leq m\leq n\),
\[S_{m}(j):=\{\mathfrak{q}\in S(j):\mathfrak{q}=\frac{N+k_{1}}{d_{ j}}\bar{n}_{j}^{(1)}+\cdots+\frac{N+k_{m-1}}{d_{j}}\bar{n}_{j}^{(m-1)}+\frac{N }{d_{j}}\bar{n}_{j}^{(m)}+\frac{N+k_{m+1}}{d_{j}}\bar{n}_{j}^{(m+1)}+\\ \cdots+\frac{N+k_{n}}{d_{j}}\bar{n}_{j}^{(n)},\bar{k}_{m}\in \mathbb{N}^{n-1}_{\star},N\in\mathbb{N}_{\star}\}.\]
Note that
\[S(j)=\bigcup_{m=1}^{n}S_{m}(j).\]
Now, we decompose
\[S_{m}(j)=\bigcup_{N\in\mathbb{N}_{\star}}S_{m}^{N}(j)\]
where
\[S_{m}^{N}(j):=\{\mathfrak{q}\in S(j):\mathfrak{q}=\frac{N+k_{1}}{d_ {j}}\bar{n}_{j}^{(1)}+\cdots+\frac{N+k_{m-1}}{d_{j}}\bar{n}_{j}^{(m-1)}+\frac{N }{d_{j}}\bar{n}_{j}^{(m)}+\frac{N+k_{m+1}}{d_{j}}\bar{n}_{j}^{(m+1)}+\\ \cdots+\frac{N+k_{n}}{d_{j}}\bar{n}_{j}^{(n)},\bar{k}_{m}\in \mathbb{N}_{\star}^{n-1}\}.\]
**Lemma 2.2**.: _For each \(1\leq j\leq r\), there exists \(\beta_{j}>0\) such that_
\[\mathfrak{q}\cdot(v-v_{j})\geq\beta_{j}N \tag{2.2}\]
_for every \(v\in\Lambda\backslash\left\{v_{j}\right\}\) and \(\mathfrak{q}\in S_{m}^{N}(j)\)._
Proof.: For every \(\mathfrak{q}\in S_{m}^{N}(j)\) we can write
\[\mathfrak{q}=\frac{N+k_{1}}{d_{j}}\bar{n}_{j}^{(1)}+\cdots+\frac{N+k_{m-1}}{d _{j}}\bar{n}_{j}^{(m-1)}+\frac{N}{d_{j}}\bar{n}_{j}^{(m)}+\frac{N+k_{m+1}}{d_ {j}}\bar{n}_{j}^{(m+1)}+\cdots+\frac{N+k_{n}}{d_{j}}\bar{n}_{j}^{(n)}\]
for some \(\bar{k}_{m}\in\mathbb{N}_{\star}^{n-1}\). Since \(\bar{n}_{j}^{(1)},\cdots,\bar{n}_{j}^{(n)}\) are linearly independent, combining this with (2.1) we have
\[(v-v_{j})\cdot(\bar{n}_{j}^{(1)}+\cdots+\bar{n}_{j}^{(n)})>0\]
for all \(v\in\Lambda\backslash\left\{v_{j}\right\}\). Taking
\[\beta_{j}:=\min_{v\in\Lambda\backslash\left\{v_{j}\right\}}\frac{1}{d_{j}} \left(v-v_{j}\right)\cdot(\bar{n}_{j}^{(1)}+\cdots+\bar{n}_{j}^{(n)})>0 \tag{2.3}\]
immediately yields (2.2).
### The Monomial Case
Note that
\[\mathcal{M}f(x)\sim\sup_{\mathfrak{q}\in\mathbb{N}_{\star}^{n}}2^{q_{1}+ \cdots q_{n}}\left|\int_{2^{-q_{1}}}^{2^{-q_{1}+1}}\cdots\int_{2^{-q_{n}}}^{2^ {-q_{n}+1}}f(x-P(\mathfrak{t}))\mathrm{d}t_{1}\cdots\mathrm{d}t_{n}\right|.\]
**Lemma 2.3**.: _If \(P(\mathfrak{t})\) is a monomial, then_
\[\mathcal{M}f(x)\leq 2M_{H}f(x)\]
_where \(M_{H}\) denotes the usual Hardy-Littlewood maximal function._
Proof.: Let \(P(\mathfrak{t})=a_{\mathfrak{m}}\mathfrak{t}^{\mathfrak{m}}\) and suppose that the coefficient \(a_{\mathfrak{m}}\) is a positive real number. Then, after a change of variable in \(t_{n}\) \((a_{\mathfrak{m}}\mathfrak{t}^{\mathfrak{m}}=u)\), we have
\[\mathcal{M}f(x)\sim\sup_{\mathfrak{q}\in\mathbb{N}_{\star}^{n}}\frac{1}{m_{n}\left(a_{\mathfrak{m}}\right)^{1/m_{n}}}2^{q_{1}+\cdots+q_{n-1}}\\ \times\int_{t_{1}=2^{-q_{1}}}^{2^{-q_{1}+1}}\cdots\int_{t_{n-1}=2^{-q_{n-1}}}^{2^{-q_{n-1}+1}}\frac{2^{q_{n}}}{t_{1}^{m_{1}/m_{n}}\cdots t_{n-1}^{m_{n-1}/m_{n}}}\int_{a_{\mathfrak{m}}t_{1}^{m_{1}}\cdots t_{n-1}^{m_{n-1}}2^{-m_{n}q_{n}}}^{a_{\mathfrak{m}}t_{1}^{m_{1}}\cdots t_{n-1}^{m_{n-1}}2^{-m_{n}q_{n}+m_{n}}}f(x-u)\frac{\mathrm{d}u}{u^{1-1/m_{n}}}\,dt_{1}\cdots dt_{n-1}.\]
So, it suffices to show that
\[\int_{0}^{a_{\mathfrak{m}}t_{1}^{m_{1}}\cdots t_{n-1}^{m_{n-1}}2^{-m_{n}q_{n} +m_{n}}}f(x-u)\frac{\mathrm{d}u}{u^{1-1/m_{n}}}\leq 2m_{n}\left(a_{ \mathfrak{m}}\right)^{1/m_{n}}2^{-q_{n}}t_{1}^{m_{1}/m_{n}}\cdots t_{n-1}^{m_{n -1}/m_{n}}M_{H}f(x). \tag{2.4}\]
This is trivial when \(n=1\). When \(n>1\), note that
\[\int_{0}^{a_{\mathfrak{m}}t_{1}^{m_{1}}\cdots t_{n-1}^{m_{n-1}}2^{-m_{n}q_{n}+m_{n}}}f(x-u)\frac{\mathrm{d}u}{u^{1-1/m_{n}}}=\int_{u=0}^{a_{\mathfrak{m}}t_{1}^{m_{1}}\cdots t_{n-1}^{m_{n-1}}2^{-m_{n}q_{n}+m_{n}}}f(x-u)\left(\int_{z=0}^{1/u^{1-1/m_{n}}}\ \mathrm{d}z\right)\mathrm{d}u.\]
By changing the order of integration, one obtains (2.4).
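As a concrete one-dimensional illustration (our own, with a monomial chosen for simplicity), take \(n=1\), \(P(t)=t^{2}\), and \(f\geq 0\). Substituting \(u=t^{2}\) and using \(\sqrt{u}\geq 2^{-q}\) on the resulting range,
\[2^{q}\int_{2^{-q}}^{2^{-q+1}}f(x-t^{2})\,\mathrm{d}t=2^{q}\int_{4^{-q}}^{4\cdot 4^{-q}}f(x-u)\frac{\mathrm{d}u}{2\sqrt{u}}\leq\frac{4^{q}}{2}\int_{0}^{4\cdot 4^{-q}}f(x-u)\,\mathrm{d}u\leq 2\,M_{H}f(x),\]
uniformly in \(q\) (with \(M_{H}\) understood as the uncentered Hardy-Littlewood maximal function), in agreement with the constant \(2\) in Lemma 2.3.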
## 3. Decomposition And The Calderon-Zygmund Singular Integral Theory
Our proof is based on the arguments of [1] and [9]. We define a \(C^{\infty}\) function \(\eta(s)\), supported in \(\left[\frac{1}{2},4\right]\), with \(\eta(s)=1\) for \(s\in[1,2]\). Then
\[\mathcal{M}f(x) \lesssim\sup_{\mathfrak{q}\in\mathbb{N}_{*}^{n}}2^{q_{1}+\cdots q _{n}}\left|\int f(x-P(\mathfrak{t}))\eta(2^{q_{1}}t_{1})\cdots\eta(2^{q_{n}}t _{n})\mathrm{d}t_{1}\cdots\mathrm{d}t_{n}\right|\] \[=\sup_{\mathfrak{q}\in\mathbb{N}_{*}^{n}}\left|\int f(x-P(2^{-q_ {1}}t_{1},\cdots,2^{-q_{n}}t_{n}))\eta(t_{1})\cdots\eta(t_{n})\mathrm{d}t_{1} \cdots\mathrm{d}t_{n}\right|\] \[\leq\sum_{j=1}^{r}\sup_{\mathfrak{q}\in S(j)}\left|\int f(x-P(2^{ -q_{1}}t_{1},\cdots,2^{-q_{n}}t_{n}))\eta(t_{1})\cdots\eta(t_{n})\mathrm{d}t_{1 }\cdots\mathrm{d}t_{n}\right|\] \[:=\sum_{j=1}^{r}\mathcal{M}(j)f(x).\]
Hence, the following Theorem 3.1 implies Theorem 1.1.
**Theorem 3.1**.: \(\mathcal{M}(j)\) _is of weak-type 1-1 for \(1\leq j\leq r\)._
From now on, we shall prove Theorem 3.1. For this, we consider two cases (the non-zero coordinate case and the zero coordinate case).
### Non-Zero Coordinate Case
First, we assume that \(v_{j}=(m_{1}^{j},\cdots,m_{n}^{j})\) does not have a vanishing coordinate. This means that \(m_{i}^{j}\neq 0\) for all \(1\leq i\leq n\). We can decompose \(\mathcal{M}(j)f(x)\) as
\[\mathcal{M}(j)f(x)=\sup_{\mathfrak{q}\in S(j)}\left|\int f(x-P(2^ {-q_{1}}t_{1},\cdots,2^{-q_{n}}t_{n}))\eta(t_{1})\cdots\eta(t_{n})\mathrm{d}t _{1}\cdots\mathrm{d}t_{n}\right|\] \[\leq\sup_{\mathfrak{q}\in S(j)}\left|\int\left[f(x-P(2^{-q_{1}}t_{ 1},\cdots,2^{-q_{n}}t_{n}))-f\left(x-2^{-\mathfrak{q}\cdot v_{j}}a_{v_{j}} \mathfrak{t}^{v_{j}}\right)\right]\eta(t_{1})\cdots\eta(t_{n})dt\right|\] \[+\sup_{\mathfrak{q}\in S(j)}\left|\int f\left(x-2^{-\mathfrak{q} \cdot v_{j}}a_{v_{j}}\mathfrak{t}^{v_{j}}\right)\eta(t_{1})\cdots\eta(t_{n}) dt\right|\] \[:=M(j)f(x)+Q(j)f(x).\]
One can apply Lemma 2.3 to \(Q(j)f(x)\). So, it suffices to consider \(M(j)f(x)\). We split \(M(j)f(x)\) again as
\[M(j)f(x) \leq\sum_{m=1}^{n}\sup_{\mathfrak{q}\in S_{m}(j)}\left|\int\left[ f(x-P(2^{-q_{1}}t_{1},\cdots,2^{-q_{n}}t_{n}))-f\left(x-2^{-\mathfrak{q}\cdot v_{j}}a_{ v_{j}}\mathfrak{t}^{v_{j}}\right)\right]\eta(t_{1})\cdots\eta(t_{n})dt\right|\] \[:=\sum_{m=1}^{n}M_{m}(j)f(x).\]
We can write \(P(2^{-q_{1}}t_{1},\cdots,2^{-q_{n}}t_{n})=2^{-\mathfrak{q}\cdot v_{j}}\tilde{P}_{j }(t_{1},\cdots,t_{n})\) where \(\tilde{P}_{j}(\mathfrak{t}):=\sum_{v=\mathfrak{m}\in\Lambda}2^{-\mathfrak{q} \cdot(v-v_{j})}a_{\mathfrak{m}}\mathfrak{t}^{\mathfrak{m}}\). Then, we have
\[M_{m}(j)f(x) =\sup_{\mathfrak{q}\in S_{m}(j)}\big{|}\int\left[f(x-2^{-\mathfrak{ q}\cdot v_{j}}\tilde{P}_{j}(\mathfrak{t}))-f(x-2^{-\mathfrak{q}\cdot v_{j}}a_{v_{j}} \mathfrak{t}^{v_{j}})\right]\eta(t_{1})\cdots\eta(t_{n})dt\big{|}\] \[\leq\sum_{N\geq 0}\sup_{\mathfrak{q}\in S_{m}^{N}(j)}\left|\int \left[f(x-2^{-\mathfrak{q}\cdot v_{j}}\tilde{P}_{j}(\mathfrak{t}))-f(x-2^{- \mathfrak{q}\cdot v_{j}}a_{v_{j}}\mathfrak{t}^{v_{j}})\right]\eta(t_{1}) \cdots\eta(t_{n})dt\right|\] \[=\sum_{N\geq 0}\sup_{\bar{k}_{m}\in\mathbb{N}_{*}^{n-1}}\left| \int\left[f(x-c_{N}2^{-\sigma_{m}\cdot\bar{k}_{m}}\tilde{P}_{j}(\mathfrak{t}) )-f(x-c_{N}2^{-\sigma_{m}\cdot\bar{k}_{m}}a_{v_{j}}\mathfrak{t}^{v_{j}}) \right]\eta(t_{1})\cdots\eta(t_{n})dt\right|\] \[:=\sum_{N\geq 0}M_{j}^{N}f(x)\]
where \(c_{N}:=2^{-(N/d_{j})[(\bar{n}_{j}^{(1)}+\cdots+\bar{n}_{j}^{(n)})\cdot v_{j}]}\) and
\[\sigma_{m}:=(\frac{1}{d_{j}}\bar{n}_{j}^{(1)}\cdot v_{j},\cdots,\frac{1}{d_{j}}\bar{n}_{j}^{(m-1)}\cdot v_{j},\frac{1}{d_{j}}\bar{n}_{j}^{(m+1)}\cdot v_{j},\cdots,\frac{1}{d_{j}}\bar{n}_{j}^{(n)}\cdot v_{j}).\]
For \(f\in S\), we let \(\mu_{\bar{k}_{m}}^{N,(\bar{k}_{m})},\nu_{\bar{k}_{m}}^{N,(\bar{k}_{m})}\), and \(\nu^{N,(\bar{k}_{m})}\) denote the measures satisfying
\[\left\langle f,\mu_{\bar{k}_{m}}^{N,(\bar{k}_{m})}\right\rangle =\int\left[f(c_{N}2^{-\sigma_{m}\cdot\bar{k}_{m}}\tilde{P}_{j}( \mathfrak{t}))-f(c_{N}2^{-\sigma_{m}\cdot\bar{k}_{m}}a_{v_{j}}\mathfrak{t}^{v _{j}})\right]\eta(t_{1})\cdots\eta(t_{n})dt\] \[\left\langle f,\nu_{\bar{k}_{m}}^{N,(\bar{k}_{m})}\right\rangle =\int\left[f(2^{-\sigma_{m}\cdot\bar{k}_{m}}\tilde{P}_{j}( \mathfrak{t}))-f(2^{-\sigma_{m}\cdot\bar{k}_{m}}a_{v_{j}}\mathfrak{t}^{v_{j}}) \right]\eta(t_{1})\cdots\eta(t_{n})d\mathfrak{t}\] \[\left\langle f,\nu^{N,(\bar{k}_{m})}\right\rangle =\int\left[f(\tilde{P}_{j}(\mathfrak{t}))-f(a_{v_{j}}\mathfrak{t}^ {v_{j}})\right]\eta(t_{1})\cdots\eta(t_{n})d\mathfrak{t}\]
for those \(\bar{k}_{m}\)'s for which
\[\mathfrak{q}=\frac{N+k_{1}}{d_{j}}\bar{n}_{j}^{(1)}+\cdots+\frac{N+k_{m-1}}{d_ {j}}\bar{n}_{j}^{(m-1)}+\frac{N}{d_{j}}\bar{n}_{j}^{(m)}+\frac{N+k_{m+1}}{d_{j }}\bar{n}_{j}^{(m+1)}+\cdots+\frac{N+k_{n}}{d_{j}}\bar{n}_{j}^{(n)}.\]
For all the other \(\bar{k}_{m}\)'s, we define \(\mu_{\bar{k}_{m}}^{N,(\bar{k}_{m})},\nu_{\bar{k}_{m}}^{N,(\bar{k}_{m})}\), and \(\nu^{N,(\bar{k}_{m})}\) to be zero distributions. Then, we define \(M_{j}^{N}f(x)\) as
\[M_{j}^{N}f(x):=\sup_{\bar{k}_{m}\in\mathbb{N}_{*}^{n-1}}\left|\mu_{\bar{k}_{m}} ^{N,(\bar{k}_{m})}*f(x)\right|.\]
Also, we define
\[L_{j}^{N}f(x):=\sup_{\bar{k}_{m}\in\mathbb{N}_{*}^{n-1}}\left|\nu_{\bar{k}_{m}} ^{N,(\bar{k}_{m})}*f(x)\right|.\]
Note that \((M_{j}^{N}f)(x)=\left(L_{j}^{N}f_{N}\right)(x/c_{N})\) where \(f_{N}(x)=f\left(c_{N}x\right)\). So, we shall consider the weak-type 1-1 estimate for \(L_{j}^{N}\).
Now, we introduce the following lemma, whose proof is postponed to Section 4.
**Lemma 3.1**.: _There are positive real constants \(A,B,C,\delta_{1},\delta_{2}\) and \(\delta_{3}\) independent of \(\bar{k}_{m}\) and \(N\) such that_
\[\int\left|\nu^{N,(\bar{k}_{m})}(x-y)-\nu^{N,(\bar{k}_{m})}(x)\right|\mathrm{d}x \leq A2^{-\delta_{1}N}|y|^{\delta_{2}}\text{ for all }y\in\mathbb{R}, \tag{3.1}\]
\[\sum_{\bar{k}_{m}\in\mathbb{N}_{*}^{n-1}}\int_{|x|\geq 2|y|}\left|\nu_{\bar{k}_{m}} ^{N,(\bar{k}_{m})}(x-y)-\nu_{\bar{k}_{m}}^{N,(\bar{k}_{m})}(x)\right|\mathrm{d}x \leq C2^{-\delta_{1}N}, \tag{3.2}\]
\[\left\|\sup_{\bar{k}_{m}\in\mathbb{N}^{n-1}}\left|f*\nu_{\bar{k}_{m}}^{N,(\bar{k} _{m})}\right|\right\|_{2}\leq B2^{-\delta_{3}N}\|f\|_{2}. \tag{3.3}\]
We shall apply Lemma 3.1 to prove the following proposition, utilizing the Calderon-Zygmund singular integral theory. This proposition implies that \(M_{m}(j)\) is of weak type 1-1.
**Proposition 3.1**.: (3.4) \[\left|\left\{x:L_{j}^{N}f(x)>\lambda\right\}\right|\leq\frac{C}{\lambda}2^{- \delta N}\|f\|_{1}\]
_where \(C\) and \(\delta\) are positive real constants independent of \(N\) and \(\lambda\)._
Proof.: First, we shall briefly introduce the Calderon-Zygmund decomposition. We choose \(\varepsilon>0\) (to be fixed later) and decompose \(\mathbb{R}\) as \(\mathbb{R}=F\cup\Omega\) so that:
(1) \(F\) is closed and \(F\cap\Omega=\varnothing\);
(2) \(f(x)\leq 2^{\varepsilon N}\lambda\) a.e. on \(F\);
(3) \(\Omega\) is the union of intervals, \(\Omega=\bigcup_{i=1}^{\infty}Q_{i}\), whose interiors are mutually disjoint. Moreover,
\[|\Omega|\leq\frac{C}{2^{\varepsilon N}\lambda}\int_{\mathbb{R}}f(x)\mathrm{d}x, \tag{3.5}\]
\[\frac{1}{|Q_{i}|}\int_{Q_{i}}f(x)\mathrm{d}x\leq C2^{\varepsilon N}\lambda. \tag{3.6}\]
The constant \(C\) in (3.5) and (3.6) depends only on the dimension of the space. We define a function \(g\) almost everywhere by
\[g(x)=\begin{cases}f(x)&\text{ for }x\in F\\ \frac{1}{|Q_{i}|}\int_{Q_{i}}f(y)\mathrm{d}y,&\text{ for }x\in Q_{i}^{\circ} \end{cases}. \tag{3.7}\]
Writing \(f(x)=g(x)+b(x)\), we have \(b(x)=0\) for \(x\in F\) and
\[\int_{Q_{i}}b(x)\mathrm{d}x=0,\text{ for each }Q_{i}. \tag{3.8}\]
Also, one knows that
\[|\{x:L_{j}^{N}f(x)>\lambda\}|\leq|\{x:L_{j}^{N}g(x)>\frac{\lambda}{2}\}|+|\{x: L_{j}^{N}b(x)>\frac{\lambda}{2}\}|.\]
By (3.5), (3.6) and (3.7), we have
\[\|g\|_{2}^{2}=\int_{F}|g(x)|^{2}\ \mathrm{d}x+\int_{\Omega}|g(x)|^{2}\ \mathrm{d}x\leq C2^{ \varepsilon N}\lambda\|f\|_{1}.\]
Therefore, by (3.3) of Lemma 3.1, we obtain that
\[\left|\left\{x:L_{j}^{N}g(x)>\frac{\lambda}{2}\right\}\right| \leq\frac{4}{\lambda^{2}}\int\left|L_{j}^{N}g\right|^{2}\ \mathrm{d}x=\frac{4}{\lambda^{2}}\int \left(\sup_{\bar{k}_{m}\in\mathbb{N}_{*}^{n-1}}\left|\nu_{\bar{k}_{m}}^{N,(\bar {k}_{m})}*g(x)\right|\right)^{2}\ \mathrm{d}x\] \[\leq\frac{4}{\lambda^{2}}B^{2}2^{-2\delta_{3}N}\|g\|_{2}^{2}\leq \frac{C}{\lambda}2^{-(2\delta_{3}-\varepsilon)N}\|f\|_{1}.\]
Set
\[b_{i}(x)=\begin{cases}b(x)&\text{ if }x\in Q_{i}\\ 0&\text{ if }x\notin Q_{i}.\end{cases}\]
Then, one has
\[b(x)=\sum_{i}b_{i}(x)\text{ and }L_{j}^{N}b(x)\leq\sum_{i}L_{j}^{N}b_{i}(x).\]
Let \(Q_{i}^{*}\) denote the interval which has the same centre \(y^{i}\) as \(Q_{i}\), but is expanded by a factor of \(2\). Then \(Q_{i}\subseteq Q_{i}^{*}\) and \(\Omega\subseteq\Omega^{*}\) where \(\Omega^{*}=\cup Q_{i}^{*}\). Moreover, we have:
(a) \(|\Omega^{*}|\leq 2|\Omega|\) and \(F^{*}\subseteq F\) where \(F^{*}=\left(\Omega^{*}\right)^{c}\),
(b) If \(x\notin Q_{i}^{*}\), then \(\left|x-y^{i}\right|\geq 2\left|y-y^{i}\right|\) for all \(y\in Q_{i}\).
Now, by (3.8) we have
\[L_{j}^{N}b_{i}(x) =\sup_{\bar{k}_{m}\in\mathbb{N}_{*}^{n-1}}\left|\int\nu_{\bar{k} _{m}}^{N,(\bar{k}_{m})}(x-y)b_{i}(y)\mathrm{d}y\right|\] \[=\sup_{\bar{k}_{m}\in\mathbb{N}_{*}^{n-1}}\left|\int_{Q_{i}}\nu_ {\bar{k}_{m}}^{N,(\bar{k}_{m})}(x-y)b(y)\mathrm{d}y\right|\] \[=\sup_{\bar{k}_{m}\in\mathbb{N}_{*}^{n-1}}\left|\int_{Q_{i}}\left[ \nu_{\bar{k}_{m}}^{N,(\bar{k}_{m})}(x-y)-\nu_{\bar{k}_{m}}^{N,(\bar{k}_{m})} \left(x-y^{i}\right)\right]b(y)\mathrm{d}y\right|.\]
So, we know that
\[\int_{F^{*}} L_{j}^{N}b(x)\mathrm{d}x\] \[\leq\sum_{i}\int_{x\notin Q_{i}^{*}}L_{j}^{N}b_{i}(x)\mathrm{d}x\] \[\leq\sum_{i}\int_{x\notin Q_{i}^{*}}\int_{y\in Q_{i}}\sup_{\bar{k }_{m}\in\mathbb{N}_{*}^{n-1}}\left|\nu_{\bar{k}_{m}}^{N,(\bar{k}_{m})}(x-y)- \nu_{\bar{k}_{m}}^{N,(\bar{k}_{m})}\left(x-y^{i}\right)\right|\left|b(y)\right| \mathrm{d}y\] \[\leq\sum_{i}\int_{y\in Q_{i}}\left\{\int_{|x^{\prime}|\geq 2|y^{ \prime}|}\sum_{\bar{k}_{m}\in\mathbb{N}_{*}^{n-1}}\left|\nu_{\bar{k}_{m}}^{N,( \bar{k}_{m})}\left(x^{\prime}-y^{\prime}\right)-\nu_{\bar{k}_{m}}^{N,(\bar{k}_ {m})}\left(x^{\prime}\right)\right|\mathrm{d}x^{\prime}\right\}|b(y)|\mathrm{d}y\]
using (b) for \(y\in Q_{i}\) if \(x^{\prime}=x-y^{i}\) and \(y^{\prime}=y-y^{i}\). Then, by (3.2) we obtain that
\[\int_{F^{*}}L_{j}^{N}b(x)\mathrm{d}x \leq C2^{-\delta_{1}N}\sum_{i}\int_{y\in Q_{i}}|b(y)|\mathrm{d}y\] \[\leq C2^{-\delta_{1}N}\left\{\sum_{i}\int_{y\in Q_{i}}|f(y)| \mathrm{d}y+\sum_{i}\int_{y\in Q_{i}}|g(y)|\mathrm{d}y\right\}\] \[\leq C2^{-\delta_{1}N}\left\{\int_{\Omega}|f(y)|\mathrm{d}y+\sum _{i}\int_{y\in Q_{i}}\left\{\frac{1}{|Q_{i}|}\int_{Q_{i}}|f(z)|\mathrm{d}z \right\}\mathrm{d}y\right\}\] \[\leq C2^{-\delta_{1}N}\left\{\|f\|_{1}+\int_{\Omega}|f(z)| \mathrm{d}z\right\}\] \[\leq C2^{-\delta_{1}N}\|f\|_{1}.\]
Thus, we have
\[\left|\left\{x:L_{j}^{N}b(x)>\frac{\lambda}{2}\right\}\right| \leq\left|\left\{x\in F^{*}:L_{j}^{N}b(x)>\frac{\lambda}{2}\right\}\right|+\left|\left\{x\in\Omega^{*}:L_{j}^{N}b(x)>\frac{\lambda}{2}\right\}\right|\] \[\leq\frac{2}{\lambda}\int_{F^{*}}L_{j}^{N}b(x)\mathrm{d}x+|\Omega^{*}|\] \[\leq C\left(\frac{1}{\lambda}2^{-\delta_{1}N}\|f\|_{1}+\frac{2^{-\varepsilon N}}{\lambda}\|f\|_{1}\right)\] \[\leq\frac{C}{\lambda}2^{-\delta_{4}N}\|f\|_{1}\]
where \(\delta_{4}:=\min(\delta_{1},\varepsilon)\). Combining this with the estimate for \(L_{j}^{N}g\) above and choosing \(\varepsilon<2\delta_{3}\), we obtain (3.4) with \(\delta:=\min(2\delta_{3}-\varepsilon,\delta_{4})\), which completes the proof.
### Zero Coordinate Case
Now, we shall treat the case where \(v_{j}=(m_{1}^{j},\cdots,m_{n}^{j})\) has a vanishing coordinate. Assume that
\[\left\{\begin{array}{c}m_{i}^{j}=0\text{ for }i\in A,\\ m_{i}^{j}\neq 0\text{ for }i\in B\end{array}\right.\]
where \(v_{j}=(m_{1}^{j},\cdots,m_{n}^{j})\). Also, suppose that
\[A=\{a_{1},a_{2},\cdots,a_{\alpha}\}\text{ and }B=\{b_{1},b_{2},\cdots,b_{ \beta}\}\quad(A\cup B=\{1,2\cdots,n\}). \tag{3.9}\]
Now, we set
\[\Lambda_{0}:=\{v=(m_{1},\cdots,m_{n}):m_{i}=0\quad\text{if}\quad i\in A\}.\]
Then, we split \(\mathcal{M}(j)f(x)\) as
\[\mathcal{M}(j)f(x)=\sup_{\mathfrak{q}\in S(j)}\left|\int f(x-P(2^ {-q_{1}}t_{1},\cdots,2^{-q_{n}}t_{n}))\eta(t_{1})\cdots\eta(t_{n})\mathrm{d}t_ {1}\cdots\mathrm{d}t_{n}\right|\] \[\leq\sup_{\mathfrak{q}\in S(j)}\left|\int[f(x-P(2^{-q_{1}}t_{1}, \cdots,2^{-q_{n}}t_{n}))-f(x-2^{-\mathfrak{q}\cdot v_{j}}\sum_{v\in\Lambda_{0}} 2^{-\mathfrak{q}\cdot(v-v_{j})}a_{v}t^{v})]\eta(t_{1})\cdots\eta(t_{n})dt\right|\] \[+\sup_{\mathfrak{q}\in S(j)}\left|\int f(x-2^{-\mathfrak{q}\cdot v _{j}}\sum_{v\in\Lambda_{0}}2^{-\mathfrak{q}\cdot(v-v_{j})}a_{v}t^{v})\eta(t_ {1})\cdots\eta(t_{n})dt\right|\] \[:=M(j)f(x)+Q(j)f(x).\]
First, we shall consider \(Q(j)f(x)\). In order to show that \(Q(j)f(x)\) is of weak type 1-1 with a bound dependent on the coefficients, we shall use an induction argument. Assume that the \(n_{0}\)-parameter maximal operator \(\mathcal{M}^{\prime}\) is of weak type 1-1 with a bound dependent on the coefficients of \(P_{0}(t_{1},\cdots,t_{n_{0}})\), where \(n_{0}<n\) and
\[\mathcal{M}^{\prime}f(x)=\sup_{0<h_{1},h_{2},\cdots,h_{n_{0}}<1}\frac{1}{h_{1} h_{2}\cdots h_{n_{0}}}\left|\int_{0}^{h_{1}}\cdots\int_{0}^{h_{n_{0}}}f(x-P_{0}(t_{ 1},\cdots,t_{n_{0}}))\mathrm{d}t_{1}\cdots\mathrm{d}t_{n_{0}}\right|. \tag{3.10}\]
Then, by the following inequality:
\[Q(j)f(x) =\sup_{\mathfrak{q}\in S(j)}\left|\int f(x-2^{-\mathfrak{q}\cdot v_{j }}\sum_{v\in\Lambda_{0}}2^{-\mathfrak{q}\cdot(v-v_{j})}a_{v}\mathfrak{t}^{v}) \eta(t_{1})\cdots\eta(t_{n})dt\right|\] \[=\sup_{\mathfrak{q}\in S(j)}\left|\int f(x-\sum_{v\in\Lambda_{0}} 2^{-\mathfrak{q}\cdot v}a_{v}\mathfrak{t}^{v})\eta(t_{1})\cdots\eta(t_{n})dt\right| \tag{3.11}\] \[\leq\int\left\{\sup_{\mathfrak{q}\in S(j)}\left|\int f(x-\sum_{v \in\Lambda_{0}}2^{-\mathfrak{q}\cdot v}a_{v}\mathfrak{t}^{v})\prod_{i\in B} \eta(t_{i})dt_{i\in B}\right|\right\}\prod_{i\in A}\eta(t_{i})dt_{i\in A},\]
we obtain that \(Q(j)f(x)\) is also of weak type 1-1 with a bound dependent on the coefficients of \(P\), by the induction hypothesis. This is because (3.11) can be controlled by the \(n_{0}\)-parameter maximal function mentioned in (3.10).
Now, we shall treat \(M(j)f(x)\). We write \(P\left(2^{-q_{1}}t_{1},\cdots,2^{-q_{n}}t_{n}\right)=2^{-\mathfrak{q}\cdot v_ {j}}\tilde{P}(\mathfrak{t})\) where
\[\tilde{P}(\mathfrak{t})=\sum_{v=\mathfrak{m}\in\Lambda}2^{-\mathfrak{q}\cdot(v -v_{j})}a_{\mathfrak{m}}\mathfrak{t}^{\mathfrak{m}}\]
and set
\[\tilde{P}_{0}(\mathfrak{t})=\sum_{v=\mathfrak{m}\in\Lambda_{0}}2^{-\mathfrak{ q}\cdot(v-v_{j})}a_{\mathfrak{m}}\mathfrak{t}^{\mathfrak{m}}.\]
Hence, we rewrite \(M(j)f(x)\) as
\[M(j)f(x)=\sup_{\mathfrak{q}\in S(j)}\left|\int\left[f(x-2^{-\mathfrak{q}\cdot v_{j}}\tilde{P}(\mathfrak{t}))-f(x-2^{-\mathfrak{q}\cdot v_{j}}\tilde{P}_{0}(\mathfrak{t}))\right]\eta(t_{1})\cdots\eta(t_{n})d\mathfrak{t}\right|. \tag{3.12}\]
Note that for \(v_{j}=(m_{1}^{j},\cdots,m_{n}^{j})\) and each \(a_{i}\in A\) in (3.9), there exists a non-negative normal vector \(n_{j}^{(a_{i})}\) which is parallel to the \(a_{i}\)-th coordinate axis. Then, let \(n_{j}^{(b_{1})},\cdots,n_{j}^{(b_{\beta})}\) denote the other non-negative normal vectors associated with the corner point \(v_{j}\).
For \(\bar{k}=(k_{1},k_{2},\cdots,k_{\alpha})\in\mathbb{N}_{\star}^{\alpha}\), we define \(S^{\bar{k}}(j)\) as
\[S^{\bar{k}}(j)=\left\{\mathfrak{q}\in S(j):\mathfrak{q}=\frac{k_{1}}{d}n_{j}^{ (a_{1})}+\cdots+\frac{k_{\alpha}}{d}n_{j}^{(a_{\alpha})}+\frac{\ell_{1}}{d}n_{ j}^{(b_{1})}+\cdots+\frac{\ell_{\beta}}{d}n_{j}^{(b_{\beta})};\bar{\ell}=(\ell_{1}, \cdots,\ell_{\beta})\in\mathbb{N}_{\star}^{\beta}\right\}.\]
Then, (3.12) is bounded by
\[\sum_{\bar{k}\in\mathbb{N}_{\star}^{\alpha}}\sup_{\mathfrak{q} \in S^{\bar{k}}(j)}\left|\int\left[f(x-2^{-\mathfrak{q}\cdot v_{j}}\tilde{P}( \mathfrak{t}))-f(x-2^{-\mathfrak{q}\cdot v_{j}}\tilde{P}_{0}(\mathfrak{t})) \right]\eta(t_{1})\cdots\eta(t_{n})dt\right|\] \[:=\sum_{\bar{k}\in\mathbb{N}_{\star}^{\alpha}}M^{\bar{k}}(j)f(x).\]
So, if we have the following Proposition, then one can obtain the weak type 1-1 bound of the operator \(M(j)\).
**Proposition 3.2**.: _There exists \(\gamma^{\prime}\in\mathbb{Q}_{+}^{\beta}\) such that_
\[\left|\left\{x:M^{\bar{k}}(j)f(x)>\lambda\right\}\right|\leq\frac{C}{\lambda}2^ {-\gamma^{\prime}\cdot\bar{k}}\|f\|_{1} \tag{3.13}\]
_where \(\gamma^{\prime}\) and \(C>0\) are independent of \(\bar{k}\) and \(\lambda\)._
Proof.: Observe that for \(\mathfrak{q}\in S^{\bar{k}}(j)\),
\[\mathfrak{q}\cdot v_{j}=(\frac{\ell_{1}}{d}n_{j}^{(b_{1})}+\cdots+\frac{\ell_{ \beta}}{d}n_{j}^{(b_{\beta})})\cdot v_{j}\]
since \(\bar{n}_{j}^{(a_{i})}\cdot v_{j}=0\) for \(i\in A\). We set
\[\sigma=\left(\frac{1}{d}(\bar{n}_{j}^{(b_{1})}\cdot v_{j}),\cdots,\frac{1}{d}( \bar{n}_{j}^{(b_{\beta})}\cdot v_{j})\right).\]
One can easily check that \(\sigma\) is not a zero vector. Then, we rewrite \(M^{\bar{k}}(j)f(x)\) with \(\bar{\ell}=(\ell_{1},\cdots,\ell_{\beta})\) as
\[M^{\bar{k}}(j)f(x) =\sup_{\bar{\ell}\in\mathbb{N}_{\star}^{\beta}}\left|\int\left[f( x-2^{-\sigma\cdot\bar{\ell}}\tilde{P}(\mathfrak{t}))-f(x-2^{-\sigma\cdot\bar{ \ell}}\tilde{P}_{0}(\mathfrak{t}))\right]\eta(t_{1})\cdots\eta(t_{n})d \mathfrak{t}\right|\] \[:=\sup_{\bar{\ell}\in\mathbb{N}_{\star}^{\beta}}\left|\nu_{\bar{ \ell}}^{\bar{k},(\bar{\ell})}\ast f(x)\right|\]
where \(v_{\bar{\ell}}^{\bar{k},(\bar{\ell})}\) and \(v^{\bar{k},(\bar{\ell})}\) are the measures defined by
\[\left\langle f,v_{\bar{\ell}}^{\bar{k},(\bar{\ell})}\right\rangle =\int\left[f(2^{-\sigma\cdot\bar{\ell}}\tilde{P}(\mathfrak{t}))-f (2^{-\sigma\cdot\bar{\ell}}\tilde{P}_{0}(\mathfrak{t}))\right]\eta(t_{1}) \cdots\eta(t_{n})d\mathfrak{t}\] \[\left\langle f,v^{\bar{k},(\bar{\ell})}\right\rangle =\int\left[f(\tilde{P}(\mathfrak{t}))-f(\tilde{P}_{0}(\mathfrak{t }))\right]\eta(t_{1})\cdots\eta(t_{n})d\mathfrak{t}\]
for those \(\bar{\ell}\)'s for which
\[\frac{k_{1}}{d}n_{j}^{(a_{1})}+\cdots+\frac{k_{\alpha}}{d}n_{j}^{(a_{\alpha})} +\frac{\ell_{1}}{d}n_{j}^{(b_{1})}+\cdots+\frac{\ell_{\beta}}{d}n_{j}^{(b_{ \beta})}=\mathfrak{q}\in S^{\bar{k}}(j)\]
and for all the other \(\bar{\ell}\)'s we define them to be zero distributions. So, the proof is the same as that of Proposition 3.1 once we have the following lemma.
**Lemma 3.2**.: _There are positive real constants \(C,\delta\) and a vector \(\gamma\in\mathbb{Q}_{+}^{\beta}\), independent of \(\bar{k}\) and \(\bar{\ell}\), such that_
\[\int\left|v^{\bar{k},(\bar{\ell})}(x-y)-v^{\bar{k},(\bar{\ell})}(x)\right| \mathrm{d}x\leq C2^{-\gamma\cdot\bar{k}}|y|^{\delta}\text{ for all }y\in\mathbb{R}, \tag{3.14}\]
\[\sum_{\bar{\ell}\in\mathbb{N}_{\star}^{\beta}}\int_{|x|\geq 2|y|}\left|v_{\bar{ \ell}}^{\bar{k},(\bar{\ell})}(x-y)-v_{\bar{\ell}}^{\bar{k},(\bar{\ell})}(x) \right|\mathrm{d}x\leq C2^{-\gamma\cdot k}, \tag{3.15}\]
\[\left\|\sup_{\bar{\ell}\in\mathbb{N}_{\star}^{\beta}}\left|f\ast v_{\bar{ \ell}}^{\bar{k},(\bar{\ell})}\right|\right\|_{2}\leq C2^{-\gamma\cdot k}\|f\|_ {2}. \tag{3.16}\]
## 4. Proof Of Lemma 3.1
In this section, we shall show Lemma 3.1.
### Proof of (3.1)
Recall that we write \(P(2^{-q_{1}}t_{1},\cdots,2^{-q_{n}}t_{n})=2^{-\mathfrak{q}\cdot v_{j}}\tilde{P}_{j }(t_{1},\cdots,t_{n})\) where
\[\tilde{P}_{j}(\mathfrak{t}):=\sum_{v=\mathfrak{m}\in\Lambda}2^{-\mathfrak{q} \cdot(v-v_{j})}a_{\mathfrak{m}}\mathfrak{t}^{\mathfrak{m}}=a_{v_{j}}\mathfrak{ t}^{v_{j}}+\sum_{v=\mathfrak{m}\in\Lambda\setminus v_{j}}2^{-\mathfrak{q}\cdot(v-v_{j} )}a_{\mathfrak{m}}\mathfrak{t}^{\mathfrak{m}}.\]
Since \(\mathfrak{q}\cdot(v-v_{j})\geq 0\) for any \(\mathfrak{q}\in S(j)\), \(\tilde{P}_{j}(\mathfrak{t})\) has uniformly bounded \(\mathcal{C}_{m}\) norm for all \(\mathfrak{q}\in S(j)\). Moreover, by the term \(a_{v_{j}}\mathfrak{t}^{v_{j}}\) of \(\tilde{P}_{j}(\mathfrak{t})\), one knows that \(\partial^{\alpha}\tilde{P}_{j}(\mathfrak{t})\) is uniformly bounded below for some \(\alpha\) with \(1\leq|\alpha|\leq\deg\tilde{P}_{j}\). \(\tilde{P}_{0}(\mathfrak{t})\) also has the same two properties. Hence, by Proposition 7.2 of [5] combined with the above two properties of \(\tilde{P}_{j}(\mathfrak{t})\) and \(\tilde{P}_{0}(\mathfrak{t})\), we have
\[\int\left|\nu^{N,(\bar{k}_{m})}(x-y)-\nu^{N,(\bar{k}_{m})}(x)\right|\mathrm{d }x\leq C|y|^{\epsilon_{0}} \tag{4.1}\]
for all \(y\in\mathbb{R}\) with \(C,\epsilon_{0}>0\) independent of \(\bar{k}_{m}\) and \(N\).
Now, we consider the case where \(N\) is sufficiently large. By Lemma 2.2, when \(N\) is sufficiently large, \(\nabla\tilde{P}_{j}(\mathfrak{t})\) is uniformly bounded below within the support of \(\eta(t_{1})\cdots\eta(t_{n})\). Hence, one has
\[\left|\widehat{\nu^{N,(\bar{k}_{m})}}(\xi)\right|=\left|\int\left[\exp(i\xi \tilde{P}_{j}(\mathfrak{t}))-\exp\left(i\xi a_{v_{j}}\mathfrak{t}^{v_{j}} \right)\right]\eta(t_{1})\cdots\eta(t_{n})d\mathfrak{t}\right|\leq\frac{C_{m} }{|\xi|^{m}} \tag{4.2}\]
for all \(m\in\mathbb{N}\) and \(N\geq N_{0}\). Here, \(N_{0}\) depends only on the coefficients of polynomial \(P\). Also, by the mean value theorem, we have
\[\left|\widehat{\nu^{N,(\bar{k}_{m})}}(\xi)\right|\leq C2^{-\beta_{j}N}|\xi| \tag{4.3}\]
where \(\beta_{j}\) is defined in (2.3). Thus, by (4.2) and (4.3) combined with the convexity, there exists \(\sigma_{j}>0\) such that
\[\int\left|\nu^{N,(\bar{k}_{m})}(x-y)-\nu^{N,(\bar{k}_{m})}(x)\right|\mathrm{d}x\leq C\left\{\int\left|(e^{-2\pi iy\xi}-1)\cdot\widehat{\nu^{N,(\bar{k}_{m})}}(\xi)\right|^{2}\ \mathrm{d}\xi\right\}^{1/2} \tag{4.4}\]
\[=C\left\{\int_{|\xi|\leq 1}\left|(e^{-2\pi iy\xi}-1)\cdot\widehat{\nu^{N,(\bar{k}_{m})}}(\xi)\right|^{2}\ \mathrm{d}\xi+\int_{|\xi|\geq 1}\left|(e^{-2\pi iy\xi}-1)\cdot\widehat{\nu^{N,(\bar{k}_{m})}}(\xi)\right|^{2}\ \mathrm{d}\xi\right\}^{1/2}\]
\[\leq C2^{-\sigma_{j}N}|y|. \tag{4.5}\]
Here, we have used the Cauchy-Schwarz inequality, the fact that \(\nu^{N,(\bar{k}_{m})}\) has compact support, and Plancherel's theorem to obtain (4.4).
### Proof of (3.2)
Suppose that the uniform compact support of the \(\nu^{N,(\bar{k}_{m})}\) (for all \(N\) and \(\bar{k}_{m}\)) is contained in a ball \(B(0,R)\). Since
\[\nu^{N,(\bar{k}_{m})}_{\bar{k}_{m}}(x)=2^{\sigma_{m}\cdot\bar{k}_{m}}\nu^{N,( \bar{k}_{m})}\left(2^{\sigma_{m}\cdot\bar{k}_{m}}x\right),\]
the support of \(\nu_{\bar{k}_{m}}^{N,(\bar{k}_{m})}\) is contained in \(B(0,2^{-\sigma_{m}\cdot\bar{k}_{m}}R)\). Let \(2^{k_{0}-1}<|y|\leq 2^{k_{0}}\) for some \(k_{0}\in\mathbb{Z}\). Then, \(|x|\geq 2|y|\) means that
\[|x|>2^{k_{0}}\ \text{and}\ |x-y|\geq|y|>2^{k_{0}-1}.\]
Hence, if \(2^{-\sigma_{m}\cdot\bar{k}_{m}}R<2^{k_{0}-1}\),
\[\int_{|x|\geq 2|y|}\left|\nu^{N,(\bar{k}_{m})}_{\bar{k}_{m}}(x-y)-\nu^{N,(\bar{ k}_{m})}_{\bar{k}_{m}}(x)\right|\mathrm{d}x\equiv 0.\]
Therefore, by (3.1) we have
\[\sum_{\bar{k}_{m}}\int_{|x|\geq 2|y|}\left|\nu_{\bar{k}_{m}}^{N,(\bar{k}_{m})}(x-y)-\nu_{\bar{k}_{m}}^{N,(\bar{k}_{m})}(x)\right|\mathrm{d}x \leq\sum_{\bar{k}_{m}:\,\sigma_{m}\cdot\bar{k}_{m}+k_{0}\leq\log_{2}R+1}\int\left|\nu_{\bar{k}_{m}}^{N,(\bar{k}_{m})}(x-y)-\nu_{\bar{k}_{m}}^{N,(\bar{k}_{m})}(x)\right|\mathrm{d}x\] \[=\sum_{\bar{k}_{m}:\,\sigma_{m}\cdot\bar{k}_{m}+k_{0}\leq\log_{2}R+1}\int\left|\nu^{N,(\bar{k}_{m})}\left(z-2^{\sigma_{m}\cdot\bar{k}_{m}}y\right)-\nu^{N,(\bar{k}_{m})}(z)\right|\mathrm{d}z\] \[\leq\sum_{\bar{k}_{m}:\,\sigma_{m}\cdot\bar{k}_{m}+k_{0}\leq\log_{2}R+1}A2^{-\delta_{1}N}\left(2^{\sigma_{m}\cdot\bar{k}_{m}}|y|\right)^{\delta_{2}}\] \[\leq A2^{-\delta_{1}N}\sum_{\bar{k}_{m}:\,\sigma_{m}\cdot\bar{k}_{m}+k_{0}\leq\log_{2}R+1}\left(2^{\sigma_{m}\cdot\bar{k}_{m}}2^{k_{0}}\right)^{\delta_{2}}\] \[\leq A2^{-\delta_{1}N}C\left(R,\sigma_{m},\delta_{2}\right).\]
### Proof of (3.3)
By the mean value theorem, one has
\[\left|\widehat{\nu_{\bar{k}_{m}}^{N,(\bar{k}_{m})}}(\xi)\right|\leq C|\xi|2^{ -\sigma_{m}.\bar{k}_{m}}2^{-\beta_{j}N}. \tag{4.6}\]
Furthermore, by van der Corput's lemma, one knows that
\[\left|\widehat{\nu_{\bar{k}_{m}}^{N,(\bar{k}_{m})}}(\xi)\right|\leq\frac{C}{ \left(|\xi|2^{-\sigma_{m}.\bar{k}_{m}}\right)^{\epsilon}}. \tag{4.7}\]
Therefore, by the convexity, there exists \(\delta_{3}>0\) such that
\[\left|\widehat{\nu_{\bar{k}_{m}}^{N,(\bar{k}_{m})}}(\xi)\right|\leq C2^{- \delta_{3}N}\min\left(|\xi|2^{-\sigma_{m}.\bar{k}_{m}},\frac{1}{\left(|\xi|2^{ -\sigma_{m}.\bar{k}_{m}}\right)^{\epsilon/2}}\right). \tag{4.8}\]
Thus, by the standard Littlewood-Paley theory, we have
\[\left\|\sup_{\bar{k}_{m}\in\mathbb{N}_{*}^{n-1}}\left|f*\nu_{\bar{k}_{m}}^{N,( \bar{k}_{m})}\right|\right\|_{2}\leq B2^{-\delta_{3}N}\|f\|_{2}. \tag{4.9}\]
For the details, see [10].
## 5. Proof Of Lemma 3.2
The proofs of (3.15) and (3.16) in Lemma 3.2 are the same as those of (3.2) and (3.3), respectively. Hence, we only need to show inequality (3.14).
### Proof of (3.14)
Recall that we write \(P\left(2^{-q_{1}}t_{1},\cdots,2^{-q_{n}}t_{n}\right)=2^{-\mathfrak{q}\cdot v _{j}}\tilde{P}(\mathfrak{t})\) where
\[\tilde{P}(\mathfrak{t})=\sum_{v=\mathfrak{m}\in\Lambda}2^{-\mathfrak{q}\cdot(v- v_{j})}a_{\mathfrak{m}}\mathfrak{t}^{\mathfrak{m}}\]
and
\[\tilde{P}_{0}(\mathfrak{t})=\sum_{v=\mathfrak{m}\in\Lambda_{0}}2^{-\mathfrak{ q}\cdot(v-v_{j})}a_{\mathfrak{m}}\mathfrak{t}^{\mathfrak{m}}.\]
Observe that for \(v\in\Lambda\backslash\Lambda_{0}\) and \(i\in A\), \(\bar{n}_{j}^{(a_{i})}\cdot(v-v_{j})>0\) from the definition of \(\Lambda_{0}\). So, we set \(\gamma_{j}=(\gamma_{j}^{1},\cdots,\gamma_{j}^{\alpha})\) where
\[\gamma_{j}^{i}=\frac{1}{d}\min_{v\in\Lambda\backslash\Lambda_{0}}\left\{\bar{n }_{j}^{(a_{i})}\cdot(v-v_{j})\right\}>0,\ (i\in\{1,2,\cdots,\alpha\}).\]
Now, we rewrite \(\tilde{P}(\mathfrak{t})=\tilde{P}_{0}(\mathfrak{t})+2^{-\gamma_{j}\cdot\bar{k}} \tilde{Q}(\mathfrak{t})\) where \(\bar{k}=(k_{1},k_{2},\cdots,k_{\alpha})\in\mathbb{N}_{\star}^{\alpha}\). By the fact that for \(v\in\Lambda\backslash\Lambda_{0}\) and \(\mathfrak{q}\in S(j)\),
\[\mathfrak{q}\cdot(v-v_{j}) \geq\frac{k_{1}}{d}\bar{n}_{j}^{(a_{1})}\cdot(v-v_{j})+\cdots+\frac{k_{\alpha}}{d}\bar{n}_{j}^{(a_{\alpha})}\cdot(v-v_{j})\] \[\geq\gamma_{j}\cdot\bar{k}, \tag{5.1}\]
one knows that there exists a constant \(M>0\) (depending only on the coefficients of \(P\)) such that
\[\left\|\nabla\tilde{Q}(\mathfrak{t})\right\|_{L^{\infty}([1/2,4]^{n})}\leq M. \tag{5.2}\]
Let
\[I_{\theta}:=\{\mathfrak{t}\in[\tfrac{1}{2},4]^{n}:|\nabla\tilde{P}_{0}(\mathfrak{t})|>2^{-\theta(\gamma_{j}\cdot\bar{k})}\}\]
for some sufficiently small \(\theta\in\mathbb{Q}_{+}\). For \(I_{\theta}\), we set
\[\widehat{v_{I_{\theta}}^{\bar{k},(\bar{\ell})}}(\xi):=\int_{I_{\theta}}\left[ e^{-2\pi i\xi\cdot\tilde{P}(\mathfrak{t})}-e^{-2\pi i\xi\cdot\tilde{P}_{0}( \mathfrak{t})}\right]\eta(t_{1})\cdots\eta(t_{n})d\mathfrak{t}\]
for those \(\bar{\ell}\)'s for which
\[\frac{k_{1}}{d}n_{j}^{(a_{1})}+\cdots+\frac{k_{\alpha}}{d}n_{j}^{(a_{\alpha})} +\frac{\ell_{1}}{d}n_{j}^{(b_{1})}+\cdots+\frac{\ell_{\beta}}{d}n_{j}^{(b_{ \beta})}=\mathfrak{q}\in S^{\bar{k}}(j)\]
and for all the other \(\bar{\ell}\)'s we define the Fourier multiplier to be zero. Then, by the inequality (5.2) combined with \(\tilde{P}(\mathfrak{t})=\tilde{P}_{0}(\mathfrak{t})+2^{-\gamma_{j}\cdot\bar{k }}\tilde{Q}(\mathfrak{t})\), one can obtain that
\[\left|\widehat{v_{I_{\theta}}^{\bar{k},(\bar{\ell})}}(\xi)\right|\leq\frac{C_ {m}}{|\xi|^{m}} \tag{5.3}\]
for all \(m\in\mathbb{N}\) and \(\bar{k}\geq(k_{0},k_{0},\cdots,k_{0})\) when we fix \(\theta\) such that \(\theta\leq\theta_{0}\). Here, \(\theta_{0}\) is determined by the \(M\) in (5.2), and \(k_{0}\) is chosen according to the \(M\) in (5.2) and the sufficiently small \(\theta\) satisfying \(\theta\leq\theta_{0}\). Furthermore, by the mean-value theorem with (5.1), we have
\[\left|\widehat{v_{I_{\theta}}^{\bar{k},(\bar{\ell})}}(\xi)\right|\leq C|\xi|2^ {-\gamma_{j}\cdot\bar{k}}. \tag{5.4}\]
Therefore, by the same argument in the proof of (3.1) combined with (5.3) and (5.4), we have
\[\int\left|v_{I_{\theta}}^{\bar{k},(\bar{\ell})}(x-y)-v_{I_{\theta}}^{\bar{k},( \bar{\ell})}(x)\right|\mathrm{d}x\leq C2^{-\gamma\cdot\bar{k}}|y|^{\delta} \tag{5.5}\]
for all \(y\in\mathbb{R}\) and \(\bar{k}\geq(k_{0},k_{0}\cdots,k_{0})\). On the other hand, using Proposition 7.2 of [5], we also know that
\[\int\left|v_{I_{\theta}}^{\bar{k},(\bar{\ell})}(x-y)-v_{I_{\theta}}^{\bar{k},( \bar{\ell})}(x)\right|\mathrm{d}x\leq C|y|^{\epsilon_{0}} \tag{5.6}\]
for all \(y\in\mathbb{R}\) with \(C,\epsilon_{0}>0\) independent of \(\bar{k}\), \(\bar{\ell}\) and \(I_{\theta}\). For the details, see the argument leading to (4.1).
Now, we shall consider \(\widehat{v_{I_{\theta}^{c}}^{\bar{k},(\bar{\ell})}}(\xi)\) defined by
\[\widehat{v_{I_{\theta}^{c}}^{\bar{k},(\bar{\ell})}}(\xi):=\widehat{v^{\bar{k},(\bar{\ell})}}(\xi)-\widehat{v_{I_{\theta}}^{\bar{k},(\bar{\ell})}}(\xi)\]
for those \(\bar{\ell}\)'s for which
\[\frac{k_{1}}{d}n_{j}^{(a_{1})}+\cdots+\frac{k_{\alpha}}{d}n_{j}^{(a_{\alpha})}+ \frac{\ell_{1}}{d}n_{j}^{(b_{1})}+\cdots+\frac{\ell_{\beta}}{d}n_{j}^{(b_{\beta })}=\mathfrak{q}\in S^{\bar{k}}(j)\]
For all the other \(\bar{\ell}\)'s, we define the Fourier multiplier to be zero. By the sublevel set estimate, there exists \(\delta^{\prime}>0\) such that
\[\left|I_{\theta}^{c}\right|=\left|\{\mathfrak{t}\in[\tfrac{1}{2},4]^{n}:|\nabla\tilde{P}_{0}(\mathfrak{t})|\leq 2^{-\theta(\gamma_{j}\cdot\bar{k})}\}\right|\leq C(2^{-\theta(\gamma_{j}\cdot\bar{k})})^{\delta^{\prime}}. \tag{5.7}\]
So, due to (5.7), we have
\[\int\left|v_{I_{\theta}^{c}}^{\bar{k},(\bar{\ell})}(x)\right|\mathrm{d}x \leq\int\Big{|}\int_{I_{\theta}^{c}}\int e^{2\pi i(x-\tilde{P}(\mathfrak{t}))\xi}d\xi\,d\mathfrak{t}\Big{|}dx+\int\Big{|}\int_{I_{\theta}^{c}}\int e^{2\pi i(x-\tilde{P}_{0}(\mathfrak{t}))\xi}d\xi\,d\mathfrak{t}\Big{|}dx\] \[=|I_{\theta}^{c}|\Big{(}\int|\delta(x)|dx+\int|\delta(x)|dx\Big{)}\] \[\leq C(2^{-\theta(\gamma_{j}\cdot\bar{k})})^{\delta^{\prime}} \tag{5.8}\]
where \(\delta(x)\) is the Dirac delta distribution. On the other hand, by Proposition 7.2 of [5] again, one can obtain that
\[\int\left|v_{I_{\theta}^{c}}^{\bar{k},(\bar{\ell})}(x-y)-v_{I_{\theta}^{c}}^{ \bar{k},(\bar{\ell})}(x)\right|\mathrm{d}x\leq C|y|^{\epsilon_{0}} \tag{5.9}\]
for all \(y\in\mathbb{R}\) with \(C,\epsilon_{0}>0\) independent of \(\bar{k}\) and \(\bar{\ell}\).
Therefore, by (5.8) and (5.9) we have
\[\int\left|v_{I_{\theta}^{c}}^{\bar{k},(\bar{\ell})}(x-y)-v_{I_{\theta}^{c}}^{ \bar{k},(\bar{\ell})}(x)\right|\mathrm{d}x\leq C2^{-\gamma\cdot\bar{k}}|y|^{ \delta}\text{ for all }y\in\mathbb{R}. \tag{5.10}\]
Finally, (5.5), (5.6) and (5.10) imply the desired result.
|
2309.16314 | A Primer on Bayesian Neural Networks: Review and Debates | Neural networks have achieved remarkable performance across various problem
domains, but their widespread applicability is hindered by inherent limitations
such as overconfidence in predictions, lack of interpretability, and
vulnerability to adversarial attacks. To address these challenges, Bayesian
neural networks (BNNs) have emerged as a compelling extension of conventional
neural networks, integrating uncertainty estimation into their predictive
capabilities.
This comprehensive primer presents a systematic introduction to the
fundamental concepts of neural networks and Bayesian inference, elucidating
their synergistic integration for the development of BNNs. The target audience
comprises statisticians with a potential background in Bayesian methods but
lacking deep learning expertise, as well as machine learners proficient in deep
neural networks but with limited exposure to Bayesian statistics. We provide an
overview of commonly employed priors, examining their impact on model behavior
and performance. Additionally, we delve into the practical considerations
associated with training and inference in BNNs.
Furthermore, we explore advanced topics within the realm of BNN research,
acknowledging the existence of ongoing debates and controversies. By offering
insights into cutting-edge developments, this primer not only equips
researchers and practitioners with a solid foundation in BNNs, but also
illuminates the potential applications of this dynamic field. As a valuable
resource, it fosters an understanding of BNNs and their promising prospects,
facilitating further advancements in the pursuit of knowledge and innovation. | Julyan Arbel, Konstantinos Pitas, Mariia Vladimirova, Vincent Fortuin | 2023-09-28T10:09:15Z | http://arxiv.org/abs/2309.16314v1 | # A Primer on Bayesian Neural Networks: Review and Debates
###### Abstract
Neural networks have achieved remarkable performance across various problem domains, but their widespread applicability is hindered by inherent limitations such as overconfidence in predictions, lack of interpretability, and vulnerability to adversarial attacks. To address these challenges, Bayesian neural networks (BNNs) have emerged as a compelling extension of conventional neural networks, integrating uncertainty estimation into their predictive capabilities.
This comprehensive primer presents a systematic introduction to the fundamental concepts of neural networks and Bayesian inference, elucidating their synergistic integration for the development of BNNs. The target audience comprises statisticians with a potential background in Bayesian methods but lacking deep learning expertise, as well as machine learners proficient in deep neural networks but with limited exposure to Bayesian statistics. We provide an overview of commonly employed priors, examining their impact on model behavior and performance. Additionally, we delve into the practical considerations associated with training and inference in BNNs.
Furthermore, we explore advanced topics within the realm of BNN research, acknowledging the existence of ongoing debates and controversies. By offering insights into cutting-edge developments, this primer not only equips researchers and practitioners with a solid foundation in BNNs, but also illuminates the potential applications of this dynamic field. As a valuable resource, it fosters an understanding of BNNs and their promising prospects, facilitating further advancements in the pursuit of knowledge and innovation.
###### Contents
* 1 Introduction
* 2 Neural networks and statistical learning theory
* 2.1 Choice of architecture
* 2.2 Expressiveness
* 2.3 Inductive bias
* 2.4 Generalization and overfitting
* 2.5 Limitations of the frequentist approach to deep learning
* 3 Bayesian machine learning
* 3.1 Bayesian paradigm
* 3.2 Priors
* 3.3 Computational methods
* 3.3.1 Variational inference
* 3.3.2 Laplace approximation
* 3.3.3 Sampling methods
* 3.4 Model selection
* 4 What are Bayesian neural networks?
* 4.1 Priors
* 4.1.1 Weight priors (parameter-space)
* 4.1.2 Unit priors (function-space)
* 4.1.3 Regularization
* 4.2 Approximate inference for Bayesian neural networks
* 4.2.1 Variational inference
* 4.2.2 Laplace approximation
* 4.2.3 Sampling methods
* 5 To be Bayesian or not to be?
* 5.1 Frequentist and Bayesian connections
* 5.1.1 Priors and initialization schemes
* 5.1.2 Posteriors and optimization methods
* 5.1.3 Cold and tempered posteriors
* 5.1.4 Deep ensembles
* 5.2 Performance certificates
* 5.2.1 Frequentist validation of the posterior
* 5.2.2 Posterior concentration and generalization to out-of-sample data
* 5.2.3 Marginal likelihood and generalization
* 5.3 Benchmarking
* 5.3.1 Evaluation datasets
* 5.3.2 Evaluation metrics-tasks
* 5.3.3 Output interpretation
* 6 Conclusion
## 1 Introduction
**Motivation.** Technological advancements have sparked an increased interest in the development of models capable of acquiring knowledge and performing tasks that resemble human abilities. These include tasks such as object recognition and scene segmentation in images, speech recognition in audio signals, and natural language understanding. They are commonly referred to as artificial intelligence (AI) tasks. AI systems possess the remarkable ability to mimic human thinking and behavior.
Machine learning, a subset of artificial intelligence, encompasses a fundamental aspect of AI--learning the underlying relationships within data and making decisions without explicit instructions. Machine learning algorithms autonomously learn and enhance their performance by leveraging their output. These algorithms do not rely on explicit instructions to generate desired outcomes; instead, they learn by analyzing accessible datasets and comparing them with examples of the desired output.
Deep learning, a specialized field within machine learning, focuses on algorithms inspired by the structure and functioning of the human brain, known as (artificial) neural networks. Deep learning concepts enable machines to acquire human-like skills. Through deep learning, computer models can be trained to perform classification tasks using inputs such as images, text, or sound. Deep learning has gained popularity due to its ability to achieve state-of-the-art performance. The training of these models involves utilizing large labeled datasets in conjunction with neural network architectures.
Neural networks, or NNs, are particularly effective deep learning models that can solve a wide range of problems. They are now widely employed across various domains. For instance, they can facilitate translation between languages, guide users in banking applications, or even generate artwork in the style of famous artists based on simple photographs. However, neural networks are often regarded as black boxes due to the lack of intuitive interpretations that would allow us to trace the flow of information from input to output.
In certain industries, the acceptance of AI algorithms necessitates explanations. This requirement may stem from regulations encompassed in the concept of AI safety or from human factors. In the field of medical diagnosis and treatment, decisions based on AI algorithms can have life-changing consequences. While AI algorithms excel at detecting various health conditions by identifying minute details imperceptible to the human eye, doctors may hesitate to rely on this technology if they cannot explain the rationale behind its outcomes.
In the realm of finance, AI algorithms can assist in tasks such as assigning credit scores, evaluating insurance claims, and optimizing investment portfolios, among other applications. However, if these algorithms produce biased outputs, it can cause reputational damage and even legal implications. Consequently, there is a pressing need for interpretability, robustness, and uncertainty estimation in AI systems.
The exceptional performance of deep learning models has fueled research efforts aimed at comprehending the mechanisms that drive their effectiveness. Nevertheless, these models remain highly opaque, as they lack the ability to provide human-understandable accounts of their reasoning processes or explanations. Understanding neural networks can significantly contribute to the development of safe and explainable AI algorithms that could be widely deployed to improve people's lives. The Bayesian perspective is often viewed as a pathway toward trustworthy AI. It employs probabilistic theory and approximation methods to express and quantify uncertain
ties inherent in the models. However, the practical implementation of Bayesian approaches for uncertainty quantification in deep learning models often incurs significant computational costs and necessitates the use of improved approximation techniques.
**Objectives and outline.** The recent surge of research interest in Bayesian deep learning has spawned several notable review articles that contribute valuable insights to the field. For instance, Jospin et al. (2022) present a useful contribution by offering practical implementations in Python, enhancing the accessibility of Bayesian deep learning methodologies. Another significant review by Abdar et al. (2021) provides a comprehensive assessment of uncertainty quantification techniques in deep learning, encompassing both frequentist and Bayesian approaches. This thorough examination serves as an essential resource for researchers seeking to grasp the breadth of available methods. While existing literature delves into various aspects of Bayesian neural networks, Goan and Fookes (2020) specifically focuses on inference algorithms within BNNs. However, the comprehensive coverage of prior modeling, a critical component of BNNs, is not addressed in this review. Conversely, Fortuin (2022) presents a meticulous examination of priors utilized in diverse Bayesian deep learning models, encompassing BNNs, deep Gaussian processes, and variational auto-encoders (VAEs). This review offers valuable insights into the selection and impact of priors across different Bayesian modeling paradigms.
In contrast to these works, our objective is to offer an accessible and comprehensive guide to Bayesian neural networks, catering to both statisticians and machine learning practitioners. The target audience comprises statisticians with a potential background in Bayesian methods but lacking deep learning expertise, as well as machine learners proficient in deep neural networks but with limited exposure to Bayesian statistics. Assuming no prior familiarity with either deep learning or Bayesian statistics, we provide succinct explanations of both domains in Section 2 and Section 3, respectively. These sections serve as concise reminders, enabling readers to grasp the foundations of each field. Subsequently, in Section 4, we delve into Bayesian neural networks, elucidating their core concepts, with a specific emphasis on frequently employed priors and inference techniques. By addressing these fundamental aspects, we equip the reader with a solid understanding of BNNs and their associated methodologies. Furthermore, in Section 5, we analyze the principal challenges encountered by contemporary Bayesian neural networks. This exploration provides readers with a comprehensive overview of the obstacles inherent to this field, highlighting areas for further investigation and improvement. Ultimately, Section 6 concludes our guide, summarizing the key points and emphasizing the significance of Bayesian neural networks. By offering this cohesive resource, our goal is to empower statisticians and machine learners alike, fostering a deeper understanding of BNNs and facilitating their broader application in practice.1
Footnote 1: We provide an up-to-date reading list of research articles related to Bayesian neural networks at this link: [https://github.com/konstantinos-p/Bayesian-Neural-Networks-Reading-List](https://github.com/konstantinos-p/Bayesian-Neural-Networks-Reading-List).
## 2 Neural networks and statistical learning theory
The inception of neural network models can be traced back to the 1950s, when the first model, known as the _perceptron_, was constructed (Rosenblatt, 1958). Subsequently, significant advancements have taken place in this field, notably the discovery of the _backpropagation_ algorithm in the 1980s (Rumelhart et al., 1986). This algorithm revolutionized neural networks by enabling efficient training through gradient-descent-based methods. However, the current era of profound progress in deep learning commenced in 2012 with a notable milestone: convolutional neural networks, when trained on graphics processing units (GPUs) for the first time, achieved exceptional performance on the ImageNet task (Krizhevsky et al., 2012). This breakthrough marked a significant turning point and propelled the rapid advancement of deep learning methodologies.
**Definition and notations.**_Neural networks_ are hierarchical models made of layers: an input, several hidden layers, and an output; see Figure 1. The number of hidden layers \(L\) is called _depth_. Each layer following the input layer consists of units which are linear combinations of previous layer units transformed by a nonlinear function, often referred to as the nonlinearity or _activation function_, denoted by \(\phi:\mathbb{R}\to\mathbb{R}\). Given an input \(\mathbf{x}\in\mathbb{R}^{N}\) (for instance an image made of \(N\) pixels), the \(\ell\)-th hidden layer consists of two vectors whose size is called the _width_ of the layer, denoted by \(H_{\ell}\), where \(\ell=1,\ldots,L\). The vector of units before application of the non-linearity is called _pre-nonlinearity_ (or _pre-activation_), and is denoted by \(\mathbf{g}^{(\ell)}=\mathbf{g}^{(\ell)}(\mathbf{x})\), while the vector obtained after element-wise application of \(\phi\) is called _post-nonlinearity_ (or _post-activation_) and is denoted by \(\mathbf{h}^{(\ell)}=\mathbf{h}^{(\ell)}(\mathbf{x})\). More specifically, these vectors are defined as
\[\mathbf{g}^{(\ell)}(\mathbf{x})=\mathbf{w}^{(\ell)}\mathbf{h}^{(\ell-1)}(\mathbf{x}),\quad\mathbf{h}^ {(\ell)}(\mathbf{x})=\phi(\mathbf{g}^{(\ell)}(\mathbf{x})), \tag{1}\]
where \(\mathbf{w}^{(\ell)}\) is a weight matrix of dimension \(H_{\ell}\times H_{\ell-1}\) including a bias vector, with the convention that \(H_{0}=N\), the input dimension.
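To make the recursion in Eq. (1) concrete, here is a minimal NumPy sketch of the forward pass of a fully-connected network. It is only an illustration: the ReLU activation, the omission of bias terms, and the layer widths are our own choices, not prescribed by the text.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, weights):
    """Forward pass following Eq. (1): g = W h_prev, h = phi(g), with h^(0) = x."""
    h = x
    for l, W in enumerate(weights):
        g = W @ h                                   # pre-nonlinearity g^(l)
        h = relu(g) if l < len(weights) - 1 else g  # keep the output layer linear
    return h

# Illustrative sizes: input N = 4, hidden width H_1 = 8, output width H_2 = 3
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 4)), rng.normal(size=(3, 8))]
x = rng.normal(size=4)
print(forward(x, weights))  # the network output f_w(x), a vector in R^3
```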
**Supervised learning.** We denote the learning sample \((X,Y)=\{(\mathbf{x}_{i},\mathbf{y}_{i})\}_{i=1}^{n}\in(\mathcal{X}\times\mathcal{Y})^{n}\), which contains \(n\) input-output pairs. Observations \((X,Y)\) are assumed to be randomly sampled from a distribution \(\mathfrak{D}\). Thus, we write \((X,Y)\sim\mathfrak{D}^{n}\) for the i.i.d. observation of \(n\) elements. We define the test set \((X_{\text{test}},Y_{\text{test}})\) of \(n_{\text{test}}\) samples in a similar way to that of the learning sample. We consider some loss function \(\mathcal{L}:\mathcal{F}\times\mathcal{X}\times\mathcal{Y}\to\mathbb{R}\), where \(\mathcal{F}\) is a set of predictors \(f:\mathcal{X}\to\mathcal{Y}\). We also denote the empirical risk \(\mathcal{R}_{n}^{\mathcal{L}}(f)=(1/n)\sum_{i}\mathcal{L}(f(\mathbf{x}_{i}),\mathbf{y}_{i})\) and the risk
\[\mathcal{R}_{\mathfrak{D}}^{\mathcal{L}}(f)=\mathbf{E}_{(\mathbf{x},\mathbf{y})\sim\mathfrak{D}}\mathcal{L}(f(\mathbf{x}),\mathbf{y}). \tag{2}\]
Figure 1: Simple fully-connected neural network architecture.
The minimizer of \(\mathcal{R}^{\mathcal{L}}_{\mathfrak{D}}\) is called the _Bayes optimal predictor_, \(f^{*}=\arg\min_{f:\mathcal{X}\rightarrow\mathcal{Y}}\mathcal{R}^{\mathcal{L}}_{\mathfrak{D}}(f)\). The minimal risk \(\mathcal{R}^{\mathcal{L}}_{\mathfrak{D}}(f^{*})\), called the _Bayes risk_, is achieved by the Bayes optimal predictor \(f^{*}\).
Returning to neural networks, we denote their vectorized weights by \(\mathbf{w}\in\mathbb{R}^{d}\) with \(d=\sum_{\ell=1}^{L}H_{\ell-1}H_{\ell}\), such that \(f(\mathbf{x})=f_{\mathbf{w}}(\mathbf{x})\). The goal is to find the optimal weights such that the neural network output \(\mathbf{y}_{i}^{*}\) for input \(\mathbf{x}_{i}\) is the _closest_ to the given label \(\mathbf{y}_{i}\), as measured by a loss function \(\mathcal{L}\). In the regression problem, for example, the loss function \(\mathcal{L}\) could be the mean-squared error \(\|\mathbf{y}_{i}^{*}-\mathbf{y}_{i}\|^{2}\). The optimization problem is then to minimize the empirical risk:
\[\hat{\mathbf{w}}=\operatorname*{arg\,min}_{\mathbf{w}}\mathcal{R}^{\mathcal{L}}_{n}(f_ {\mathbf{w}}).\]
With optimal weights \(\hat{\mathbf{w}}\), the empirical risk \(\mathcal{R}^{\mathcal{L}}_{n}(f_{\hat{\mathbf{w}}})\) is small and should be close to the Bayes risk \(\mathcal{R}^{\mathcal{L}}_{\mathfrak{D}}(f^{*})\).
**Training.** The main workhorse of neural network training is gradient-based optimization:
\[\mathbf{w}\leftarrow\mathbf{w}-\eta\,\nabla_{\mathbf{w}}\mathcal{R}^{\mathcal{L}}_{n}(f_{ \mathbf{w}}), \tag{3}\]
where \(\eta>0\) is a _step size_, or _learning rate_, and the gradients are computed as products of gradients between each layer _from right to left_, a procedure called _backpropagation_(Rumelhart et al., 1986), thus making use of the chain rule and efficient implementations for matrix-vector products. For large datasets, this optimization is often replaced by stochastic gradient descent (SGD), where gradients are approximated on some randomly chosen subsets called _batches_(Robbins and Monro, 1951). In this case, it requires a careful choice of the learning rate parameter. For a survey on different optimization methods, see, for example, Sun et al. (2019a). For the optimization procedure, another important aspect is how to choose the weight initialization; we discuss this in detail in Section 5.1.1.
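As an illustration of the gradient update (3) with backpropagation and mini-batches, here is a minimal PyTorch sketch. The toy data, architecture, batch size, learning rate and number of steps are arbitrary choices made for the example; in practice one would typically use a built-in optimizer such as SGD.

```python
import torch

# Toy regression data: n = 256 samples, input dimension N = 10
torch.manual_seed(0)
X = torch.randn(256, 10)
y = torch.sin(X.sum(dim=1, keepdim=True))       # arbitrary smooth target

# One-hidden-layer network f_w and mean-squared-error loss
model = torch.nn.Sequential(
    torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1)
)
loss_fn = torch.nn.MSELoss()
eta = 1e-2                                      # learning rate

for step in range(200):
    idx = torch.randint(0, 256, (32,))          # random mini-batch (SGD)
    loss = loss_fn(model(X[idx]), y[idx])       # empirical risk on the batch
    model.zero_grad()
    loss.backward()                             # backpropagation: gradients w.r.t. w
    with torch.no_grad():
        for w in model.parameters():
            w -= eta * w.grad                   # gradient step, Eq. (3)
print(float(loss))                              # loss on the last mini-batch
```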
### Choice of architecture
With the progress in deep learning, different neural network architectures have been introduced to better adapt to different learning problems. Knowledge about the data allows encoding specific properties into the architecture. Depending on the architecture, this results (among other benefits) in better feature extraction, a reduced number of parameters, invariance or equivariance to certain transformations, robustness to distribution shifts and more numerically stable optimization procedures. We shortly review some important models and refer the reader to Sarker (2021) for a more in-depth overview of recent techniques.
_Convolutional neural networks_ (CNNs) are widely used in computer vision. Image data has spatial features that refer to the arrangement of pixels and their relationship. For example, we can easily identify a human's face by looking at specific features like eyes, nose, mouth, etc. CNNs were introduced to capture spatial features by using _convolutional layers_, a particular case of the fully-connected layers described above, where certain sets of parameters are shared (LeCun et al., 1989; Krizhevsky et al., 2012). Convolutional layers perform a dot product of a convolution kernel with the layer's input matrix. As the convolution kernel slides along the input matrix for the layer, the convolution operation generates a feature map of smaller dimension which serves as an input to the next layer. It introduces the concept of parameter sharing where the same kernel, or filter, is applied across different input parts to extract the relevant features from the input.
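The parameter sharing described above can be seen directly from the shapes involved. Below is a small PyTorch sketch (our own illustration; the channel counts and image size are arbitrary).

```python
import torch

# A 3x3 convolution mapping 3 input channels to 16 feature maps.
conv = torch.nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
x = torch.randn(1, 3, 32, 32)            # one 32x32 RGB image
out = conv(x)
print(out.shape)                         # torch.Size([1, 16, 30, 30]): smaller feature maps

# Parameter sharing: the same 16 kernels slide over every spatial position,
# so the layer has only 16*3*3*3 + 16 = 448 parameters, far fewer than a
# fully-connected map between tensors of these sizes would require.
print(sum(p.numel() for p in conv.parameters()))   # 448
```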
_Recurrent neural networks_ (RNNs) are designed to save the output of a layer by adding it back to the input (Rumelhart et al., 1986; Hochreiter and Schmidhuber, 1997). During training, the recurrent layer has some information from the previous time-step. Such neural networks are advantageous for sequential data where each sample can be assumed to be dependent on preceding ones.
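A minimal NumPy sketch of a vanilla recurrent cell illustrates how a layer's output is fed back as part of its input at the next time step. The dimensions and the tanh nonlinearity are illustrative choices; LSTMs add gating mechanisms on top of this basic idea.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, b):
    """One recurrent step: the new state mixes the current input with the
    previous state, so the state at time t depends on all earlier inputs."""
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

rng = np.random.default_rng(0)
d_in, d_hidden, T = 5, 8, 10
W_x = rng.normal(size=(d_hidden, d_in)) * 0.1
W_h = rng.normal(size=(d_hidden, d_hidden)) * 0.1
b = np.zeros(d_hidden)

h = np.zeros(d_hidden)              # initial hidden state
for t in range(T):                  # process a length-10 sequence
    x_t = rng.normal(size=d_in)
    h = rnn_step(x_t, h, W_x, W_h, b)
print(h.shape)                      # (8,)
```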
_Residual neural networks_ (ResNets) have residual blocks which add the output from the previous layer to the output of the current layer, a so-called _skip-connection_ (He et al., 2016). This allows training very deep neural networks by ensuring that deeper layers in the model can perform at least as well as the layers preceding them (He et al., 2016).
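The skip-connection itself is a one-line operation. The sketch below uses fully-connected layers inside the residual branch purely for brevity; the ResNets of He et al. (2016) use convolutional blocks.

```python
import torch

class ResidualBlock(torch.nn.Module):
    """y = x + F(x): the skip-connection adds the block input back to its output."""
    def __init__(self, width):
        super().__init__()
        self.f = torch.nn.Sequential(
            torch.nn.Linear(width, width), torch.nn.ReLU(),
            torch.nn.Linear(width, width),
        )

    def forward(self, x):
        # The identity path means the block can do no worse than passing x through.
        return x + self.f(x)

x = torch.randn(4, 64)
block = ResidualBlock(64)
print(block(x).shape)   # torch.Size([4, 64])
```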
_Transformers_ are a type of neural network architecture that is almost entirely based on the _attention mechanism_(Vaswani et al., 2017). The idea behind _attention_ is to find and focus on small, but important, parts of the input data. Transformers show better results than convolutional or residual networks on some tasks with big datasets such as image classification with JFT-300M (300M images), or English-French machine translation with WMT-2014 (36M sentences, split into a 32000 token vocabulary).
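The core attention operation can be written in a few lines. The following NumPy sketch implements scaled dot-product attention for a single head; the token count and dimension are arbitrary, and real Transformers combine many such heads with feed-forward layers.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each output is a weighted average of the
    values V, with weights given by how well the query matches each key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                     # (n_queries, n_keys)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(6, 16)) for _ in range(3))  # 6 tokens, dimension 16
print(attention(Q, K, V).shape)                         # (6, 16)
```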
An open question in deep learning is why deep neural networks (NNs) achieve state-of-the-art performance in a significant number of applications. The common belief is that neural networks' complexity and over-parametrization result in tremendous _expressive power_, beneficial _inductive bias_, flexibility to avoid _overfitting_ and, therefore, the ability to _generalize_ well. Yet, the high dimensionalities of the data and parameter spaces of these models make them challenging to understand theoretically. In the following, we review these open topics of research as well as the current scientific consensus on them.
### Expressiveness
The expressive power describes neural networks' ability to approximate functions. In the late 1980s, a line of work established a universal approximation theorem, stating that one-hidden-layer neural networks with a suitable activation function could approximate any continuous function on a compact domain, that is \(f:[0,1]^{N}\rightarrow\mathbb{R}\), to any desired accuracy (Cybenko, 1989; Funahashi, 1989; Hornik et al., 1989; Barron, 1994). The obstacle is that the size of such networks may be exponential in the input dimension \(N\), which makes them highly prone to overfitting as well as impractical, since adding extra layers in the model is often a cheaper way to increase the representational power of the neural network. More recently, Telgarsky (2016) studied which functions neural networks could represent by focusing on the choice of the architecture and showed that deeper models are more expressive. Chatziafratis et al. (2020a, 2020b) extended this result by obtaining width-depth trade-offs.
Another approach is to analyze the finite-sample expressiveness of neural networks. Zhang et al. (2017) state that as soon as the number of parameters of a network is greater than the input sample size, even a simple two-layer neural network can represent any function of the input sample. Though neural networks are theoretically expressive, the core of the learning problem lies in their complexity, and research focuses on obtaining complexity bounds.
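This finite-sample statement can be illustrated numerically. The sketch below is not the construction of Zhang et al. (2017); it simply fixes random first-layer weights of a two-layer ReLU network whose width exceeds the sample size and fits the output weights by least squares, which generically interpolates even random labels.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, width = 50, 10, 200          # 50 samples, hidden width (and parameters) > n

X = rng.normal(size=(n, N))
y = rng.normal(size=n)             # arbitrary (even random) labels

H = np.maximum(0.0, X @ rng.normal(size=(N, width)))   # random ReLU hidden features
a, *_ = np.linalg.lstsq(H, y, rcond=None)              # fit the output weights only
print(np.max(np.abs(H @ a - y)))   # essentially zero: the two-layer net interpolates the sample
```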
In general, the ability to approximate or to _express_ specific functions can be considered as an explicit _inductive bias_, which we discuss in detail in the next section.
### Inductive bias
By choosing a design and a training procedure for a model assigned to a given problem, we make some assumptions on the problem structure. These assumptions are subsumed under the term _inductive bias_2, i.e., prior preferences for specific models and problems.
Footnote 2: The term _inductive_ comes from philosophy: _inductive reasoning_ refers to _generalization_ from specific observations to a conclusion. This is a counterpoint to _deductive reasoning_, which refers to _specialization_ from general ideas to a conclusion.
**Examples.** For instance, the linear regression model is built on the assumption of a linear relationship between the target variable and the features. The knowledge that the data is of a linear nature is _embedded_ into the model. Because of this restriction, linear regression is bound to perform poorly for data where the target variable does not depend linearly on the features, see the left plot of Figure 2. This assumption of a linear relationship between the target and the features is the inductive bias of linear regression. In the \(k\)-nearest neighbours model, the inductive bias is that the prediction for any object should be computed only from the target values of the training examples closest to that object, see the right plot of Figure 2. In non-linear regression, the assumption is a particular non-linear functional form.
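The contrast between these two inductive biases can be reproduced with a few lines of scikit-learn; the simulated data below is only a stand-in for the data of Figure 2 and is meant to illustrate the idea:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(x).ravel() + 0.1 * rng.normal(size=200)   # non-linear ground truth

# The linear model can only produce straight lines (its inductive bias), while
# k-NN predicts from the target values of nearby training points (a different bias).
linear = LinearRegression().fit(x, y)
knn = KNeighborsRegressor(n_neighbors=5).fit(x, y)

print("linear R^2 on training data:", round(linear.score(x, y), 2))
print("k-NN   R^2 on training data:", round(knn.score(x, y), 2))
```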
**Importance.** The goal of a machine learning model is to derive a general rule for all elements of a domain based on a limited number of observations. In other words, we want the model to _generalize_ to data it has not seen before. Such generalization is impossible without the presence of _inductive bias_ in the model because the training sample is always _finite_. From a finite set of observations, without making any additional assumptions about the data, a general rule can be deduced in an infinite number of ways. Inductive bias is additional information about the nature of the data for the model; a way to show models _which way to think_. It allows the model to prioritize one generalization method over another. Thus, when choosing a model for a specific problem, one should choose a model whose inductive bias better matches the nature of the data and is better suited to solving that problem. Any inductive bias is introduced into a machine learning model through characteristics of the model _architecture_, the _training algorithm_, and manipulations of the _training data_.
Figure 2: Example of using the linear regression (left) and \(k\)-nearest neighbours regression (right) models on simulated data points.
**Inductive bias and training data.** One can also consider inductive bias through training data. The less data there is, the more likely the model is to choose a poor generalization method. If the training sample is small, models such as neural networks often _overfit_. For example, when classifying images of cats and dogs, a network sometimes pays attention to the background rather than to the animals themselves. People, unlike neural networks, can quickly learn to classify cats and dogs from only a dozen pictures, because they bring additional inductive bias: we know that a picture contains a background and an object, and that only the object matters for classification. A neural network, before training, knows nothing about "backgrounds" or "objects"; it is simply given different pictures and asked to learn how to distinguish them. Thus, _the smaller the training sample and the more complex the problem, the stronger the inductive bias_ that must be built into the model for successful training.
Conversely, the larger and more diverse the training set, the more knowledge about the nature of the data the model receives during training. This makes it less likely that the model chooses a "bad" generalization method that works poorly on data outside the training set. Thus, _the more data you have, the better the model will train_.
One of the tricks to increase the dataset is to artificially augment the training set by introducing distortions into the inputs, a procedure known as _data augmentation_. Suppose we are trying to classify images of objects or handwritten digits. Each time we visit a training example, we can randomly distort it, for instance, by shifting it by a few pixels, adding noise, rotating it slightly, or applying some sort of warping. This can increase the effective size of the training set and make it more likely that any given test example has a closely related training example. The data augmentation procedure is a form of inductive bias because it requires knowledge of how to construct additional data points, such as whether the object or part of the object can be rotated, zoomed, etc.
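A hypothetical augmentation pipeline might look as follows; the specific transformations and their magnitudes are illustrative choices, assuming torchvision is available:

```python
import numpy as np
from PIL import Image
from torchvision import transforms

# Illustrative pipeline: small shifts and rotations, plus an intensity perturbation.
augment = transforms.Compose([
    transforms.RandomAffine(degrees=10, translate=(0.1, 0.1)),  # shift and rotate slightly
    transforms.ColorJitter(brightness=0.2),                     # perturb pixel intensities
    transforms.ToTensor(),
])

# A stand-in 28x28 grayscale "digit"; in practice this would be a training image.
image = Image.fromarray(np.random.randint(0, 255, (28, 28), dtype=np.uint8))

# Each visit to the same training example yields a slightly different tensor.
for _ in range(3):
    print(augment(image).shape)  # torch.Size([1, 28, 28])
```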
**Inductive bias and simplicity.** The _no free lunch_ theorem states that no single learning algorithm can succeed on all possible problems (Wolpert, 1996). It is, thus, essential to enforce a form of _simplicity_ in the algorithm, typically by restricting the class of models to be learned, which may reflect prior knowledge about the problem being tackled. This is associated with _inductive bias_ which should encode the prior knowledge to seek for efficiency. In the context of neural networks, one form of simplicity is in the choice of _architecture_, such as using convolutional neural networks (LeCun et al., 1989) when learning from image data. Another example is _sparsity_, which may seek models that only rely on a few relevant variables out of many available ones and can be achieved through some regularization methods (Tibshirani, 1996).
**Inductive bias of neural network architecture.** A number of deep neural network architectures have been designed with the aim of improving the inductive bias of the corresponding predictor. Here we review two popular neural network architectures that encode useful inductive biases.
_Convolutional neural networks_ (CNNs). The inductive bias of convolutional layers (LeCun et al., 1989) is the assumption of compactness and translation invariance. A convolution filter covers, at any one position, only a compact part of the image (for example, a \(3\times 3\) pixel patch), independently of distant pixels. Moreover, the same filter is used to process the entire image (the same filter is slid over every \(3\times 3\) patch). The convolutional layer is thus designed in such a way that its inductive bias correlates well with the nature of images and the objects on them, which is why convolutional neural networks are so efficient at processing images (Krizhevsky et al., 2012). This is an example of a desired, or _explicit_, inductive bias. _What makes data efficiently learnable by fitting a huge neural network with a specific algorithm? Is there implicit inductive bias?_ Ulyanov et al. (2018) demonstrate that the output of a convolutional neural network with randomly initialized weights corresponds to a _deep image prior_, i.e., it captures non-trivial image properties _before_ training. This means that the very architecture of convolutional neural networks helps to encode information from images. Geirhos et al. (2019) show that _convolutional neural networks_ have an implicit inductive bias towards the texture of images: convolutional networks are designed in such a way that, when processing images, they pay more attention to textures than to the shapes of objects. To counter this undesirable behavior, the images from the training dataset can be augmented so that the dataset contains more images of the same shape but with different textures (Li et al., 2021). Despite the popularity of the topic, the implicit inductive bias of neural networks remains an open question due to the complexity of the models.
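Weight sharing can be demonstrated directly: correlating one filter with an image and then with a translated copy of the image produces a translated response. The following NumPy/SciPy sketch, with arbitrary sizes, illustrates this translation equivariance:

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(1)
image = rng.normal(size=(16, 16))
kernel = rng.normal(size=(3, 3))   # one 3x3 filter shared across the whole image

out = correlate2d(image, kernel, mode="valid")
shifted = np.roll(image, shift=2, axis=1)            # translate the input by 2 pixels
out_shifted = correlate2d(shifted, kernel, mode="valid")

# Away from the wrapped border, the response to the shifted image is the shifted response.
print(np.allclose(out[:, :-2], out_shifted[:, 2:]))  # True
```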
_Visual transformers_ (Dosovitskiy et al., 2021) are a type of neural network architecture that shows better results than convolutional networks on some tasks, including, for example, classification of images from the JFT-300M dataset. This dataset consists of 300 million images, while ImageNet has 1.2 million images. The visual transformer is almost entirely based on the _attention mechanism_ (Vaswani et al., 2017), so the model inherits the inductive bias of attention, which amounts to a preference for simpler functions. Like convolutions, transformers also carry the implicit inductive bias of neural networks (Morrison et al., 2021). Though there is still a lot of ongoing research on transformers, their inductive bias is much simpler than that of convolutional neural networks, as they impose fewer restrictions. Here we see confirmation that the larger the dataset at our disposal, the less inductive bias is required and the better the model can learn the task. Transformers therefore have a simple inductive bias and show state-of-the-art results in image processing, but they require a lot of data. Convolutional neural networks, on the contrary, have a strong inductive bias and perform well on smaller datasets. Recently, d'Ascoli et al. (2021) combined the transformer and convolutional architectures, introducing the ConViT model. This model processes images almost as well as transformers while requiring less training data.
### Generalization and overfitting
When we train a machine learning model, we do not just want it to learn to model the training data. We want it to _generalize_ to data it has not seen before. Fortunately, there is a way to measure an algorithm's generalization performance: we measure its performance on a held-out test set, consisting of examples it has not seen before. If an algorithm works well on the training set but fails to generalize, we say it suffers from _overfitting_. Modern machine learning systems based on deep neural networks are usually over-parameterized, i.e., the number of parameters in the model is much larger than the size of the training data, which makes these systems prone to overfitting.
**Classical regime.** Let us randomly divide the original dataset into a train, validation and test set. The model is trained by optimizing the training error computed on the train set, then its
performance is checked by computing the validation error on the validation set. After tuning any hyperparameters against the validation error, the model (or models) is evaluated one final time on the test set.
During the training procedure, the model can suffer from overfitting and underfitting (see Figure 3 for an illustration), which can be described in terms of training and testing errors.
_Overfitting_ is a negative phenomenon that occurs when a learning algorithm generates predictions that fit too closely or exactly to a particular dataset and are therefore not suitable for applying the algorithm to additional data or future observations. In this case, the training error is low but the error computed on a test set is high. The model finds dependencies in the train set which do not hold in the test set. As a result, the model has _high variance_, a problem caused by being highly sensitive to small deviations in the training set.
The opposite of overfitting is _underfitting_, in which the learning algorithm does not achieve a sufficiently small average error on the training set. Underfitting occurs when insufficiently complex models are used or the training is stopped too early. In this case, the error is high for both the train and test sets. As a result, the model has _high bias_, an error arising from incorrect assumptions in the learning algorithm.
The goal is to find the best strategy to reduce overfitting and improve generalization, or, in other words, to reduce the trained model's bias and variance. Ensembles can be used to mitigate high variance and high bias. For example, _boosting_ several high-bias models can yield a model with reduced bias, while _bagging_ combines several low-bias models into a model with reduced variance. In general, however, reducing one of these adverse effects leads to an increase in the other. This conflict in an attempt to simultaneously minimize bias and variance is called the _bias-variance trade-off_. The trade-off is achieved at the minimum of the test error, see the classical regime region in Figure 3.
**Modern regime.** In the past few years, it was shown that when increasing the model size beyond the number of training examples, the model's test error can start _decreasing again_ after reaching the interpolation peak, see Figure 4. This phenomenon is called _double-descent_ by Belkin et al. (2019) who demonstrated it for several machine learning models, including a two-layer neural network. Nakkiran et al. (2021) extensively study this double-descent phenomenon for deep neural network models and show the double-descent phenomenon occurs when varying the width of the model or the number of iterations during the optimization. Moreover, the double-descent phenomenon can be observed as a function of dataset size, where more data
Figure 3: Examples of underfitting, optimum solution, and overfitting in a toy classification problem. The green dots and violet squares represent two classes. The lines represent different models that classify the data. The left plot shows the result of using a model that is too simple or underfitted for the presented dataset, while the right plot shows an overfitted model.
sometimes leads to worse test performance. It is not yet fully understood why this phenomenon occurs in machine learning models and which inductive biases are responsible for it. However, it is important to take this aspect into account when choosing strategies to improve generalization.
**Strategies.** One reason for overfitting is the lack of training data, so that the learned distribution does not mirror the real underlying distribution. Collecting data from all possible parts of the domain to train machine learning models is prohibitively expensive, and often impossible. Therefore, enhancing the generalization ability of models is vital in both industry and academia. _Data augmentation methods_, discussed above in the context of inductive bias, extract more information from the original dataset through augmentations and thus help to improve generalization.
Many strategies for increasing generalization performance focus on the model's architecture itself. Regularization methods are used to encourage a lower complexity of the model. Functional solutions such as dropout regularization (Srivastava et al., 2014), batch normalization (Ioffe and Szegedy, 2015), transfer learning (Weiss et al., 2016), and pretraining (Erhan et al., 2010) have been developed to try to adapt deep learning to applications on smaller datasets. Another approach is to treat the number of training epochs as a hyperparameter and to stop training if the performance of the model on a held-out validation set starts to degrade, e.g., the loss begins to increase or the accuracy begins to decrease. This procedure is called _early stopping_.
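A minimal sketch of early stopping on a toy one-parameter regression problem, assuming plain gradient descent on the training loss and a patience of five epochs (all values are illustrative), could read:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy setup: fit y = 2x by gradient descent while monitoring a held-out validation set.
x_train, x_val = rng.normal(size=50), rng.normal(size=20)
y_train = 2 * x_train + 0.1 * rng.normal(size=50)
y_val = 2 * x_val + 0.1 * rng.normal(size=20)

w, lr, patience = 0.0, 0.01, 5
best_w, best_val, bad_epochs = w, np.inf, 0
for epoch in range(1000):
    grad = -2 * np.mean((y_train - w * x_train) * x_train)  # gradient of the training MSE
    w -= lr * grad
    val_loss = np.mean((y_val - w * x_val) ** 2)             # monitor validation error
    if val_loss < best_val:
        best_val, best_w, bad_epochs = val_loss, w, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:                            # early stopping criterion
            break
print(f"finished at epoch {epoch}, best w = {best_w:.3f}")
```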
Though _explicit regularization_ techniques are known to improve generalization, their absence does not imply poor generalization performance for deep learning models. Indeed, Zhang et al. (2017) argue that neural networks have _implicit regularizations_; for instance, stochastic gradient descent tends to converge to small-norm solutions. The early stopping procedure can also be viewed as an _implicit regularization_ method, as it implicitly forces the use of a smaller network with less _capacity_ (Zhang et al., 2017, 2021).
**Generalization bounds.** We often want to make the discussion of training, validation, and testing sets formal, so as to ensure that our neural network will work well on new data with high probability. We are thus interested in bounding, with high probability, the risk \(\mathcal{R}^{\mathcal{L}}_{\mathfrak{D}}(f)=\mathbf{E}_{(\mathbf{x},\mathbf{y})\sim \mathfrak{D}}\mathcal{L}(f(\mathbf{x}),\mathbf{y})\), where \(\mathfrak{D}\) denotes the underlying data distribution.
The most common way of bounding the above in the context of deep neural networks is by use of a test set (Langford, 2005; Kaariainen and Langford, 2005). One first trains a predictor
Figure 4: Illustration of the double-descent phenomenon.
using a training set \(\mathcal{D}_{\text{train}}\), and then computes a test risk \(\mathcal{R}^{\mathcal{L}}_{\mathcal{D}_{\text{test}}}(f)\). For \(n_{\text{test}}\) test samples, and in the classification setting, this can readily be turned into a bound on the risk \(\mathcal{R}^{\mathcal{L}}_{\mathfrak{D}}(f)\), using a tail bound on the corresponding binomial distribution (Langford, 2005). However, this approach has some shortcomings. For one, it requires a significant number of samples \(n_{\text{test}}\). This can be a problem in that these samples cannot be used for training, possibly hindering the performance of the deep network. At the same time, for a number of fields such as healthcare, the cost of obtaining test samples can be prohibitively high (Davenport and Kalakota, 2019). Finally, even though we can prove that the true risk will be low, we do not get any information about the reason _why_ the classifier performs well in the first place.
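As a sketch of how such a binomial tail bound can be computed in practice (in the spirit of Langford, 2005), the following function returns an exact, Clopper-Pearson-style upper confidence bound on the true error rate from the number of mistakes on a held-out test set; the function name and the chosen confidence level are illustrative:

```python
from scipy.stats import beta

def test_set_error_bound(n_errors: int, n_test: int, delta: float = 0.05) -> float:
    """Upper confidence bound on the true error rate from test-set mistakes.

    With probability at least 1 - delta over the draw of the test set,
    the true error rate is below the returned value (binomial tail bound).
    """
    if n_errors == n_test:
        return 1.0
    return float(beta.ppf(1.0 - delta, n_errors + 1, n_test - n_errors))

# e.g. 30 mistakes on 1000 held-out samples
print(test_set_error_bound(n_errors=30, n_test=1000))  # roughly 0.04
```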
As such, researchers often use the empirical risk (on the training set) together with the _complexity_(Mohri et al., 2018) of the classifier to derive bounds roughly of the form
\[\mathcal{R}^{\mathcal{L}}_{\mathfrak{D}}(f)\leq\mathcal{R}^{\mathcal{L}}_{ \mathcal{D}_{\text{train}}}(f)+\text{complexity}.\]
Intuitively, the more complex the classifier, the more prone it is to simply memorize the training data rather than learn discriminative patterns. This leads to a high true risk. Traditional data-independent complexity measures such as the Rademacher complexity (Mohri et al., 2018) and the VC-dimension (Blumer et al., 1989) are loose for deep neural networks. This is because, intuitively, they make a single complexity estimate for the neural network over all possible input datasets. They are thus pessimistic, as a neural network could memorize one dataset (which is difficult) but learn patterns that generalize on another dataset (which might be easy).
Based on the above results, researchers focused on complexity measures which are data-dependent (Golowich et al., 2017; Arora et al., 2018; Neyshabur et al., 2017; Sokolic et al., 2016; Bartlett et al., 2017; Dziugaite and Roy, 2017). This means that they assess the complexity of a deep neural network based on the specific instantiation of the weights that we inferred for a given dataset. The tightest data-dependent generalization bounds are currently PAC-Bayes generalization bounds (McAllester, 1999; Germain et al., 2016; Dziugaite and Roy, 2017; Dziugaite et al., 2021). Contrary to the VC-dimension or the Rademacher complexity, these bounds work for stochastic neural networks (which are also the topic of this review). They can be roughly seen as bounding the mutual information between the training set and the deep neural network weights. The main complexity quantity of interest is typically the Kullback-Leibler (KL) divergence between a prior and a posterior distribution over the deep neural network weights (Dziugaite and Roy, 2017; McAllester, 1999).
### Limitations of the frequentist approach to deep learning
Although deep learning models have been largely used in many research areas, such as image analysis (Krizhevsky et al., 2012), signal processing (Graves et al., 2013), or reinforcement learning (Silver et al., 2016), their safety-critical real-world applications remain limited. Here we identify a number of limitations of the frequentist approach to deep learning:
* miscalibrated and/or overconfident uncertainty estimates (Minderer et al., 2021);
* non-robustness to _out-of-distribution_ samples (Lee et al., 2018; Mitros and Mac Namee, 2019; Hein et al., 2019; Ashukha et al., 2020), and sensitivity to _domain shifts_(Ovadia et al., 2019);
* sensitivity to adversarial attacks by malicious actors (Moosavi-Dezfooli et al., 2016, 2017; Wilson et al., 2016);
* poor interpretability of a deep neural networks' inference model (Sundararajan et al., 2017; Selvaraju et al., 2017; Lim et al., 2021; Koh and Liang, 2017);
* poor understanding of generalization, over-reliance on validation sets (McAllester, 1999; Dziugaite and Roy, 2017).
**Uncertainty estimates.**
We typically distinguish between two types of uncertainty (Der Kiureghian and Ditlevsen, 2009). _Data (aleatoric) uncertainty_ captures noise inherent in the observations. This could be for example sensor noise or motion noise, resulting in uncertainty that cannot be reduced even if more data were to be collected. _Model (epistemic) uncertainty_ derives from the uncertainty on the model parameters, i.e., the weights in case of a neural network (Blundell et al., 2015). This uncertainty captures our ignorance about which model generated our collected data. While aleatoric uncertainty remains even for an infinite number of samples, model uncertainty can be explained away given enough data. For an overview on methods for estimating the uncertainty in deep neural networks see Gal (2016); Gawlikowski et al. (2021).
While NNs often achieve high train and test accuracy, the uncertainty of their predictions is miscalibrated (Guo et al., 2017). In particular, in the classification setting, interpreting softmax outputs as per-class probabilities is not well-founded from a statistical perspective. The Bayesian paradigm, by contrast, provides well-founded and well-calibrated uncertainty estimates (Kristiadi et al., 2020), by dealing with stochastic predictors and applying Bayes' rule consistently.
**Distribution shift.**
Traditional machine learning methods are generally built on the _iid assumption_ that training and testing data are independent and identically distributed. However, the iid assumption can hardly be satisfied in real scenarios, resulting in uncertainty problems with _in-domain_, _out-of-domain_ samples, and _domain shifts_. _In-domain_ uncertainty is measured on data taken from the training data distribution, i.e. data from the same domain. _Out-of-domain_ uncertainty of the model is measured on data that does not follow the same distribution as the training dataset. Out-of-domain data can include data naturally corrupted with noise or relevant transformations, as well as data corrupted adversarially. Under corruption, the test domain and the training domain differ significantly. However, the model should still not be overconfident in its predictions.
Hein et al. (2019) demonstrate that rectified linear unit (ReLU) networks are always overconfident on out-of-distribution examples: scaling a training point \(\mathbf{x}\in\mathbb{R}^{N}\) with a scalar \(a\) yields predictions of arbitrarily high confidence in the limit \(a\to\infty\). Modas et al. (2021) and Fawzi et al. (2016) show that neural network classifiers can suffer reduced accuracy in the presence of common corruptions. A common remedy is training on appropriately designed data transformations (Modas et al., 2021). However, the Bayesian paradigm should again be beneficial: the resulting _Bayesian_ neural networks are expected to report more uncertainty in regions far from the training data, i.e., as images become gradually more corrupted and diverge from the training data.
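The scaling argument is easy to visualize with a toy example. The sketch below builds a randomly initialized two-layer ReLU classifier (the dimensions are arbitrary and purely illustrative) and scales an input by increasingly large factors; the softmax confidence typically approaches one:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(64, 10)), rng.normal(size=64)   # random two-layer ReLU net
W2, b2 = rng.normal(size=(3, 64)), rng.normal(size=3)     # 3-class output head

def softmax_confidence(x):
    h = np.maximum(W1 @ x + b1, 0.0)        # ReLU hidden layer
    logits = W2 @ h + b2
    p = np.exp(logits - logits.max())
    return (p / p.sum()).max()              # confidence of the predicted class

x = rng.normal(size=10)
for a in [1, 10, 100, 1000]:
    print(a, softmax_confidence(a * x))     # confidence typically grows towards 1.0
```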
**Adversarial robustness.**
As previously mentioned, modern image classifiers achieve high accuracy on iid test sets but are not robust to small, adversarially-chosen perturbations of their inputs. Given an image \(\mathbf{x}\) correctly classified by a neural network, an adversary can usually engineer an adversarial perturbation \(\mathbf{\delta}\) so small that \(\mathbf{x}+\mathbf{\delta}\) looks just like \(\mathbf{x}\) to the human eye, yet the network classifies \(\mathbf{x}+\mathbf{\delta}\) as a different, incorrect class. Bayesian neural networks
with distributions placed over their weights and biases enable principled quantification of their predictions' uncertainty. Intuitively, the latter can be used to provide a natural protection against adversarial examples, making BNNs particularly appealing for safety-critical scenarios, in which the safety of the system must be provably guaranteed.
**Interpretability.** Deep neural networks are highly opaque because they cannot produce human-understandable accounts of their reasoning processes or explanations. There is a clear need for deep learning models that offer explanations that users can understand and act upon (Lipton, 2018). Some models are designed explicitly with interpretability in mind (Montavon et al., 2018; Selvaraju et al., 2017). At the same time, a number of techniques have been developed to interpret neural network predictions, including among others gradient-based methods (Sundararajan et al., 2017; Selvaraju et al., 2017) which create "heatmaps" of the most important features, as well as influence-function-based approaches (Koh and Liang, 2017). The Bayesian paradigm allows for an elegant treatment of interpretability. Defining a prior is central to the Bayesian paradigm, and selecting it helps analyze which tasks are similar to the current task, how to model the task noise, etc. (see Fortuin et al., 2021; Fortuin, 2022). Furthermore, the Bayesian paradigm incorporates a function-space view of predictors (Khan et al., 2019). Compared to the weight-space view, this can result in more interpretable architectures.
**Generalization bounds.** It is well known that traditional approaches to proving generalization via generalization bounds fail for deterministic deep neural networks. Such generalization bounds are very useful when we have little training data: in that case, we might not be able to both train the predictor sufficiently and keep a large enough additional set for validation and testing. A generalization bound could therefore ensure that we train on the full available data while at the same time proving generalization. For example, as shown by Zhang et al. (2017) and Golowich et al. (2017), generalization bounds based on the Rademacher complexity and the VC dimension are vacuous for the true error rate (they provide upper bounds larger than 100%). On the contrary, the Bayesian paradigm currently yields the tightest generalization bounds for deep neural networks, in conjunction with a frequentist approach termed PAC-Bayes (Dziugaite and Roy, 2017). Following the Bayesian paradigm is thus a promising direction for tasks with difficult-to-obtain data.
We introduce the Bayesian paradigm in Section 3 and then review its application to neural networks in Section 4.
## Bayesian machine learning
Achieving a simultaneous design of adaptive and robust systems presents a significant challenge. In their work, Khan and Rue (2021) propose that effective algorithms that strike a balance between robustness and adaptivity often exhibit a Bayesian nature, as they can be viewed as approximations of Bayesian inference. The Bayesian approach has long been recognized as a well-established paradigm for working with probabilistic models and addressing uncertainty, particularly in the field of machine learning (Ghahramani, 2015). In this section, we will outline the key aspects of the Bayesian paradigm, aiming to provide the necessary technical foundation for the application of Bayesian neural networks.
### Bayesian paradigm
The fundamental idea behind the Bayesian approach is to quantify the uncertainty in the inference by using probability distributions. Considering parameters as random variables is in contrast to non-Bayesian approaches, also referred to as frequentist or classic, where parameters are assumed to be deterministic quantities. A Bayesian acts by updating their beliefs as data are gathered according to Bayes' rule, an inductive learning process called Bayesian inference. The choice of resorting to Bayes' rule instead of any other has mathematical justifications dating back to works by Cox and by Savage (Cox, 1961; Savage, 1972).
Recall the following notation: consider a dataset \(\mathcal{D}=\{(\mathbf{x}_{1},\mathbf{y}_{1}),\ldots,(\mathbf{x}_{n},\mathbf{y}_{n})\}\), modeled by a data-generating process characterized by a _sampling model_ or _likelihood_ \(p(\mathcal{D}|\mathbf{w})\). Let the parameters \(\mathbf{w}\) belong to some parameter space denoted by \(\mathbf{\mathcal{W}}\), usually a subset of the Euclidean space \(\mathbb{R}^{d}\). A _prior distribution_ \(p(\mathbf{w})\) represents our prior beliefs about the distribution of the parameters \(\mathbf{w}\) (more details in Section 3.2). Note that simultaneously specifying a prior \(p(\mathbf{w})\) and a sampling model \(p(\mathcal{D}|\mathbf{w})\) amounts to describing the _joint distribution_ between parameters \(\mathbf{w}\) and data \(\mathcal{D}\), via the product rule of probability \(p(\mathbf{w},\mathcal{D})=p(\mathbf{w})p(\mathcal{D}|\mathbf{w})\). The prior and the model are combined with Bayes' rule to yield the _posterior distribution_ \(p(\mathbf{w}|\mathcal{D})\) as follows:
\[p(\mathbf{w}|\mathcal{D})=\frac{p(\mathbf{w})p(\mathcal{D}|\mathbf{w})}{p(\mathcal{D})}. \tag{4}\]
The normalizing constant \(p(\mathcal{D})\) in Bayes' rule is called the model _evidence_ or _marginal likelihood_. This normalizing constant is irrelevant to the posterior since it does not depend on the parameter \(\mathbf{w}\), which is why Bayes' rule is often written in the form
\[\text{posterior}\propto\text{prior}\times\text{likelihood}.\]
Nevertheless, the model evidence remains critical in _model comparison_ and _model selection_, notably through _Bayes factors_. See for example Chapter 28 in MacKay (2003), and Lotfi et al. (2022) for a detailed exposition in Bayesian deep learning. It can be computed by integrating over all possible values of \(\mathbf{w}\):
\[p(\mathcal{D})=\int p(\mathcal{D}|\mathbf{w})p(\mathbf{w})\mathrm{d}\mathbf{w}. \tag{5}\]
Using a Bayesian approach, all information conveyed by the data is encoded in the posterior distribution. Often statisticians are asked to communicate scalar summaries in the form of point estimates of the parameters or quantities of interest. A convenient way to proceed for Bayesians
is to compute the _posterior mean_ of some quantity of interest \(f(\mathbf{w})\) of the parameters. The problem therefore comes down to numerical computation of the integral
\[\mathbb{E}[f(\mathbf{w})|\mathcal{D}]=\int f(\mathbf{w})p(\mathbf{w}|\mathcal{D})\mathrm{d} \mathbf{w}. \tag{6}\]
This includes the posterior mean if \(f(\mathbf{w})=\mathbf{w}\), as well as _predictive_ distributions. More specifically, let \(\mathbf{y}^{*}\) be a new observation associated to some input \(\mathbf{x}^{*}\) in a regression or classification task; then the prior and posterior predictive distributions are respectively
\[p(\mathbf{y}^{*}|\mathbf{x}^{*}) =\mathbb{E}[p(\mathbf{y}^{*}|\mathbf{x}^{*},\mathbf{w})]\] \[=\int p(\mathbf{y}^{*}|\mathbf{x}^{*},\mathbf{w})p(\mathbf{w})\mathrm{d}\mathbf{w},\] \[\text{and}\quad p(\mathbf{y}^{*}|\mathbf{x}^{*},\mathcal{D}) =\mathbb{E}[p(\mathbf{y}^{*}|\mathbf{x}^{*},\mathbf{w})|\mathcal{D}]\] \[=\int p(\mathbf{y}^{*}|\mathbf{x}^{*},\mathbf{w})p(\mathbf{w}|\mathcal{D})\mathrm{ d}\mathbf{w}.\]
The posterior predictive distribution is typically used in order to assess model fit to the data, by performing posterior predictive checks. More generally, it allows us to account for _model uncertainty_, or _epistemic uncertainty_, in a principled way, by averaging the sampling distribution \(p(\mathbf{y}^{*}|\mathbf{x}^{*},\mathbf{w})\) over the posterior distribution \(p(\mathbf{w}|\mathcal{D})\). This model uncertainty is in contrast to the uncertainty associated with data measurement, also called _aleatoric uncertainty_ (see Section 2.5).
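As a toy sketch of this averaging, suppose we have samples from the posterior over a single regression weight (here replaced by a Gaussian stand-in for \(p(\mathbf{w}|\mathcal{D})\)) and a known aleatoric noise level; the posterior predictive at a new input is then approximated by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in posterior samples for a single weight w in the toy model y* = w * x* + noise.
posterior_samples = rng.normal(loc=2.0, scale=0.3, size=1000)
noise_std = 0.5            # aleatoric noise level of the sampling model p(y*|x*, w)
x_star = 1.5

# Draw y* from p(y*|x*, w) for each posterior sample and summarize the draws.
y_star_draws = posterior_samples * x_star + noise_std * rng.normal(size=1000)
print("predictive mean:", y_star_draws.mean())
print("predictive std :", y_star_draws.std())   # combines epistemic and aleatoric spread
```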
### Priors
Bayes' rule (4) tells us how to update our beliefs, but it does not provide any hint about what those beliefs should be. Often the choice of a prior may be dictated by computational convenience. Let us mention the case of _conjugacy_: a prior is said to be _conjugate_ to a sampling model if the posterior remains in the same parametric family. Classic examples of such conjugate pairs of [prior, model] include the [Gaussian, Gaussian], [beta, binomial], [gamma, Poisson], among others. These three pairs have in common the fact that their model belongs to the exponential family. More generally, any model from the exponential family possesses some conjugate prior. However, the existence of conjugate priors is not a distinguishing feature of the exponential family (for example, the Pareto distribution is a conjugate prior for the uniform model on the interval \([0,\mathbf{w}]\), for a positive scalar parameter \(\mathbf{w}\)).
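The [beta, binomial] pair makes the convenience of conjugacy concrete: the posterior update is a closed-form change of the beta parameters, as in the short sketch below (the prior parameters and the data are illustrative):

```python
from scipy.stats import beta

# Conjugate [beta prior, binomial likelihood]: the posterior is again a beta distribution.
a0, b0 = 2.0, 2.0           # prior Beta(a0, b0) on the success probability
successes, failures = 7, 3  # observed data

a_post, b_post = a0 + successes, b0 + failures  # closed-form posterior update
posterior = beta(a_post, b_post)
print("posterior mean:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))
```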
Discussing the choice of a prior often raises the question of _how much information it conveys_, with a distinction between _objective priors_ and _subjective priors_. For example, Jeffreys' prior, defined as being proportional to the square root of the determinant of the Fisher information matrix, is considered an objective prior in the sense that it is invariant to parameterization changes. Uninformative priors often have the troublesome oddity of being _improper_, in the sense of having a density that does not integrate to a finite value (for example, a uniform distribution on an unbounded parameter space). As surprising as it may seem, such priors are commonplace in Bayesian inference and are considered valid as soon as they yield a proper posterior, from which one can draw practical conclusions. Note, however, that an improper prior precludes the use of the prior predictive (which is de facto improper, too), as well as of Bayes factors. At the opposite end from objective priors, subjective priors lie at the roots of the Bayesian approach, where one's beliefs are encoded through a prior. Eliciting a prior distribution is a delicate issue, see for instance Mikkola et al. (2023) for a recent review.
Critically, encoding prior beliefs becomes more and more difficult with more complex models, where parameters may not have a direct interpretation, and with higher-dimensional parameter spaces, where the design of a prior that adequately covers the space gets intricate. In this case, direct computation of the posterior distribution may become intractable. If exact Bayesian inference is intractable for a model, its performance hinges critically on the form of approximations made due to computational constraints and the nature of the prior distribution over parameters.
### Computational methods
Posterior computation involves three terms: the prior \(p(\mathbf{w})\), the likelihood \(p(\mathcal{D}|\mathbf{w})\), and the evidence \(p(\mathcal{D})\). The evidence integral (5) is typically not available in closed form and becomes intractable for high-dimensional problems. The impossibility of obtaining the posterior in closed form has led to the development of different approximation methods. Inference can be carried out with _sampling strategies_ like Markov chain Monte Carlo (MCMC) procedures, or with _approximation methods_ based on optimization, like _variational inference_ and the _Laplace method_.
In recent years, the development of probabilistic programming languages has simplified the implementation of Bayesian models in numerous programming environments: we can mention Stan (Carpenter et al., 2017), PyMC3 (Salvatier et al., 2016), Nimble (de Valpine et al., 2017), but also probabilistic extensions of deep learning libraries like TensorFlow Probability (Dillon et al., 2017) and Pyro (Bingham et al., 2019), among others. Nevertheless, each step of a Bayesian model still involves many options to be tuned and challenges, which we briefly summarize in the following sections. We refer to Gelman et al. (2020) for a detailed overview of the Bayesian workflow.
#### 3.3.1 Variational inference
Variational inference (Jordan et al., 1999; Blei et al., 2017) approximates the true posterior \(p(\mathbf{w}|\mathcal{D})\) with a more tractable distribution \(q(\mathbf{w})\) called variational posterior distribution. More specifically, variational inference hypothesizes an approximation (or variational) family of simple distributions \(q\), e.g., isotropic Gaussians, to approximate the posterior: \(p(\mathbf{w}|\mathcal{D})\approx q(\mathbf{w}|\theta)\).
Variational inference seeks the distribution parameter \(\theta\) in this family by minimizing the KL divergence between approximate posteriors and the true posterior. The KL divergence from \(q(\cdot|\theta)\) (denoted simply \(q\) hereafter) to \(p(\cdot|\mathcal{D})\) is defined as
\[\text{KL}(q||p(\cdot|\mathcal{D}))=\int q(\mathbf{w})\log\frac{q(\mathbf{w})}{p(\mathbf{w} |\mathcal{D})}\text{d}\mathbf{w}.\]
Then, Bayesian inference is performed with the intractable posterior \(p(\mathbf{w}|\mathcal{D})\) replaced by the tractable variational posterior approximation \(q(\mathbf{w})\). It is easy to see that
\[\text{KL}(q||p(\cdot|\mathcal{D}))=-\int q(\mathbf{w})\log\frac{p(\mathbf{w})p( \mathcal{D}|\mathbf{w})}{q(\mathbf{w})}\text{d}\mathbf{w}+\log p(\mathcal{D}).\]
Since the log evidence does not depend on the choice of the approximate posterior \(q\), minimizing
the KL is equivalent to maximizing the so-called evidence lower bound (ELBO):
\[\text{ELBO}(q) =\int q(\mathbf{w})\log\frac{p(\mathbf{w})p(\mathcal{D}|\mathbf{w})}{q(\mathbf{w})} \text{d}\mathbf{w}\] \[=-\text{KL}(q||p)+\int q(\mathbf{w})\log p(\mathcal{D}|\mathbf{w})\text{d} \mathbf{w}.\]
To illustrate how to optimize the above objective, let us take the common approach where the prior \(p(\mathbf{w})\) and posterior \(q(\mathbf{w})\) are modeled as Gaussians: \(p(\mathbf{w})=\mathcal{N}(\mathbf{w}|\mathbf{w}_{p},\mathbf{\Sigma}_{p})\) and \(q(\mathbf{w})=\mathcal{N}(\mathbf{w}|\mathbf{w}_{q},\mathbf{\Sigma}_{q})\), respectively. Then the first term in the ELBO can be computed in closed-form by noting that \(2\text{KL}(q||p)\) is equal to
\[\text{tr}(\mathbf{\Sigma}_{p}^{-1}\mathbf{\Sigma}_{q})-d+(\mathbf{w}_{p}-\mathbf{w}_{q})^{\top}\mathbf{\Sigma}_{p}^{-1}(\mathbf{w}_{p}-\mathbf{w}_{q})+\log\left(\frac{\det\mathbf{\Sigma}_{p}}{\det\mathbf{\Sigma}_{q}}\right),\]
where \(d\) is the dimension of \(\mathbf{w}\). The second term can be approximated through Monte Carlo sampling as
\[\int q(\mathbf{w})\log p(\mathcal{D}|\mathbf{w})\text{d}\mathbf{w}\approx\frac{1}{S}\sum_{i=1}^{S} \log p(\mathcal{D}|\mathbf{w}_{i}),\]
where \(\mathbf{w}_{i}\sim q(\mathbf{w})\), \(i=1,\ldots,S\) are Monte Carlo samples. The resulting objective can be typically optimized by gradient descent, by using the reparametrization trick for Gaussians (Kingma et al., 2015).
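A minimal PyTorch sketch of this procedure for a one-parameter linear model, with a closed-form KL term and a reparametrized Monte Carlo estimate of the expected log-likelihood (all dimensions, learning rates, and noise levels are illustrative), could look as follows:

```python
import torch
from torch.distributions import Normal, kl_divergence

torch.manual_seed(0)
# Toy data: y = 2x + noise; a one-parameter model with Gaussian prior and posterior.
x = torch.linspace(-1, 1, 50)
y = 2 * x + 0.1 * torch.randn(50)

prior = Normal(loc=0.0, scale=1.0)
mu = torch.zeros(1, requires_grad=True)        # variational mean
rho = torch.zeros(1, requires_grad=True)       # unconstrained scale parameter
opt = torch.optim.Adam([mu, rho], lr=0.05)

for step in range(500):
    sigma = torch.nn.functional.softplus(rho)
    q = Normal(mu, sigma)
    w = mu + sigma * torch.randn(8, 1)          # reparametrization trick: S = 8 samples
    log_lik = Normal(w * x, 0.1).log_prob(y).sum(dim=1).mean()  # MC estimate of E_q[log p(D|w)]
    elbo = log_lik - kl_divergence(q, prior).sum()              # closed-form KL term
    (-elbo).backward()
    opt.step()
    opt.zero_grad()

# mu should end up close to 2, and sigma shrinks towards the narrow posterior scale.
print(mu.item(), torch.nn.functional.softplus(rho).item())
```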
#### 3.3.2 Laplace approximation
Another popular method is the _Laplace approximation_, which uses a normal approximation centered at the maximum of the posterior distribution, or maximum a posteriori (MAP). Let us illustrate the Laplace method for approximating a distribution \(g\) (typically a posterior distribution) known up to a constant, \(g(\mathbf{w})=f(\mathbf{w})/Z\), defined over a \(d\)-dimensional space \(\mathbf{\mathcal{W}}\). At a stationary point \(\mathbf{w}_{0}\), the gradient \(\nabla f(\mathbf{w})\) vanishes. Expanding around this stationary point yields
\[\log f(\mathbf{w})\simeq\log f(\mathbf{w}_{0})-\frac{1}{2}(\mathbf{w}-\mathbf{w}_{0})^{\top} \mathbf{A}(\mathbf{w}-\mathbf{w}_{0}),\]
where the Hessian matrix \(\mathbf{A}\in\mathbb{R}^{d\times d}\) is defined by
\[\mathbf{A}=-\nabla\nabla\log f(\mathbf{w})|_{\mathbf{w}=\mathbf{w}_{0}},\]
and \(\nabla\) is the gradient operator. Taking the exponential of both sides we obtain
\[f(\mathbf{w})\simeq f(\mathbf{w}_{0})\exp\left\{-\frac{1}{2}(\mathbf{w}-\mathbf{w}_{0})^{\top }\mathbf{A}(\mathbf{w}-\mathbf{w}_{0})\right\}.\]
The distribution \(g(\mathbf{w})\) is proportional to \(f(\mathbf{w})\) and the appropriate normalization coefficient can be found by inspection, giving
\[g(\mathbf{w}) =\frac{|\mathbf{A}|^{1/2}}{(2\pi)^{d/2}}\exp\left\{-\frac{1}{2}( \mathbf{w}-\mathbf{w}_{0})^{\top}\mathbf{A}(\mathbf{w}-\mathbf{w}_{0})\right\}\] \[=\mathcal{N}(\mathbf{w}|\mathbf{w}_{0},\mathbf{A}^{-1}),\]
where \(|\mathbf{A}|\) denotes the determinant of \(\mathbf{A}\). This Gaussian distribution is well-defined provided its precision matrix \(\mathbf{A}\) is positive-definite, which implies that the stationary point \(\mathbf{w}_{0}\) must be a local maximum, not a minimum or a saddle point. Identifying \(f(\mathbf{w})=p(\mathcal{D}|\mathbf{w})p(\mathbf{w})\) and \(Z=p(\mathcal{D})\) and applying the above formula results in the typical Laplace approximation to the posterior. To find a maximum \(\mathbf{w}_{0}\), one can simply run a gradient descent algorithm on \(\log f(\mathbf{w})=\log p(\mathcal{D}|\mathbf{w})+\log p(\mathbf{w})\).
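The recipe (find the mode, compute the curvature, read off a Gaussian) can be sketched on a one-dimensional example where the exact posterior is known; the beta-binomial setting below is purely for illustration:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Laplace approximation for a coin-flip posterior: Binomial likelihood, Beta(2, 2) prior.
k, n = 7, 10          # observed successes out of n trials
a0, b0 = 2.0, 2.0

def neg_log_f(w):     # -log[ p(D|w) p(w) ], up to an additive constant
    return -((k + a0 - 1) * np.log(w) + (n - k + b0 - 1) * np.log(1 - w))

# Step 1: find the MAP w_0 (here by numerical optimization).
w0 = minimize_scalar(neg_log_f, bounds=(1e-6, 1 - 1e-6), method="bounded").x

# Step 2: curvature A = -d^2/dw^2 log f at w_0, via a finite-difference second derivative.
eps = 1e-5
A = (neg_log_f(w0 + eps) - 2 * neg_log_f(w0) + neg_log_f(w0 - eps)) / eps**2

print(f"Laplace approximation: N(w | {w0:.3f}, {1/A:.4f})")
# The exact posterior is Beta(9, 5) with mode 8/12 = 0.667, so w0 should match it.
```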
#### 3.3.3 Sampling methods
Sampling methods refer to classes of algorithms that use sampling from probability distributions. They are also referred to as Monte Carlo (MC) methods when used in order to approximate integrals and have become fundamental in data analysis. In simple cases, rejection sampling or adaptive rejection sampling can be implemented to return independent samples from a distribution. For more complex distributions, typically multidimensional ones, one can resort to _Markov chain Monte Carlo_ (MCMC) methods which have become ubiquitous in Bayesian inference (Robert and Casella, 2004). This class of methods consists in devising a Markov chain whose equilibrium distribution is the target posterior distribution. Recording the chain samples, after an exploration phase known as the burn-in period, provides a sample approximately distributed according to the posterior.
The Metropolis-Hastings (MH) method uses some proposal kernel that depends on the previous sample of the chain. MH proposes an acceptance/rejection rule for the generated samples. The choice of kernel defines different types of MH. For example, random walk MH uses a Gaussian kernel with mean at the previous sample and some heuristic variance. In the multidimensional case, Gibbs sampling is a particular case of MH when the full-conditional distributions are available. Gibbs sampling is appealing in the sense that samples from the full-conditional distributions are never rejected. However, full-conditional distributions are not always available in closed-form. Another drawback is that the use of full-conditional distributions often results in highly correlated iterations. Many extensions adjust the method to reduce these correlations. Metropolis-Adjusted Langevin Algorithm (MALA) is another special case of MH algorithm that proposes new states according to so-called Langevin dynamics. Langevin dynamics evaluate the gradient of the target distribution in such a way that proposed states in MALA are more likely to fall in high-probability density regions.
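A minimal random-walk Metropolis-Hastings sampler, targeting a standard Gaussian purely for illustration, can be written in a few lines:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(w):
    # Unnormalized log-density of the target (a standard Gaussian for illustration).
    return -0.5 * w**2

def random_walk_mh(n_samples, step=1.0, w0=0.0):
    samples, w = [], w0
    for _ in range(n_samples):
        proposal = w + step * rng.normal()        # Gaussian kernel centred at the previous sample
        log_ratio = log_target(proposal) - log_target(w)
        if np.log(rng.uniform()) < log_ratio:     # accept/reject rule
            w = proposal
        samples.append(w)
    return np.array(samples)

chain = random_walk_mh(20000)
burn_in = 2000                                    # discard the initial, non-stationary samples
print(chain[burn_in:].mean(), chain[burn_in:].std())  # approximately 0 and 1
```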
Hamiltonian Monte Carlo (HMC) is an improvement over the MH algorithm, where the chain's trajectory follows Hamiltonian dynamics. In Hamilton's equations, two quantities are tracked: the variable of interest and its conjugate momentum, so the exploration space of a given posterior is augmented with a momentum variable. After generating a sample and drawing a momentum, the conservation property of Hamilton's equations defines level sets of solutions. The HMC parameters, a step size and a number of steps for the numerical integrator, determine how far one slides along a level set from one point to the next in order to generate the next sample. The No-U-Turn Sampler (NUTS) is a modification of the original HMC with a criterion to stop the numerical integration. This makes NUTS a more automatic algorithm than plain HMC because it avoids the need to hand-tune the number of integration steps.
The main advantage of sampling methods is that they are asymptotically exact: when the number of iterations increases, the Markov chain distribution converges to the (target) posterior distribution. However, constructing efficient sampling procedures with good guarantees of
convergence and satisfactory exploration of the sample parameter space can be prohibitively expensive, especially in the case of high dimensions. Note that the initial samples from a chain do not come from the stationary distribution, and should be discarded. The amount of time it takes to reach stationarity is called the mixing time or burn-in time, and reducing it is a key factor for making a sampling algorithm fast. Evaluating convergence of the chain can be done with numerical diagnostics (see for instance Gelman and Rubin, 1992; Vehtari et al., 2021; Moins et al., 2023).
### Model selection
The Bayesian paradigm provides a principled approach to model selection. Let \(\{\mathcal{M}_{i}\}_{i=1}^{M}\) be a set of \(M\) models. We suppose that the data is generated from one of these models but we are uncertain about which one. The uncertainty is expressed through a prior probability distribution \(p(\mathcal{M}_{i})\) which allows us to express a preference for different models, although a typical assumption is that all models are given equal prior probability \(\nicefrac{{1}}{{M}}\). Given a dataset \(\mathcal{D}\), we then wish to evaluate the posterior distribution
\[p(\mathcal{M}_{i}|\mathcal{D})\propto p(\mathcal{M}_{i})p(\mathcal{D}| \mathcal{M}_{i}).\]
The _model evidence_\(p(\mathcal{D}|\mathcal{M}_{i})\) describes the probability that the data were generated from each individual model \(\mathcal{M}_{i}\)(Bishop and Nasrabadi, 2006). For a model governed by a set of parameters \(\mathbf{w}\), the model evidence is obtained by integrating out the parameters \(\mathbf{w}\) from the joint distribution \((\mathcal{D},\mathbf{w})\), see Equation (5):
\[p(\mathcal{D}|\mathcal{M}_{i}) =\int p(\mathcal{D},\mathbf{w}|\mathcal{M}_{i})\mathrm{d}\mathbf{w}\] \[=\int p(\mathcal{D}|\mathbf{w},\mathcal{M}_{i})p(\mathbf{w}|\mathcal{M}_{ i})\mathrm{d}\mathbf{w}.\]
The model evidence is also sometimes called the _marginal likelihood_ because it can be viewed as a likelihood function over the space of models, in which the parameters have been marginalized out. From a sampling perspective, the marginal likelihood can be viewed as the probability of generating the dataset \(\mathcal{D}\) from a model whose parameters are sampled from the prior. If the prior probability over models is uniform, Bayesian _model selection_ corresponds to choosing the model with the highest marginal likelihood. The ratio of model evidences \(p(\mathcal{D}|\mathcal{M}_{i})/p(\mathcal{D}|\mathcal{M}_{j})\) for two models is known as a _Bayes factor_(Kass and Raftery, 1995).
The marginal likelihood serves as a criterion for choosing the best model among different hyperparameter settings. When derivatives of the marginal likelihood are available (such as for Gaussian process regression), we can learn the optimal hyperparameters by maximizing the marginal likelihood with an optimization procedure. This procedure, known as _type 2 maximum likelihood_ (Bishop and Nasrabadi, 2006), yields the _most likely model_ to have generated the data. It differs from Bayesian inference, which finds the posterior over the parameters for a given model. In the Gaussian process literature, type 2 maximum likelihood optimization often results in better hyperparameters than cross-validation (Lotfi et al., 2022). For models other than Gaussian processes, one needs to resort to an approximation of the marginal likelihood, typically using the Laplace approximation (Bishop and Nasrabadi, 2006).
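From the sampling perspective above, a naive Monte Carlo sketch of the marginal likelihood draws parameters from the prior and averages the likelihood. The toy Gaussian model and prior widths below are illustrative, and this simple estimator is known to have high variance in realistic settings:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=1.0, size=20)   # toy dataset, y ~ N(w, 1) with true w = 1

def log_evidence(prior_scale, n_prior_samples=50_000):
    # p(D) ~ (1/S) sum_i p(D | w_i), with w_i drawn from the prior N(0, prior_scale^2).
    w = rng.normal(scale=prior_scale, size=n_prior_samples)
    log_lik = norm.logpdf(data[:, None], loc=w, scale=1.0).sum(axis=0)
    return np.log(np.mean(np.exp(log_lik - log_lik.max()))) + log_lik.max()

# Two "models" differing only in their prior width; the log Bayes factor
# log p(D|M1) - log p(D|M2) quantifies which prior explains the data better
# (here the narrower prior typically attains the higher evidence).
print(log_evidence(prior_scale=1.0), log_evidence(prior_scale=10.0))
```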
## What are Bayesian neural networks?
We have seen now that neural networks are a popular class of models due to their expressivity and generalization abilities, while Bayesian inference is a statistical technique heralded for its adaptivity and robustness. It is therefore natural to pose the question of whether we can combine these ideas to yield the best of both worlds. Bayesian neural networks (BNNs) are an attempt at achieving just this.
As outlined in Section 2, we aim to infer the parameters of a neural network \(\mathbf{w}\in\mathbf{\mathcal{W}}\), which might be the weights and biases of a fully-connected network, the convolutional kernels of a CNN, the recurrent weights of an RNN, etc. However, in contrast to just using the SGD procedure from Eq. (3) to get a point estimate for \(\mathbf{w}\), we will try to use the Bayesian strategy from Eq. (4) to yield a posterior distribution \(p(\mathbf{w}|\mathcal{D})\) over parameters. This distribution enables the quantification of uncertainty associated with the model's predictions and can be updated as new data is observed. While this approach seems straightforward on paper, we will see in the following that it leads to many unique challenges in the context of BNNs, especially when compared to more conventional Bayesian models, such as Gaussian processes (Rasmussen and Williams, 2006).
Firstly, the weight-space \(\mathbf{\mathcal{W}}\) of the neural network is often high-dimensional, with modern architectures featuring millions or even billions of parameters. Moreover, understanding how these weights map to the functions implemented by the network is not trivial. Both of these properties therefore strongly limit our ability to formulate sensible priors \(p(\mathbf{w})\), as illustrated in Fig. 5. We will discuss these challenges as well as strategies to overcome them in more detail in Section 4.1, focusing primarily on the theoretical understanding and explanation of empirically observed phenomena, such as the Gaussian process limit in function-space and the relationship between prior selection and implicit and explicit regularization in conventional neural networks.
Secondly, due to the complicated form of the likelihood function (which is parameterized by the neural network itself), neither of the integrals in Eq. (5) and Eq. (6) are tractable. We thus have to resort to approximations, which are again made more cumbersome by the high dimensionality of \(\mathbf{\mathcal{W}}\). We will discuss different approximation techniques and their specific implementations in the context of BNNs in Section 4.2, contrasting their tradeoffs and offering guidance for practitioners.
Whether the aforementioned challenges relating to priors and inference in BNNs are surmountable in practice often depends on the particular learning problem at hand and on the modeling effort and computational resources one is willing to spend. We will critically reflect on this question in the following and also offer some reconciliation with frequentist approaches later in Section 5.
### Priors
Specifying a prior distribution can be delicate for complex and extremely high-dimensional models such as neural networks. Reasoning in terms of parameters is challenging due to their high dimension, limited interpretability, and the over-parameterization of the model. Moreover, since the true posterior can rarely be recovered, it is difficult to isolate a prior's influence, even empirically (Wenzel et al., 2020). This gives rise to the following question: _do the specifics of the prior even matter?_ This question is all the more important since inference is usually blunted by posterior approximations and enormous datasets.
The machine learning interpretation of the _no free lunch_ theorem states that any supervised learning algorithm includes some _implicit prior_ (Wolpert, 1996). From the Bayesian perspective, priors are explicit. Thus, a universal prior valid for any task cannot exist. This line of reasoning leads to choosing the prior distribution carefully, since it can considerably improve the performance of the model.
On the other hand, assigning priors to complex models is often thought of as imposing soft constraints, like regularization, or via data transformations like data augmentation. The idea behind this type of prior is to help and stabilize computation. These priors are sometimes called _weakly informative_ or _mildly informative_ priors. Moreover, most regularization methods used for point-estimate neural networks can be understood from a Bayesian perspective as setting a prior, see Section 4.1.3.
We review recent works on the influence of the prior in _weight-space_, including how it helps to connect classical and Bayesian approaches applied to deep learning models. More discussion on the influence of the prior choice can be found in Nalisnick (2018) and Fortuin (2021). The choice of the prior and its interaction with the approximate posterior family are studied in Hron et al. (2018).
#### 4.1.1 Weight priors (parameter-space)
The Gaussian distribution is a common and default choice of prior in Bayesian neural networks. Looking for the maximum a posteriori (MAP) of such a Bayesian model is equivalent to training a standard neural network under a weighted \(\mathscr{L}_{2}\) regularization (see discussion in Section 4.1.3). There is no theoretical evidence that the Gaussian prior is preferable over other prior distribution choices (Murphy, 2012). Yet, its well-studied mathematical properties make the Gaussian distribution the default prior. Below, we review works that show how different weight priors influence the resulting model.
**Adversarial robustness and priors.** In BNNs, one can evaluate adversarial robustness with the posterior predictive distribution of the model (Blaas and Roberts, 2021). A Lipschitz constant arising from the model can be used in order to quantify this robustness. The posterior predictive depends on the model structure and the weights' prior distribution. In quantifying how the prior distribution influences the Lipschitz constant, Blaas and Roberts (2021) establish that for BNNs with Gaussian priors, the model's Lipschitz constant is monotonically increasing
Figure 5: Bayesian neural network architecture, where weights \(\mathbf{w}^{(\ell)}\) at layer \(\ell\) follow some prior distribution \(p^{(\ell)}\).
with respect to the prior variance. This means that a lower prior variance should lead to a lower Lipschitz constant and, thus, to higher robustness.
**Gaussian process inducing.** A body of works imposes weight priors so that the induced priors over functions have desired properties, e.g., being close to some Gaussian process (GP). For instance, Flam-Shepherd et al. (2017), later extended by Flam-Shepherd et al. (2018), propose to tune priors over weights by minimizing the Kullback-Leibler divergence between the BNN functional prior and a desired GP. However, the Kullback-Leibler divergence is difficult to work with due to the need to estimate an entropy term from samples. To overcome this, Tran et al. (2020) suggest using the Wasserstein distance and provide an extensive study of the performance improvements obtained when imposing such priors. Similarly, Matsubara et al. (2021) use the ridgelet transform (Candes, 1998) to approximate the covariance function of a GP.
**Priors based on knowledge about function-space.** Some works suggest how to define priors using information from the function-space, since it is easier to reason about than the weight-space. Nalisnick et al. (2021) propose _predictive complexity priors_ (PREDCPs) that constrain the Bayesian prior by comparing the predictions between the model and some less complex reference model. These priors are constructed hierarchically, with first-level priors over weights (for example, Gaussian) and second-level hyper-priors over the weight priors' parameters (for example, over Gaussian variances). The hyper-priors are defined to encourage functional regularization, e.g., depth selection.
During training, the model sometimes needs to be updated concerning the architecture, training data, or other aspects of the training setup. Khan and Swaroop (2021) propose _knowledge-adaptation priors_ (K-priors) to reduce the cost of retraining. The objective function of K-priors combines the weight and function-space divergences to reconstruct past gradients. Such priors can be viewed as a generalization of weight-space priors. More on the function-space priors can be found in the next section.
#### 4.1.2 Unit priors (function-space)
Arguably, the prior that matters the most from a practitioner's point of view is the prior induced in function-space, not in parameter space or weight-space (Wilson, 2020). The prior seen at the function level can provide insight into what it means in terms of the functions it parametrizes. To some extent, priors on BNNs' parameters are often challenging to specify since it is unclear what they actually mean. As a result, researchers typically lack interpretable semantics on what each unit in the network represents. It is also hard to translate some subjective domain knowledge into the neural network parameter priors. Such subjective domain knowledge may include feature sparsity or signal-to-noise ratio (see for instance Cui et al., 2021). A way to address this problem is to study the priors in the function-space, thus raising the natural question: _how to assign a prior on functions of interest for classification or regression settings?_
The priors over parameters can be chosen carefully by reasoning about the functions that these priors induce. Gaussian processes are perfect examples of how this approach works (Rasmussen and Williams, 2006). There is a body of work on translating priors on functions given by GPs into BNN priors (Flam-Shepherd et al., 2017, 2018; Tran et al., 2020; Matsubara et al., 2021). Recent studies establish a closer connection between infinitely-wide BNNs and GPs which we review next.
**Infinite-width limit.** The pioneering work of Neal (1996) first connected Bayesian neural networks and Gaussian processes. Applying the central limit theorem, Neal showed that the output distribution of a one-hidden-layer neural network converges to a Gaussian process for appropriately scaled weight variances. Recently, Matthews et al. (2018); Lee et al. (2018) extended Neal's results to deep neural networks showing that their units' distribution converges to a Gaussian process when _the width of all the layers_ goes to infinity. These observations have recently been significantly generalized to a variety of architectures, including convolutional neural networks (Novak et al., 2020; Garriga-Alonso et al., 2019), batch norm and weight-tying in recurrent neural networks (Yang, 2019), and ResNets (Hayou, 2022). There is also a correspondence between GPs and models with _attention layers_, i.e., particular layers with an attention mechanism relating different positions of a single sequence to compute a representation of the sequence, see e.g. Vaswani et al. (2017). For multi-head attention architectures, which consist of several attention layers running in parallel, as the number of heads and the number of features tends to infinity, the outputs of an attention model also converge to a GP (Hron et al., 2020). Generally, if an architecture can be expressed solely via matrix multiplication and coordinate-wise nonlinearities (i.e., a tensor program), then it has a GP limit (Yang, 2019).
Further research builds upon the limiting Gaussian process property to devise novel architecture rules for neural networks. Specifically, the neural network Gaussian process (NNGP) (Lee et al., 2018) describes the prior on function-space that is realized by an iid prior over the parameters. The function-space prior is a GP with a specific kernel defined recursively with respect to the layers. For the rectified linear unit (ReLU) activation function, the Gaussian process covariance function is obtained analytically (Cho and Saul, 2009). Stable distribution priors for weights also lead to stable processes in the infinite-width limit (Favaro et al., 2020).
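To make the recursion concrete, here is a minimal NumPy sketch (our own, with illustrative variance values) of the NNGP covariance for a fully connected ReLU network, using the analytic arc-cosine expectation of Cho and Saul (2009); the function name and inputs are not taken from any of the cited works.

```python
# Minimal sketch of the NNGP kernel recursion for a ReLU MLP. sigma_w2 and sigma_b2
# play the role of the prior variances of Equation (10); values here are assumptions.
import numpy as np

def nngp_kernel(X, depth, sigma_w2=2.0, sigma_b2=0.0):
    """Covariance of the limiting GP prior over outputs for a depth-`depth` ReLU MLP."""
    d_in = X.shape[1]
    K = sigma_b2 + sigma_w2 * (X @ X.T) / d_in          # input-layer covariance
    for _ in range(depth):
        diag = np.sqrt(np.diag(K))
        norm = np.outer(diag, diag)
        cos_theta = np.clip(K / norm, -1.0, 1.0)
        theta = np.arccos(cos_theta)
        # E[relu(u) relu(v)] under a bivariate Gaussian (arc-cosine kernel of degree 1).
        K = sigma_b2 + sigma_w2 / (2 * np.pi) * norm * (np.sin(theta) + (np.pi - theta) * cos_theta)
    return K

X = np.random.default_rng(0).normal(size=(5, 3))
print(nngp_kernel(X, depth=3))   # 5x5 covariance matrix of the function-space prior
```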
When the prior over functions behaves like a Gaussian process, the resulting BNN posterior in function-space also weakly converges to a Gaussian process, which was first shown empirically by Neal (1996) and Matthews et al. (2018) and then theoretically justified by Hron et al. (2020). However, even given the wide variety of structural assumptions that GP kernels can represent (Rasmussen and Williams, 2006; Lloyd et al., 2014; Sun et al., 2018), BNNs still outperform GPs by a significant margin in expressive power (Sun et al., 2019). Adlam et al. (2020) show that the resulting NNGP is better calibrated than its finite-width analogue. The downside is its poorer predictive performance, in part due to the complexity of training GPs with large datasets because of matrix inversions. However, this limiting behavior triggers a new line of research to find better approximation techniques. For example, Yaida (2020) shows that finite-width corrections are beneficial to Bayesian inference.
Nevertheless, infinite-width neural networks are valuable tools to obtain some theoretical properties on BNNs in general and to study the neural networks from a different perspective. It results in learning dynamics via the _neural tangent kernel_(Jacot et al., 2018), and an _initialization procedure_ via the so-called _Edge of Chaos_(Poole et al., 2016; Schoenholz et al., 2017; Hayou et al., 2019). We describe below the aforementioned aspects in detail.
**Neural tangent kernel.** Bayesian inference and the GP limit give insights into how well over-parameterized neural networks can generalize. Then, the idea is to apply a similar scheme to neural networks after training and study the dynamics of gradient descent on infinite width. For any parameterized function \(f(\mathbf{x},\mathbf{w})\) let:
\[K_{\mathbf{w}}(\mathbf{x},\mathbf{x}^{\prime})=\langle\nabla_{\mathbf{w}}f(\mathbf{x},\mathbf{w}), \nabla_{\mathbf{w}}f(\mathbf{x}^{\prime},\mathbf{w})\rangle. \tag{7}\]
When \(f(\mathbf{x},\mathbf{w})\) is a feedforward neural network with appropriately scaled parameters, the kernel \(K_{\mathbf{w}}\) converges to a fixed kernel \(K_{\infty}\), called the neural tangent kernel (NTK), when the network's widths tend to infinity one by one starting from the first layer (Jacot et al., 2018). Yang (2019) generalizes the convergence of the NTK to the case when the widths of different layers tend to infinity together.
If we choose some random weight initialization for a neural network, the initial kernel of this network approaches a deterministic kernel as the width increases. Thus, the NTK is independent of the specific initialization. Moreover, in the infinitely wide regime, the NTK stays constant over time during optimization. This finding therefore enables the study of learning dynamics in infinitely wide feed-forward neural networks. For example, Lee et al. (2019) show that NNs in this regime simplify to linear models with a fixed kernel.
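As an illustration, the empirical kernel of Equation (7) can be computed directly with automatic differentiation; the PyTorch sketch below uses a toy scalar-output network and made-up inputs, and is only meant to show the definition, not the infinite-width construction.

```python
# Sketch: empirical kernel K_w(x, x') for a small MLP at its current (random) parameters.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1))

def grad_vector(x):
    """Flattened gradient of the scalar output f(x, w) with respect to all weights w."""
    net.zero_grad()
    net(x.unsqueeze(0)).squeeze().backward()
    return torch.cat([p.grad.reshape(-1) for p in net.parameters()])

x1, x2 = torch.randn(3), torch.randn(3)
g1, g2 = grad_vector(x1), grad_vector(x2)
print(torch.dot(g1, g2).item())   # K_w(x1, x2), the empirical kernel at the current parameters
```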
While this may seem promising at first, empirical results show that neural networks in this regime perform worse than practical over-parameterized networks (Arora et al., 2019; Lee et al., 2020). Nevertheless, this still provides theoretical insight into some aspects of neural network training.
**Finite width.** While infinite-width neural networks help derive theoretical insights into deep neural networks, neural networks in finite-width regimes, or approximations of infinite-width regimes, are the ones used in real-world applications. It is still not clear when the GP framework adequately describes BNN behavior. In some cases, finite-width neural networks outperform their infinite-width counterparts (Lee et al., 2018; Garriga-Alonso et al., 2019; Arora et al., 2019; Lee et al., 2020). Arora et al. (2019) show that convolutional neural networks outperform their corresponding limiting NTK. This performance gap is likely due to the finite-width effect whereby a fixed kernel cannot fully describe the CNN dynamics. The evolution of the NTK along training is beneficial to generalization, as shown in further works (Dyer and Gur-Ari, 2020; Huang and Yau, 2020).
Thus, obtaining a unit prior description for finite-width neural networks is essential. One of the principal obstacles in pursuing this goal is that hidden units in BNNs at finite-width regime are dependent (Vladimirova et al., 2021). The induced dependence makes it difficult to analytically obtain distribution expressions for priors in function-space of neural networks. Here, we review works on possible solutions such as the introduction of finite-width corrections to infinite-width models and the derivation of distributional characterizations amenable for neural networks.
**Corrections.** One of the ways to describe priors in the function-space is to impose corrections to BNNs at infinite width. In particular, Antognini (2019) shows that ensembles of finite one-hidden-layer NNs with large width can be described by Gaussian distributions perturbed by a fourth Hermite polynomial. The scale of the perturbations is inversely proportional to the neural network's width. Similar corrections are also proposed in Naveh et al. (2020). Additionally, Dyer and Gur-Ari (2020) propose a method using Feynman diagrams to bound the asymptotic behavior of correlation functions in NNs. The authors present the method as a conjecture and provide empirical evidence on feed-forward and convolutional NNs to support their claims. Further, Yaida (2020) develops the perturbative formalism that captures the flow of pre-activation distributions to deeper layers and studies the finite-width effect on Bayesian inference.
**Full description.** Springer and Thompson (1970) show that the probability density function of the product of independent normal variables can be expressed through a Meijer G-function. This yields the first full description of unit priors in function-space, but under strong assumptions, requiring Gaussian priors on weights and linear or ReLU activation functions (Zavatone-Veth and Pehlevan, 2021; Noci et al., 2021). Though accurate, this characterization is hard to work with due to its fairly convoluted expressions. It is nonetheless in line with works on heavy-tailed properties of hidden units, which we discuss next.
**Distributional characteristics.** Concerning the distributional characteristics of neural network units, a number of alternative analyses to the Gaussian Process limit have been developed in the literature. Bibi et al. (2018) provide the expression of the first two moments of the output units of a one-hidden-layer neural network. Obtaining moments is a preliminary step to characterizing a whole distribution. However, the methodology of Bibi et al. (2018) is also limited to one-hidden-layer neural networks. Later, Vladimirova et al. (2019, 2020) focus on the moments of hidden units and show that moments of any order are finite under mild assumptions on the activation function. More specifically, the _sub-Weibull_ property of the unit distributions is shown, indicating that hidden units become heavier-tailed when going _deeper_ in the network. This result is refined by Vladimirova et al. (2021) who show that hidden units are _Weibull-tail_ distributed. Weibull-tail distributions are characterized in a different manner than sub-Weibull distributions, not based on moments but on a precise description of their tails. These tail descriptions reveal differences between hidden units' distributional properties in finite and infinite-width BNNs, since they are in contrast with the GP limit obtained when going _wider_.
**Representation learning.** _Representation learning_ (the process by which a model learns from data how to represent features) in finite-width neural networks is not yet well understood. However, the infinitely wide case allows studying representation learning from a different perspective. For instance, Zavatone-Veth et al. (2021) compute the leading perturbative finite-width corrections. Aitchison (2020) studies the prior over representations in finite and infinite Bayesian neural networks. The narrower and deeper the network, the more flexibility it offers, because the variability of the output covariance gradually vanishes as the width increases. The results are obtained by considering the variability in the top-layer kernel induced by the prior over a finite neural network.
#### 4.1.3 Regularization
Since deep learning models are over-parametrized, it is essential to avoid overfitting to help these systems generalize well. Several explicit regularization strategies are used, including Lasso \(\mathscr{L}_{1}\) and weight-decay \(\mathscr{L}_{2}\) regularization of the parameters. Another way is to inject some stochasticity into the computations, which implicitly prevents certain pathological behaviors and thus helps the network avoid overfitting. The most popular methods in this line of research are dropout (Srivastava et al., 2014) and batch normalization (Ioffe and Szegedy, 2015). It has also been observed that the stochasticity in stochastic gradient descent (which is normally considered as a drawback) can itself serve as an implicit regularizer (Zhang et al., 2017).
Here we draw connections between popular regularization techniques in neural networks and weight priors in their Bayesian counterparts. Khan and Rue (2021); Wolinski et al. (2020) have discussed how different regularization methods implicitly correspond to enforcing different priors.
**Priors as regularization.** Given a dataset \(\mathcal{D}=\{\mathbf{x}_{i},\mathbf{y}_{i}\}_{i}\), where \((\mathbf{x}_{i},\mathbf{y}_{i})\) are pairs of inputs and outputs, the _maximum-a-posteriori_ (MAP) can be used to obtain point estimation of the parameters:
\[\hat{\mathbf{w}}_{\mathrm{MAP}} =\operatorname*{arg\,max}_{\mathbf{w}}\log p(\mathbf{w}|\mathcal{D}) \tag{8}\] \[=\operatorname*{arg\,max}_{\mathbf{w}}\left[\log p(\mathcal{D}|\mathbf{w })+\log p(\mathbf{w})\right].\]
When performing classification with a softmax link function, \(-\log p(\mathcal{D}|\mathbf{w})\) corresponds to the cross-entropy loss. When performing regression with Gaussian noise such that \(p(\mathcal{D}|\mathbf{w})=\prod_{i}p(\mathbf{y}_{i}|\mathbf{w},\mathbf{x}_{i})=\prod_{i}\mathcal{N}\left(\mathbf{y}_{i}|f(\mathbf{x}_{i},\mathbf{w}),\sigma^{2}\right)\), \(-\log p(\mathcal{D}|\mathbf{w})\) is a mean-squared error loss (up to an additive constant). In this context, MAP estimation with a Gaussian prior \(p(\mathbf{w})\) is equivalent to optimization of the mean-squared error loss with \(\mathscr{L}_{2}\) regularization, or weight decay for NNs. Similarly, assigning a Laplace prior to the weights \(\mathbf{w}\) leads to \(\mathscr{L}_{1}\) regularization.
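As a sanity check of this correspondence, the short NumPy sketch below (with made-up data and assumed noise and prior variances) verifies that the negative log posterior of Equation (8) under a Gaussian likelihood and a Gaussian prior differs from the mean-squared-error-plus-weight-decay objective only by a constant independent of \(\mathbf{w}\).

```python
import numpy as np

rng = np.random.default_rng(0)
X, y = rng.normal(size=(20, 5)), rng.normal(size=20)   # made-up regression data
sigma2, tau2 = 0.5, 2.0                                # assumed noise and prior variances

def neg_log_posterior(w):
    nll = 0.5 * np.sum((y - X @ w) ** 2) / sigma2 + 0.5 * len(y) * np.log(2 * np.pi * sigma2)
    nlp = 0.5 * np.sum(w ** 2) / tau2 + 0.5 * len(w) * np.log(2 * np.pi * tau2)
    return nll + nlp                                   # -log p(D|w) - log p(w), up to the evidence

def mse_plus_weight_decay(w):
    return 0.5 * np.sum((y - X @ w) ** 2) / sigma2 + np.sum(w ** 2) / (2 * tau2)

w1, w2 = rng.normal(size=5), rng.normal(size=5)
# The two objectives differ by a constant that does not depend on w,
# so they share the same minimizer (the MAP estimate).
print(np.isclose(neg_log_posterior(w1) - mse_plus_weight_decay(w1),
                 neg_log_posterior(w2) - mse_plus_weight_decay(w2)))   # True
```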
In case of a flat prior (uniform and improper) distribution \(p(\mathbf{w})\propto 1\), the optimization (8) boils down to the _maximum likelihood estimator_ (MLE):
\[\hat{\mathbf{w}}_{\mathrm{MLE}}=\operatorname*{arg\,max}_{\mathbf{w}}\log p(\mathcal{ D}|\mathbf{w}).\]
However, it is important to note that point solutions like \(\hat{\mathbf{w}}_{\mathrm{MAP}}\) or \(\hat{\mathbf{w}}_{\mathrm{MLE}}\) are not Bayesian per se, since they do not use _marginalization_ with respect to the posterior, a distinguishing property of the Bayesian approach (Wilson, 2020).
**Dropout.** In this regularization technique due to Srivastava et al. (2014), each individual unit is removed with some probability \(\rho\) by setting its activation to zero. This can be recast as multiplying the activations \(h_{ij}^{(\ell)}\) by a mask variable \(m_{ij}^{(\ell)}\), which randomly takes the values 0 or 1: \(h_{ij}^{(\ell)}=m_{ij}^{(\ell)}\phi(g_{ij}^{(\ell)})\). Significant work has focused on the effect of _dropout_ as a weight regularizer (Wager et al., 2013). Inductive bias (see Section 2.3) of dropout was studied in Mianjy et al. (2018): for single hidden-layer linear neural networks, they show that dropout tends to make the norm of incoming/outgoing weight vectors of all hidden nodes equal.
The dropout technique can be reinterpreted as a form of approximate Bayesian variational inference (Kingma et al., 2015; Gal and Ghahramani, 2016). Gal and Ghahramani (2016) build a connection between dropout and the Gaussian process representation, while Kingma et al. (2015) propose a way to interpret Gaussian dropout. They develop a _variational dropout_ where each weight of a model has its individual dropout rate. _Sparse variational dropout_, proposed by Molchanov et al. (2017), extends _variational dropout_ to all possible values of dropout rates and leads to a sparse solution. The approximate posterior is chosen to factorize either over rows or over individual entries of the weight matrices. The prior usually factorizes in the same way. Therefore, performing dropout can be used as a Bayesian approximation. However, as noted by Duvenaud et al. (2014), dropout has no regularization effect on infinitely-wide hidden layers.
Nalisnick et al. (2019) propose a Bayesian interpretation of regularization via multiplicative noise, with dropout being the particular case of Bernoulli noise. They find that noise applied to hidden units ties the scale parameters in the same way as the automatic relevance determination (ARD) algorithm (Neal, 1996), a well-studied shrinkage prior. See Section 4.2.3 for more details.
### Approximate inference for Bayesian neural networks
Exact inference is intractable for Bayesian deep neural networks (DNNs) due to them being highly non-linear functions. Therefore, practitioners resort to approximate inference techniques. Typically, Bayesian approximate inference techniques fall into the following groups: 1) _variational inference_, 2) _Laplace approximation_, and 3) _Monte Carlo sampling_. These approaches for DNNs have strong similarities to the general approaches described in Section 3.3. However, the following problems arise in the deep learning setting:
* Inference is difficult or intractable: deep learning models have a very large number of parameters and the training datasets have many samples;
* The DNNs' loss landscape is multimodal: deep learning models have many local minima with near equivalent training loss.
To address these issues, researchers propose approaches to performing inference in DNNs that are more efficient than those that strictly follow the Bayesian paradigm. Depending on one's point of view, these approaches can be seen either as very rough approximations to the true posterior distribution, or as non-Bayesian approaches that still provide useful uncertainty estimates (see more discussion on this in Section 5). In this section, we give an overview of inference methods in DNNs and describe the tractability and multimodality problems in more detail.
#### 4.2.1 Variational inference
The first _variational approach_ applied to simple neural networks is proposed by Hinton and Van Camp (1993). They use an analytically tractable Gaussian approximation to the true posterior distribution with a diagonal covariance matrix. Further, Barber and Bishop (1998) show that this approximation can be extended to a general covariance matrix while remaining tractable. However, these methods were not deemed fully satisfactory due to their limited practicality. It took eighteen years after the pioneering work of Hinton and Van Camp (1993) for more practical variational techniques to appear, with the work of Graves (2011), who suggests searching for variational distributions with efficient numerical integration. This allows variational inference for very complex neural networks but remains computationally extremely heavy. Later, Kingma and Welling (2014) introduce a _reparameterization trick_ for the variational evidence lower bound (ELBO), yielding a lower bound estimator (see Section 3.3.1 for a definition of the ELBO). This estimator can be straightforwardly optimized using standard stochastic gradient methods.
Along with the advances in variational methods and scalable inference, Blundell et al. (2015) propose a novel yet efficient algorithm named _Bayes by Backprop_ (BBB) to quantify the uncertainty of the neural network weights. It is amenable to backpropagation and returns an approximate posterior distribution, still allowing for complex prior distributions. This method achieves performance on par with neural networks combined with dropout. However, it requires twice more training parameters than the original non-Bayesian neural network due to the need for Gaussian variance parameters. At the same time, Hernandez-Lobato and Adams (2015) suggest the _probabilistic backpropagation procedure_ (PBP), which propagates expectations and performs backpropagation in a standard way. In addition, both BBB and PBP assume independence between weights when optimizing the variational evidence lower bound. While they achieve good results on small datasets, this substantial restrictive assumption on the posterior distribution is likely to result in underestimating the overall posterior uncertainty.
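To fix ideas, here is a deliberately stripped-down sketch of the reparameterization trick behind Bayes by Backprop for a single Bayesian linear layer; the data, the unit noise variance, and the standard Gaussian prior are illustrative assumptions, and a real implementation would handle multiple layers, mini-batching, and several Monte Carlo samples.

```python
import torch

torch.manual_seed(0)
X, y = torch.randn(128, 10), torch.randn(128, 1)           # toy data

mu = torch.zeros(10, 1, requires_grad=True)                 # variational posterior means
rho = torch.full((10, 1), -3.0, requires_grad=True)         # softplus(rho) = posterior std
prior_std = 1.0                                             # N(0, 1) prior on each weight
opt = torch.optim.Adam([mu, rho], lr=1e-2)

for _ in range(500):
    std = torch.nn.functional.softplus(rho)
    w = mu + std * torch.randn_like(std)                    # reparameterized sample: w = mu + sigma * eps
    nll = 0.5 * ((X @ w - y) ** 2).sum()                    # Gaussian likelihood with unit noise variance
    kl = (torch.log(prior_std / std) + (std ** 2 + mu ** 2) / (2 * prior_std ** 2) - 0.5).sum()
    loss = nll + kl                                         # one-sample estimate of the negative ELBO
    opt.zero_grad()
    loss.backward()
    opt.step()
```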
Variational inference with the _mean-field_ assumption (Blundell et al., 2015; Khan et al., 2018; Kingma et al., 2015; Khan et al., 2017) achieved early success for BNNs due to being computationally cheap and easy to adapt to modern automatic differentiation libraries. However, the mean-field assumption is too restrictive to achieve a reliable posterior approximation.
A whole body of research focuses on adapting variational inference to deep learning models under different optimization methods to find flexible solutions (Louizos and Welling, 2016; Sun et al., 2017; Osawa et al., 2019; Zhang et al., 2018; Dusenberry et al., 2020; Mishkin et al., 2018). Typically, more expressive variational posteriors achieve lower test negative log-likelihood and misclassification error, as well as better uncertainty calibration. But variational inference methods are known to suffer from _mode collapse_(Lakshminarayanan et al., 2017), i.e., tend to focus on a single mode of the posterior distribution. Thus, the resulting variational posterior distributions still lack expressiveness. Moreover, accurate variational inference for DNNs is difficult for practitioners as it often requires tedious optimization of hyperparameters (Wen et al., 2018).
#### 4.2.2 Laplace approximation
The Laplace approximation can be seen as an intermediate step between variational inference and sampling approaches (see Section 3.3.2 for details). It is computationally relatively cheap and useful for theoretical analyses, resulting in an expressive posterior. The main advantage is bypassing the need to optimize the data likelihood of the stochastic predictor. Furthermore, once at a minimum of the loss landscape, Gaussian posteriors can be calculated using simple vector products. It brings significant benefits for DNNs, as optimization of the data likelihood for a stochastic neural network is challenging in practice, as we mentioned in the previous section.
Works that conventionally popularized BNNs are MacKay (1992) and Neal (1992, 1996). MacKay (1992) is the first to perform an extensive study using the Laplace method. He experimentally shows that BNNs have high predictive uncertainty in the regions outside of the training data. The approach has recently seen a resurgence in interest due to these appealing properties. For a Gaussian posterior, the primary problem is choosing an appropriate approximation to the Hessian (and, therefore, the Gaussian covariance) that is computationally tractable for modern deep networks. Ritter et al. (2018) propose the Kronecker-factored Approximate Curvature (K-FAC) approximation for the Hessian (Martens and Grosse, 2015). This results in a block diagonal covariance that can be efficiently estimated using the outer products of the gradients.
Daxberger et al. (2021) introduce Laplace Redux, a Python package that automatically computes the Laplace approximation of a given network, for various approximations to the covariance. It has led to a flurry of research on the Laplace approximation that includes works on improving predictions (Immer et al., 2021; Antoran et al., 2022), the use of the marginal likelihood for model selection (Immer et al., 2021; Lotfi et al., 2022), as well as learning architectures that are invariant to transformations of the dataset (Immer et al., 2022). The Laplace method can also be used to efficiently compute a posterior on a subnetwork, resulting in a more expressive posterior of the whole network (Daxberger et al., 2021).
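For intuition, the following sketch shows a post-hoc Laplace step using squared mini-batch gradients as a crude diagonal curvature proxy (the K-FAC structure above is more faithful to the Hessian); `model`, `loss_fn` and `data_loader` are assumed to be given, with the model already trained to a MAP estimate.

```python
import torch

def diagonal_laplace(model, loss_fn, data_loader, prior_precision=1.0):
    """Return per-parameter means and variances of a diagonal Gaussian posterior."""
    params = [p for p in model.parameters() if p.requires_grad]
    curvature = [torch.zeros_like(p) for p in params]
    for x, y in data_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for c, p in zip(curvature, params):
            c += p.grad.detach() ** 2          # squared gradients as a rough diagonal curvature estimate
    precision = [c + prior_precision for c in curvature]    # likelihood curvature + prior precision
    return [p.detach().clone() for p in params], [1.0 / q for q in precision]

def sample_posterior_weights(means, variances):
    """Draw one weight sample from the Laplace Gaussian posterior for Monte Carlo prediction."""
    return [m + v.sqrt() * torch.randn_like(m) for m, v in zip(means, variances)]
```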
#### 4.2.3 Sampling methods
While the Laplace approximation offers comparable or even better posterior expressiveness and is more stable to optimize than variational inference methods, it still suffers from exploring only a single mode of the loss landscape. Sampling-based approaches offer a potential solution to
this problem (see Section 3.3.3). While having a heavy computational burden, they provide (asymptotically) samples from the true posterior and should be able to explore all modes.
**MCMC/HMC.** Neal (1993) proposes the first Markov chain Monte Carlo (MCMC) sampling algorithm for Bayesian neural networks. He presents _Hamiltonian Monte Carlo_ (HMC), a sophisticated gradient-based MCMC algorithm. However, HMC is prohibitively expensive, requiring full gradient estimates as well as long burn-in periods before providing a single sample from the posterior. Only recently, Izmailov et al. (2021) revisit this approach and apply it to modern deep learning architectures. They use a large number of Tensor Processing Units (TPUs) to perform inference, which is not typically practical. Huang et al. (2023) propose a sampling approach based on adaptive importance sampling which exploits some geometric information on the complex (often multimodal) posterior distribution.
**Monte Carlo dropout.** Gal and Ghahramani (2016) establish that neural networks with dropout applied before every weight layer are mathematically equivalent to an approximation to the probabilistic deep Gaussian process (Damianou and Lawrence, 2013). This gives rise to the MC dropout method, a prevalent approach to obtaining uncertainty estimates using dropout without additional cost. More specifically, the idea of Monte Carlo dropout is simple and consists of performing random sampling at test time. Instead of turning off the dropout layers at test time (as is usually done), hidden units are randomly dropped out according to a Bernoulli(\(p\)) distribution. Repeating this operation \(M\) times provides \(M\) versions of the MAP estimate of the network parameters \(\mathbf{w}^{m}\), \(m=1,\ldots,M\) (where some units of the MAP are dropped), yielding an approximate posterior predictive in the form of the equal-weight average:
\[p(y|x,\mathcal{D})\approx\frac{1}{M}\sum_{m=1}^{M}p(y|x,\mathbf{w}^{m}). \tag{9}\]
However, the obtained approximate posterior exhibits some pathologies which can result in overconfidence (Foong et al., 2019). Also, Monte Carlo dropout captures some uncertainty from out-of-distribution (OOD) inputs but is nonetheless incapable of providing valid posterior uncertainty. Indeed, Monte Carlo dropout changes the Bayesian model under study, which also modifies the properties of the approximate Bayesian inference performed. Specifically, Folgoc et al. (2021) show that the Monte Carlo dropout posterior predictive (9) assigns zero probability to the true model posterior predictive distribution.
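In code, Equation (9) amounts to averaging a few stochastic forward passes with dropout left active; the sketch below assumes a classification model whose only stochastic layers are dropout layers (otherwise one would switch only the dropout modules to training mode).

```python
import torch

def mc_dropout_predict(model, x, num_samples=50):
    model.train()                    # keep dropout active at test time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(num_samples)])
    return probs.mean(dim=0), probs.std(dim=0)   # predictive mean and a simple spread estimate
```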
**Stochastic gradient Markov chain Monte Carlo (SG-MCMC).** The seminal work of Welling and Teh (2011) combines SGD and Langevin dynamics providing a highly scalable sampling scheme as an efficient alternative to a full evaluation of the gradient. The tractability of gradient mini-batches evaluations in SGD is a common feature behind many subsequent proposals (Ahn et al., 2012; Chen et al., 2014; Neiswanger et al., 2014; Korattikara Balan et al., 2015; Wang et al., 2015).
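Schematically, a single SGLD update adds Gaussian noise, scaled by the step size, to a stochastic gradient step on the posterior energy \(U(\mathbf{w})=-\log p(\mathcal{D}|\mathbf{w})-\log p(\mathbf{w})\); the helper below is a sketch in which `minibatch_grad_U` stands for an unbiased mini-batch estimate of \(\nabla U\).

```python
import torch

def sgld_step(params, minibatch_grad_U, step_size):
    """One Langevin update: w <- w - (eps/2) * grad U(w) + N(0, eps * I)."""
    with torch.no_grad():
        for p, g in zip(params, minibatch_grad_U):
            noise = torch.randn_like(p) * step_size ** 0.5
            p.add_(-0.5 * step_size * g + noise)
```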
However, posterior distributions in deep learning often have complex geometries including multimodality, high curvatures, and saddle points. The presence of these features heavily impacts the efficacy of SG-MCMC in properly exploring the posterior. In order to partially alleviate this problem, Ma et al. (2015); Li et al. (2016) use adaptive preconditioners to mitigate the rapidly changing curvature. Borrowing ideas from the optimization literature, preconditioners use local information of the posterior geometry at each step to provide more efficient proposals. To address the multimodality problem, Zhang et al. (2019) propose an SG-MCMC with a cyclical step-size
schedule. By alternating large and small step-size proposals, the sampler explores a large portion of the posterior, moving from one mode to another while also exploring each mode locally. Combining these two approaches of adaptive preconditioning and cyclical step-size scheduling yields a state-of-the-art sampling algorithm in Bayesian deep learning (Wenzel et al., 2020).
Both MCMC and stochastic gradient-MCMC based methods often result in state-of-the-art results with respect to the test negative log-likelihood error and accuracy (Izmailov et al., 2021), albeit with significant additional computation and storage costs compared to variational inference and the Laplace approximation.
## To be Bayesian or not to be?
This section highlights several areas where Bayesian and frequentist approaches overlap, sometimes in a controversial way. In some cases, this overlap brings mutual benefits to both perspectives, resulting in theoretical and empirical advances. However, some topics do not appear to be resolved and remain open for discussion.
In Section 5.1, we first discuss how the Bayesian framework can lead to insights and improvements for standard NNs and vice versa. In Section 5.1.1, we describe the connections between randomized initialization schemes for deterministic neural networks and priors in the Bayesian framework. Section 5.1.2 discusses connections between the optimization methods used for deterministic neural networks (such as SGD and ADAM) and posterior distributions in the Bayesian framework. To make BNNs competitive with their deterministic counterparts, it is often necessary to down-weight the effect of the prior in approximate inference, resulting in what are known as _cold_ or _tempered_ posteriors (Wilson, 2020; Wenzel et al., 2020). We discuss this effect and its possible interpretations given in the literature in Section 5.1.3. In Section 5.1.4, we discuss the connection between deep ensembles and approximate inference methods.
In Section 5.2, we discuss certificates that can be obtained for the performance on out-of-sample data for Bayesian neural networks and relate these to the frequentist setting. In Section 5.2.1, we detail frequentist guarantees on posterior contraction, showing that the posterior concentrates around the true parameter as the sample size grows to infinity. In Section 5.2.2, we describe how PAC-Bayes theorems can be used to certify the performance of Bayesian neural networks on out-of-sample data with high probability. In Section 5.2.3, we discuss the use of the marginal likelihood for model selection. The marginal likelihood has been a subject of debate and various interpretations in recent years, and we detail its connections to frequentist guarantees on out-of-sample performance.
Finally in Section 5.3, we describe the difficulties encountered when benchmarking Bayesian neural networks. In Section 5.3.1, we discuss various popular datasets used to evaluate uncertainty in Bayesian deep learning. In Section 5.3.2, we discuss the different evaluation metrics that are being used for evaluation. Finally in Section 5.3.3, we describe subtle differences in how neural network outputs can be interpreted. These differences can result in different conclusions across different researchers.
### Frequentist and Bayesian connections
Deep neural networks have been typically treated as deterministic predictors. This has been mainly due to the significant computational costs of training. Significant research has been conducted in deriving good initialization schemes for deep neural network parameters and good optimizers. In this section, we explore the connections between the design choices in this frequentist setting and the Bayesian setting. Furthermore, we make connections between deep ensembles and Bayesian inference and provide some possible explanations as to why deterministic neural networks often outperform Bayesian ones.
```
TL:DR Empirical studies have demonstrated that SGD tends to induce heavy-tailed distributions on the weights of neural networks. This deviates from the prevalent assumption of Gaussian distributions in variational inference. By adopting Bayesian principles, frequentist optimizers can be reinterpreted, leading to enhanced outcomes in uncertainty estimation. However, to achieve competitive performance, it is often necessary to down-weight the influence of the prior distribution. The underlying reasons for this requirement are currently a subject of active debate within the research community. Despite ongoing efforts, Bayesian approaches often struggle to surpass the performance of deep ensembles in various tasks.
```
#### 5.1.1 Priors and initialization schemes
This section reviews techniques for choosing initialization distributions over weights and biases in neural networks. This is in essence a frequentist procedure, but it can also be interpreted as prior elicitation from a Bayesian standpoint. Initialization schemes often consider Gaussian distributions on the pre-activations. As such, they are closely related to the Bayesian wide-regime limit when the number of hidden units per layer tends to infinity, because this regime results in a Gaussian process distribution over functions (Section 4.1.2). Therefore, approaches to choosing deep neural network initializations should be fruitful in designing better deep neural network priors, and vice versa.
In deep learning, initializing neural networks with appropriate weights is crucial to obtaining convergence. If the weights are too small, then the variance of the input signal is bound to decrease after passing through several layers of the network. As a result, the input signal may drop under some critical minimal value, leading to inefficient learning. On the other hand, if the weights are too large, then the variance of the input signal tends to grow rapidly with each layer. This leads to a saturation of neurons' activations and to gradients that approach zero. This problem is sometimes referred to as _vanishing gradients_. Opposite to the vanishing problem is the accumulation of large error gradients during backpropagation: the gradient grows exponentially through repeated multiplication, leading to _exploding gradients_. Initialization must therefore mitigate both _vanishing_ and _exploding gradients_. In addition, the _dying ReLU_ problem becomes very common as depth increases (Lu et al., 2020).
Initialization also must induce _symmetry breaking_, i.e., forcing neurons to learn different functions so that the effectiveness of a neural network is maximized. Usually, this issue is solved with the _randomization procedure_. Randomized asymmetric initialization helps to deal with the dying ReLU problem (Lu et al., 2020).
Frankle and Carbin (2019) propose an iterative algorithm for parameter pruning in neural networks that keeps the original initialization of the surviving weights after pruning, yielding the so-called _winning ticket_ of the initialization "lottery". Neural networks with such winning tickets can outperform unpruned neural networks; see Malach et al. (2020) for theoretical investigations. These findings illustrate that a neural network's initialization influences its structure, even if this influence is not immediately apparent. This also opens a crucial question in deep learning research: _how to best assign network weights before training starts?_
The standard option for the initialization distribution is independent Gaussian. The Gaussian distribution is easy to specify as it is defined solely in terms of its mean and variance. It is also
straightforward to sample from, which is an essential consideration when picking a sampling distribution in practice. In particular, to initialize a neural network, we independently sample each bias \(b_{i}^{(\ell)}\) and each weight \(w_{ij}^{(\ell)}\) from zero-mean Gaussian distributions:
\[b_{i}^{(\ell)}\sim\mathcal{N}\left(0,\sigma_{b}^{2}\right),\quad w_{ij}^{(\ell) }\sim\mathcal{N}\left(0,\frac{\sigma_{w}^{2}}{H_{\ell-1}}\right), \tag{10}\]
for all \(i=1,\ldots,H_{\ell}\) and \(j=1,\ldots,H_{\ell-1}\). Here, the normalization of weight variances by \(1/H_{\ell-1}\) is conventional to avoid the variance explosion in wide neural networks. The bias variance \(\sigma_{b}^{2}\) and weight variance \(\sigma_{w}^{2}\) are called _initialization hyperparameters_. Note that these could depend on the layer index \(\ell\). The next question is _how to set the initialization hyperparameters_ so that the output of the neural network is well-behaved.
**Xavier's initialization.** An active line of research studies the propagation of deterministic inputs in neural networks. Some heuristics are based on the information obtained before and after backpropagation, such as the variance and covariance between neurons or units corresponding to different inputs. Glorot and Bengio (2010) suggest sampling weights from a uniform distribution that preserves the variance of activations in the forward pass and of gradients in the backward pass, which requires weight variances of respectively \(1/H_{\ell-1}\) and \(1/H_{\ell}\). Since both conditions are in general incompatible, the initialization variance is a compromise between the two: \(2/(H_{\ell-1}+H_{\ell})\). The initialization distribution, called _Xavier's_ or _Glorot's_, is the following:
\[w_{ij}^{(\ell)}\sim\mathcal{U}\left(-\frac{\sqrt{6}}{\sqrt{H_{\ell-1}+H_{\ell }}},\frac{\sqrt{6}}{\sqrt{H_{\ell-1}+H_{\ell}}}\right),\]
with biases \(b_{i}^{(\ell)}\) assigned to zero. The same reasoning can be applied with a zero-mean normal distribution:
\[w_{ij}^{(\ell)}\sim\mathcal{N}\left(0,\frac{1}{H_{\ell-1}}\right),\quad\text{ or}\quad w_{ij}^{(\ell)}\sim\mathcal{N}\left(0,\frac{2}{H_{\ell-1}+H_{\ell}} \right).\]
This heuristic, based on an analysis of linear neural networks, has been improved by He et al. (2015). First, they show that the variance of the initialization can be indifferently set to \(1/H_{\ell-1}\) or \(1/H_{\ell}\) (up to a constant factor) without damaging either information propagation or back-propagation, thus making any compromise unnecessary. Second, they show that for the ReLU activation function, the variance of the Xavier initialization should be multiplied by \(2\), that is:
\[w_{ij}^{(\ell)}\sim\mathcal{N}\left(0,\frac{2}{H_{\ell-1}}\right).\]
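The sampling rules above are a few lines of code; the NumPy sketch below reproduces the Xavier (uniform) and He (normal) schemes for a weight matrix of shape \((H_{\ell},H_{\ell-1})\), and deep learning libraries expose equivalent routines (e.g., `torch.nn.init`).

```python
import numpy as np

rng = np.random.default_rng(0)

def xavier_uniform(fan_in, fan_out):
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_out, fan_in))

def he_normal(fan_in, fan_out):
    # Variance 2 / fan_in, suited to ReLU activations.
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_out, fan_in))

W1 = xavier_uniform(fan_in=256, fan_out=128)
W2 = he_normal(fan_in=128, fan_out=64)
```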
**Edge of Chaos.** Other works explore the covariance between pre-activations corresponding to two given different inputs. Poole et al. (2016) and Schoenholz et al. (2017) obtain recurrence relations by using Gaussian initializations and under the assumption of Gaussian pre-activations. They conclude that there is a critical line, so-called _Edge of Chaos_, separating signal propagation into two regions. The first one is an ordered phase in which all inputs end up asymptotically fully correlated, while the second region is a chaotic phase in which all inputs end up asymptotically independent. To propagate the information deeper in a neural network, one should choose initialization hyperparameters \((\sigma_{b}^{2},\sigma_{w}^{2})\) corresponding to the separating Edge of Chaos line, which we describe below in more detail.
Let \(\mathbf{x}_{a}\) be a deterministic input vector of a data point \(a\), and \(g_{i,a}^{(\ell)}\) be the \(i\)th pre-activation at layer \(\ell\) given a data point \(a\). Since the weights and biases are randomly initialized according to a centered distribution (some Gaussian), the pre-activations \(g_{i,a}^{(\ell)}\) are also random variables, centered and identically distributed. Let
\[q_{aa}^{(\ell)} =\mathbb{E}\left[\left(g_{i,a}^{(\ell)}\right)^{2}\right],\quad q _{ab}^{(\ell)}=\mathbb{E}\left[g_{i,a}^{(\ell)}g_{i,b}^{(\ell)}\right],\] \[\text{and}\quad c_{ab}^{(\ell)} =q_{ab}^{(\ell)}/\sqrt{q_{aa}^{(\ell)}q_{bb}^{(\ell)}},\]
be respectively their variance according to input \(a\), covariance and correlation according to two inputs \(a\) and \(b\). Assume the Gaussian initialization rules (or priors) of Equation (10) for the weights \(w_{ij}^{(\ell)}\) and biases \(b_{i}^{(\ell)}\) for all \(\ell\), \(i\) and \(j\), independently. Then, under the assumption that pre-activations \(g_{i,a}\) and \(g_{i,b}\) are Gaussian, the variance and covariance defined above satisfy the following two-way recurrence relations:
\[q_{aa}^{(\ell)} =\sigma_{w}^{2}\int\phi^{2}\left(u_{1}^{(\ell-1)}\right)\mathcal{ D}g_{i,a}+\sigma_{b}^{2},\] \[q_{ab}^{(\ell)} =\sigma_{w}^{2}\int\phi(u_{1}^{(\ell-1)})\phi(u_{2}^{(\ell-1)}) \mathcal{D}g_{i,a}\mathcal{D}g_{i,b}+\sigma_{b}^{2}.\]
Here, \(\mathcal{D}g_{i,a}\) and \(\mathcal{D}g_{i,b}\) stand for the distributions of standard Gaussian pre-activations \(g_{i,a}\) and \(g_{i,b}\). Also, \((u_{1}^{(\ell-1)},u_{2}^{(\ell-1)})\) correspond to the following change of variables
\[u_{1}^{(\ell-1)} =\sqrt{q_{aa}^{(\ell-1)}}g_{i,a},\] \[u_{2}^{(\ell-1)} =\sqrt{q_{bb}^{(\ell-1)}}\left(c_{ab}^{(\ell-1)}g_{i,a}+\sqrt{1-( c_{ab}^{(\ell-1)})^{2}}g_{i,b}\right).\]
For any \(\sigma_{w}^{2}\) and \(\sigma_{b}^{2}\), there exist limiting points \(q^{*}\) and \(c^{*}\) for the variance, \(q^{*}=\lim_{\ell\to\infty}q_{aa}^{(\ell)}\), and for the correlation, \(c^{*}=\lim_{\ell\to\infty}c_{ab}^{(\ell)}\). Two regions can be defined depending on the value of \(c^{*}\): (i) an _ordered_ region if \(c^{*}=1\), as any two inputs \(a\) and \(b\), even far from each other, tend to be fully correlated in the deep limit \(\ell\to\infty\); (ii) a _chaos_ region if \(c^{*}<1\), as any two inputs \(a\) and \(b\), even close to each others, tend to decorrelate as \(\ell\to\infty\).
To study whether the point \(c^{*}=1\) is _stable_, we need to check the value of the derivative \(\chi_{1}=\frac{\partial c_{ab}^{(\ell)}}{\partial c_{ab}^{(\ell-1)}}\Big|_{c_{ab}^{(\ell-1)}=1}\). There are three cases: (i) _order_, when \(\chi_{1}<1\), i.e., the point \(c^{*}=1\) is stable; (ii) _transition_, when \(\chi_{1}=1\); (iii) _chaos_, when \(\chi_{1}>1\), i.e., the point \(c^{*}=1\) is unstable. Therefore, there exists a separating line in the hyperparameter \((\sigma_{w}^{2},\sigma_{b}^{2})\) space, defined by \(c^{*}=1\) and \(\chi_{1}=1\), that is referred to as the _Edge of Chaos_. By assigning the hyperparameters on the Edge of Chaos line, the information propagates as deep as possible from inputs to outputs. Note that all of this procedure assumes that the pre-activations \(g_{i,a}\) and \(g_{i,b}\) are Gaussian. Wolinski and Arbel (2023) analyze the Edge of Chaos framework without the Gaussian hypothesis.
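As a small numerical illustration (our own sketch, for the tanh activation, using Gauss-Hermite quadrature), one can iterate the variance map to its fixed point \(q^{*}\) and evaluate \(\chi_{1}=\sigma_{w}^{2}\int\left[\phi^{\prime}(\sqrt{q^{*}}\,z)\right]^{2}\mathcal{D}z\); hyperparameters for which \(\chi_{1}\approx 1\) lie near the Edge of Chaos line.

```python
import numpy as np

nodes, weights = np.polynomial.hermite_e.hermegauss(80)   # quadrature for E[f(z)], z ~ N(0, 1)
weights = weights / np.sqrt(2 * np.pi)

def gauss_expect(f):
    return np.sum(weights * f(nodes))

def variance_fixed_point(sigma_w2, sigma_b2, iters=200):
    q = 1.0
    for _ in range(iters):                                 # iterate the variance map to its fixed point q*
        q = sigma_w2 * gauss_expect(lambda z: np.tanh(np.sqrt(q) * z) ** 2) + sigma_b2
    chi1 = sigma_w2 * gauss_expect(lambda z: (1.0 - np.tanh(np.sqrt(q) * z) ** 2) ** 2)
    return q, chi1

# chi1 < 1: ordered phase; chi1 > 1: chaotic phase; chi1 = 1: Edge of Chaos.
print(variance_fixed_point(sigma_w2=1.76, sigma_b2=0.05))  # illustrative hyperparameter values
```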
#### 5.1.2 Posteriors and optimization methods
Neural networks without explicit regularization perform well on out-of-sample data (Zhang et al., 2017a). This could mean that neural network models, and their architecture or optimization procedure in particular, have an inductive bias which leads to implicit regularization during
training. A number of works aim at understanding this topic by analyzing the SGD training process.
One can relate this research direction to the Bayesian perspective. In particular, especially in variational inference, Bayesian practitioners are greatly concerned with the family of posterior distributions they optimize. Insights into the distribution of solutions found by common optimizers could inform the design of better parametric families to optimize. Nevertheless, research on the posterior distributions induced by constant step SGD remains in its infancy. Here we review some recent results and argue that it will be fruitful to see their implications for Bayesian inference.
Some works establish that SGD induces implicit regularization. For instance, Soudry et al. (2018) show that SGD leads to \(\mathscr{L}_{2}\) regularization for linear predictors. Further, SGD applied to convolutional neural networks of depth \(L\) with linear activation function induces \(\mathscr{L}_{2/L}\) regularization (Gunasekar et al., 2018). This type of regularization can be explicitly enforced in the Bayesian setting, for example by the use of an isotropic Gaussian prior. Recent research also proposes that SGD induces heavy-tailed distributions in deep neural networks and connects this with compressibility. Mahoney and Martin (2019) empirically assess the correlation matrices of the weights. Using spectral theory, they show that the spectrum of the correlation matrix becomes heavy-tailed during training, a phenomenon known as heavy-tailed self-regularization. Gurbuzbalaban et al. (2021) also argue that the gradient noise is heavy-tailed. This has important implications for a Bayesian practitioner. In particular, heavy-tailedness of the posterior contrasts with the Gaussian distribution assumption typically made in variational inference and the Laplace approximation. Other parametric distributions have been explored in the literature (Fortuin, 2022).
Conversely, different optimizers have been proposed, partly inspired by Bayesian inference (Neelakantan et al., 2016; Foret et al., 2021; Khan and Rue, 2021). Neelakantan et al. (2016) inject noise into gradient updates, partly inspired by the SGLD algorithm from Bayesian inference. They show significant improvements in out-of-sample performance. Foret et al. (2021) relax a PAC-Bayesian objective so as to obtain an optimizer called Sharpness Aware Minimizer (SAM). The SAM optimizer makes gradient steps that have been adversarially perturbed so as to improve generalization by converging to flatter minima. SAM significantly improves performance on diverse datasets and architectures. The connections with Bayesian inference are deep; Mollenhoff and Khan (2022) show that SAM is an optimal relaxation of the ELBO objective from variational inference. Finally, Mandt et al. (2017) show that SGD can be interpreted as performing approximate Bayesian inference.
The line between frequentist and Bayesian approaches is blurred and has been fruitful in both directions. A significant line of works, including Khan et al. (2017, 2018); Khan and Rue (2021); Osawa et al. (2019); Mollenhoff and Khan (2022), explores existing optimizers that work well in the frequentist setting, and reinterprets them as approximate Bayesian algorithms, subsequently proposing novel (Bayesian) optimizers. Khan et al. (2018) propose a Bayesian reinterpretation of ADAM which has favorable Bayesian inference properties compared to other VI schemes. Mollenhoff and Khan (2022) propose a Bayesian reformulation of SAM which often outperforms the conventional SAM across different metrics. Refer to Khan and Rue (2021) for a detailed treatment of this research direction.
#### 5.1.3 Cold and tempered posteriors
A tempered posterior distribution with temperature parameter \(T>0\) is defined as \(p(\mathbf{w}|\mathcal{D})\propto\exp(-U(\mathbf{w})/T)\), where \(U(\mathbf{w})\) is the posterior energy function
\[U(\mathbf{w})\coloneqq-\log p(\mathcal{D}|\mathbf{w})-\log p(\mathbf{w}).\]
Here \(p(\mathbf{w})\) is a proper prior density function, for example, a Gaussian density. It was recently found empirically that a posterior obtained by exponentiating the posterior to some power greater than one (or, equivalently, by dividing the energy function \(U(\mathbf{w})\) by some temperature \(T<1\)) performs better than the untempered one, an effect termed the _cold posterior effect_ by Wenzel et al. (2020).
The effect is significant for Bayesian inference, as Bayesian inference should in principle result in the most likely parameters given the training data, and thus in optimal predictions. The need for cold posteriors therefore suggests that Bayesian inference, as currently practiced, could be deemed sub-optimal, an observation that cannot go unnoticed.
In order to explain the effect, Wenzel et al. (2020) suggest that Gaussian priors might not be appropriate for Bayesian neural networks, while in other works Adlam et al. (2020) suggest that misspecification might be the root cause. In some works, data augmentation is argued to be the main reason for this cold posterior effect (Izmailov et al., 2021; Nabarro et al., 2021; Bachmann et al., 2022): indeed, artificially increasing the number of observed data naturally leads to higher posterior contraction (Izmailov et al., 2021). At the same time, taking into consideration data augmentation does not entirely remove the cold posterior effect for some models. In addition, Aitchison (2021) demonstrates that the problem might originate in a wrong likelihood specification of the model which does not take into account the fact that common benchmark datasets are highly curated, and thus have low aleatoric uncertainty. Nabarro et al. (2021) hypothesize that using an appropriate prior incorporating knowledge of the data augmentation might provide a solution. Finally, heavy-tailed priors such as Laplace and Student-t are shown to mitigate the cold posterior effect (Fortuin et al., 2021). Kapoor et al. (2022) argue that for Bayesian classification we typically use a categorical distribution in the likelihood with no mechanism to represent our beliefs about aleatoric uncertainty. This leads to likelihood misspecification. With detailed experiments, Kapoor et al. (2022) show that correctly modeling aleatoric uncertainty in the likelihood partly (but not completely) alleviates the cold posterior effect. Pitas and Arbel (2022) discuss how the commonly used Evidence Lower Bound Objective (a sub-case in the cold posterior effect literature) results in a bound on the KL divergence between the true and the approximate posterior, but not a direct bound on the test misclassification rate. They discuss how some of the tightest PAC-Bayesian generalization bounds (which directly bound the test misclassification rate) naturally incorporate a temperature parameter, that trades off the effect of the prior compared to the training data.
Despite the aforementioned research, the cold and tempered posterior effect has still not been completely explained, posing interesting and fruitful questions for the Bayesian deep learning community.
#### 5.1.4 Deep ensembles
Lakshminarayanan et al. (2017) suggest using an _ensemble of networks_ for uncertainty estimation, which does not suffer from mode collapse but is still computationally expensive. Neural network ensembles are multiple MAP estimates of the deep neural network weights. The predictions
of these MAP estimates are then averaged to make an ensemble prediction. Subsequent methods such as _snapshot ensembling_(Huang et al., 2017), _fast geometric ensembling_(FGE: Garipov et al., 2018), _stochastic weight averaging_(SWA: Izmailov et al., 2019), _SWA-Gaussian_(SWAG: Maddox et al., 2019), greatly reduce the computation cost but at the price of a lower predictive performance (Ashukha et al., 2020). While Lakshminarayanan et al. (2017) frame ensemble approaches as an essentially non-Bayesian technique, they can also be cast as a Bayesian model averaging technique (Wilson and Izmailov, 2020; Pearce et al., 2020), and can even asymptotically converge to true posterior samples when adding repulsion (D'Angelo and Fortuin, 2021). Specifically, they can be seen as performing a very rough Monte Carlo estimate of the posterior distribution over weights. Ensembles are cheap and, more importantly, typically outperform carefully crafted Bayesian approaches (Ashukha et al., 2020). This has been empirically explained as resulting from the increased functional diversity of different modes of the loss landscape (Fort et al., 2019). These are sampled by definition using deep ensembles, and this sampling is hard to beat using Bayesian inference.
### Performance certificates
```
TL:DR Bayesian inference is renowned for its ability to provide guarantees on accurate inference of the true posterior distribution given a sufficient amount of data. However, such guarantees pertain to the accurate estimation of the posterior distribution itself, rather than ensuring performance on out-of-sample data. To address the latter, it becomes necessary to rely on generalization bounds, such as the PAC-Bayes framework. Within this framework, model comparison utilizing the marginal likelihood offers guarantees on the performance of the selected model on out-of-sample data, provided that the inference process has been conducted accurately.
```
#### 5.2.1 Frequentist validation of the posterior
Recent works address generalization and approximation errors for the estimation of smooth functions in a nonparametric regression framework using sparse deep NNs and study their posterior mass concentration depending on data sample size. Schmidt-Hieber (2020) shows that sparsely connected deep neural networks with ReLU activation converge at near-minimax rates when estimating Holder-smooth functions, avoiding the curse of dimensionality. Based on this work, Polson and Rockova (2018) introduce a Spike-and-Slab prior for deep ReLU networks which induces a specific regularization scheme in the model training. The obtained posterior in such neural networks concentrates around smooth functions with near-minimax rates of convergence. Further, Kohler and Langer (2021) extend the consistency guarantees for Holder-smooth functions of Schmidt-Hieber (2020) and Polson and Rockova (2018) to fully connected neural networks without the sparsity assumption. Alternatively, Suzuki (2018) provides generalization error bounds for more general functions in Besov spaces and variants with mixed smoothness.
One of the ways to visualize the obtained uncertainty is using credible sets around some parameter estimator, where the credible region contains a large fraction of the posterior mass (Szabo et al., 2015). Hadji and Szabo (2021) study the uncertainty resulting from using Gaussian process priors. Franssen and Szabo (2022) provide Bayesian credible sets with frequentist coverage
guarantees for standard neural networks trained with gradient descent. Only the last layer is assigned a prior distribution on the parameters and the output obtained from the previous layer is used to compute the posterior.
#### 5.2.2 Posterior concentration and generalization to out-of-sample data
It is interesting to take a step back and evaluate the difference in _goals_ between the frequentist and Bayesian approaches to machine learning. The Bayesian approach emphasizes that the posterior concentrates around the true parameter as we increase the training set size, see the previous section. The primary goal of the frequentist approach is the performance on out-of-sample data, i.e., generalization, see Section 2.4. This performance is quantified with validation and test sets. These two goals frequently align, although posterior concentration guarantees and performance on out-of-sample data are typically not mathematically equivalent problems.
When the number of parameters is smaller than the number of samples \(n\), typically in parametric models, the posterior concentrates on the true set of parameters as \(n\) approaches infinity. In such cases, the posterior tends to a Dirac delta mass centered on the true parameters. In this setting, we can then argue that we are making predictions using the true predictive distribution, and frequentist and Bayesian goals align. We have inferred the true predictor (according to Bayesian goals) and can be sure that we cannot improve the predictor loss on new out-of-sample data, such as validation and test sets (according to the priorities of the frequentist approach).
However, neural networks do not operate in this regime. They are heavily overparametrized, so that Bayesian model averaging always occurs empirically. Usually, we are not interested in the proposed model itself but in its predictions based on new data. Also, due to misspecification, we cannot even assume that we are concentrating around the true predictor. At this point, the frequentist and Bayesian goals diverge. But it is clear that in a non-asymptotic setting and where performance on out-of-sample data is crucial, we need a more detailed description of the predictor's loss on new data.
One way to approach this problem is through generalization bounds (Vapnik, 1999) which directly link the empirical loss on the training set with the loss on new data. Of particular interest are PAC-Bayes generalization bounds (McAllester, 1999; Germain et al., 2016; Dziugaite and Roy, 2017; Dziugaite et al., 2021), which directly bound the true risk of a stochastic predictor. Minimizing the ELBO objective in variational inference corresponds to minimizing a PAC-Bayes bound (Dziugaite and Roy, 2017), and thus a bound on the true risk. If alternatively one samples _exactly_ from the Gibbs posterior (for example using MCMC), then one is still minimizing a PAC-Bayes bound on the true risk (Germain et al., 2016). Furthermore, in this setting, maximizing the _marginal likelihood_ of the model is equivalent to minimizing a PAC-Bayes bound (Germain et al., 2016) and it has been shown that PAC-Bayes bounds can be used to meta-learn better priors for BNNs (Rothfuss et al., 2021, 2022).
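For concreteness, a classical McAllester-style bound (stated here in one common formulation, for a loss bounded in \([0,1]\); exact constants vary across refinements) says that, for a prior \(\pi\) chosen before seeing the data and with probability at least \(1-\delta\) over an i.i.d. sample of size \(n\), every posterior \(\rho\) satisfies

\[\mathbf{E}_{\mathbf{w}\sim\rho}\left[R(\mathbf{w})\right]\leq\mathbf{E}_{\mathbf{w}\sim\rho}\left[\hat{R}(\mathbf{w})\right]+\sqrt{\frac{\mathrm{KL}(\rho\,\|\,\pi)+\log\frac{2\sqrt{n}}{\delta}}{2n}},\]

where \(R\) and \(\hat{R}\) denote the true and empirical risks. The trade-off between the empirical risk and the \(\mathrm{KL}(\rho\,\|\,\pi)\) term mirrors the structure of the (tempered) ELBO, which is what makes the correspondence between variational inference and PAC-Bayes bound minimization explicit.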
Of particular interest in this discussion is that performing Bayesian inference is equivalent to minimizing _some_ PAC-Bayes bound and not necessarily _the tightest_ bound. PAC-Bayes bounds typically include a temperature parameter that trades-off the empirical risk with the KL complexity term, and plays a crucial role in the bound tightness (see Section 5.1.3). An interesting open question is whether this temperature parameter provides a justification for the _cold posterior effect_, with a number of works providing evidence to support this view (Grunwald, 2012; Pitas and Arbel, 2022).
#### 5.2.3 Marginal likelihood and generalization
The marginal likelihood (MacKay, 2003) has been explored for model selection, architecture search, and hyperparameter learning for deep neural networks. While estimating the marginal likelihood and computing its gradients is relatively straightforward for simple models such as Gaussian processes (Bishop and Nasrabadi, 2006), for deep neural networks one often has to resort to approximations.
One approach is the Laplace approximation as previously discussed in Section 4.2.2. Daxberger et al. (2021); Immer et al. (2021, 2022) use the Laplace approximation to the marginal likelihood to select the best-performing model on out-of-sample data. They also use the marginal likelihood to learn hyperparameters, in particular the prior variance and the softmax temperature parameter. For the case of the Laplace approximation, the marginal likelihood of training data \(\mathcal{D}\) given the deep neural network architecture \(\mathcal{M}\) can be written as
\[\log p(\mathcal{D}|\mathcal{M})\approx\log p(\mathcal{D}|\hat{\mathbf{w}}_{\text{MAP}},\mathcal{M})+\log p(\hat{\mathbf{w}}_{\text{MAP}}|\mathcal{M})+\frac{d}{2}\log 2\pi-\frac{1}{2}\log\left|\mathbf{\Lambda}_{\hat{\mathbf{w}}_{\text{MAP}}}\right|, \tag{11}\]
where \(d\) is the number of weights of the neural network, \(\hat{\mathbf{w}}_{\text{MAP}}\) is a MAP estimate of the network parameters, and \(\mathbf{\Lambda}_{\hat{\mathbf{w}}_{\text{MAP}}}\) is the precision matrix of the Gaussian posterior distribution under the Laplace approximation. Similarly to the discussion in Section 4.2.2, the primary computational problem is forming the precision matrix and estimating its determinant. Again the generalized Gauss-Newton approximation and the Empirical Fisher approximation to the Hessian (and correspondingly to the precision matrix) are the most common and efficient approximations, and are the ones used in Daxberger et al. (2021); Immer et al. (2021). On a conceptual level, a main criticism of the Laplace approximation for the marginal likelihood of deep neural networks is that it is unimodal while the loss landscape of deep neural networks has multiple minima (Lotfi et al., 2022). This might severely underestimate the volume of good solutions with respect to bad solutions given the prior, which is essentially what the marginal likelihood estimates. A further criticism is that this approximation to the marginal likelihood is sensitive to the prior variance. Indeed for a fixed prior variance across different neural network architectures, Lotfi et al. (2022) show that the marginal likelihood performs poorly for model selection. However optimizing a common prior covariance across layers, or optimizing different prior variances for different layers, results in a better empirical correlation of the marginal likelihood with out-of-sample performance. Overall, the marginal likelihood provides reasonable predictive power for out-of-sample performance for deep neural networks, and as such constitutes a reasonable approach to model selection.
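As a minimal numerical sketch of Equation (11) (the log-likelihood, log-prior, and precision matrix below are toy placeholders, not quantities produced by any particular network or library):

```python
import numpy as np

def laplace_log_marginal_likelihood(log_lik_map, log_prior_map, precision):
    """Laplace estimate of log p(D|M): log-likelihood and log-prior at the MAP
    estimate plus the Gaussian volume term (d/2) log(2*pi) - (1/2) log|Lambda|."""
    d = precision.shape[0]
    sign, logdet = np.linalg.slogdet(precision)
    assert sign > 0, "the precision matrix must be positive definite"
    return log_lik_map + log_prior_map + 0.5 * d * np.log(2.0 * np.pi) - 0.5 * logdet

# Toy positive-definite precision matrix standing in for a GGN/Fisher approximation.
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
print(laplace_log_marginal_likelihood(-120.0, -3.5, A @ A.T + 5.0 * np.eye(5)))
```

In practice, the log-determinant term is obtained from a diagonal, last-layer, or Kronecker-factored approximation of the precision matrix rather than from the dense matrix used in this toy example.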
A different approach is to resort to the product decomposition of the marginal likelihood as
\[\log p(\mathcal{D}|\mathcal{M}) =\log\prod_{i=1}^{n}p(\mathcal{D}_{i}|\mathcal{D}_{<i},\mathcal{M}) \tag{12}\] \[=\sum_{i=1}^{n}\log[\mathbf{E}_{p(\theta|\mathcal{D}_{<i})}p( \mathcal{D}_{i}|\theta,\mathcal{M})]\]
which measures how good the model is at predicting each data point \(\mathcal{D}_{i}\) in sequence given every data point before it, \(\mathcal{D}_{<i}\). Based on this observation, Lyle et al. (2020); Ru et al. (2021) propose the sum of losses of the different batches across an epoch as an approximation to the marginal
likelihood. Then, they use this as a measure of the ability of a model to generalize to out-of-sample data. They also propose different heuristics, such as taking the average of the sum of the losses over multiple epochs. A further heuristic is keeping only the last epochs of training while rejecting the sum of the losses of the first epochs. Finally the authors propose to train the neural network for a limited number of epochs, for example only half of the number of epochs that would be typically used to train to convergence. As such the approach is computationally efficient, requiring only partial convergence of the deep neural network and a calculation of the training losses over batches, which are efficient to estimate.
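A minimal sketch of this proxy, assuming a PyTorch-style training loop (`model`, `batches`, `nll_loss`, and `optimizer` are assumed names rather than a specific API), accumulates the per-batch training losses over one epoch and returns their negated sum as the score used to rank models:

```python
def marginal_likelihood_proxy(model, batches, nll_loss, optimizer):
    """Approximate log p(D|M) by the negated sum of per-batch training losses
    over one epoch: each batch loss stands in for -log E[p(D_i | D_<i, M)]
    in the product decomposition of Equation (12)."""
    total_nll = 0.0
    for inputs, targets in batches:
        loss = nll_loss(model(inputs), targets)  # negative log-likelihood of the batch
        total_nll += float(loss)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return -total_nll  # higher is better, like the true log marginal likelihood
```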
Ru et al. (2021) compare their approach on the task of architecture search to other common approaches. These approaches are a mixture of heuristics and frequentist statistics. The first is the sum of validation losses up to a given epoch. The second is the validation accuracy at an early epoch, which corresponds to the early-stopping practice whereby the user estimates the final test performance of a network using its validation accuracy at an early epoch. The third is the learning curve extrapolation method, which was proposed in Baker et al. (2017) and which trains a regression model on previously evaluated architecture data to predict the final test accuracy of new architectures. The inputs for the regression model comprise architecture meta-features and learning curve features up to a given epoch. They also compare to zero-cost baselines: an estimator based on input Jacobian covariance (JacCov, Mellor et al., 2021) and two adapted from pruning techniques (SNIP and SynFlow, Abdelfattah et al., 2021). The authors demonstrate significantly better rank-correlation in neural architecture search (NAS) for the marginal likelihood approach compared to the baselines. These results have been further validated in Lotfi et al. (2022).
Ru et al. (2021) have however been criticized for using the term "training speed" (as in the number of steps needed to reach a certain training error) to describe their approach. In short, they claim that Equation (12) corresponds to some measure of training speed, and thus they claim that _training faster corresponds to better generalization_. This however is not generally true as pointed out in Lotfi et al. (2022). The marginal likelihood can be _larger_ for a model that converges _in more steps_ (than another model) if the marginal likelihood at step \(i=1\) in decomposition (12) is higher.
There is a debate as to whether the marginal likelihood is appropriate for model selection at all. Lotfi et al. (2022) make a distinction between the question "what is the probability that a prior model generated the training data?" and the question "how likely is the posterior, conditioned on the training data, to have generated withheld points drawn from the same distribution?". They claim that the marginal likelihood answers the first question and not the second. However, high marginal likelihood also provides frequentist guarantees on _out-of-sample_ performance through PAC-Bayesian theorems (Germain et al., 2016). If one selects a model based on the marginal likelihood and also performs Bayesian inference correctly, then the resulting model and its posterior over parameters are guaranteed to result in good performance on out-of-sample data. Overall, the debate is far from concluded, and in light of the good empirical performance of the marginal likelihood, more research is warranted in its direction.
### Benchmarking
BNNs present unique challenges in terms of their evaluation and benchmarking. Two main challenges are the choice of evaluation _datasets_ and _metrics_, on which the community has not reached a consensus. This lack of consensus reflects the difficulty of clearly defining the goals of Bayesian deep learning in a field traditionally viewed through a _frequentist_ lens, and more specifically through performance on out-of-sample data.
#### 5.3.2 Evaluation metrics-tasks
For most popular machine learning tasks, the community has reached a consensus on the appropriate evaluation metric of choice, such as the mean-squared error (MSE) for regression and zero-one loss for classification. In the case of Bayesian deep learning, there is not yet a clear choice. Should the Bayesian approach improve on frequentist metrics such as misclassification rate on held-out data? Should it provide solutions to known issues of traditional approaches, such as improved robustness to adversarial and non-adversarial noise? Or should Bayesian approaches be evaluated on different metrics altogether or on metrics that capture _uncertainty_?
**Standard losses.** Practitioners propose several metrics (and corresponding tasks) for the evaluation of Bayesian deep learning approaches. By far the most popular choice is to evaluate frequentist metrics on held-out data, namely the MSE for regression and the zero-one loss for classification (Khan et al., 2018; Khan and Swaroop, 2021; Gal and Ghahramani, 2016; Izmailov et al., 2021; Wenzel et al., 2020). The intuition behind this choice is that the posterior predictive distribution should improve upon deterministic predictions as multiple predictions from the posterior are averaged. For example, in the case of classification, the posterior predictive is meant to better approximate the _probability_ that a given class is correct.
One problem with this approach is that Bayesian approaches have typically provided inconsistent gains for this task-metric combination. For example, sometimes Bayesian approaches improve upon a deterministic neural network and sometimes provide worse results. See for example Figure 5 in Izmailov et al. (2021) where the MSE is evaluated on UCI regression tasks. Similarly, Figure 4.a. in Daxberger et al. (2021) shows that the Laplace approximation to a DNN posterior does not improve upon the MAP solution.
Wenzel et al. (2020) point out that one can improve upon deterministic neural networks by using heuristics such as cold posteriors which however deviate from the Bayesian paradigm. One common switch away from MSE and zero-one loss consists in evaluating the (negative) log-likelihood of the test data. Here, Bayesian approaches often outperform frequentist ones, but exceptions remain (Wenzel et al., 2020).
**Calibration.** The metric on which Bayesian neural networks most consistently outperform deterministic ones is _calibration_ in classification tasks: if a classifier reports \(x\%\) confidence on a set of samples, it should also be correct \(x\%\) of the time. The two most popular metrics for evaluating calibration are the _expected calibration error_ (ECE: DeGroot and Fienberg, 1983) and the _thresholded adaptive calibration error_ (TACE: Nixon et al., 2019). For this type of task-metric combination, Bayesian and Bayesian-like approaches such as ensembles (see Section 5.1.4) consistently outperform deterministic neural networks (Izmailov et al., 2021; Daxberger et al., 2021; Maddox et al., 2019). Ashukha et al. (2020) provide a detailed discussion of evaluation metrics for uncertainty estimation as well as common pitfalls. They argue that for a given metric one should always compare a Bayesian method to an ensemble: ensembles provide good gains in different uncertainty metrics for each new ensemble member, whereas Bayesian methods often do not yield the same gains for each new sample from the posterior.
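For reference, a minimal implementation of the expected calibration error with equal-width confidence bins might look as follows (15 bins is a common but arbitrary choice; `probs` are softmax outputs of shape [n, k] and `labels` are integer class indices):

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """ECE: bin predictions by confidence and average |accuracy - confidence|
    over the bins, weighted by the fraction of samples in each bin."""
    confidences = probs.max(axis=1)
    accuracies = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(accuracies[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece
```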
Other methods for evaluating calibration include reliability diagrams (Vaicenavicius et al., 2019) and calibration curves (Maddox et al., 2019). A strength of these metrics is that they are generally clear, direct, and intuitive. One weakness is that, like other visual methods, they are subject to misinterpretation. For example, calibration curves provide a simple and intuitive way to determine which classifier is better calibrated than others when the difference between
the classifiers is large. However, when the difference is small or the classifier is miscalibrated only for certain confidence levels, then deriving reliable conclusions becomes more tedious. One caveat is that a classifier that is guessing completely at random and assigns the marginal class frequencies as predictive probabilities to each data point would trivially achieve a perfect ECE of 0 (Gruber and Buettner, 2022). Moreover, it has been argued that while many of these metrics measure marginal uncertainties on single data points, joint uncertainties across many points might be more relevant in practice, e.g., for sequential decision-making (Osband et al., 2022).
**Robustness.** Many works have explored robustness to adversarial noise (Louizos and Welling, 2017; Rawat et al., 2017; Liu et al., 2019; Grosse et al., 2019; Bekasov and Murray, 2018) and to non-adversarial noise (Gal and Ghahramani, 2016; Daxberger et al., 2021; Dusenberry et al., 2020; Daxberger et al., 2021; Izmailov et al., 2021), including Gaussian noise and image rotations, among others. Band et al. (2021) analyze a form of distribution shift whereby classifiers are trained on a set of images for which diabetic retinopathy exists at moderate levels, and are then evaluated on a test set where diabetic retinopathy is more severe. The intuition is that Bayesian approaches should correctly classify these corrupted samples and assign low confidence to their predictions. The results for these task-metric combinations are mixed. In the adversarial setting, BNNs are typically far from the state-of-the-art defenses against adversarial attacks. In the non-adversarial setting, some works show _improved_ robustness (Daxberger et al., 2021), while others show _reduced_ robustness (Izmailov et al., 2021).
#### 5.3.3 Output interpretation
We conclude by analyzing the output of BNNs with the question of its probabilistic interpretation and its relation to evaluation metrics. We restrict the discussion to classification models, though the discussion for other tasks is similar. Both frequentist and Bayesian practitioners recognize that the outputs of a deep neural network classifier often do not accurately reflect the probability of choosing the correct class. That is, the NNs are not well calibrated. However, frequentist and Bayesian communities propose different solutions. The frequentist solution is to transform the outputs of the classifier through a post-processing step to obtain well-calibrated outputs. Common approaches include _histogram binning_(Zadrozny and Elkan, 2001), _isotonic regression_(Zadrozny and Elkan, 2002), _Bayesian binning into quantiles_(Naeini et al., 2015) as well as _Platt scaling_(Platt, 1999).
In a Bayesian setting, the predictive distribution has a clear interpretation: it is the confidence of the model in each class for a given input signal. Confusion can arise from the fact that scaling is sometimes considered part of an evaluation metric. For example, Guo et al. (2017) consider _Platt scaling_ as a post-processing step (therefore it defines a new model), while Ashukha et al. (2020) propose that it be incorporated into a new evaluation metric. Which of the two views one adopts matters, because recalibration methods can significantly improve the calibration of a model. Thus, if recalibration is viewed as defining a new model, then a K-FAC Laplace BNN outperforms its corresponding frequentist one significantly in calibration; if recalibration is instead folded into the evaluation metric, as proposed by Ashukha et al. (2020), then the gains become marginal.
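To make the post-processing step concrete, here is a minimal sketch of temperature scaling, the single-parameter multi-class variant of Platt scaling used by Guo et al. (2017); whether the rescaled outputs define a new model or are folded into the evaluation metric is exactly the choice discussed above. The temperature is fit on held-out logits by minimizing the negative log-likelihood.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_temperature(logits, labels):
    """Fit a single temperature T > 0 so that softmax(logits / T) minimizes the
    negative log-likelihood on a held-out set (logits: [n, k], labels: [n])."""
    def nll(t):
        z = logits / t
        z = z - z.max(axis=1, keepdims=True)  # numerical stability
        log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(labels)), labels].mean()
    result = minimize_scalar(nll, bounds=(0.05, 20.0), method="bounded")
    return result.x
```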
## Conclusion
The present review encompasses various topics, such as the selection of priors (Section 3.2), computational methods (Section 3.3), and model selection (Section 3.4), which pertain to Bayesian problems in a general sense as well as to Bayesian neural networks specifically. This comprehensive perspective helps contextualize the diverse questions that emerge within the Bayesian deep learning community.
Despite the growing interest and advancements in inference techniques for Bayesian deep learning models, the considerable computational burden associated with Bayesian deep learning approaches remains a primary hindrance. Consequently, the community dedicated to Bayesian deep learning remains relatively small, and the adoption of these approaches in the industry remains limited.
A consensus regarding evaluation metrics and benchmarking datasets for Bayesian deep learning has yet to be reached. This lack of consensus stems from the challenge of precisely defining the objectives of Bayesian deep learning within a domain traditionally perceived through a _frequentist_ framework, with its emphasis on performance on out-of-sample data.
This review provides readers with a thorough exposition of the challenges intrinsic to Bayesian deep learning, while also shedding light on avenues that warrant additional exploration and enhancement. With this cohesive resource, our objective is to empower statisticians and machine learners alike, facilitating a deeper understanding of Bayesian neural networks (BNNs) and promoting their wider practical implementation. |
2309.12172 | SANPO: A Scene Understanding, Accessibility, Navigation, Pathfinding,
Obstacle Avoidance Dataset | We introduce SANPO, a large-scale egocentric video dataset focused on dense
prediction in outdoor environments. It contains stereo video sessions collected
across diverse outdoor environments, as well as rendered synthetic video
sessions. (Synthetic data was provided by Parallel Domain.) All sessions have
(dense) depth and odometry labels. All synthetic sessions and a subset of real
sessions have temporally consistent dense panoptic segmentation labels. To our
knowledge, this is the first human egocentric video dataset with both large
scale dense panoptic segmentation and depth annotations. In addition to the
dataset we also provide zero-shot baselines and SANPO benchmarks for future
research. We hope that the challenging nature of SANPO will help advance the
state-of-the-art in video segmentation, depth estimation, multi-task visual
modeling, and synthetic-to-real domain adaptation, while enabling human
navigation systems.
SANPO is available here:
https://google-research-datasets.github.io/sanpo_dataset/ | Sagar M. Waghmare, Kimberly Wilber, Dave Hawkey, Xuan Yang, Matthew Wilson, Stephanie Debats, Cattalyya Nuengsigkapian, Astuti Sharma, Lars Pandikow, Huisheng Wang, Hartwig Adam, Mikhail Sirotenko | 2023-09-21T15:28:04Z | http://arxiv.org/abs/2309.12172v1 | # Sanpo
###### Abstract
We introduce SANPO4, a large-scale egocentric video dataset focused on dense prediction in outdoor environments. It contains stereo video sessions collected across diverse outdoor environments, as well as rendered synthetic video sessions 5 All sessions have (dense) depth and odometry labels. All synthetic sessions and a subset of real sessions have _temporally consistent_ dense panoptic segmentation labels. To our knowledge this is the first human egocentric video dataset with both large scale dense panoptic segmentation and depth annotations.
Footnote 4: [https://google-research-datasets.github.io/sanpo_dataset/](https://google-research-datasets.github.io/sanpo_dataset/)
Footnote 5: Synthetic data was provided by Parallel Domain.
In addition to the dataset we also provide zero-shot baselines and SANPO benchmarks for future research. We hope that the challenging nature of SANPO will help advance the state-of-the-art in video segmentation, depth estimation, multi-task visual modeling, and synthetic-to-real domain adaptation, while enabling human navigation systems.
## 1 Introduction
Egocentric scene understanding is an important research area with many applications in robotics, autonomous driving, augmented reality, and accessibility. It includes a range of tasks, such as video semantic and panoptic segmentation, depth estimation, object tracking among others. To advance this field, the community needs high-quality, large-scale datasets. In the last 10 years, growing interest in autonomous driving has resulted in the creation of several large-scale video datasets Kang et al. (2019); Mao et al. (2022); Wilson et al. (2023) that have panoptic segmentation masks, depth maps, camera poses, and other related annotations. However, outside of the autonomous driving domain, to the best of our knowledge, there is no publicly available video dataset annotated with both panoptic segmentation and depth maps. Autonomous driving datasets, though plenty, have limited generalization to egocentric human scene understanding. Videos taken from the human perspective have their own challenges, such as unorthodox viewpoints, motion artifacts, and dynamic or unpredictable interactions between other humans and objects in the scene. Unlike cars, humans operate in environments that are more cluttered, unpredictable, and less regulated. We believe that a comprehensive human egocentric dataset should not only help to build systems for related applications, but also _serve as a challenging benchmark for the scene understanding community._
This work introduces **SANPO**, a dataset built to support research in outdoor human egocentric scene understanding. Although we focus on human navigation tasks, SANPO supports a wide variety of dense prediction tasks in outdoor environments and is challenging enough to be beyond the capabilities of current models. SANPO includes both real and synthetic data, with 112K and 113K video panoptic masks, respectively. It also includes 617K real and 113K synthetic depth maps. The dataset was collected in various locations in the United States and covers different environments with varying weather conditions, times of day, and types of egomotion. Each real session also has videos from two stereo cameras, which can help to advance multi-view methods.
and synthetic depth maps, respectively. The dataset was collected in various locations in the United States and covers different environments with varying weather conditions, times of day, and types of egomotion. Each real session also has videos from two stereo cameras, which can help to advance multi-view methods.
In addition to the dataset, we also set baselines for monocular depth estimation, semantic and panoptic segmentation, using state-of-the-art models.
## 2 Related Work
The closest publicly available datasets to ours are SCAND Karnan et al. (2022), MuSoHu Nguyen et al. (2023), and Ego4D Grauman et al. (2022), which are collected from a human egocentric perspective. SCAND is an autonomous robot navigation dataset collected with a front-facing stereo camera, among other sensors, fitted on teleoperated robots. MuSoHu is collected by human wearers with a front-facing stereo camera along with a Lidar, a microphone array, and a \(360^{\circ}\) camera. SCAND and MuSoHu provide depth and odometry labels. MuSoHu also exhibits the camera motion artifacts caused by human motion. Ego4D is large and showcases a wide variety of activities. However, MuSoHu, SCAND, and Ego4D lack semantic segmentation labels, and the first two are primarily developed for enabling robot navigation in social environments.
MOTSynth Fabbri et al. (2021) is another dataset that comes relatively close. It is a synthetic dataset for pedestrian detection and tracking, and it has both segmentation and depth annotations. However, this dataset has some limitations: (a) It only includes pedestrian segmentation and tracking annotations. (b) Only a small portion of the samples provide an egocentric view similar to what you would expect in egocentric human navigation.
Autonomous navigation is a well researched field Wen and Jo (2022), Shi et al. (2017) and the literature is teeming with various real-world Qiao et al. (2020), Kang et al. (2019), Wilson et al. (2023), Karnan et al. (2022), Nguyen et al. (2023), Cordts et al. (2016), Liao et al. (2022), Lin et al. (2014), Xu et al. (2018), Caelles et al. (2019), Brostow et al. (2009), Caesar et al. (2019) and synthetic datasets Mao et al. (2022), Richter et al. (2017), Fabbri et al. (2021). The
Figure 1: **SANPO** is the only human-egocentric dataset with panoptic masks, multi-view stereo, depth, camera pose, and both real and synthetic data. SANPO has the largest number of panoptic frames among related work and a respectable number of depth annotations. (Note: \({}^{1}\): multi-view, \({}^{2}\): partial coverage, \({}^{3}\): sparse depth)
majority of the datasets available fall in either self driving car category Mei et al. (2022); Mao et al. (2022); Wilson et al. (2023); Cordts et al. (2016); Liao et al. (2022); Richter et al. (2017); Caesar et al. (2019); Pham et al. (2020) or general purpose scene understanding category Grauman et al. (2022); Lin et al. (2014); Xu et al. (2018); Caelles et al. (2019); Brostow et al. (2009). The well known Cityscapes dataset Cordts et al. (2016); Qiao et al. (2020) is a daytime stereo video dataset with vehicle ego motion and segmentation & depth labels. Similarly, Wilson et al. (2023) is self driving car dataset with stereo video but with only 3D object detection labels. The datasets Richter et al. (2017); Mei et al. (2022); Mao et al. (2022); Brostow et al. (2009) are also self driving car video datasets with segmentation labels, except Mao et al. (2022), which includes 3D object detection labels instead.
Other existing datasets, such as MSCOCO Lin et al. (2014), DAVIS-2017 Caelles et al. (2019), and YouTube-VOS Xu et al. (2018), are either general-purpose scene understanding or domain-specific datasets, but they are not specifically designed for human navigation. MSCOCO Lin et al. (2014) is an image-based dataset, whereas DAVIS-2017 and Youtube-VOS Caelles et al. (2019); Xu et al. (2018) are video datasets. All of them are segmentation and/or object detection datasets but are not relevant to human navigation.
While there are many datasets available (see the supplementary material for an overview), there is a clear need for a challenging human egocentric dataset featuring unconstrained environments and comprehensive dense prediction annotations.
## 3 SANPO Dataset
SANPO dataset consists of two parts - SANPO-Real and SANPO-Synthetic. In this section we give an overview of both parts and describe how the dataset was collected and labeled.
### SANPO-Real
This dataset consists of 701 sessions recorded from two stereo cameras simultaneously (thus each session has four RGB streams in total). Each video is approximately 30 seconds long with a frame rate of 15 frames per second (FPS). 597 sessions are recorded at a resolution of \(2208\times 1242\) pixels, and the remainder are recorded at a
Figure 2: **Data capture methodology for SANPO.** SANPO contains a mix of both real and synthetic data. The real data is captured from a chest-mounted camera and a head-mounted camera, while the synthetic data comes from a virtual environment. Our videos have depth maps and panoptic segmentations.
resolution of \(1920\times 1080\) pixels. We provided all videos in a lossless format to help facilitate stereo vision research. All videos were rectified using ZED software.
Each session is annotated with high-level attributes such as human traffic, vehicular traffic, number of obstacles, environment type, camera information and intrinsics, etc.6. Every stereo camera recording has camera poses provided by the ZED software using fused IMU and VIO measurements.
Footnote 6: Please see the appendix for additional details.
Each camera has both a sparse depth map from the ZED SDK and a dense depth map from CREStereo Li et al. [2022a], a recent ML-based stereo depth model. This model converts stereo frames to disparity maps7, which we then convert to depth using camera intrinsics and clip to 0-80 meters. Note that these CREStereo depth maps have a resolution of 1280\(\times\)720 pixels; this is smaller than the RGB stream, but is the maximum resolution that pre-trained CREStereo supports Li et al. [2022a].
Footnote 7: We compute disparity before blurring the sensitive information because blurry patches can create inaccurate or misleading results.
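The disparity-to-depth step above is the usual pinhole stereo relation; the following sketch shows what we mean (the clipping range comes from the text, while the focal length and baseline below are placeholder values, not the released ZED calibration):

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, max_depth_m=80.0):
    """Pinhole stereo conversion depth = f * B / disparity, masking out
    non-positive disparities and clipping depth to the 0-80 m range."""
    depth = np.full_like(disparity, np.nan, dtype=np.float64)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return np.clip(depth, 0.0, max_depth_m)

# Placeholder intrinsics for illustration only.
print(disparity_to_depth(np.array([32.0, 8.0, 4.0]), focal_px=700.0, baseline_m=0.12))
```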
We provide semantic segmentation annotations for a total of 237 videos: 146 long-range ZED 2i videos and 91 wide-angle ZED M videos (not from the same sessions). Our segmentation taxonomy covers 31 categories: 15 _"thing"_ classes and 16 _"stuff"_ classes. We developed this taxonomy with a focus on egocentric scene understanding, balancing annotation practicality with the desire to be maximally useful for understanding the navigation environment.
A detailed taxonomy of these categories is provided in the appendix. The SANPO-Real dataset contains a total of 975,207 masks, including 195,187 human-annotated masks and 780,020 propagated masks (more details in the following section). Figure 3 shows an example of a SANPO-Real session.
Figure 3: **SANPO Real Sample.** Top row shows a stereo left frame from a session along with its ML depth and segmentation annotation. Bottom row shows the 3D scene of the session built using the annotations we provide. Points from several seconds of video are accumulated and aligned with ICP.
#### 3.1.1 SANPO-Real Data Collection
In order to collect the real data, we designed a custom data collection rig (see supplementary material for details). Our volunteers wear a head-mounted ZED-M stereo camera and a chest-mounted ZED-2i stereo camera, as well as a backpack full of supporting hardware.
The chest-mounted ZED-2i captured 308,957 stereo frames with its 4mm lens, providing long range depth at a stable mounting point to mitigate motion blur. The lightweight head-mounted ZED-M provided wide range video and depth for 308,451 stereo frames. A team of volunteers collected data from various geographic locations across the United States covering different environments, including urban, suburban, city streets, and parks. Volunteers ran through different weather conditions (including snow and rain), times of the day (excluding low light conditions), ground types, obstacles, run/walk speeds, traffic levels, etc. We asked each volunteer to prefer diverse, dynamic scenarios and rare instances and events.
Figure 4: **Temporally Consistent Segmentation Annotation.** Top and bottom rows: Human-annotated segmentation masks for consecutive frames. Middle two rows: AOT-propagated segmentation masks for the intermediate frames (out of five) that were skipped during human annotation.
#### 3.1.2 Panoptic Segmentation Annotation
Our segmentation annotation protocol is as follows: we divide each video into 30-second sub-videos and annotate every fifth frame, for a total of 90 frames per sub-video. To make the process more efficient and less error-prone, we use two techniques. To deal with the large number of classes, we use a cascaded annotation approach: we split all the labels in our taxonomy into five mutually exclusive subsets of co-occurring labels, and a given sub-video is annotated for each subset in a prescribed order. When annotating a subset, all the annotations from the previous subset(s) are frozen and shown to the annotator. This approach helps reduce annotation time while improving boundary precision. We include all the labels in the last subset to facilitate the annotation of any regions missed in the previous subsets. We use AOT Yang et al. (2021) both to propagate masks from the previous frame to the next one during the annotation process and to infer the segmentation annotations for the intermediate frames, using the manually annotated preceding and following frames. This approach ensures that the annotations are temporally consistent for up to 30 seconds. We also provide information on whether each frame was annotated by a human or propagated by machine. Figure 4 shows an example of human-annotated preceding and following frames along with the AOT-propagated intermediate frames.
#### 3.1.3 Privacy
All data collection is done in compliance with local, state, and city laws. Every volunteer was able to review each video in the data collection app before uploading it. All videos are processed to remove personally identifiable information (PII) such as faces and license plates before sending them for annotation.
Figure 5: **SANPO-Synthetic Sample.** Top row shows a single frame from a synthetic session along with its depth and segmentation annotation. Bottom row shows the 3D scene of the session built using the annotations we provide. Points come from the accumulated depth maps and camera locations across many frames.
### SANPO-Synthetic
Data captured and annotated under real-world conditions unfortunately has imperfect ground truth labels. These imperfections come from hardware (for example, motion blur), algorithms (e.g., depth from stereo), and human rating mistakes. Synthetic data, in contrast, has near-perfect ground truth and can have any predefined properties. We partnered with _Parallel Domain_ to supplement SANPO-Real with high-quality synthetic training and evaluation data. The synthetic environment was optimized to match real-world capture conditions as closely as possible, including camera parameters, placement, and scenery. _SANPO-Synthetic and SANPO-Real are intended to be drop-in replacements for each other_, so researchers can study domain transfer tasks or take advantage of synthetic data during training without changing many domain-specific assumptions.
SANPO-Synthetic has 113,794 monocular, single-view video frames across 1961 sessions. 960 sessions are synthesized with a simulated chest-level ZED-2i camera and the other 1001 are taken with a simulated head-mounted ZED-M. The parameters of each virtual camera match those of the corresponding ZED camera. The frame rate varies between 5 FPS, 14.28 FPS, and 33.33 FPS. Each synthetic video has dense depth maps and panoptic segmentation maps using the same taxonomy as SANPO-Real.
One advantage of synthetic data is its pixel-perfect instance segmentations, even with many small and distant instances. This is particularly beneficial for developing a challenging dataset to mimic the complexity of real-world scenes. _Over half of the synthetic frames contain \(\geq\)60 unique instance segmentations_, and a sixth of the data has \(\geq\)150 instances. Most of these masks are challenging: _80% of SANPO-Synthetic masks have less than \(32^{2}\) pixels_, compared to \(8.1\%\) of masks in SANPO-Real. Instance IDs persist across frames and occlusions, which may be useful for tracking/reacquisition studies. Overall, there are 393,000 unique instance IDs in the synthetic data.
## 4 Experiments
In this section, we establish SANPO baselines in two evaluation settings:
1. Zero shot baseline: In this setting, we evaluate and report the generalization capability of published model checkpoints to SANPO dataset.
2. SANPO benchmark: We report and establish a baseline for a couple of state-of-the-art architectures on dense prediction tasks using SANPO dataset.
Figure 6: **Synthetic vs real. A sample of SANPO-Real and SANPO-Synthetic data. _How quickly can you tell which of these images is synthetic?_ Answer key in base64: ‘c31udGg6IEFCRUZILCByZWFsoiBDREADJ’
#### 4.0.1 Metrics
We report mean intersection over union (mIoU) and panoptic quality (PQ) for semantic segmentation and panoptic segmentation, respectively, as in Yu et al. (2023). For depth, we report depth inliers (\(\mathbb{E}\left[\max\left(\frac{y}{y^{\prime}},\frac{y^{\prime}}{y}\right)\leq 1.25\right]\), denoted as \(\delta_{\leq 1.25}\)) as in Bhat et al. (2023), where \(y\) is the ground-truth depth and \(y^{\prime}\) the prediction. All metrics are computed per image and then averaged over all images. Higher values are better for all metrics.
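For concreteness, a per-image implementation of the depth-inlier metric just defined could look as follows (`gt` and `pred` are same-shape arrays with invalid pixels already masked out):

```python
import numpy as np

def delta_inlier(gt, pred, threshold=1.25):
    """Fraction of pixels with max(gt/pred, pred/gt) <= threshold; computed
    per image and then averaged over all images when reporting results."""
    ratio = np.maximum(gt / pred, pred / gt)
    return float((ratio <= threshold).mean())
```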
### Zero shot evaluation
We intend for SANPO to be representative of outdoor human navigation tasks from an egocentric perspective. Human-centric tasks are distinct from other well-studied domains, such as autonomous driving. Our objective with this evaluation is to establish a zero-shot baseline while assessing how challenging our dataset is for zero-shot prediction. To this end, we evaluate various existing models on both depth estimation and semantic segmentation tasks.
For depth estimation, we used the publicly released checkpoints for DPT Ranftl et al. (2021) and ZoeD-M12-NK Bhat et al. (2023), which, according to the authors, were trained on a collection of both proprietary and public datasets. SANPO is a metric depth dataset, but for this zero-shot comparison, we found it necessary to give both these models the best possible advantage by calculating \(\delta_{\leq 1.25}\) in a scale-invariant way: we used RANSAC to find alignment coefficients \(\alpha,\beta\) that best aligned each prediction \(f(x)\) with its ground truth \(y\); namely, \(\operatorname*{arg\,min}_{\alpha,\beta}\|\alpha f(x)+\beta-y\|^{2}\), taking \(y^{\prime}=\alpha f(x)+\beta\) as the output for each model.
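The per-image alignment is a one-dimensional affine fit; a plain least-squares version is sketched below (the RANSAC loop used for robustness to outliers is omitted here for brevity):

```python
import numpy as np

def align_scale_shift(pred, gt):
    """Return alpha * pred + beta, where (alpha, beta) minimize
    ||alpha * pred + beta - gt||^2 over all pixels."""
    x, y = pred.reshape(-1), gt.reshape(-1)
    A = np.stack([x, np.ones_like(x)], axis=1)
    (alpha, beta), *_ = np.linalg.lstsq(A, y, rcond=None)
    return alpha * pred + beta
```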
For semantic segmentation, we used Kmax-Deeplab Yu et al. (2023) and Mask2Former Cheng et al. (2022) checkpoints trained on the Cityscapes dataset Cordts et al. (2016). For a fair comparison, we mapped Cityscapes labelmap to the SANPO labelmap and excluded the SANPO classes (18 in total) that do not have a one-to-one correspondence. We do not report panoptic quality for this baseline because the SANPO _"thing"_ labels differ from those of Cityscapes8.
Footnote 8: Details about the SANPO labelmap, its mapping to and from the Cityscapes labelmap, and the list of ignored labels are provided in the supplementary material.
We also included SAM Kirillov et al. (2023), a recent foundation model. For SAM, we used the center point prompt and reported instance-level mIoU, adhering to the conventional evaluation procedure for interactive segmentation Sofiuk et al. (2022). To streamline the evaluation, we excluded very small instances, which were less than 2% of the image in size.
Our findings are summarized in Table 1. Overall, SANPO seems to be a challenging dataset for both depth and segmentation models. For example, DPT reports good depth estimation performance (\(\delta_{\leq 1.25}\)\(>0.9\)) on KITTI, but we observe \(\sim 0.67\) on SANPO-Real and \(\sim 0.8\) on SANPO-Synthetic.
ZoeDepth Bhat et al. (2023) is designed to estimate metric depth for out-of-domain datasets, but still requires alignment on this data (unaligned \(\delta_{\leq 1.25}\approx 0.2\) on SANPO-Real). The performance difference may be due to the lack of metric depth data available to the community. ZoeD-M12-NK was trained on total of 12 datasets, only two of which (NYUv2 and KITTI) are metric depth datasets.
On the segmentation side, Mask2Former (Swin-L) achieves an mIoU of 0.83 on Cityscapes validation set but \(\sim 0.49\) on SANPO-Real.
In general, SANPO is a challenging and novel dataset that focuses on the domain of egocentric human navigation, with plenty of headroom.
\begin{table}
\begin{tabular}{|l|c c|c|c c|} \hline
 & \multicolumn{2}{c|}{**Depth**} & \multicolumn{1}{c|}{\begin{tabular}{c} **Prompt Based** \\ **Instance Segmentation** \end{tabular}} & \multicolumn{2}{c|}{**Semantic Segmentation**} \\ \hline
**Dataset** & DPT & ZoeDepth & SAM & \begin{tabular}{c} Kmax-Deeplab \\ ConvNeXt-L \end{tabular} & \begin{tabular}{c} Mask2Former \\ Swin-L \end{tabular} \\ \hline
 & \multicolumn{2}{c|}{\(\delta_{\leq 1.25}\uparrow\)} & mIoU\(\uparrow\) & \multicolumn{2}{c|}{mIoU\(\uparrow\)} \\ \hline
SANPO-Real & 0.6703 & 0.6978 & 0.4896 & 0.3234 & 0.497 \\
SANPO-Synthetic & 0.7955 & 0.8032 & 0.5121 & 0.4639 & 0.535 \\ \hline
\end{tabular}
\end{table}
Table 1: **Zero-shot evaluation.** In this setting, we evaluated the ability of state-of-the-art models trained on other relevant datasets to generalize to the SANPO test set for depth estimation and semantic segmentation. SANPO challenges these models’ generalization capabilities.
### SANPO Benchmark
In these experiments, we evaluated two state-of-the-art architectures: BinsFormer Li et al. (2022) for depth estimation and Kmax-Deeplab Yu et al. (2023) for panoptic segmentation. Our objective is to establish baseline performance on the SANPO dataset for future research.
#### 4.2.1 Experimental Setup
We trained the models on the SANPO train sets and evaluated them on the test sets. For the SANPO-Real experiments, we trained on \(\sim\)494K samples from the SANPO-Real train set. We evaluated depth estimation on both the real and synthetic test sets. For panoptic segmentation, we trained on \(\sim\)89K samples from the SANPO-Real train set and
\begin{table}
\begin{tabular}{|l|c c|c c|c|c|} \hline
 & \multicolumn{4}{c|}{**Panoptic Segmentation**} & \multicolumn{2}{c|}{**Depth Estimation**} \\ \hline
**Dataset** & \multicolumn{2}{c|}{\begin{tabular}{c} Kmax-Deeplab-R50 \\ SANPO-Real \end{tabular}} & \multicolumn{2}{c|}{\begin{tabular}{c} Kmax-Deeplab-R50 \\ SANPO-Combined \end{tabular}} & \begin{tabular}{c} BinsFormer \\ SANPO-Real \end{tabular} & \begin{tabular}{c} BinsFormer \\ SANPO-Combined \end{tabular} \\ \hline
 & mIoU\(\uparrow\) & PQ\(\uparrow\) & mIoU\(\uparrow\) & PQ\(\uparrow\) & \(\delta_{\leq 1.25}\uparrow\) & \(\delta_{\leq 1.25}\uparrow\) \\ \hline
\multicolumn{7}{|c|}{**Initialized with random weights**} \\ \hline
SANPO-Real & 0.3416 & 0.3173 & 0.3409 & 0.3210 & 0.4523 & 0.4702 \\ \hline
SANPO-Synth & 0.2735 & 0.2277 & 0.5549 & 0.4483 & 0.2744 & 0.8546 \\ \hline
\multicolumn{5}{|c|}{**Pretrained with Cityscapes**} & \multicolumn{2}{c|}{**Pretrained with Cityscapes-DVPS**} \\ \hline
SANPO-Real & 0.4370 & 0.4298 & 0.4381 & 0.4234 & 0.4524 & 0.4862 \\ \hline
SANPO-Synth & 0.3900 & 0.3387 & 0.7109 & 0.5714 & 0.3235 & 0.8639 \\ \hline
\end{tabular}
\end{table}
Table 2: **SANPO benchmark.** Baseline performance of Kmax-Deeplab and BinsFormer, using a ResNet-50 backbone, on SANPO for panoptic segmentation and depth estimation with a limited training budget and standard hyperparameters.
Figure 7: **Segment Anything Model (SAM) on SANPO. We evaluated SAM on SANPO images. The middle column shows sample instance masks in SANPO and the selected point used to prompt SAM. The last column shows the predicted masks generated by SAM.**
evaluated on the SANPO real and synthetic test sets. For the SANPO-Combined experiments, we combined the SANPO-Real and SANPO-Synthetic train sets for training and used the test sets for evaluation.
We resized the data to 1025\(\times\)2049 (with padding to maintain the aspect ratio) for training and evaluation. We trained two sets of models using a ResNet-50 backbone architecture:
1. Models initialized with random weights.
2. Models initialized with weights from models trained on the Cityscapes dataset for panoptic segmentation, and the Cityscapes-DVPS dataset 9 for depth estimation. Footnote 9: Cityscapes-DVPS Qiao et al. (2020) is based on Cordts et al. (2016)
To ensure a fair comparison and reproducibility, we limited the training budget to 50,000 steps with a batch size of 32 and used the standard hyperparameters as defined in Weber et al. (2021). For reference, this training budget results in the following (a quick arithmetic check is given after the list):
1. \(\sim\)540 epochs of the Cityscapes panoptic segmentation dataset.
2. \(\sim\)18 epochs of the SANPO-Real panoptic segmentation dataset.
3. \(\sim\)3.3 epochs of the SANPO-Real depth estimation dataset.
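A rough check of these numbers (assuming the standard Cityscapes panoptic training split of 2,975 images, a figure not stated in the text):

```python
steps, batch_size = 50_000, 32
samples_seen = steps * batch_size  # 1.6M training samples in total
for name, size in [("Cityscapes panoptic", 2_975),
                   ("SANPO-Real panoptic", 89_000),
                   ("SANPO-Real depth", 494_000)]:
    print(f"{name}: ~{samples_seen / size:.1f} epochs")
```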
Table 2 shows the baseline performance for panoptic segmentation and depth estimation on SANPO. Similar to the zero-shot evaluation, we observe that SANPO is a challenging dataset for both dense prediction tasks.
Additionally, we observe, here and in the zero-shot experiments, that accuracy is higher on synthetic data than on real data. This performance gap between the real and synthetic sets could be attributed to two factors:
1. **Complexity of the environments & domain gap:** Real-world environments are more complex than synthetic data, with more variation in objects, backgrounds, and their interactions. The synthetic data also differs from the real data in appearance and lighting, although it can sometimes be hard to tell.
2. **Accuracy of the segmentation annotations:** Segmentation annotations are more precise in the synthetic data than in the real data.
Exact quantification of these factors would require additional domain adaptation experiments, which are beyond the scope of this work. We built the SANPO-Synthetic dataset to facilitate this line of research.
## 5 Conclusion
We presented the SANPO dataset, a large-scale video dataset for egocentric human navigation. It consists of 617k real stereo frames and 113k synthetic frames. All real frames have dense depth annotations, and \(\sim 20\%\) of them have dense segmentation annotations. All synthetic frames have both depth and segmentation annotations. In addition to the depth and segmentation annotations, we also provide visual odometry readings (camera/ego-person poses).
This work also evaluated the dataset and presented benchmarks for cross-dataset zero-shot generalization and training on some state-of-the-art architectures. We hope that this dataset will help fellow researchers build visual navigation systems for the visually impaired and push the frontiers of visual scene understanding.
|
2310.00415 | Wieler solenoids: non-Hausdorff expansiveness, Cuntz-Pimsner models, and
functorial properties | Building on work of Williams, Wieler proved that every irreducible Smale
space with totally disconnected stable sets can be realized via a stationary
inverse limit. Using this result, the first and fourth listed authors of the
present paper showed that the stable $C^*$-algebra associated to such a Smale
space can be obtained from a stationary inductive limit of a Fell algebra. Its
spectrum is typically non-Hausdorff and admits a self-map related to the
stationary inverse limit. With the goal of understanding the fine structure of
the stable algebra and the stable Ruelle algebra, we study said self-map on the
spectrum of the Fell algebra as a dynamical system in its own right. Our
results can be summarized into the statement that this dynamical system is an
expansive, surjective, local homeomorphism of a compact, locally Hausdorff
space and from its $K$-theory we can compute $K$-theoretical invariants of the
stable and unstable Ruelle algebra of a Smale space with totally disconnected
stable sets. | Robin J. Deeley, Menevse Eryüzlü, Magnus Goffeng, Allan Yashinski | 2023-09-30T15:37:21Z | http://arxiv.org/abs/2310.00415v1 | # Wieler Solenoids: non-Hausdorff expansiveness, Cuntz-Pimsner models, and functorial properties
###### Abstract.
Building on work of Williams, Wieler proved that every irreducible Smale space with totally disconnected stable sets can be realized via a stationary inverse limit. Using this result, the first and fourth listed authors of the present paper showed that the stable \(C^{*}\)-algebra associated to such a Smale space can be obtained from a stationary inductive limit of a Fell algebra. Its spectrum is typically non-Hausdorff and admits a self-map related to the stationary inverse limit. With the goal of understanding the fine structure of the stable algebra and the stable Ruelle algebra, we study said self-map on the spectrum of the Fell algebra as a dynamical system in its own right. Our results can be summarized into the statement that this dynamical system is an expansive, surjective, local homeomorphism of a compact, locally Hausdorff space and from its \(K\)-theory we can compute \(K\)-theoretical invariants of the stable and unstable Ruelle algebra of a Smale space with totally disconnected stable sets.
RJD was partially supported by NSF Grants DMS 2000057 and DMS 2247424. MG was supported by the Swedish Research Council Grant VR 2018-0350.
## Introduction
**Acknowledgements** The authors wish to thank Jamie Gabe for the examples leading up to Remark 4.8. The first listed author thanks the University of Hawaii and the Fields Institute for visits during which time the paper was completed.
## 1. Preliminaries
In this section, we will present some preliminary material from the literature. As the paper aims for a rather broad view on Wieler-Smale spaces, we carefully review the known results.
### Smale spaces
Although we are only interested in Wieler solenoids some definitions and basic properties of general Smale spaces are required. The reader can find more on Smale spaces in [22, 31, 32, 34].
**Definition 1.1**.: A Smale space is a metric space \((X,d)\) along with a homeomorphism \(\varphi:X\to X\) with the following additional structure: there exist global constants \(\epsilon_{X}>0\) and \(0<\lambda<1\) and a continuous map, called the bracket map,
\[[\ \cdot\,\ \cdot\ ]:\{(x,y)\in X\times X:d(x,y)\leq\epsilon_{X}\}\to X\]
such that the following axioms hold
1. \([x,x]=x\);
2. \([x,[y,z]]=[x,z]\) assuming both sides are defined;
3. \([[x,y],z]=[x,z]\) assuming both sides are defined;
4. \(\varphi[x,y]=[\varphi(x),\varphi(y)]\) assuming both sides are defined;
5. For \(x,y\in X\) such that \([x,y]=y\), \(d(\varphi(x),\varphi(y))\leq\lambda d(x,y)\);
6. For \(x,y\in X\) such that \([x,y]=x\), \(d(\varphi^{-1}(x),\varphi^{-1}(y))\leq\lambda d(x,y)\).
A Smale space is denoted simply by \((X,\varphi)\) and to avoid certain trivialities, throughout the paper we assume that \(X\) is infinite.
**Definition 1.2**.: Suppose \((X,\varphi)\) is a Smale space and \(x\), \(y\) are in \(X\). We write
\[x\sim_{s}y\text{ if }\lim_{n\to\infty}d(\varphi^{n}(x),\varphi^{n}(y))=0\]
and we write
\[x\sim_{u}y\text{ if }\lim_{n\to\infty}d(\varphi^{-n}(x),\varphi^{-n}(y))=0.\]
The \(s\) and \(u\) stand for stable and unstable respectively.
The global stable and unstable set of a point \(x\in X\) are defined as follows:
\[X^{s}(x)=\{y\in X\mid y\sim_{s}x\}\text{ and }X^{u}(x)=\{y\in X\mid y\sim_{u}x\}\]
Furthermore, the local stable and unstable sets of \(x\) are defined as follows: Given \(0<\epsilon\leq\epsilon_{X}\), we have
\[X^{s}(x,\epsilon) =\{y\in X\mid[x,y]=y\text{ and }d(x,y)<\epsilon\}\text{ and} \tag{1.1}\] \[X^{u}(x,\epsilon) =\{y\in X\mid[y,x]=y\text{ and }d(x,y)<\epsilon\}. \tag{1.2}\]
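To illustrate these definitions in a simple case (this illustration is only a sketch, with one common choice of metric and bracket, and is not needed in the sequel), consider the full two-sided shift: \(X=\{0,1\}^{\mathbb{Z}}\), \(\varphi\) the left shift, and \(d(x,y)=2^{-\min\{|n|\,:\,x_{n}\neq y_{n}\}}\) for \(x\neq y\). For \(d(x,y)\leq\frac{1}{2}\) (equivalently, \(x_{0}=y_{0}\)) one may take

\[[x,y]_{n}=\begin{cases}y_{n}&n<0\\ x_{n}&n\geq 0,\end{cases}\]

and the axioms hold with \(\epsilon_{X}=\frac{1}{2}\) and \(\lambda=\frac{1}{2}\). Then \(X^{s}(x,\epsilon)\) consists of the sequences that agree with \(x\) in all coordinates \(n\geq 0\) (and are within \(\epsilon\) of \(x\)), while \(X^{u}(x,\epsilon)\) consists of those agreeing with \(x\) in all coordinates \(n<0\). In particular both local sets are totally disconnected, so the full shift (and, more generally, shifts of finite type) gives examples of the Smale spaces with totally disconnected stable sets studied below.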
The following result is standard; see, for example, [31, 34].
**Theorem 1.3**.: _Suppose \((X,\varphi)\) is a Smale space and \(x\), \(y\) are in \(X\) with \(d(x,y)<\epsilon_{X}\). Then the following hold: for any \(0<\epsilon\leq\epsilon_{X}\)_
1. \(X^{s}(x,\epsilon)\cap X^{u}(y,\epsilon)=\{[x,y]\}\) _or is empty;_
2. \(X^{s}(x)=\bigcup_{n\in\mathbb{N}}\varphi^{-n}(X^{s}(\varphi^{n}(x), \epsilon))\)_;_
3. \(X^{u}(x)=\bigcup_{n\in\mathbb{N}}\varphi^{n}(X^{u}(\varphi^{-n}(x), \epsilon))\)_._
A Smale space is mixing if for each pair of non-empty open sets \(U\), \(V\), there exists \(N\) such that \(\varphi^{n}(U)\cap V\neq\emptyset\) for all \(n\geq N\). When \((X,\varphi)\) is mixing, \(X^{u}(x)\) and \(X^{s}(x)\) are each dense as subsets of \(X\). However, one can use the previous theorem to give \(X^{u}(x)\) and \(X^{s}(x)\) locally compact, Hausdorff topologies, see for example [22, Theorem 2.10].
### Background on Wieler solenoids
Inspired by work of Williams [41], Wieler [39] proved that every Smale space with totally disconnected stable sets \(X^{s}(x)\) can be realized as a solenoid. Such Smale spaces will be referred to as Wieler solenoids or Wieler-Smale spaces. More precisely, she characterized Wieler-Smale spaces in terms of the following axioms on the pre-solenoid.
**Definition 1.4** (Wieler's Axioms).: Let \((Y,\mathrm{d}_{Y})\) be a compact metric space, and \(g:Y\to Y\) be a continuous surjective map. Then, the triple \((Y,\mathrm{d}_{Y},g)\) satisfies Wieler's axioms if there exist global constants \(\beta>0\), \(K\in\mathbb{N}^{+}\), and \(0<\gamma<1\) such that the following hold:
**Axiom 1:**: If \(x,y\in Y\) satisfy \(\mathrm{d}_{Y}(x,y)\leq\beta\), then
\[\mathrm{d}_{Y}(g^{K}(x),g^{K}(y))\leq\gamma^{K}\mathrm{d}_{Y}(g^{2K}(x),g^{2K }(y)).\]
**Axiom 2:**: For all \(x\in Y\) and \(0<\epsilon\leq\beta\)
\[g^{K}(B(g^{K}(x),\epsilon))\subseteq g^{2K}(B(x,\gamma\epsilon)).\]
**Definition 1.5**.: Suppose \((Y,\mathrm{d}_{Y},g)\) satisfies Wieler's axioms and form the inverse limit space
\[X:=\varprojlim(Y,g)=\{(y_{n})_{n\in\mathbb{N}}=(y_{0},y_{1},y_{2},\ldots)\mid g (y_{i+1})=y_{i}\text{ for each }i\geq 0\}.\]
Consider the map \(\varphi:X\to X\) defined via
\[\varphi(x_{0},x_{1},x_{2},\ldots)=(g(x_{0}),g(x_{1}),g(x_{2}),\ldots)=(g(x_{0 }),x_{0},x_{1},\ldots).\]
Following Wieler, we take a metric on \(X\), \(\mathrm{d}_{X}\), given by
\[\mathrm{d}_{X}((x_{n})_{n\in\mathbb{N}},(y_{n})_{n\in\mathbb{N}})=\sum_{i=0}^ {K}\gamma^{i}\mathrm{d}^{\prime}_{X}(\varphi^{i}(x_{n})_{n\in\mathbb{N}}, \varphi^{i}(y_{n})_{n\in\mathbb{N}}),\]
where \(\mathrm{d}^{\prime}_{X}((x_{n})_{n\in\mathbb{N}},(y_{n})_{n\in\mathbb{N}})= \sup_{n\in\mathbb{N}}(\gamma^{n}\mathrm{d}_{Y}(x_{n},y_{n}))\). We note that the topology induced by \(\mathrm{d}_{X}\) is the product topology. The triple \((X,\mathrm{d}_{X},\varphi)\) is called a Wieler solenoid.
_Remark 1.6_.: We will assume that \(Y\) is infinite. In particular, this ensures that \(X\) is infinite and that \(g\) is not a homeomorphism, see [11]. The pair \((Y,g)\) will be called a presolenoid and \((X,\varphi)\) the associated solenoid or Smale space.
**Theorem 1.7**.: _[_39_, Theorems A and B on page 4]_ _Suppose that \((Y,\mathrm{d}_{Y},g)\) satisfies Wieler's axioms. Then the associated Wieler solenoid \((X,\mathrm{d}_{X},\varphi)\) is a Smale space with totally disconnected stable sets. The constants in Wieler's definition give Smale space constants: \(\epsilon_{X}=\frac{\beta}{2}\) and \(\lambda=\gamma\). Moreover, if \(\mathbf{x}=(x_{n})_{n\in\mathbb{N}}\in X\) and \(0<\epsilon\leq\frac{\beta}{2}\), the locally stable and unstable sets of \((X,\mathrm{d}_{X},\varphi)\) are given by_
\[X^{s}(\mathbf{x},\epsilon)=\{\mathbf{y}=(y_{n})_{n\in\mathbb{N}}\,|\,y_{m}=x_{ m}\text{ for }0\leq m\leq K-1\text{ and }\mathrm{d}_{X}(\mathbf{x},\mathbf{y})\leq\epsilon\}\]
_and_
\[X^{u}(\mathbf{x},\epsilon)=\{\mathbf{y}=(y_{n})_{n\in\mathbb{N}}\,|\,\mathrm{d }_{Y}(x_{n},y_{n})<\epsilon\,\forall n\text{ and }\mathrm{d}_{X}(\mathbf{x},\mathbf{y})\leq\epsilon\}\]
_respectively._
_Conversely, if \((X,\varphi)\) is an irreducible Smale space with totally disconnected stable sets, then there exists a triple \((Y,\mathrm{d}_{Y},g)\) satisfying Wieler's axioms such that \((X,\varphi)\) is conjugate to the Wieler solenoid associated to \((Y,\mathrm{d}_{Y},g)\)._
_Remark 1.8_.: Wieler's axioms and the previous theorem should be compared with work of Williams [41]. As mentioned in the introduction, an important difference between the two is that Wieler's are purely metric space theoretic. If a triple \((Y,\mathrm{d}_{Y},g)\) satisfies Williams' axioms, then the inverse limit space \(X\) with \(\varphi\) as in Definition 1.5 is also a Smale space, and we will refer to such Smale spaces as Williams solenoids. However, we are most concerned with the more general case of a Wieler solenoid, so we will not review Williams' axioms but rather direct the reader to [41] for more details.
An important special case occurs when \(g\) is a local homeomorphism that satisfies Wieler's axioms. This special case was studied in detail in [8] (also see [37, Section 4.5]). We recall a salient characterization of how refinements of Wieler's axioms (Definition 1.4) are equivalent to \(g\) being a local homeomorphism; more detailed statements can be found in [8, Section 3].
**Theorem 1.9** (Lemma 3.7 and 3.8 of [8]).: _Let \((Y,\mathrm{d}_{Y})\) be a compact metric space, and \(g:Y\to Y\) be a continuous surjective map. The following are equivalent:_
* \((Y,\mathrm{d}_{Y},g)\) _satisfies Wieler's axioms and_ \(g\) _is a local homeomorphism._
* \((Y,\mathrm{d}_{Y},g)\) _satisfies Wieler's axiom 1 and_ \(g\) _is open._
* \((Y,\mathrm{d}_{Y},g)\) _satisfies Wieler's axiom_ \(2\) _and_ \(g^{K}\) _is locally expanding (for the_ \(K\) _in Wieler's axiom 2)._
_Remark 1.10_.: The reader is encouraged to compare Theorem 1.9 to the more satisfying situation arising from going to a non-Hausdorff setting in Theorem 3.10.
A list of examples of Wieler solenoids can be found in [11]. Three explicit examples that are relevant and illustrative of the results in this paper are the following:
_Example 1.11_ (\(n\)-solenoid).: Let \(S^{1}\subseteq\mathbb{C}\) be the unit circle. Take \(n>1\) and define \(g:S^{1}\to S^{1}\) via \(z\mapsto z^{n}\). Since \(g\) is open and expansive (notice that \(|g^{\prime}(z)|=n\)), one readily verifies Wieder's axioms for \((S^{1},g)\) (see Theorem 1.9). Hence the associated inverse limit is a Smale space. It is worth emphasizing that in this case \(g\) is a local homeomorphism.
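To make this verification concrete, here is a minimal sketch with one possible choice of constants (the arc-length metric \(\mathrm{d}_{Y}\) on \(S^{1}\) and the bounds below are our assumptions, not the only options). Take \(K=1\), \(\gamma=\frac{1}{n}\) and \(\beta>0\) with \(n^{2}\beta<\pi\). If \(\mathrm{d}_{Y}(x,y)\leq\beta\), then

\[\mathrm{d}_{Y}(g(x),g(y))=n\,\mathrm{d}_{Y}(x,y)\qquad\text{and}\qquad\mathrm{d}_{Y}(g^{2}(x),g^{2}(y))=n^{2}\,\mathrm{d}_{Y}(x,y),\]

so Axiom 1 holds (with equality): \(\mathrm{d}_{Y}(g(x),g(y))=\gamma\,\mathrm{d}_{Y}(g^{2}(x),g^{2}(y))\). For Axiom 2, at this scale \(g(B(x,\epsilon))=B(g(x),n\epsilon)\), and hence

\[g(B(g(x),\epsilon))=g^{2}(B(x,\tfrac{\epsilon}{n}))=g^{2}(B(x,\gamma\epsilon))\]

for \(0<\epsilon\leq\beta\).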
_Example 1.12_ (\(ab/ab\)-solenoid).: Let \(Y=S^{1}\lor S^{1}\) be the wedge sum of two circles as in Figure 1.
The map \(g:Y\to Y\) is defined using Figure 1. In Figure 1, we consider the outer circle to be the \(a\)-circle and the inner circle to be the \(b\)-circle. Each line segment labelled with
Figure 1. \(ab/ab\) pre-solenoid
\(a\) in Figure 1 is mapped onto the \(a\)-circle (i.e., the outer circle); while, each line segment labelled with \(b\) in Figure 1 is mapped onto the \(b\)-circle (i.e., the inner circle). The mapping is done in an orientation-preserving way, provided we have oriented both circles the same way, say clockwise. Note that \(g\) is not a local homeomorphism in this example. For more details on this specific example and one-solenoids in general, see [38, 40, 42]. The next example is also of this form.
_Example 1.13_ (\(aab/ab\)-solenoid).: Again, we take \(Y=S^{1}\lor S^{1}\) to be the wedge sum of two circles but with labels as in Figure 2. This example has been studied in [10, 11].
The map \(g:Y\to Y\) is defined from Figure 2 via the same process as in Example 1.12. The map \(g\) is of course different than the one in Example 1.12 because the labels are different. Again, the resulting map \(g\) is not a local homeomorphism and is an example of a one-solenoid, again see [38, 40, 42] for more on this class of examples.
### Fell algebras and their spectrum
Let us recall the basic facts about Fell algebras that we make use of. The spectrum of a separable \(C^{*}\)-algebra \(A\) is defined as the set \(\hat{A}\) of equivalence classes of irreducible representations (see [12, Chapter 3]). The spectrum \(\hat{A}\) can be topologized in several different ways, for instance in the Fell topology or the Jacobson topology. For a Fell algebra, the topologies coincide. The spectrum \(\hat{A}\) is locally quasi-compact by [12, Corollary 3.3.8].
A \(C^{*}\)-algebra \(A\) is a Fell algebra if every \([\pi_{0}]\in\hat{A}\) admits a neighborhood \(U\) and an element \(b\in A\) such that \(\pi(b)\) is a rank one projection for all \([\pi]\in U\). An equivalent definition of a Fell algebra is that \(A\) is generated by its abelian elements. For details, see [20, Chapter 3]. A Fell algebra has locally Hausdorff spectrum (i.e. any \([\pi]\in\hat{A}\) has a Hausdorff neighborhood) by [2, Corollary 3.4]. The spectrum of a \(C^{*}\)-algebra is always locally quasi-compact. The properties of the spectrum of a Fell algebra can be summarized as being locally Hausdorff and locally locally compact (see [6, Chapter 3]).
**Definition 1.14**.: Let \(\tilde{Y}\) be a topological space. A Hausdorff resolution of \(\tilde{Y}\) is a surjective local homeomorphism \(\psi:X\to\tilde{Y}\) from a locally compact, Hausdorff space \(X\).
_Example 1.15_.: The main example of a Fell algebra that we will concern ourselves with arises in a rather explicit way from a Hausdorff resolution. The construction can be found in [6, Corollary 5.4]. Suppose that \(\psi:X\to\tilde{Y}\) is a Hausdorff resolution of a topological
Figure 2. \(aab/ab\) pre-solenoid
space \(\tilde{Y}\). It follows that \(\tilde{Y}\) is locally Hausdorff and locally locally compact, and second countable if \(X\) is. We define the equivalence groupoid
\[R(\psi):=X\times_{\psi}X:=\{(y_{1},y_{2})\in X\times X:\psi(y_{1})=\psi(y_{2})\}.\]
By declaring the domain and range mappings \(d(y_{1},y_{2}):=y_{2}\) and \(r(y_{1},y_{2}):=y_{1}\) to be local homeomorphisms, \(R(\psi)\) becomes an etale groupoid over \(X\). By [6, Corollary 5.4], \(C^{*}(R(\psi))\) is a Fell algebra with vanishing Dixmier-Douady invariant and spectrum \(\tilde{Y}\). We also note that \(R(\psi)\) is amenable so \(C^{*}(R(\psi))=C^{*}_{r}(R(\psi))\).
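As a concrete instance of this construction (a sketch; the identification at the end is the standard one and is not needed later), take \(X=\mathbb{R}\) and \(\psi:\mathbb{R}\to\mathbb{R}/\mathbb{Z}\) the quotient map, a surjective local homeomorphism from a locally compact, Hausdorff space. Then

\[R(\psi)=\{(s,t)\in\mathbb{R}\times\mathbb{R}\mid s-t\in\mathbb{Z}\}\cong\mathbb{R}\times\mathbb{Z}\]

is an etale equivalence groupoid, and by [6, Corollary 5.4] the algebra \(C^{*}(R(\psi))\) is a Fell algebra with vanishing Dixmier-Douady invariant and spectrum \(\mathbb{R}/\mathbb{Z}\); in this case one can identify it with \(C(\mathbb{R}/\mathbb{Z})\otimes\mathbb{K}(\ell^{2}(\mathbb{Z}))\). Essentially this groupoid reappears as \(G_{0}(\mathbf{P})\) for the \(n\)-solenoid in Example 2.6 below.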
The theory of Dixmier-Douady invariants of Fell algebras was introduced and developed in [20], also see [6]. We only need to consider Fell algebras with vanishing Dixmier-Douady class in which case the following theorem (see [20, 6]) reduces the problem to a more manageable situation.
**Theorem 1.16**.: _We have the following relationship between Fell algebras and non-Hausdorff spaces._
1. _Let_ \(A\) _be a separable Fell algebra with vanishing Dixmier-Douady invariant. Then the locally Hausdorff and locally locally compact space_ \(\hat{A}\) _determines_ \(A\) _up to stable isomorphism in the sense that whenever_ \(A^{\prime}\) _is a separable Fell algebra with vanishing Dixmier-Douady invariant then a homeomorphism_ \(h:\hat{A}\to\widehat{A^{\prime}}\) _can be lifted to a stable isomorphism_ \(A\otimes\mathbb{K}\cong A^{\prime}\otimes\mathbb{K}\)_._
2. _A topological space_ \(\tilde{Y}\) _is locally Hausdorff and locally locally compact if and only if it admits a Hausdorff resolution_ \(\psi:X\to\tilde{Y}\)_. If_ \(\tilde{Y}\) _is second countable then_ \(X\) _can also be chosen second countable. In particular, any second countable, locally Hausdorff, locally locally compact topological space is the spectrum of a separable Fell algebra with vanishing Dixmier-Douady invariant._
_Remark 1.17_.: One can view the previous theorem as follows. Taking the spectra defines an equivalence between the category of stable isomorphism classes of separable Fell algebras with vanishing Dixmier-Douady invariants and that of second countable, locally Hausdorff and locally locally compact spaces. However, it should be emphasized that the morphisms in the category in the previous sentence are homeomorphisms; one cannot generalize to the case of continuous maps, even in the case of locally Hausdorff and compact spaces.
For a proof of the first statement on stable uniqueness, see [20, Theorem 7.13]. The existence result in the second statement can be found in [6, Corollary 5.5].
### Correspondences
Results from this section and the next will not be needed until Section 5. Furthermore, the reader only interested in the purely dynamical results of the present paper can skip to Section 2 without issue.
A \(C^{*}\)-correspondence \({}_{A}E_{B}\) is a right Hilbert \(B\)-module equipped with a left action given by a homomorphism \(\varphi_{E}:A\to\mathcal{L}(E)\), where \(\mathcal{L}(E)\) denotes the \(C^{*}\)-algebra of adjointable operators on \(E\). In the literature one also finds the term \(A-B\)-Hilbert \(C^{*}\)-module for a \(C^{*}\)-correspondence from \(A\) to \(B\). A \(C^{*}\)-correspondence homomorphism \({}_{A}E_{B}\to{}_{A}F_{B}\) is a \(B\)-linear map \(\Phi:E\to F\) satisfying
\[\Phi(a\cdot\xi)=a\cdot\Phi(\xi)\text{ and }\langle\xi,\nu\rangle_{B}=\langle \Phi(\xi),\Phi(\nu)\rangle_{B},\]
for all \(a\in A\), and \(\xi,\nu\in E\).
**Definition 1.18**.: An \(A-B\) bimodule \(E_{0}\) is called a _pre-correspondence_ if it has a \(B\)-valued semi-inner product satisfying
\[\langle\xi,\nu\cdot b\rangle=\langle\xi,\nu\rangle b,\ \ \ \ \langle\xi,\nu \rangle^{*}=\langle\nu,\xi\rangle\]
and \(\langle a\cdot\xi,a\cdot\xi\rangle\leq\|a\|^{2}\langle\xi,\xi\rangle\) for all \(a\in A,b\in B\) and \(\xi,\nu\in E_{0}\). Modding out by the elements of length \(0\) and completing gives a \(C^{*}\)-correspondence \({}_{A}E_{B}\). We call \({}_{A}E_{B}\) the _completion_ of the pre-correspondence \(E_{0}\).
**Proposition 1.19**.: _[_13_, Lemma 1.23]_ _Let \(E_{0}\) be an \(A-B\) pre-correspondence given with the completion \({}_{A}E_{B}\), and let \(F\) be an \(A-B\) correspondence. If there is a map \(\Phi:E_{0}\to F\) satisfying_
\[\Phi(a\cdot\xi)=\varphi_{F}(a)\Phi(\xi)\qquad\text{and}\qquad\langle\Phi(\xi), \Phi(\nu)\rangle_{B}=\langle\xi,\nu\rangle_{B},\]
_for all \(a\in A\) and \(\xi,\nu\in E_{0},\) then \(\Phi\) extends uniquely to an injective \(A-B\) correspondence homomorphism \(\tilde{\Phi}:E\to F.\)_
The _balanced tensor product_\(E\otimes_{B}F\) of an \(A-B\) correspondence \(E\) and a \(B-C\) correspondence \(F\) is formed as follows: the algebraic tensor product \(E\odot F\) is a pre-correspondence with the \(A-C\) bimodule structure satisfying
\[a(\xi\otimes\nu)c=a\xi\otimes\nu c\qquad\text{for $a\in A,\xi\in E,\nu\in F,c \in C$},\]
and the unique \(C\)-valued semi-inner product whose values on elementary tensors are given by
\[\langle\xi_{1}\otimes\nu_{1},\xi_{2}\otimes\nu_{2}\rangle_{C}=\langle\nu_{1},\langle\xi_{1},\xi_{2}\rangle_{B}\cdot\nu_{2}\rangle_{C}\qquad\text{for $\xi_{1},\xi_{2}\in E,\nu_{1},\nu_{2}\in F$}.\]
This semi-inner product defines a \(C\)-valued inner product on the quotient \(E\odot_{B}F\) of \(E\odot F\) by the subspace generated by elements of form
\[\xi\cdot b\otimes\nu-\xi\otimes\varphi_{F}(b)\nu\qquad(\xi\in E,\,\nu\in F,\,b\in B).\]
The completion \(E\otimes_{B}F\) of \(E\odot_{B}F\) with respect to the norm coming from the \(C\)-valued inner product is an \(A-C\) correspondence, where the left action is given by
\[A\to\mathcal{L}(E\otimes_{B}F),\qquad a\mapsto\varphi_{E}(a)\otimes 1_{F},\]
for \(a\in A.\) In other words, the \(A-C\) correspondence \(E\otimes_{B}F\) is the completion of the pre-correspondence \(E\odot F\) (as in Definition 1.18).
We denote the canonical image of \(\xi\otimes\nu\) in \(E\otimes_{B}F\) by \(\xi\otimes_{B}\nu\). The term _balanced_ refers to the property
\[\xi\cdot b\otimes_{B}\nu=\xi\otimes_{B}b\cdot\nu\qquad\text{for $\xi\in E,b \in B,\nu\in F$},\]
which is a consequence of the construction.
**Definition 1.20**.: _[_36_, Definition 5.7]_ _Hilbert modules \(E_{A}\) and \(F_{B}\) are Morita equivalent if there exists an imprimitivity bimodule \({}_{A}M_{B}\) such that \(E\otimes_{A}M\cong F\) as Hilbert \(B\)-modules._
We now put Definition 1.20 in the setting of \(C^{*}\)-correspondences:
**Definition 1.21**.: \(C^{*}\)_-correspondences \({}_{A}E_{B}\) and \({}_{A}F_{C}\) are Morita equivalent if there exists an imprimitivity bimodule \({}_{B}M_{C}\) such that \(E\otimes_{B}M\cong F\) as \(A-C\) correspondences._
### Groupoid Actions and Equivalence
The correspondences reviewed in the last section will in this paper arise from groupoids and their actions. Again, the reader only interested in the purely dynamical results can skip to Section 2 without issue.
**Definition 1.22**.: Suppose \(G\) is a groupoid and that \(X\) is a set together with a map \(\rho:X\to G^{(0)}\) called the _moment map_. Then a left action of \(G\) on \(X\) is a map \((g,x)\mapsto g\cdot x\) from \(G*X=\{(g,x)\in G\times X:s(g)=\rho(x)\}\) to \(X\) such that
* \(\rho(x)\cdot x=x,\) for all \(x\in X\) and
* if \((g,g^{\prime})\in G^{(2)}\) and \((g^{\prime},x)\in G*X\), then \((g,g^{\prime}\cdot x)\in G*X\) and \(gg^{\prime}\cdot x=g\cdot(g^{\prime}\cdot x)\).
_Right actions are defined analogously, and we denote by \(\sigma\) the moment map for a right action._
**Definition 1.23**.: _[_25_, Definition 1.2]_ _Let \(G_{1}\) and \(G_{2}\) be second countable locally compact Hausdorff groupoids and \(Z\) a second countable locally compact Hausdorff space. The space \(Z\) is a groupoid correspondence from \(G_{1}\) to \(G_{2}\) if it satisfies the following properties:_
1. _there exists a left proper action of_ \(G_{1}\) _on_ \(Z\) _such that_ \(\rho\) _is an open map;_
2. _there exists a right proper action of_ \(G_{2}\) _on_ \(Z\);_
3. _the_ \(G_{1}\) _and_ \(G_{2}\) _actions commute;_
4. _the map_ \(\rho\) _induces a bijection of_ \(Z/G_{2}\) _onto_ \(G_{1}^{(0)}\)_._
**Theorem 1.24**.: _[_25_, Theorem 1.4]_ _Let \(G_{1}\), \(G_{2}\) be second countable locally compact Hausdorff etale groupoids; and let \(Z\) be a groupoid correspondence from \(G_{1}\) to \(G_{2}\). Then the pre-correspondence \({}_{C_{c}(G_{2})}C_{c}(Z)_{C_{c}(G_{1})}\) extends to a correspondence from \(C^{*}(G_{2})\) to \(C^{*}(G_{1})\) with the actions_
\[(\xi\cdot a)(z) =\sum_{\begin{subarray}{c}g\in G_{1}\text{ with}\\ s(g)=\rho(z)\end{subarray}}\xi(g\cdot z)a(g)\] \[(b\cdot\xi)(z) =\sum_{\begin{subarray}{c}g\in G_{2}\text{ with}\\ \sigma(z)=r(g^{-1})\end{subarray}}b(g^{-1})\xi(z\cdot g^{-1})\]
_for \(\xi\in C_{c}(Z),a\in C_{c}(G_{1}),b\in C_{c}(G_{2}),z\in Z.\) The inner product is defined by_
\[\langle\xi_{1},\xi_{2}\rangle(g)=\sum_{\begin{subarray}{c}h^{-1}\in G_{2}\text{ with}\\ r(h^{-1})=\sigma(z)\end{subarray}}\overline{\xi_{1}(z\cdot h^{-1})}\xi_{2}(g^{-1 }\cdot z\cdot h^{-1}),\]
_where \(g\in G_{1},\xi_{1},\xi_{2}\in C_{c}(Z),\) and \(z\in Z\) such that \(r(g)=\rho(z)\)._
## 2. \(C^{*}\)-algebras associated to a Wieler solenoid
The fine structure of the stable algebra of a Wieler solenoid was studied in [11], extending results from [8] in the case of the pre-solenoid being defined from a local homeomorphism. We here review and refine the relevant points of [11], leading up to the stable algebra of a Wieler solenoid being a stationary inductive limit of a Fell algebra defined from the dynamics. This Fell algebra plays an important role in the paper, as its spectrum will be the non-Hausdorff dynamical system \((X^{u}(\mathbf{P})/{\sim_{0}},\tilde{g})\) of main interest in this paper.
### The stable algebra of a Smale space
Following [32], we construct the stable groupoid of \((X,\varphi)\). Let \(\mathbf{P}\) denote a finite \(\varphi\)-invariant set of periodic points of \(\varphi\) and define
\[X^{u}(\mathbf{P})=\{x\in X\,|\,x\sim_{u}p\text{ for some }p\in\mathbf{P}\}\]
and
\[G^{s}(\mathbf{P}):=\{(x,y)\in X^{u}(\mathbf{P})\times X^{u}(\mathbf{P})\,|\,x \sim_{s}y\}.\]
Still following [32], a topology is defined on \(G^{s}(\mathbf{P})\) by constructing a neighborhood base. Suppose \((x,y)\in G^{s}(\mathbf{P})\). Then there exists \(k\in\mathbb{N}\) such that
\[\varphi^{k}(x)\in X^{s}\left(\varphi^{k}(y),\frac{\epsilon_{X}}{2}\right).\]
Since \(\varphi\) is continuous there exists \(\delta>0\) such that
\[\varphi^{k}(X^{u}(y,\delta))\subseteq X^{u}\left(\varphi^{k}(y),\frac{ \epsilon_{X}}{2}\right).\]
Using this data, we define a function \(h_{(x,y,\delta)}:X^{u}(y,\delta)\to X^{u}(x,\epsilon_{X})\) via
\[z\mapsto\varphi^{-k}([\varphi^{k}(z),\varphi^{k}(x)])\]
and have the following result from [32]:
**Theorem 2.1**.: _The function \(h=h_{(x,y,\delta)}\) is a homeomorphism onto its image and (by letting \(x\), \(y\), and \(\delta\) vary) the sets_
\[V(x,y,h,\delta):=\{(h(z),z)\,|\,z\in X^{u}(y,\delta)\}\]
_form a neighborhood base for an etale topology on the groupoid \(G^{s}(\mathbf{P})\). Moreover, the groupoid \(G^{s}(\mathbf{P})\) is amenable, second countable, locally compact, and Hausdorff._
**Definition 2.2**.: The stable Ruelle groupoid is the groupoid \(G^{s}(\mathbf{P})\rtimes\mathbb{Z}\) where the \(\mathbb{Z}\)-action is the one induced from \(\varphi|_{X^{u}(P)}\); the associated \(C^{*}\)-algebra is called the stable Ruelle algebra. It is worth noting that this definition requires that \(\mathbf{P}\) is \(\varphi\)-invariant (as was assumed above).
### The open subrelation, Fell algebra and its spectrum
We review the construction of \(\sim_{0}\) and the associated groupoid \(C^{*}\)-algebra studied in [11].
Let \(\epsilon_{Y}>0\) be the global constant defined in [11] and \(\pi_{0}:X^{u}(\mathbf{P})\to Y\) denote the map defined via
\[\mathbf{x}=(x_{n})_{n\in\mathbb{N}}\mapsto x_{0}.\]
**Definition 2.3**.: Suppose \(\mathbf{x}\) and \(\mathbf{y}\) are in \(X^{u}(\mathbf{P})\). Then \(\mathbf{x}\sim_{0}\mathbf{y}\) if
1. \(\pi_{0}(\mathbf{x})=\pi_{0}(\mathbf{y})\) (i.e., \(x_{0}=y_{0}\));
2. there exists \(0<\delta_{\mathbf{x}}<\epsilon_{Y}\) and an open set \(U\subseteq X^{u}(\mathbf{y},\epsilon_{Y})\) such that \[\pi_{0}(X^{u}(\mathbf{x},\delta_{\mathbf{x}}))=\pi_{0}(U).\]
Let \(G_{0}(\mathbf{P})=\{(\mathbf{x},\mathbf{y})\mid\mathbf{x}\sim_{0}\mathbf{y}\}\).
To parse the requirements of Definition 2.3, the reader can return to Theorem 1.7 for a description of the local unstable sets. Results in [11] imply that \(G_{0}(\mathbf{P})\) is an open subgroupoid of \(G^{s}(\mathbf{P})\) and hence that \(C^{*}(G_{0}(\mathbf{P}))\) is a subalgebra of \(C^{*}(G^{s}(\mathbf{P}))\). Furthermore, we can define
\[G_{k}(\mathbf{P})=\{(\mathbf{x},\mathbf{y})\mid\varphi^{k}(\mathbf{x})\sim_{0 }\varphi^{k}(\mathbf{y})\}\]
and one of the main results of [11] is the following:
**Theorem 2.4**.: _Using the notation above, there is a nested sequence of etale subgroupoids_
\[G_{0}(\mathbf{P})\subset G_{1}(\mathbf{P})\subset G_{2}(\mathbf{P})\subset\ldots\]
_of \(G^{s}(\mathbf{P})\) such that \(G^{s}(\mathbf{P})=\bigcup_{k=0}^{\infty}G_{k}(\mathbf{P})\) and each \(G_{k}(\mathbf{P})\) is isomorphic to \(G_{0}(\mathbf{P})\) in the natural way \((\mathbf{x},\mathbf{y})\mapsto(\varphi^{k}(\mathbf{x}),\varphi^{k}(\mathbf{y}))\)._
In practical terms, this theorem reduces the study of \(G^{s}(\mathbf{P})\) to \(G_{0}(\mathbf{P})\) and likewise the study of the \(C^{*}\)-algebra \(C^{*}(G^{s}(\mathbf{P}))\) to \(C^{*}(G_{0}(\mathbf{P}))\). Since \(C^{*}(G_{0}(\mathbf{P}))\) is a type I \(C^{*}\)-algebra, many of its properties can be determined from its spectrum, \(X^{u}(\mathbf{P})/{\sim_{0}}\).
Furthermore, again following [11], we define \(\tilde{g}:X^{u}(\mathbf{P})/{\sim_{0}}\to X^{u}(\mathbf{P})/{\sim_{0}}\) via
\[[\mathbf{x}]\mapsto[(g(x_{0}),g(x_{1}),\ldots)]\]
and \(r:X^{u}(\mathbf{P})/{\sim_{0}}\to Y\) via
\[[\mathbf{x}]\mapsto x_{0}.\]
Proofs that \(\tilde{g}\) and \(r\) are well-defined can be found in [11]. Moreover, there is a commutative diagram
\[\begin{CD}X^{u}(\mathbf{P})@>{\varphi}>{}>X^{u}(\mathbf{P})\\ @V{q}V{}V@V{q}V{}V\\ X^{u}(\mathbf{P})/{\sim_{0}}@>{\tilde{g}}>{}>X^{u}(\mathbf{P})/{\sim_{0}}\\ @V{r}V{}V@V{r}V{}V\\ Y@>{g}>{}>Y\end{CD}\]
**Proposition 2.5**.: _The maps \(q:X^{u}(\mathbf{P})\to X^{u}(\mathbf{P})/{\sim_{0}}\) and \(\tilde{g}:X^{u}(\mathbf{P})/{\sim_{0}}\to X^{u}(\mathbf{P})/{\sim_{0}}\) are each local homeomorphisms (but not in general covering maps). In particular, \(q:X^{u}(\mathbf{P})\to X^{u}(\mathbf{P})/{\sim_{0}}\) is a Hausdorff resolution of \(X^{u}(\mathbf{P})/{\sim_{0}}\)._
Next, we discuss \(G_{0}(\mathbf{P})\) and \(X^{u}(\mathbf{P})/{\sim_{0}}\) for the three examples considered in Section 1.2.
_Example 2.6_.: Recall the setup of Example 1.11. The space \(Y\) is the unit circle, \(S^{1}\subseteq\mathbb{C}\) and (with fixed \(n>1\)) \(g:S^{1}\to S^{1}\) is defined via \(z\mapsto z^{n}\). In this case, we take \(\mathbf{P}\) to be the set containing the single point \((1,1,1,\ldots)\). Then \(X^{u}(\mathbf{P})\) is homeomorphic to \(\mathbb{R}\), \(\sim_{0}\) can be identified with the equivalence relation \(x\sim y\) when \(x-y\in\mathbb{Z}\) and \((X^{u}(\mathbf{P})/{\sim_{0}},\tilde{g})\) can be identified with the original system \((S^{1},g)\). In fact the results in this example generalize to the case when \(g:Y\to Y\) is a local homeomorphism, in which case \((X^{u}(\mathbf{P})/{\sim_{0}},\tilde{g})=(Y,g)\), see [11] for details.
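One way to see these identifications explicitly (this parametrization is a standard choice and we only sketch it) is via the map

\[\mathbb{R}\to X,\qquad t\mapsto\big(e^{2\pi it},\,e^{2\pi it/n},\,e^{2\pi it/n^{2}},\ldots\big),\]

which is a bijection onto \(X^{u}(\mathbf{P})\): every coordinate of the image tends to \(1\), so each such point is unstably equivalent to the fixed point, and conversely every point of \(X^{u}(\mathbf{P})\) arises in this way. Under this identification \(\pi_{0}(t)=e^{2\pi it}\), so two parameters have the same image under \(\pi_{0}\) exactly when they differ by an integer, which recovers the description of \(\sim_{0}\) as translation by \(\mathbb{Z}\) given above.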
_Example 2.7_.: The relations \(\sim_{0}\) and \(\sim_{1}\) associated to the \(aab/ab\)-solenoid defined in Example 1.13 can be viewed as in Figures 3 and 4.
The details for \(G_{0}(\mathbf{P})\) (i.e., \(\sim_{0}\)), which is illustrated in Figure 3, are as follows. We take \(\mathbf{P}\) to be the set containing the fixed point associated to the wedge point (that is, \((p,p,p,\ldots)\) see Figure 2) and then have that \(X^{u}(\mathbf{P})\) is homeomorphic to the real line. Intervals labelled with \(a\) (resp. \(b\)) are mapped by \(\pi_{0}=r\circ q\) to the outer (resp. inner) circle in \(Y\). Identifying the endpoints of these intervals as \(\mathbb{Z}\), we have that two integer points are equivalent if and only if the intervals to the left and right are labelled the same. While non-integer points are equivalent if and only if they are in intervals with the same label and their difference is in \(\mathbb{Z}\).
Figure 4 illustrates \(G_{1}(\mathbf{P})\) in a similar way. The reader can find more details in both cases in [11].
We will show that in general \(G_{0}(\mathbf{P})\) is not closed in \(G_{1}(\mathbf{P})\) using this example. Notice that the points \(p\) and \(q\) are equivalent with respect to \(\sim_{1}\) but not with respect to \(\sim_{0}\). However there is a sequence of points (namely \(p+\frac{1}{n}\sim_{0}q+\frac{1}{n}\) where \((p+\frac{1}{n},q+\frac{1}{n})\) converge to \((p,q)\)). It follows that \(\sim_{0}\) is not closed in \(\sim_{1}\).
Continuing with this example, we discuss \((X^{u}(\mathbf{P})/{\sim_{0}},\tilde{g})\). Notice that the map \(g:Y\to Y\) is not a local homeomorphism. Furthermore, the discussion of \(G_{0}(\mathbf{P})\) above shows that \(X^{u}(\mathbf{P})/{\sim_{0}}\) is given as follows. The point \(p\in Y\) splits into three non-Hausdorff points, denoted \(ab,ba,aa\). This space is illustrated in Figure 5. These points correspond to the three different \(\sim_{0}\)-equivalence classes for "integer" points, as seen in Figure 3. Open neighborhoods of the three points (that is, \(ab,ba,aa\)) are pictured in Figure 6.
Next, we discuss the map \(\tilde{g}:X^{u}(\mathbf{P})/{\sim_{0}}\to X^{u}(\mathbf{P})/{\sim_{0}}\). The map \(\tilde{g}\) takes points labelled \(ab\), \(ba\), and \(aa\) to \(ba\). It takes the point labelled \(q_{1}\) to \(aa\), the point labelled \(q_{2}\) to \(ab\), and the point labelled \(q_{3}\) to \(ab\). The other points (i.e., the ones without labels) are mapped in the same way as \(g:Y\to Y\).
Some important properties to notice about \(\tilde{g}:X^{u}(\mathbf{P})/{\sim_{0}}\to X^{u}(\mathbf{P})/{\sim_{0}}\) are the following:
1. \(\tilde{g}\) is a local homeomorphism; in particular it is open;
2. \(\tilde{g}(r^{-1}(p))=\{ba\}\) where \(r:X^{u}(\mathbf{P})/{\sim_{0}}\to Y\) and \(p\) is the wedge point of \(Y\);
3. informally, \(\tilde{g}\) is locally expanding.
The importance of the second and third properties will be seen in Section 3. In particular, \(\tilde{g}\) will be shown to be "forward orbit expansive" in Section 3, which makes precise the third item in the list above.
_Example 2.8_.: When \((Y,g)\) is the ab/ab-solenoid the map \(g\) is not a local homeomorphism. We take the set \(\mathbf{P}\) to be the set containing the single element \((p,p,\ldots)\) where \(p\) is the wedge point. Then, using a similar method to the one in the previous example, one can show that \(X^{u}(\mathbf{P})\) is homeomorphic to \(\mathbb{R}\), \(X^{u}(\mathbf{P})/{\sim_{0}}\) is homeomorphic to the circle and \(\tilde{g}\) is the two-fold cover of the circle. The main point of this example is that \(X^{u}(\mathbf{P})/{\sim_{0}}\) can be Hausdorff even when \(g\) is not a local homeomorphism.
We restate [11, Lemma 4.4] here for ease of the reader; it will be used a number of times below.
**Lemma 2.9**.: _There exists constant \(K_{0}>0\) such that if \(\mathbf{x}\) and \(\mathbf{y}\) are in \(X^{u}(P)\) and \(x_{i}=y_{i}\) for \(0\leq i\leq K_{0}\), then \(\mathbf{x}\sim_{0}\mathbf{y}\)._
**Theorem 2.10**.: _Suppose that \(K_{0}\) is as in the previous lemma, \([\mathbf{y}]_{0}\in X^{u}(\mathbf{P})/{\sim_{0}}\) and there exists \(V\) open in \(Y\) such that_
1. \(\pi_{0}(\mathbf{y})=y_{0}\in V\) _and_
2. \(g^{K_{0}}|_{V}\) _is injective._
_Then \(r^{-1}(y_{0})=\{[\mathbf{y}]_{0}\}\)._
Figure 6. Open neighborhoods of the three non-Hausdorff points in \(X^{u}(\mathbf{P})/{\sim_{0}}\) for the \(aab/ab\) solenoid.
Proof.: Take \(\delta>0\) such that \(B(y_{0},\delta)\subseteq V\).
Suppose that \(\hat{\mathbf{y}}=(y_{0},\hat{y}_{1},\hat{y}_{2},\ldots)\in X^{u}(\mathbf{P})\). We must show \(\mathbf{y}\sim_{0}\hat{\mathbf{y}}\). By the previous lemma,
\[(g^{K_{0}}(y),\ldots,g(y),y,y_{1},\ldots)\sim_{0}(g^{K_{0}}(y),\ldots,g(y),y, \hat{y}_{1},\ldots).\]
By the definition of \(\sim_{0}\), there exists open sets in \(X^{u}(P)\), \(U\subseteq X^{u}(\varphi^{K_{0}}(\mathbf{y}),\delta)\) and \(\hat{U}\subseteq X^{u}(\varphi^{K_{0}}(\hat{\mathbf{y}}),\delta)\), such that
1. \(\varphi^{K_{0}}(\mathbf{y})=(g^{K_{0}}(y),\ldots,g(y),y,y_{1},\ldots)\in U\);
2. \(\varphi^{K_{0}}(\hat{\mathbf{y}})=(g^{K_{0}}(y),\ldots,g(y),y,\hat{y}_{1}, \ldots)\in\hat{U}\);
3. \(\pi_{0}(U)=\pi(\hat{U})\).
Since \(\varphi|_{X^{u}(\mathbf{P})}\) is a homeomorphism, both \(\varphi^{-K_{0}}(U)\) and \(\varphi^{-K_{0}}(\hat{U})\) are open in \(X^{u}(\mathbf{P})\). Furthermore, \(\mathbf{y}\in\varphi^{-K_{0}}(U)\) and \(\hat{\mathbf{y}}\in\varphi^{-K_{0}}(\hat{U})\). Notice that \(\varphi^{-K_{0}}(U)\subseteq X^{u}(\mathbf{y},\delta)\) and \(\varphi^{-K_{0}}(\hat{U})\subseteq X^{u}(\hat{\mathbf{y}},\delta)\).
We show that \(\pi_{0}(\varphi^{-K_{0}}(U))=\pi_{0}(\varphi^{-K_{0}}(\hat{U}))\). Take \(\mathbf{z}\in\varphi^{-K_{0}}(U)\). Then
\[(g^{K_{0}}(z_{0}),\ldots,g(z_{0}),z_{0},\ldots)\in U\]
and by the third item in the list above, there exists \((\bar{z}_{0},\bar{z}_{1},\ldots)\in\hat{U}\subseteq X^{u}(\varphi^{K_{0}}(\hat{\mathbf{y}}),\delta)\) with \(\bar{z}_{0}=g^{K_{0}}(z_{0})\). Since \((\bar{z}_{0},\bar{z}_{1},\ldots)\in X^{u}(\varphi^{K_{0}}(\hat{\mathbf{y}}),\delta)\), we have \(\mathrm{d}_{Y}(\bar{z}_{K_{0}},y_{0})<\delta\). However, \(g^{K_{0}}\) is injective on \(B(y_{0},\delta)\subseteq V\), hence \(\bar{z}_{K_{0}}=z_{0}\) (i.e., \(z_{0}\) is the unique preimage of \(g^{K_{0}}(z_{0})\) inside \(V\)). Hence \(\pi_{0}(\mathbf{z})=\pi_{0}(\bar{z}_{K_{0}},\bar{z}_{K_{0}+1},\ldots)\) and the result follows.
**Corollary 2.11**.: _If \((Y,g)\) is a Williams presolenoid, then \(r\) is one-to-one on a dense open set of \(X^{u}(\mathbf{P})/{\sim_{0}}\)._
Proof.: The conditions of the previous theorem hold for any non-branched point in \(Y\). Since the set of branched points is closed and nowhere dense the result follows. (The reader can find the precise definition of branched point in [41]. This is the only proof in the present paper that uses this term.)
To summarize, we believe the results of this section along with the three examples give sufficient motivation to consider the properties of the dynamical system \((X^{u}(\mathbf{P})/{\sim_{0}},\tilde{g})\) and its connection to \((X,\varphi)\) and \((Y,g)\).
## 3. Expansive dynamics in the non-Hausdorff setting
Since \(X^{u}(\mathbf{P})/{\sim_{0}}\) is in general a non-Hausdorff space and we are interested in the dynamics on it, we discuss the general situation for expansive maps in the non-Hausdorff setting. Much of the general theory is based on work of Achigar, Artigue, and Monteverde [1], who consider the case of expansive homeomorphisms in the non-Hausdorff setting. The work in this section differs from [1] in that we relax the homeomorphism assumption and consider a pair \((\tilde{Y},\tilde{g})\) of a compact topological space \(\tilde{Y}\) and a continuous surjective map \(\tilde{g}:\tilde{Y}\to\tilde{Y}\); additional conditions on \(\tilde{g}\) (such as expansiveness and being a local homeomorphism) will be explicitly stated as required. We emphasize that \(\tilde{Y}\) need not be Hausdorff. It might be useful for the reader to have a copy of [1] at hand while reading the present section, so that the results here can be compared to those in [1].
### Expansive maps on non-Hausdorff spaces
**Definition 3.1**.: Suppose that \(\mathcal{U}=\{U_{i}\}_{i\in I}\) is an open cover of a topological space and \((x_{n})_{n\in\mathbb{N}}\) and \((y_{n})_{n\in\mathbb{N}}\) are sequences in the space. Then we write
\[\{x_{n},y_{n}\}_{n\in\mathbb{N}}\prec\mathcal{U}\]
if for each \(n\in\mathbb{N}\), there exists \(i_{n}\in I\) such that
\[x_{n}\text{ and }y_{n}\text{ are elements of }U_{i_{n}}.\]
We use similar notation for two-sided sequences (that is, for \((x_{n})_{n\in\mathbb{Z}}\) and \((y_{n})_{n\in\mathbb{Z}}\))
In [1, Definition 2.1], the reader can find a notion of expansiveness for homeomorphisms of compact, non-Hausdorff spaces. Upon replacing the conditions of [1] involving the \(\mathbb{Z}\)-action associated to a homeomorphism, with an \(\mathbb{N}\)-action we arrive at a completely analogous definition of orbit expansive maps that need not be invertible.
**Definition 3.2**.: We say that \((\tilde{Y},\tilde{g})\) is forward orbit expansive if there exists a finite open cover of \(\tilde{Y}\), \(\mathcal{U}=\{U_{i}\}_{i=1}^{l}\), such that if \(x\), \(y\) are in \(\tilde{Y}\) with
\[\{\tilde{g}^{n}(x),\tilde{g}^{n}(y)\}_{n\in\mathbb{N}}\prec\mathcal{U},\]
then \(x=y\).
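As a minimal illustration of how this definition relates to the usual metric notion (this is the elementary converse of Proposition 3.5 below, stated here only for orientation), suppose that \((Y,d)\) is a compact metric space and that \(g\) is forward expansive with expansive constant \(c\), that is,

\[d(g^{n}(x),g^{n}(y))\leq c\ \text{ for all }n\in\mathbb{N}\quad\Longrightarrow\quad x=y.\]

Covering \(Y\) by finitely many open balls of radius \(\frac{c}{2}\) gives a finite open cover \(\mathcal{U}\) whose members have diameter at most \(c\); if \(\{g^{n}(x),g^{n}(y)\}_{n\in\mathbb{N}}\prec\mathcal{U}\), then \(d(g^{n}(x),g^{n}(y))\leq c\) for every \(n\) and hence \(x=y\). Thus \((Y,g)\) is forward orbit expansive.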
**Proposition 3.3**.: _Suppose that \((\tilde{Y},\tilde{g})\) is forward orbit expansive and \((\tilde{X},\tilde{\varphi})\) is the associated solenoid. Then \((\tilde{X},\tilde{\varphi})\) is orbit expansive in the sense of [1, Definition 2.1]._
Proof.: Take an open cover of \(\tilde{Y}\), \(\mathcal{U}_{\tilde{Y}}=\{U_{i}\}_{i=1}^{l}\), as in the definition of forward orbit expansive. Form
\[\mathcal{U}_{X}=\{U_{i}\times\tilde{Y}\times\tilde{Y}\times\ldots\}_{i=1}^{l},\]
which is an open cover of \(\tilde{X}\).
Suppose \(\mathbf{x}=(x_{0},x_{1},\ldots)\) and \(\mathbf{y}=(y_{0},y_{1},\ldots)\) are in \(\tilde{X}\) with
\[\{\tilde{\varphi}^{n}(\mathbf{x}),\tilde{\varphi}^{n}(\mathbf{y})\}_{n\in\mathbb{Z}}\prec\mathcal{U}_{X}.\]
Then there exists \(i_{0}\in\{1,\ldots l\}\) such that \(\mathbf{x}\) and \(\mathbf{y}\) are in \(U_{i_{0}}\times\tilde{Y}\times\tilde{Y}\times\ldots\). Hence, \(x_{0}\) and \(y_{0}\) are in \(U_{i_{0}}\). Using the fact that \(\tilde{\varphi}(\mathbf{x})=(\tilde{g}(x_{0}),\tilde{g}(x_{1}),\ldots)\) and an induction argument give us
\[\{\tilde{g}^{n}(x_{0}),\tilde{g}^{n}(y_{0})\}_{n\in\mathbb{N}}\prec\mathcal{U}_{\tilde{Y}}.\]
The definition of forward orbit expansive then implies that \(x_{0}=y_{0}\). Using the fact that \(\tilde{\varphi}^{-1}(\mathbf{x})=(x_{1},x_{2},\ldots)\) allows us to repeat the above argument to get \(x_{1}=y_{1}\). The proof is then completed by showing that \(x_{n}=y_{n}\) for each \(n\in\mathbb{N}\) in a similar way; this implies that \(\mathbf{x}=\mathbf{y}\).
**Proposition 3.4**.: _(compare with [1, Proposition 2.5]) Suppose that \((\tilde{Y},\tilde{g})\) is forward orbit expansive, then \(\tilde{Y}\) is \(T_{1}\)._
Proof.: Take an open cover of \(\tilde{Y}\), \(\mathcal{U}=\{U_{i}\}_{i=1}^{l}\), as in the definition of forward orbit expansive. Suppose \(x\neq y\) are in \(\tilde{Y}\). Then, using the definition of forward orbit expansive, there exists \(n\in\mathbb{N}\) and \(i_{x}\neq i_{y}\) such that \(\tilde{g}^{n}(x)\in U_{i_{x}}\) but \(\tilde{g}^{n}(x)\not\in U_{i_{y}}\) and \(\tilde{g}^{n}(y)\in U_{i_{y}}\) but \(\tilde{g}^{n}(y)\not\in U_{i_{x}}\). Since \(\tilde{g}\) is continuous, the sets \(\tilde{g}^{-n}(U_{i_{x}})\) and \(\tilde{g}^{-n}(U_{i_{y}})\) are open and the properties in the previous sentence imply that these sets are a \(T_{1}\)-separation of \(x\) and \(y\).
**Proposition 3.5**.: _(compare with [1, Theorem 2.7]) Suppose that \(Y\) is compact and Hausdorff, \((Y,g)\) is forward orbit expansive, and \(g\) is an open map. Then \(Y\) is metrizable and \((Y,g)\) is forward expansive._
Proof.: To begin, we note that \(g\), in addition to being open, is also a closed map. Let \(\mathcal{U}\) be an open cover of \(Y\) as in the definition of forward orbit expansive. Given \(U\in\mathcal{U}\) and \(y\in U\), there exists \(V_{y}\) open such that
\[y\in V_{y}\subseteq\overline{V}_{y}\subseteq U.\]
Using this fact and the compactness of \(Y\), there exists an open cover \(\mathcal{V}=\{V_{1},\ldots,V_{m}\}\) such that for each \(V_{i}\) there exists \(U\in\mathcal{U}\) with \(\overline{V}_{i}\subseteq U\). This property and forward orbit expansiveness imply that
\[\operatorname{card}\left(\bigcap_{i\geq 0}g^{i}(\overline{V}_{k_{i}})\right)\leq 1\]
for any \((k_{i})_{i\geq 0}\in\{1,\ldots m\}^{\mathbb{N}}\).
Now suppose that \(y\in Y\) and \(W\) is an open set with \(y\in W\). Since \(\mathcal{V}\) is a cover and \(g\) is onto, there exists \((k_{i})_{i\geq 0}\in\{1,\ldots,m\}^{\mathbb{N}}\) such that
\[y\in\bigcap_{i\geq 0}g^{i}(V_{k_{i}}).\]
It follows that \(\bigcap_{i\geq 0}g^{i}(\overline{V}_{k_{i}})=\{y\}\) and hence that
\[W^{c}\cap\left(\bigcap_{i\geq 0}g^{i}(\overline{V}_{k_{i}})\right)=\emptyset.\]
By the reformulation of compactness via the finite intersection property (this is also where we use the fact that \(g\) is a closed map) there exists \(N\in\mathbb{N}\) such that
\[W^{c}\cap\left(\bigcap_{i=0}^{N}g^{i}(\overline{V}_{k_{i}})\right)=\emptyset\]
and hence \(\bigcap_{i=0}^{N}g^{i}(V_{k_{i}})\subseteq W\).
It now follows that the collection
\[\left\{\bigcap_{i=0}^{N}g^{i}(V_{k_{i}})\mid N\in\mathbb{N}\text{ and }k_{i}\in\{1,\ldots,m\}\right\}\]
is a basis for the topology on \(Y\). Since this collection is countable (and \(Y\) is compact and Hausdorff), the topology on \(Y\) is metrizable.
For the second part of the theorem, again let \(\mathcal{U}\) be an open cover of \(Y\) as in the definition of forward orbit expansive. Fix a metric on \(Y\) that induces the given topology. Let \(\delta>0\) be the Lebesgue number of \(\mathcal{U}\) with respect to \(d\). One can show that \(\delta\) is an expansive constant for \((Y,g)\); the details are omitted.
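For completeness, here is a sketch of the omitted verification (using the convention that every subset of diameter less than \(\delta\) lies in some member of \(\mathcal{U}\)): if \(d(g^{n}(x),g^{n}(y))<\delta\) for every \(n\in\mathbb{N}\), then each two-point set \(\{g^{n}(x),g^{n}(y)\}\) has diameter less than \(\delta\) and is therefore contained in some member of \(\mathcal{U}\), so that

\[\{g^{n}(x),g^{n}(y)\}_{n\in\mathbb{N}}\prec\mathcal{U},\]

and forward orbit expansiveness gives \(x=y\).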
**Proposition 3.6**.: _(compare with [1, Proposition 2.14]) Suppose that \((\tilde{Y},\tilde{g})\) is forward orbit expansive, and \(\tilde{g}\) is an open map. Then, for each \(n\in\mathbb{N}\), \(\tilde{g}^{n}\) is forward orbit expansive._
Proof.: Take an open cover of \(\tilde{Y}\), \(\mathcal{U}=\{U_{i}\}_{i=1}^{l}\), as in the definition of forward orbit expansive for \(\tilde{g}\). Using the openness of the map \(\tilde{g}\) we have that
\[\mathcal{U}_{n}=\{U_{i_{1}}\cap\tilde{g}(U_{i_{2}})\cap\ldots\cap\tilde{g}^{n-1}(U_{i_{n}})\mid i_{1},i_{2},\ldots,i_{n}\in\{1,\ldots,l\}\}\]
is an open cover of \(\tilde{Y}\). Moreover, the fact that \(\mathcal{U}\) is forward orbit expansive for \(\tilde{g}\) implies that \(\mathcal{U}_{n}\) is forward orbit expansive for \(\tilde{g}^{n}\); the details are omitted.
**Proposition 3.7**.: _Suppose that \((\tilde{Y},\tilde{g})\) is forward orbit expansive with \(\tilde{g}\) open, \((\tilde{Y}_{\mathrm{Haus}},\tilde{g}_{\mathrm{Haus}})\) denotes its Hausdorffization (see [28]), and \(\tilde{r}:\tilde{Y}\to\tilde{Y}_{\mathrm{Haus}}\) denotes the natural map. Furthermore suppose that there exists \(L\in\mathbb{N}\) such that for each \(y\in\tilde{Y}_{\mathrm{Haus}}\), \(\tilde{g}^{L}(\tilde{r}^{-1}(y))\) is a singleton. Then \(\tilde{Y}\) is locally Hausdorff._
Proof.: By Proposition 3.6 and replacing \(\tilde{g}\) with \(\tilde{g}^{L}\), we can assume that \(L=1\). Take \(\mathcal{U}=\{U_{i}\}_{i=1}^{l}\), as in the definition of forward orbit expansive and \(x\in\tilde{Y}\). Let \(U=U_{i_{0}}\) where \(x\in U_{i_{0}}\).
We will show that \(U\) with the subspace topology is Hausdorff. Suppose \(y_{1}\) and \(y_{2}\) in \(U\) cannot be separated. We will show that \(y_{1}=y_{2}\). Since \(\tilde{g}\) is continuous, \(\tilde{g}(y_{1})\) and \(\tilde{g}(y_{2})\) cannot be separated, but then \(\tilde{g}(y_{1})=\tilde{g}(y_{2})\) (since \(L=1\)). Hence \(\{\tilde{g}^{n}(y_{1}),\tilde{g}^{n}(y_{2})\}_{n\in\mathbb{N}}\prec\mathcal{U}\) and by the definition of forward orbit expansive, \(y_{1}=y_{2}\) as required.
The next example shows that an additional condition (e.g., the assumption that \(\tilde{g}^{L}(\tilde{r}^{-1}(y))\) is a singleton in the previous result) is required to ensure the solenoid is Hausdorff.
_Example 3.8_.: Let \(\tilde{Y}\) be the unit circle in \(\mathbb{C}\) with two "1"s, see Figure 7.
We define \(\tilde{g}:\tilde{Y}\to\tilde{Y}\) to be the two fold covering map for points other than \(-1\), \(1^{a}\) and \(1^{b}\); for those points we define
\[-1\mapsto 1^{a}\text{ and }1^{a}\mapsto 1^{b}\text{ and }1^{b}\mapsto 1^{a}.\]
One can check that \(\tilde{g}\) is continuous, onto, and open. Furthermore, the solenoid associated to \((\tilde{Y},\tilde{g})\) is not Hausdorff since the points
\[(1^{a},1^{b},1^{a},\ldots)\text{ and }(1^{b},1^{a},1^{b},\ldots)\]
cannot be separated.
**Theorem 3.9**.: _Suppose that \((\tilde{Y},\tilde{g})\) is forward orbit expansive with \(\tilde{g}\) open (and continuous and onto as usual in this section) and there exists \((Y,g)\) and \(r:\tilde{Y}\to Y\) with_
1. \(Y\) _compact and Hausdorff;_
2. \(g\) _is continuous and onto;_
3. \(r\) _continuous, onto, and_ \(r\circ\tilde{g}=g\circ r\)_;_
4. _there exists_ \(L\in\mathbb{N}\) _such that for each_ \(y\in Y\)_,_ \(g^{L}(r^{-1}(y))\) _is a singleton._
_Then the solenoids associated to \((\tilde{Y},\tilde{g})\) and \((Y,g)\) are conjugate. In particular, the solenoid associated to \((\tilde{Y},\tilde{g})\) is Hausdorff._
Proof.: Following [41], we will define a shift equivalence between \((\tilde{Y},\tilde{g})\) and \((Y,g)\). In addition to the map \(r:\tilde{Y}\to Y\) in the statement of the theorem, we have the map \(s:Y\to\tilde{Y}\) defined via
\[y\mapsto\tilde{g}^{L}(r^{-1}(y))\]
where we have abused notation in the sense that \(\tilde{g}^{L}(r^{-1}(y))\) is really a set that contains a single element. The map \(s\) is onto since \(r\) and \(\tilde{g}\) are onto. One can also show that \(s\) is continuous.
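A sketch of the omitted continuity argument: since \(s(y)\) is the unique point of \(\tilde{g}^{L}(r^{-1}(y))\), for an open set \(V\subseteq\tilde{Y}\) we have \(s(y)\in V\) if and only if \(r^{-1}(y)\subseteq\tilde{g}^{-L}(V)\), and hence

\[s^{-1}(V)=Y\setminus r\big(\tilde{Y}\setminus\tilde{g}^{-L}(V)\big).\]

The set \(\tilde{Y}\setminus\tilde{g}^{-L}(V)\) is closed and hence compact, so its image under the continuous map \(r\) is compact and therefore closed in the Hausdorff space \(Y\); thus \(s^{-1}(V)\) is open.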
To show that \(r\) and \(s\) define a shift equivalence, we must show that
1. \(r\circ\tilde{g}=g\circ r\);
2. \(s\circ g=\tilde{g}\circ s\);
3. \(s\circ r=\tilde{g}^{L}\);
4. \(r\circ s=g^{L}\).
The first item is an assumption of the theorem. The fourth is similar to the third so only the proofs of the second and third will be considered in detail.
To see that \(s\circ g=\tilde{g}\circ s\), we have that
\[(s\circ g)(y)=\tilde{g}^{L}(r^{-1}(g(y)))\text{ and }(\tilde{g}\circ s)(y)= \tilde{g}^{L+1}(r^{-1}(y)).\]
Figure 7. Circle with two 1s pre-solenoid
Using \(r\circ\tilde{g}=g\circ r\), it follows that \(\tilde{g}(r^{-1}(y))\subseteq r^{-1}(g(y))\). Applying \(\tilde{g}^{L}\) to both sides and noting that \(\tilde{g}^{L}(r^{-1}(g(y)))\) is a singleton by assumption, we find that \(\tilde{g}^{L+1}(r^{-1}(y))\) is the same singleton because it is non-empty and contained in the singleton set \(\tilde{g}^{L}(r^{-1}(g(y)))\).
To see that \(s\circ r=\tilde{g}^{L}\), we have
\[(s\circ r)(\tilde{y})=\tilde{g}^{L}(r^{-1}(r(\tilde{y}))).\]
Now, \(\tilde{y}\in r^{-1}(r(\tilde{y}))\), so \(\tilde{g}^{L}(\tilde{y})\in\tilde{g}^{L}(r^{-1}(r(\tilde{y})))\). The set \(\tilde{g}^{L}(r^{-1}(r(\tilde{y})))\) is a singleton, so it must be equal to \(\{\tilde{g}^{L}(\tilde{y})\}\) as required.
Using work in [41], it follows that the map \(S:X\to\tilde{X}\) defined via
\[(y_{0},y_{1},y_{2},\ldots)\mapsto(s(y_{0}),s(y_{1}),s(y_{2}),\ldots)\]
is a conjugacy from \((X,\varphi)\) (the solenoid associated to \((Y,g)\)) to \((\tilde{X},\tilde{\varphi})\) (the solenoid associated to \((\tilde{Y},\tilde{g})\)). Its inverse is given by \(R:\tilde{X}\to X\) defined via
\[(\tilde{y}_{0},\tilde{y}_{1},\tilde{y}_{2},\ldots)\mapsto(r(\tilde{y}_{0}),r( \tilde{y}_{1}),r(\tilde{y}_{2}),\ldots).\]
For the final part of the theorem, \(X\) is Hausdorff because \(Y\) is Hausdorff. Hence \(\tilde{X}\), the solenoid associated to \((\tilde{Y},\tilde{g})\), is Hausdorff as well because \(X\) and \(\tilde{X}\) are homeomorphic.
### The dynamics on the locally Hausdorff quotient space
We now come to the main application of Subsection 3.1 on Wieler-Smale spaces.
**Theorem 3.10**.: _The dynamical system \((X^{u}(P)/{\sim_{0}},\tilde{g})\) is forward orbit expansive. Moreover, \(\tilde{g}\) is a local homeomorphism and, with \(r:X^{u}(P)/{\sim_{0}}\to Y\) as in Section 2.2, there exists \(L\in\mathbb{N}\) such that for each \(y\in Y\), \(\tilde{g}^{L}(r^{-1}(y))\) is a singleton._
Proof.: Recall the constants \(\beta>0\) and \(0<\gamma<1\) in the definition of a Wieler presolenoid, see Definition 1.4. Using the nature of the local stable and unstable sets for a Wieler solenoid (see Theorem 1.7) there exists \(0<\delta<\frac{\beta}{2}\) such that if \(\mathbf{x}\neq\mathbf{y}\) are in \(X^{u}(\mathbf{z},\delta)\) (for some \(\mathbf{z}\)) then \(x_{0}\neq y_{0}\). Using the compactness of \(X^{u}(P)/{\sim_{0}}\), we have an open cover of the form
\[\mathcal{U}=\{q(X^{u}(\mathbf{z}_{i},\delta))\cap r^{-1}(B(w_{j},\tfrac{\beta}{2}))\}_{i\in I,j\in J},\]
where \(\mathbf{z}_{i}\in X^{u}(P)\), \(w_{j}\in Y\), \(I\) and \(J\) are finite index sets, and the fact that \(q\) is an open map and \(r\) is continuous ensures that the given sets are open.
Let \([\mathbf{x}]_{0}\neq[\mathbf{y}]_{0}\) be in \(X^{u}(P)/{\sim_{0}}\). We will find \(N\in\mathbb{N}\) such that for each \(i\) and \(j\), the two element set \(\{\tilde{g}^{N}([\mathbf{x}]_{0}),\tilde{g}^{N}([\mathbf{y}]_{0})\}\) is not a subset of \(q(X^{u}(\mathbf{z}_{i},\delta))\cap r^{-1}(B(w_{j},\frac{\beta}{2}))\).
We are done unless there exists \(i_{0}\) such that
\[[\mathbf{x}]_{0}\text{ and }[\mathbf{y}]_{0}\text{ are both in }q(X^{u}(\mathbf{z}_{i_{0}},\delta)).\]
Let \(\mathbf{x}\) and \(\mathbf{y}\) be points in \(X^{u}(\mathbf{z}_{i_{0}},\delta)\) representing \([\mathbf{x}]_{0}\) and \([\mathbf{y}]_{0}\) respectively. By assumption, \(x_{0}\neq y_{0}\) and
\[\mathrm{d}_{Y}(x_{K},y_{K})<2\delta<\beta\]
where \(K\) is the constant in Wieler's Axioms. Using the first of Wieler's Axioms,
\[0<\mathrm{d}_{Y}(x_{0},y_{0})\leq\gamma^{K}\mathrm{d}_{Y}(g^{K}(x_{0}),g^{K}(y _{0})).\]
Using this inequality and the fact that \(x_{0}\neq y_{0}\), we have that \(g^{K}(x_{0})\neq g^{K}(y_{0})\). If \(\mathrm{d}_{Y}(g^{K}(x_{0}),g^{K}(y_{0}))>\beta\), then we stop. Otherwise, since
\[\mathrm{d}_{Y}(x_{0},y_{0})<2\delta<\beta,\]
we can again use the first of Wieler's Axioms. We get that
\[\mathrm{d}_{Y}(g^{K}(x_{0}),g^{K}(y_{0}))\leq\gamma^{K}\mathrm{d}_{Y}(g^{2K}(x _{0}),g^{2K}(y_{0})).\]
Using this inequality and the previous one, we obtain
\[0<\mathrm{d}_{Y}(x_{0},y_{0})\leq\gamma^{2K}\mathrm{d}_{Y}(g^{2K}(x_{0}),g^{2K} (y_{0})).\]
Using the fact that \(0<\gamma<1\) and possibly repeating this process, there exists \(N\in\mathbb{N}\) such that \(d(g^{N}(x_{0}),g^{N}(y_{0}))>\beta\). Now,
\[r(\tilde{g}^{N}([\mathbf{x}]_{0}))=g^{N}(x_{0})\text{ and }r(\tilde{g}^{N}([ \mathbf{y}]_{0}))=g^{N}(y_{0}).\]
Therefore, \(\tilde{g}^{N}([\mathbf{x}]_{0})\) and \(\tilde{g}^{N}([\mathbf{y}]_{0})\) cannot both be in \(r^{-1}(B(w_{j},\frac{\beta}{2}))\) for any \(j\). Thus, \(\tilde{g}\) is forward orbit expansive.
For the second part of the theorem, it was already noted that \(\tilde{g}\) is a local homeomorphism and the existence of the required \(L\) follows from Lemma 2.9.
### Inverse limit space associated to the spectrum
The main goal of this section is to show that the inverse limit formed from \((X^{u}(P)/{\sim_{0}},\tilde{g})\) also gives the Wieler solenoid associated to \((Y,g)\). This is perhaps somewhat surprising in light of the fact that \(X^{u}(P)/{\sim_{0}}\) is often non-Hausdorff.
To fix notation, \((Y,g)\) is assumed to satisfy Wieler's axioms, \((X,\varphi)\) is the associated solenoid and \((X^{u}(P)/{\sim_{0}},\tilde{g})\) is as in the previous section. We let
\[\tilde{X}:=\varprojlim(X^{u}(P)/{\sim_{0}},\tilde{g})=\{(\tilde{y}_{n})_{n\in \mathbb{N}}=(\tilde{y}_{0},\tilde{y}_{1},\tilde{y}_{2},\ldots)\,|\,\tilde{g}( \tilde{y}_{i+1})=\tilde{y}_{i}\text{ for each }i\geq 0\}\]
with the map \(\tilde{\varphi}:\tilde{X}\to\tilde{X}\) be defined via
\[\tilde{\varphi}(\tilde{y}_{0},\tilde{y}_{1},\tilde{y}_{2},\ldots)=(\tilde{g}( \tilde{y}_{0}),\tilde{g}(\tilde{y}_{1}),\tilde{g}(\tilde{y}_{2}),\ldots)=( \tilde{g}(\tilde{y}_{0}),\tilde{y}_{0},\tilde{y}_{1},\ldots).\]
Again we will make use of Lemma 4.4 in [11], which was restated above as Lemma 2.9.
We have the two maps. The first one was defined in [11]; it is
\[r:X^{u}(P)/{\sim_{0}}\to Y\text{ defined via }[\mathbf{x}]_{0}\mapsto x_{0}\]
where we note that the definition of \({\sim_{0}}\) implies that \(r\) is well-defined. Furthermore, it is continuous and surjective.
The second map is
\[s:Y\to X^{u}(P)/{\sim_{0}}\text{ defined via }y\mapsto[g^{K_{0}}(y),g^{K_{0}- 1}(y),\ldots,g(y),y,\ldots]_{0}\]
where we note that the Lemma 4.4 in [11] implies that \(s\) is well-defined.
**Theorem 3.11**.: _Using the notation in the previous paragraphs, the maps \(r\) and \(s\) define a shift equivalence. That is, they satisfy_
\[r\circ\tilde{g}=g\circ r,\quad s\circ g=\tilde{g}\circ s,\quad r\circ s=g^{K_{0}}\quad\text{ and }\quad s\circ r=\tilde{g}^{K_{0}}.\]
_Moreover, the map \(S:X\to\tilde{X}\) defined via_
\[(y_{0},y_{1},y_{2},\ldots)\mapsto(s(y_{0}),s(y_{1}),s(y_{2}),\ldots)\]
_is a conjugacy from \((X,\varphi)\) to \((\tilde{X},\tilde{\varphi})\)._
Proof.: This follows from the statement and proof of Theorem 3.9. We note that the assumptions of Theorem 3.9 hold by Theorem 3.10. Also, the reader can check that the map \(s\) defined just before the statement of the current theorem is equal to the map \(s\) considered in Theorem 3.9.
**Corollary 3.12**.: _If \((X,\varphi)\) is an irreducible Smale space with totally disconnected stable sets, then there exists \((\tilde{Y},\tilde{g})\) such that \(\tilde{g}\) is a surjective local homeomorphism that is forward orbit expansive and \((X,\varphi)\) is conjugate to the solenoid associated to \((\tilde{Y},\tilde{g})\)._
Proof.: This follows from the previous result and Wieler's theorem.
### The relationship between \((X,\varphi)\), \((X^{u}(\mathbf{P})/{\sim_{0}},\tilde{g})\) and \((Y,g)\)
It follows from Theorem 3.11 that there is a continuous surjection from \(X\) to \(X^{u}(P)/{\sim_{0}}\). However, we can write it explicitly without reference to the conjugacy in Theorem 3.11:
**Theorem 3.13**.: _Define \(p:X\to X^{u}(P)/{\sim_{0}}\) via_
\[x\mapsto[y]_{0}\]
_where \(y\in X^{u}(P)\cap X^{s}(x,\frac{\epsilon_{X}}{2})\). Then \(p\) is a continuous surjection and for each \(z\in X^{u}(P)/{\sim_{0}}\), \(p^{-1}(z)\) is a Cantor set._
Proof.: To begin, we must show that \(p\) is well-defined. Suppose that \(y^{\prime}\) is another element in \(X^{u}(P)\cap X^{s}(x,\frac{\epsilon_{X}}{2})\). Firstly, since both \(y\) and \(y^{\prime}\) are in \(X^{s}(x,\frac{\epsilon_{X}}{2})\) we have that \(\pi_{0}(x)=\pi_{0}(y)=\pi_{0}(y^{\prime})\). Moreover, properties of the bracket imply that the map \(h:X^{u}(y,\epsilon_{X})\to X^{u}(y^{\prime},\epsilon_{X})\) defined via
\[z\mapsto[z,y^{\prime}]\]
is well defined and that \(\pi_{0}(z)=\pi_{0}(h(z))\) for each \(z\in X^{u}(y,\epsilon_{X})\). This implies that \([y]_{0}=[y^{\prime}]_{0}\).
An equivalent definition of \(p\) is the following: Let \(U(x,\frac{\epsilon_{X}}{2})\) denote the image of the bracket of the set \(X^{s}(x,\frac{\epsilon_{X}}{2})\times X^{u}(x,\frac{\epsilon_{X}}{2})\). Given \(x\in X\) and \(w\in X^{u}(P)\cap U(x,\frac{\epsilon_{X}}{2})\) then \(p(x)=[x,w]\). To see this is the same as the previous definition, we need only check that \([x,w]\) is an element of \(X^{u}(P)\cap X^{s}(x,\frac{\epsilon_{X}}{2})\) but this follows from the definitions of the bracket and the set \(U(x,\frac{\epsilon_{X}}{2})\).
_Remark 3.14_.: If \(g\) is a local homeomorphism satisfying Wieler's axioms, then \(X^{u}(P)/{\sim_{0}}=Y\) and the map \(p:X\to Y\) is a locally trivial bundle of Cantor sets. This fact follows from [8, Theorem 3.12]. Already for Williams solenoids, local triviality of \(p\) fails to hold in general [14].
Moving forward we will use an abuse of notation to identify sets of the form \(X^{s}(x,\delta_{1})\times X^{u}(x,\delta_{2})\) with their image in \(X\) under the bracket map. For example, using this convention, there would be no reference to the set \(U(x,\frac{\epsilon_{X}}{2})\) in the proof of the previous theorem.
**Lemma 3.15**.: _Suppose \(x\in X\) and \(0<\delta<\delta^{\prime}<\epsilon^{\prime}\). Then_
\[p(X^{s}(x,\delta)\times X^{u}(x,\epsilon^{\prime}))=p(X^{s}(x,\delta^{\prime}) \times X^{u}(x,\epsilon^{\prime})).\]
Proof.: Let \(z\in X^{s}(x,\delta^{\prime})\times X^{u}(x,\epsilon^{\prime})\). Then \([z,x]\in X^{u}(x,\epsilon^{\prime})\subseteq X^{s}(x,\delta)\times X^{u}(x, \epsilon^{\prime})\). Furthermore, taking \(w\in X^{u}(P)\cap U(x,\frac{\epsilon_{X}}{2})\), we have that
\[p(z)=[z,w]=[[z,x],w]=p([z,x])\]
as required.
**Theorem 3.16**.: _Using the notation in the past few paragraphs, we have that the following diagram commutes:_
\[\begin{CD}X@>{\varphi}>{}>X\\ @V{p}V{}V@V{p}V{}V\\ X^{u}(P)/{\sim_{0}}@>{\tilde{g}}>{}>X^{u}(P)/{\sim_{0}}\\ @V{r}V{}V@V{r}V{}V\\ Y@>{g}>{}>Y\end{CD}\]
_where \(r\) is defined above (see Section 2.2) and \(r\circ p\) is equal to the projection map \(\pi_{0}:X\to Y\)._
Proof.: The result follows directly from the definitions of the relevant maps; the details are omitted.
**Theorem 3.17**.: _The map \(p\) induces a bijection between periodic points of \(\varphi\) and periodic points of \(\tilde{g}\) and hence \(\varphi\) and \(\tilde{g}\) have the same zeta function. Furthermore, if the set of periodic points with respect to \(\varphi\) is dense in \(X\), then the set of periodic points with respect to \(\tilde{g}\) is dense in \(X^{u}(P)/{\sim_{0}}\)._
Proof.: The first statement follows from Theorem 3.11 by applying the proof of [41, Lemma 5.3] to our situation. The second follows since \(p\) is onto, so the image of a dense set in \(X\) under \(p\) is dense in \(X^{u}(P)/{\sim_{0}}\).
**Theorem 3.18**.: _The map \(r:X^{u}(P)/{\sim_{0}}\to Y\) induces a bijection between the periodic points of \(\tilde{g}\) and the periodic points of \(g\). Hence \(\tilde{g}\) and \(g\) have the same zeta function._
Proof.: This follows from Theorem 3.11 and [41, Theorem 5.2 Part (3)], see in particular the argument just before Lemma 5.3 on page 189 of [41].
**Theorem 3.19**.: _The following are equivalent_
1. \((X,\varphi)\) _is mixing,_
2. \((X^{u}(P)/{\sim_{0}},\tilde{g})\) _is mixing,_
3. \((Y,g)\) _is mixing._
Proof.: We only prove in detail that if \((X,\varphi)\) is mixing then \((X^{u}(P)/{\sim_{0}},\tilde{g})\) is mixing. Let \(U\) and \(V\) be non-empty open sets in \(X^{u}(P)/{\sim_{0}}\). Then \(p^{-1}(U)\) and \(p^{-1}(V)\) are non-empty open sets in \(X\) and, since \((X,\varphi)\) is mixing, there exists \(N\in\mathbb{N}\) such that
\[\varphi^{n}(p^{-1}(U))\cap p^{-1}(V)\neq\emptyset\text{ for each }n\geq N\]
Then, using the fact that \(p\) is onto,
\[p(\varphi^{n}(p^{-1}(U))\cap p^{-1}(V)) \subseteq(p(\varphi^{n}(p^{-1}(U))))\cap p(p^{-1}(V))\] \[\subseteq(\tilde{g}^{n}(p(p^{-1}(U))))\cap p(p^{-1}(V))\] \[=\tilde{g}^{n}(U)\cap V\]
This implies the result.
Notice that we have only used the following properties: \(p\) is onto, continuous and \(\tilde{g}\circ p=p\circ\varphi\). Thus to see that \((X^{u}(P)/{\sim_{0}},\tilde{g})\) is mixing implies that \((Y,g)\) is mixing, one replaces \(p\) with \(r\) in the previous argument.
## 4. Full projections
The main goal of this section is to prove that \(C^{*}(G_{0}(\mathbf{P}))\) contains a full projection. This result is used in Section 5 to construct unital Cuntz-Pimsner models. In light of [10], the existence of a full projection in \(C^{*}(G_{0}(\mathbf{P}))\) puts restrictions on the type of Fell algebras that appear as \(C^{*}(G_{0}(\mathbf{P}))\).
### Dynamical results
**Lemma 4.1**.: _Suppose \((X^{u}(P)/{\sim_{0}},\tilde{g})\) is mixing and \(U\) is a non-empty open set in \(X^{u}(P)/{\sim_{0}}\) such that there exists \(L\in\mathbb{N}\) that satisfies \(\tilde{g}^{L}(U)\subseteq U\). Then \(U\) is dense in \(X^{u}(P)/{\sim_{0}}\)._
Proof.: The result will follow by showing that if \(V\) is a non-empty open set in \(X^{u}(P)/{\sim_{0}}\) then \(U\cap V\neq\emptyset\). Since \(\tilde{g}\) is mixing, there exists \(N\in\mathbb{N}\) such that for each \(n\geq N\),
\[\tilde{g}^{n}(U)\cap V\neq\emptyset\]
Thus \(\emptyset\neq\tilde{g}^{N\cdot L}(U)\cap V\subseteq U\cap V\).
**Lemma 4.2**.: _Suppose that \((X,\varphi)\) is mixing and \(\emptyset\neq U\subseteq X^{u}(P)/{\sim_{0}}\) is open. Then there exists \(\emptyset\neq V\subseteq U\) open and \(N\in\mathbb{N}\) such that \(V\subseteq\tilde{g}^{N}(V)\)._
Proof.: Since periodic points of \((X,\varphi)\) are dense, there exists \(x\in p^{-1}(U)\) and \(N\in\mathbb{N}\) such that \(\varphi^{N}(x)=x\). Since \(\varphi\) and \(p\) are both continuous, there exists \(\delta>0\) and \(0<\delta^{\prime}<\epsilon^{\prime}\) such that \(\varphi^{N}(X^{u}(x,\delta))\subseteq X^{u}(x,\delta^{\prime})\) and \(p(X^{s}(x,\delta)\times X^{u}(x,\delta))\subseteq U\).
The set \(X^{s}(x,\delta)\times X^{u}(x,\delta)\) is open in \(X\) and so \(p(X^{s}(x,\delta)\times X^{u}(x,\delta))\) is open in \(X^{u}(P)/{\sim_{0}}\). We also have that \(X^{u}(x,\delta)\subseteq\varphi^{N}(X^{u}(x,\delta))\).
By Lemma 3.15 and the fact that \(\varphi\) contracts in the local stable direction
\[p(\varphi^{N}(X^{s}(x,\delta)\times X^{u}(x,\delta)))=p(X^{s}(x,\delta)\times \varphi^{N}(X^{u}(x,\delta)))\]
and using \(X^{u}(x,\delta)\subseteq\varphi^{N}(X^{u}(x,\delta))\), we have that
\[p(X^{s}(x,\delta)\times X^{u}(x,\delta))\subseteq p(\varphi^{N}(X^{s}(x,\delta)\times X^{u}(x,\delta)))=\tilde{g}^{N}(p(X^{s}(x,\delta)\times X^{u}(x,\delta)))\]
Hence \(V:=p(X^{s}(x,\delta)\times X^{u}(x,\delta))\) is a non-empty open subset of \(U\) satisfying \(V\subseteq\tilde{g}^{N}(V)\), as required.
**Theorem 4.3**.: _Suppose that \((X,\varphi)\) is mixing and \(U\subseteq X^{u}(P)/{\sim_{0}}\) is a non-empty, open set. Then there exists \(N\in\mathbb{N}\) such that \(\tilde{g}^{N}(U)=X^{u}(P)/{\sim_{0}}\)._
Proof.: By Lemma 4.2, there exists a nonempty open set \(V\) such that \(V\subseteq U\) and \(V\subseteq\tilde{g}^{K}(V)\). Moreover, the \(V\) constructed in Lemma 4.2 depended on \(\delta>0\) and so we denote the set associated to \(\delta>0\) as \(V_{\delta}\). In more detail, \(V_{\delta}=p(X^{s}(z,\delta)\times X^{u}(z,\delta))\) where \(z\) is a periodic point of period \(K\) in \(U\).
Form
\[G_{\delta}=\bigcup_{n\in\mathbb{N}}\tilde{g}^{n\cdot K}(V_{\delta}).\]
Notice that
\[G_{\delta}=p\left(\bigcup_{n\in\mathbb{N}}\varphi^{n\cdot K}(X^{s}(z,\delta) \times X^{u}(z,\delta))\right)\]
and that since \((X,\varphi)\) is mixing
\[\bigcup_{n\in\mathbb{N}}\varphi^{n\cdot K}(X^{s}(z,\delta)\times X^{u}(z, \delta))\]
is dense in \(X\). Since \(p\) is onto, it follows that \(G_{\delta}\) is dense for each valid \(\delta>0\). In particular, \(G_{\delta/2}\) is dense.
Next, suppose that \(\tilde{y}\in X^{u}(P)/{\sim_{0}}\) is a limit point of the set \(G_{\delta/2}\). We show \(\tilde{y}\in G_{\delta}\). From this, it follows that \(G_{\delta}=X^{u}(P)/{\sim_{0}}\). Using the compactness of \(X\) and the fact that
\[\bigcup_{n\in\mathbb{N}}\varphi^{n\cdot K}(X^{s}(z,\delta)\times X^{u}(z, \delta))\]
is dense in \(X\), there exists a sequence \((x_{k})_{k\in\mathbb{N}}\) in \(X\) converging to \(x\in X\) such that
\[x_{k}\in\bigcup_{n\in\mathbb{N}}\varphi^{n\cdot K}(X^{s}(z,\delta/2)\times X^ {u}(z,\delta/2))\]
for each \(k\) and \(p(x)=\tilde{y}\). There exists \(N\in\mathbb{N}\) such that \(x_{k}\in X^{s}(x,\frac{\delta}{2})\times X^{u}(x,\frac{\delta}{2})\) for \(k\geq N\). In particular, \([x,x_{N}]\) is well-defined and is an element in \(X^{u}(x_{N},\frac{\delta}{2})\). Lemma 3.15 implies that \(p([x,x_{N}])=p(x)=\tilde{y}\). Since \(x_{N}\in\bigcup_{n\in\mathbb{N}}\varphi^{n\cdot K}(X^{s}(z,\delta/2)\times X^{u}(z,\delta/2))\) there exists \(l\in\mathbb{N}\) such that
\[x_{N}\in\varphi^{l\cdot K}(X^{s}(z,\delta/2)\times X^{u}(z,\delta/2))\]
which implies that
\[\varphi^{-l\cdot K}(x_{N})\in X^{s}(z,\delta/2)\times X^{u}(z,\delta/2).\]
Since \([x,x_{N}]\) is an element in \(X^{u}(x_{N},\frac{\delta}{2})\), we have that
\[\varphi^{-l\cdot K}(x)\in X^{u}(\varphi^{-l\cdot K}(x_{N}),\frac{\delta}{2}).\]
The triangle inequality and the previous two statements imply that
\[\varphi^{-l\cdot K}(x)\in X^{s}(z,\delta)\times X^{u}(z,\delta).\]
and hence that
\[x\in\varphi^{l\cdot K}(X^{s}(z,\delta)\times X^{u}(z,\delta))\text{ and }\tilde{y}=p(x)\in G _{\delta}.\]
In summary, we have shown that \(G_{\delta}=X^{u}(P)/{\sim_{0}}\). This, along with the facts that \(X^{u}(P)/{\sim_{0}}\) is compact and \(V_{\delta}\subseteq\tilde{g}^{K}(V_{\delta})\), implies that there exists \(N\in\mathbb{N}\) such that \(\tilde{g}^{N}(V_{\delta})=X^{u}(P)/{\sim_{0}}\). The required result follows since \(V_{\delta}\subseteq U\).
### Existence of a full projection
**Lemma 4.4**.: _Let \(V\) be a basic set of \(G_{0}(\mathbf{P})\). Then, there exists \(N\in\mathbb{N}\) such that for any \(f\in C_{c}(G_{0}(\mathbf{P}))\) supported in another basic set and nonzero on \(V\), we have that the following two properties hold:_
1. _For each_ \(x\in X^{u}(P)\) _there exists_ \((\hat{x},\hat{w})\in\operatorname{supp}(\alpha^{N}(f))\) _such that_ \(x\sim_{0}\hat{x}\)_;_
2. _likewise for each_ \(b\in X^{u}(P)\) _there exists_ \((\hat{a},\hat{b})\in\operatorname{supp}(\alpha^{N}(f))\) _such that_ \(b\sim_{0}\hat{b}\)_._
Proof.: This follows by applying Theorem 4.3 to the open sets \(s(V)\) and \(r(V)\) and taking the maximum of the two resulting values of \(N\).
**Lemma 4.5**.: _Let \(V\) be a basic set of \(G_{0}(\mathbf{P})\). Then, there exists \(N\in\mathbb{N}\) such that for any \(f\in C_{c}(G_{0}(\mathbf{P}))\) supported in another basic set with \(||f|_{V}||>0\), we have that for each \(k\in C_{c}(G_{0}(\mathbf{P}))\) supported in yet another basic set, there exists \(f_{1}\), \(f_{2}\in C_{c}(G_{0}(\mathbf{P}))\) such that_
\[f_{1}\alpha^{N}(f)f_{2}=k\text{ and }||f_{1}||\leq||k||,||f_{2}||=\frac{1}{||f|_{V}||}\]
_Moreover, in this case, we have that \(\overline{C^{*}(G_{0}(\mathbf{P}))\alpha^{N}(f)C^{*}(G_{0}(\mathbf{P}))}=C^{* }(G_{0}(\mathbf{P}))\)._
Proof.: Let \(A=C^{*}(G_{0}(\mathbf{P}))\). The second part of the lemma follows from the first part. To see this, note that \(\overline{A\alpha^{N}(f)A}\) denotes the closed linear span of \(A\alpha^{N}(f)A\) and that the closed linear span of the set of compactly supported functions with support contained in a single basic set is \(A\).
We now prove the first part. Recall that \(q:X^{u}(\mathbf{P})\to X^{u}(\mathbf{P})/{\sim_{0}}\) denotes the quotient map. A basic set, \(U\), is of the form
\[\{(h_{U}(x),x)\mid x\in X^{u}(z,\delta)\}\]
where \(z\in X^{u}(\mathbf{P})\), \(\delta>0\) is small, and \(h_{U}\) is a local homeomorphism onto its image. Because we are dealing with a very specific groupoid, we have that \(h_{U}=(q|_{r(U)})^{-1}\circ q|_{s(U)}\). This in particular holds for the basic set \(U_{k}\) that contains the support of \(k\). We likewise let \(U_{f}\) denote the basic set containing the support of \(f\). By the previous lemma, there exists \(N\in\mathbb{N}\) such that
1. \(q(s(\varphi^{N}\times\varphi^{N}(U_{f})))=X^{u}(\mathbf{P})/{\sim_{0}}\) and
2. \(q(r(\varphi^{N}\times\varphi^{N}(U_{f})))=X^{u}(\mathbf{P})/{\sim_{0}}\).
It follows that we have \(V_{1}\subseteq s(\varphi^{N}\times\varphi^{N}(U_{f}))\) and \(V_{2}\subseteq r(\varphi^{N}\times\varphi^{N}(U_{f}))\) such that
1. \(q|_{s(U_{k})}\) is a homeomorphism from \(s(U_{k})\) to \(V_{1}\) and
2. \(q|_{V_{2}}\) is a homeomorphism from \(V_{2}\) to \(r(U_{k})\).
Since \(q(r(U_{k}))=q(s(U_{k}))\) and all the relevant maps are homeomorphism (because the domains have been restricted) we have
\[(q|_{r(U_{k})})^{-1}\circ q|_{s(U_{k})}=(q|_{r(U_{k})})^{-1}\circ q|_{V_{2}}\circ(q|_{V_{2}})^{-1}\circ q|_{V_{1}}\circ(q|_{V_{1}})^{-1}\circ q|_{s(U_{k})}.\]
Using this setup we can define \(f_{1}\) and \(f_{2}\) as follows. For \(y\in s(U_{k})\), let
\[f_{1}(((q|_{V_{1}})^{-1}\circ q|_{s(U_{k})})(y),y)=k(h_{U_{k}}(y),y)\]
and otherwise define \(f_{1}\) to be zero. For \(z\in V_{2}\), let
\[f_{2}(((q|_{r(U_{k})})^{-1}\circ q|_{V_{2}})(z),z)=\frac{1}{f(\varphi^{-N}(z), \varphi^{-N}(((q|_{V_{1}})^{-1}\circ q|_{V_{2}})(z)))}\]
and then extend \(f_{2}\) so that its support is contained in a slightly larger basic set. We note that \(f_{2}\) is well-defined because \(f\) is non-zero on \(V\). A short computation using the convolution product shows that \(f_{1}\) and \(f_{2}\) satisfy the requirements in the statement of the lemma.
**Theorem 4.6**.: _Suppose that \(a\in A=C^{*}(G_{0}(\mathbf{P}))\) is nonzero, then there exists \(N\in\mathbb{N}\) such that \(\alpha^{N}(a)\) is full in \(A.\) In particular, \(C^{*}(G_{0}(\mathbf{P}))\) contains a full projection._
Proof.: Let \(a\in A=C^{*}(G_{0}(\mathbf{P}))\) be nonzero. Without loss of generality we can and will assume that \(||a||=1\).
There exists a basic set \(U\in\mathcal{G}\) with the following property. If \(c\in C_{c}(\mathcal{G})\) with \(||c||=1\) and
\[||a-c||<\frac{1}{4}\]
then \(|c(x,y)|>\frac{1}{2}\) for each \((x,y)\in U\).
Take a basic set \(V\) such that \(V\subseteq\overline{V}\subseteq U\) and also take \(\tilde{f}\) a continuous bump function with support contained in \(U\) that is one on \(V\). Let \(N\in\mathbb{N}\) be as in the previous lemma, where \(N\) depends only on \(V\).
Let \(0\neq k\in C_{c}(\mathcal{G})\) be supported in a basic set and \(\epsilon>0\). Take \(b\in C_{c}(\mathcal{G})\) such that
\[||a-b||<\min\left\{\frac{1}{4},\frac{\epsilon}{||k||}\right\}\text{ and }||b||=1\]
By construction, the previous lemma can be applied to \(\tilde{f}b\tilde{f}\). We obtain \(f_{1}\) and \(f_{2}\) such that
\[f_{1}\alpha^{N}(\tilde{f}b\tilde{f})f_{2}=k\]
Therefore
\[||f_{1}\alpha^{N}(\tilde{f})\alpha^{N}(a)\alpha^{N}(\tilde{f})f_ {2}-k|| =||f_{1}\alpha^{N}(\tilde{f})\alpha^{N}(a)\alpha^{N}(\tilde{f})f_ {2}-f_{1}\alpha^{N}(\tilde{f})\alpha^{N}(b)\alpha^{N}(\tilde{f})f_{2}||\] \[\leq||f_{1}||||\alpha^{N}(\tilde{f})||||a-b||||\alpha^{N}(\tilde {f})||||f_{2}||\] \[<\epsilon\]
where we have used the estimates for \(||f_{i}||\) from the previous lemma and the fact that \(||\tilde{f}||=1\). In summary we have shown that the set of compactly supported functions with support in a single basic set is contained in \(\overline{A\alpha^{N}(a)A}\) and this implies the result.
For the second part, by the main result of [9], \(C^{*}(G_{0}(\mathbf{P}))\) contains a non-zero projection. Using the first part it therefore contains a full projection.
**Corollary 4.7**.: _There exists a projection \(p\in C_{c}(G_{0}(\mathbf{P}))\) which is full in \(C^{*}(G_{0}(\mathbf{P}))\). In particular, the subalgebra \(A_{p}:=pC^{*}(G_{0}(\mathbf{P}))p\) is a unital Fell algebra with spectrum \(X^{u}(\mathbf{P})/{\sim_{0}}\) and the dense unital subalgebra \(pC_{c}(G_{0}(\mathbf{P}))p\subseteq A_{p}\) is closed under holomorphic functional calculus._
Proof.: By Theorem 4.6, there exists a full projection \(p_{0}\in C^{*}(G_{0}(\mathbf{P}))\). The \(*\)-subalgebra \(C_{c}(G_{0}(\mathbf{P}))\subseteq C^{*}(G_{0}(\mathbf{P}))\) is closed under holomorphic functional calculus by [11, Proposition 7.4], so by standard density and functional calculus arguments there exists a projection \(p\in C_{c}(G_{0}(\mathbf{P}))\) and a unitary \(u\in 1+C^{*}(G_{0}(\mathbf{P}))\) such that \(p=up_{0}u^{*}\). Since \(p_{0}\) is full in \(C^{*}(G_{0}(\mathbf{P}))\), it follows that \(p\) is full in \(C^{*}(G_{0}(\mathbf{P}))\). It is clear that \(pC_{c}(G_{0}(\mathbf{P}))p\) is closed under holomorphic functional calculus from [11, Proposition 7.4].
_Remark 4.8_.: The authors were surprised by the length of the proof of Theorem 4.6 for the following reason. The stable algebra \(S=\varinjlim(C^{*}(G_{0}(\mathbf{P})),\alpha)\) is a simple \(C^{*}\)-algebra arising as an inductive limit, and by [9]\(S\) admits a full projection. The existence of projections
in \(S\) readily produces projections in \(C^{*}(G_{0}(\mathbf{P}))\), but fullness is not immediate even if it is conceivably provable from some soft argument. By the results of [10], there is no gain from \(C^{*}(G_{0}(\mathbf{P}))\) being a Fell algebra. As the following example shows, something extra is needed beyond the inductive limit, e.g., the proof we gave above relied on expansiveness.
For a stationary inductive limit \(S=\varinjlim(A,\alpha)\), where \(\alpha\) is an injective, nondegenerate \(*\)-homomorphism, such that \(S\) is simple and admits a projection (which is automatically full), one can ask whether there is a full projection in \(A\). The following example, due to Jamie Gabe, answers the question in the negative. Consider \(A:=C_{0}(\mathbb{N},\mathbb{K}(H))\) for an infinite-dimensional, separable Hilbert space \(H\). Take a pair of Cuntz isometries \(s\) and \(t\) on \(H\), i.e. \(s\) and \(t\) are isometries such that \(ss^{*}+tt^{*}=1\). We define \(\alpha:A\to A\) by
\[\alpha(f)(n):=\begin{cases}sf(0)s^{*}+tf(1)t^{*},&n=0,\\ f(n+1),&n>0.\end{cases}\]
It is clear from the construction that \(\alpha\) is an injective, nondegenerate \(*\)-homomorphism. Moreover, consider two non-zero elements \(f_{1},f_{2}\in S\) in the image of any of the functorial embeddings \(C_{c}(\mathbb{N},\mathbb{K}(H))\hookrightarrow S\). Then for sufficiently large \(N\), \(\alpha^{N}(f_{1}),\alpha^{N}(f_{2})\in S\) are both embeddings of non-zero elements in \(A\) supported in \(\{0\}\subseteq\mathbb{N}\). Since \(\mathbb{K}(H)\) is simple, we have that \(\overline{Sf_{1}S}=\overline{Sf_{2}S}\). From this observation, one can show that \(S\) is simple. In fact, one can show that \(S\cong\mathbb{K}(H)\). Since \(A\) admits no full projection, we can conclude our negative answer to the above question. One can note that in contrast to the expansive scenario of Theorem 4.6, the \(*\)-homomorphism in this counterexample is contracting on the spectrum of \(A\).
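For completeness, we spell out the final step (an elaboration we add, using only standard facts about \(C_{0}(\mathbb{N},\mathbb{K}(H))\)): if \(p\in A=C_{0}(\mathbb{N},\mathbb{K}(H))\) is a projection, then each \(p(n)\) is a projection and \(\|p(n)\|\to 0\), so
\[\|p(n)\|\in\{0,1\}\text{ for all }n\qquad\text{and hence}\qquad p(n)=0\text{ for all but finitely many }n.\]
Consequently every element of \(ApA\) is supported in a fixed finite subset of \(\mathbb{N}\), so the ideal generated by \(p\) is not dense and \(A\) admits no full projection.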
## 5. Cuntz-Pimsner models for stationary inductive limits
In this section we consider a purely \(C^{*}\)-algebraic setup; however, the prototypical example is the case of the \(C^{*}\)-algebras associated to a Wieler solenoid. The general setup is as follows. We consider a non-degenerate self-\(*\)-monomorphism \(\alpha:A\to A\) of a \(C^{*}\)-algebra \(A\). The prototypical example is \(A=C^{*}(G_{0}(\mathbf{P}))\) with \(\alpha\) given by the composition of \(C^{*}(G_{0}(\mathbf{P}))\subseteq C^{*}(G_{1}(\mathbf{P}))\) and the canonical isomorphism \(C^{*}(G_{1}(\mathbf{P}))\cong C^{*}(G_{0}(\mathbf{P}))\) from Theorem 2.4.
Before proceeding to Cuntz-Pimsner models, we make a remark about the stationary inductive limits.
**Proposition 5.1**.: _Write \(S:=\varinjlim(A,\alpha)\) and let \(\phi:S\to S\) denote the right shift in the direct limit. Then \(\phi\) is a well defined \(*\)-automorphism of \(S\)._
Proof.: The proof of this proposition is trivial once having parsed its statement. So parsing the statement we shall. By construction, \(S:=\bigoplus_{k\in\mathbb{N}}A/I\) where \(I\) is the ideal generated by \(a\delta_{k}-\alpha(a)\delta_{k+1}\) for \(a\in A\). Here we write \(a\delta_{k}\in\bigoplus_{k\in\mathbb{N}}A\) for the element \(a\) placed in position \(k\). In this notation \(\phi(a\delta_{k}\mod I)=a\delta_{k+1}\mod I\). The map \(\phi\) is well defined since it is induced from the right shift mapping on \(\bigoplus_{k\in\mathbb{N}}A\) that preserves \(I\). The map \(\phi\) is an inverse to the left shift mapping that coincides with the mapping \(\alpha(a\delta_{k}+I):=\alpha(a)\delta_{k}+I\). Thus \(\phi\) is a well defined \(*\)-automorphism.
Associated with the \(*\)-monomorphism \(\alpha:A\to A\) there is an \(A\)-Hilbert bimodule \(E:={}_{\alpha}A_{\mathrm{id}}\). That is, \(E=A\) as a vector space, with the action \(a.\xi.b:=\alpha(a)\xi b\) for \(a,b\in A\) and \(\xi\in E\). The right \(A\)-inner product is given by
\[\langle\xi_{1},\xi_{2}\rangle:=\xi_{1}^{*}\xi_{2}\in A.\]
It is readily verified that \(E\) is a right \(A\)-Hilbert module and that \(A\) acts as adjointable operators on the left. Consider the associated Cuntz-Pimsner algebra \(O_{E}\) and its core \(\mathcal{C}_{E}:=O_{E}^{U(1)}\) for the standard gauge action.
**Proposition 5.2**.: _The functorial property of inductive limits and the inclusion \(A\hookrightarrow\mathcal{C}_{E}\) induce isomorphisms \(S\cong\mathcal{C}_{E}\) and \(S\rtimes_{\phi}\mathbb{Z}\cong O_{E}\)._
Proof.: By [30] it holds that
\[\mathcal{C}_{E}=\varinjlim\mathbb{K}_{A}(E^{\otimes k}).\]
Since \(\mathbb{K}_{A}(E^{\otimes k})=A\) and \(\mathbb{K}_{A}(E^{\otimes k})\to\mathbb{K}_{A}(E^{\otimes k+1})\) coincides with \(\alpha\), it follows that \(S\cong\mathcal{C}_{E}\). The isomorphism \(S\cong\mathcal{C}_{E}\) is by definition the map functorially constructed from the inclusion \(A\hookrightarrow\mathcal{C}_{E}\).
Following the standard procedure of "extension of scalars" (see [30] or [3, Section 3.1]) let \(E_{\infty}:=E\otimes_{A}\mathcal{C}_{E}\) denote the self-Morita equivalence of \(\mathcal{C}_{E}\) induced by \(E\). For the details of the left action of \(\mathcal{C}_{E}\) on \(E_{\infty}\), see [3, Proposition 3.2]. There are natural isomorphisms
\[O_{E}\cong O_{E_{\infty}}\cong\mathcal{C}_{E}\ltimes E_{\infty}.\]
For details, see [3, Proposition 3.2]. Here \(\mathcal{C}_{E}\ltimes E_{\infty}\) denotes the generalized crossed product by the self-Morita equivalence \(E_{\infty}\). In terms of the isomorphism \(S\cong\mathcal{C}_{E}\) we can identify the \(\mathcal{C}_{E}\)-self Morita equivalence \(E_{\infty}\) with the \(S\)-self Morita equivalence \({}_{\phi}S_{\mathrm{id}}\). Therefore
\[\mathcal{C}_{E}\ltimes E_{\infty}\cong S\ltimes{}_{\phi}S_{\mathrm{id}}=S\rtimes_{\phi}\mathbb{Z}.\]
Proposition 5.2 has implications for the stable Ruelle algebra of a Wieler solenoid. It was proven in [11, Theorem 1.2] that \(C^{*}(G^{s}(\mathbf{P}))\cong\varinjlim(C^{*}(G_{0}(\mathbf{P})),\alpha)\) for the Fell subalgebra \(C^{*}(G_{0}(\mathbf{P}))\subseteq C^{*}(G^{s}(\mathbf{P}))\) with spectrum \(X^{u}(\mathbf{P})/{\sim_{0}}\). The next result extends this result to the stable Ruelle algebra, and is an immediate consequence of Proposition 5.2 and [11, Theorem 1.2].
**Corollary 5.3**.: _Consider a Wieler solenoid, and write \(E\) for the \(C^{*}(G_{0}(\mathbf{P}))-C^{*}(G_{0}(\mathbf{P}))\)-correspondence defined as \(E:=C^{*}(G_{0}(\mathbf{P}))\) as a right Hilbert module with left action defined from \(\alpha\). The stable Ruelle algebra \(C^{*}(G^{s}(\mathbf{P}))\rtimes\mathbb{Z}\) is isomorphic to the Cuntz-Pimsner algebra \(O_{E}\)._
We are also interested in unital Cuntz-Pimsner models. We first describe the general setup and return to the specifics of Wieler solenoids at the end of the section. They can be constructed by choosing a full projection \(p\in A\). For such a projection, define the unital \(C^{*}\)-algebra \(A_{p}:=pAp\). Set \(E_{p,k}:=\alpha^{k}(p)Ap\) equipped with the structure of an \(A_{p}\)-Hilbert bimodule defined by \(a.\xi.b:=\alpha^{k}(a)\xi b\) for \(a,b\in A_{p}\) and \(\xi\in E_{p,k}\). The right \(A_{p}\)-inner product on \(E_{p,k}\) is given by
\[\langle\xi_{1},\xi_{2}\rangle:=\xi_{1}^{*}\xi_{2}\in A_{p}.\]
We set \(E_{p}:=E_{p,1}\).
**Proposition 5.4**.: _Let \(p\in A\) be a full projection and define the unital \(C^{*}\)-algebra \(A_{p}:=pAp\). It then holds that_
1. \(E_{p,k}=E_{p}^{\otimes_{A}k}\)__
2. \(\mathbb{K}_{A_{p}}(E_{p}^{\otimes_{A}k})=A_{\alpha^{k}(p)}\)__
3. \(\mathcal{C}_{E_{p}}=\varinjlim(A_{\alpha^{k}(p)},\alpha)\) _is a unital_ \(C^{*}\)_-algebra which is stably isomorphic to_ \(S\)__
4. \(O_{E_{p}}\) _is a unital_ \(C^{*}\)_-algebra which is_ \(U(1)\)_-equivariantly stably isomorphic to_ \(S\rtimes_{\phi}\mathbb{Z}\)_._
Proof.: For (1), we have for any \(k,l\in\mathbb{N}\) that
\[E_{p,k}\otimes_{A}E_{p,l}=\overline{\alpha^{l}(\alpha^{k}(p)Ap)\alpha^{l}(p)Ap }=\alpha^{k+l}(p)Ap=E_{p,k+l}.\]
For (2), we have that
\[\mathbb{K}_{A_{p}}(E_{p}^{\otimes_{A}k})=E_{p}^{\otimes_{A}k}\otimes_{A}(E_{p }^{\otimes_{A}k})^{*}=\alpha^{k}(p)A\alpha^{k}(p)=A_{\alpha^{k}(p)}.\]
Item (3) follows from (2) and the fact that \(\varinjlim(A_{\alpha^{k}(p)},\alpha)\) coincides with \(pSp\) which is Morita equivalent to \(S\). Item (4) follows from (3) and Proposition 5.2.
We also note the following consequence on traces and KMS-weights that relies on [24].
**Theorem 5.5**.: _Let \((A,\alpha)\) be a stationary inductive system, \(p\in A\) a full projection and \(\beta\in\mathbb{R}\). There are bijections between the sets of following objects:_
1. _Tracial weights_ \(\tau\) _on_ \(A\) _such that_ \(\tau\circ\alpha=\mathrm{e}^{\beta}\tau\) _normalized by_ \(\tau(p)=1\)_._
2. _Tracial states_ \(\tau_{p}\) _on_ \(A_{p}\) _such that_ \(\mathrm{Tr}_{\tau_{p}}^{E_{p}}=\mathrm{e}^{\beta}\tau_{p}\)_._
3. _KMS_\({}_{\beta}\)_-weights_ \(\Phi\) _on_ \(S\rtimes_{\phi}\mathbb{Z}\) _normalized by_ \(\Phi(p)=1\)_._
_The bijection from the set of objects in (1) to the set of objects in (2) is given by restriction along \(A_{p}\hookrightarrow A\). The bijection from the set of objects in (3) to the set of objects in (1) is given by restriction along \(A\hookrightarrow S\rtimes_{\phi}\mathbb{Z}\)._
It is worth noting that the results from the previous section on the existence of full projections in the case of the Fell algebra associated to a Wieler solenoid imply that the previous theorem can be applied to the case of \((C^{*}(G_{0}(\mathbf{P})),\alpha)\) where \(\alpha\) is the composition of \(C^{*}(G_{0}(\mathbf{P}))\subseteq C^{*}(G_{1}(\mathbf{P}))\) with the isomorphism \(C^{*}(G_{1}(\mathbf{P}))\cong C^{*}(G_{0}(\mathbf{P}))\). In the case of Wieler solenoids, we can summarize the results of this section in the following corollary. Recall that Corollary 4.7 ensures the existence of a projection \(p\in C_{c}(G_{0}(\mathbf{P}))\) which is full in \(C^{*}(G_{0}(\mathbf{P}))\).
**Corollary 5.6**.: _Let \((Y,\mathrm{d}_{Y},g)\) be a Wieler pre-solenoid with associated Wieler solenoid \(X:=\varprojlim(Y,g)\). We fix a projection \(p\in C_{c}(G_{0}(\mathbf{P}))\) which is full in \(C^{*}(G_{0}(\mathbf{P}))\). Consider the unital Fell algebra \(A_{p}:=pC^{*}(G_{0}(\mathbf{P}))p\) and define the \(A_{p}\)-bimodule \(E_{p}:=\alpha(p)C^{*}(G_{0}(\mathbf{P}))p\). It then holds that_
1. _The Cuntz-Pimsner algebra_ \(O_{E_{p}}\) _is a unital_ \(C^{*}\)_-algebra which is_ \(U(1)\)_-equivariantly stably isomorphic to the stable Ruelle algebra_ \(C^{*}(G^{s}(\mathbf{P}))\rtimes\mathbb{Z}\)__
2. _The core of the Cuntz-Pimsner algebra_ \(\mathcal{C}_{E_{p}}=(O_{E_{p}})^{U(1)}\) _is a unital_ \(C^{*}\)_-algebra which is stably isomorphic to the stable algebra_ \(C^{*}(G^{s}(\mathbf{P}))\)_._
3. _If_ \((Y,\mathrm{d}_{Y},g)\) _is mixing, the equation of traces on_ \(A_{p}\)__ \[\mathrm{Tr}_{\tau}^{E_{p}}=\mathrm{e}^{\beta}\tau\] _only has a solution when_ \(\beta\) _is the topological entropy of_ \(X\)_, in which case there exists a one-dimensional space of solutions determined from the Bowen measure on_ \(X\) _and the correspondence in Theorem_ 5.5 _combined with item 1)._
For more details on the Bowen measure and how it determines traces and weights on the algebras associated with a Smale space, see [9, 23, 31].
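To make item 3) concrete (an illustration we add, using the standard facts that the inverse limit construction preserves topological entropy and that the Bowen measure of a mixing Smale space is its measure of maximal entropy): for the doubling map \(g(z)=z^{2}\) on \(Y=S^{1}\) (cf. Example 6.25 below) the topological entropy of the associated solenoid is \(\log 2\), so the trace equation
\[\mathrm{Tr}_{\tau}^{E_{p}}=\mathrm{e}^{\beta}\tau\]
on \(A_{p}\) admits a solution only for \(\beta=\log 2\), with the one-dimensional solution space determined by the Bowen measure as in item 3).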
## 6. The \(K\)-theory as a functor for compact, locally Hausdorff spaces
The goal of this section is to extend the \(K\)-theory functor from compact, Hausdorff spaces to compact, locally Hausdorff spaces. As mentioned in Remark 1.17, the process of taking spectra of Fell algebras does not define a functor when the morphisms between compact, locally Hausdorff spaces are taken to be continuous maps. Nevertheless, the \(K\)-theory of the associated Fell algebra is better behaved than the algebra itself.
In addition to the contravariant functor associated to (at least some) continuous maps between compact, locally Hausdorff spaces, there is also a wrong-way functor for self local homeomorphisms. Wrong way functoriality is particularly important for our study of the Fell algebra associated to a Wieler solenoid because it appears both in the inductive limit structure of the stable algebra and in the Cuntz-Pimsner model of the stable Ruelle algebra.
We start by defining \(K\)-theory of a compact, locally Hausdorff space \(\tilde{Y}\). As in Subsection 1.3, we choose a Hausdorff resolution \(\psi:X\to\tilde{Y}\). The associated groupoid \(R(\psi)\) is defined in Example 1.15. We define
\[K^{*}(\tilde{Y}):=K_{*}(C^{*}(R(\psi))).\]
This definition a priori depends on the choice of \(\psi\). We shall in the next subsection study functoriality and in the subsequent subsection show that, up to canonical isomorphism, \(K^{*}(\tilde{Y})\) is independent of the choice of \(\psi\).
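For orientation (a remark we add): if \(\tilde{Y}\) happens to be Hausdorff, one may take \(\psi=\operatorname{id}_{\tilde{Y}}\), in which case \(R(\operatorname{id}_{\tilde{Y}})\) is the trivial groupoid \(\tilde{Y}\) (cf. Example 6.9 below) and
\[K^{*}(\tilde{Y})=K_{*}(C^{*}(R(\operatorname{id}_{\tilde{Y}})))=K_{*}(C(\tilde{Y})),\]
so the definition recovers the ordinary topological \(K\)-theory of compact Hausdorff spaces.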
### Correspondences and maps between locally Hausdorff spaces
We will in this subsection study how maps between compact, locally Hausdorff spaces give rise to correspondences between the related Fell algebras \(C^{*}(R(\psi))\). Recall the terminology of a Hausdorff resolution of a compact, locally Hausdorff space \(\tilde{Y}\) from Definition 1.14. We shall make use of the following additional terminology.
**Definition 6.1**.: Let \(p_{1}:X_{1}\to\tilde{Y}_{1}\) and \(p_{2}:X_{2}\to\tilde{Y}_{2}\) be Hausdorff resolutions of two compact, locally Hausdorff spaces. A proper morphism \((\Pi,\pi):p_{1}\to p_{2}\) is a pair of continuous maps fitting into the commuting diagram
\[\begin{CD}X_{1}@>{\Pi}>{}>X_{2}\\ @V{p_{1}}V{}V@V{}V{p_{2}}V\\ \tilde{Y}_{1}@>{\pi}>{}>\tilde{Y}_{2}\end{CD}\]
such that \(\Pi:X_{1}\to X_{2}\) is a proper mapping. We say that \(\pi\) lifts to a proper morphism if there is a proper mapping \(\Pi:X_{1}\to X_{2}\) making \((\Pi,\pi)\) into a proper morphism.
The results of this subsection can be summarized in the following theorem.
**Theorem 6.2**.: _Let \(\tilde{Y}_{1}\) and \(\tilde{Y}_{2}\) be compact, locally Hausdorff spaces, and assume that_
\[\pi:\tilde{Y}_{1}\to\tilde{Y}_{2},\]
_is a continuous mapping. Fix Hausdorff resolutions \(p_{1}:X_{1}\to\tilde{Y}_{1}\) and \(p_{2}:X_{2}\to\tilde{Y}_{2}\)._
1. _Associated with this data there is a canonically associated_ \(C^{*}(R(p_{2}))-C^{*}(R(p_{1}))\)_-correspondence_ \(\operatorname{Corr}(\pi,p_{1},p_{2})\)_, see Definition_ 6.7_._
2. _The left action of_ \(C^{*}(R(p_{2}))\) _on_ \(\operatorname{Corr}(\pi,p_{1},p_{2})\) _is via_ \(C^{*}(R(p_{1}))\)_-compact operators if_ \(\pi\) _lifts to a proper morphism for some Hausdorff resolutions of_ \(\tilde{Y}_{1}\) _and_ \(\tilde{Y}_{2}\)_, for instance if_ \(\pi\) _is a local homeomorphism. In fact, if_ \(\pi\) _is a homeomorphism then_ \(\operatorname{Corr}(\pi,p_{1},p_{2})\) _is a Morita equivalence._
3. _If_ \(\pi^{\prime}:\tilde{Y}_{2}\to\tilde{Y}_{3}\) _is another continuous mapping to a space with a Hausdorff resolution_ \(p_{3}:X_{3}\to\tilde{Y}_{3}\)_, then there is a unitary isomorphism of_ \(C^{*}(R(p_{3}))-C^{*}(R(p_{1}))\)_-correspondences_ \[\operatorname{Corr}(\pi^{\prime},p_{2},p_{3})\otimes_{C^{*}(R(p_{2}))} \operatorname{Corr}(\pi,p_{1},p_{2})\cong\operatorname{Corr}(\pi^{\prime} \circ\pi,p_{1},p_{3}).\]
We shall start with an easy result for proper morphisms of Hausdorff resolutions.
**Proposition 6.3**.: _Given a proper morphism \((\Pi,\pi):p_{1}\to p_{2}\) of Hausdorff resolutions \(p_{1}:X_{1}\to\tilde{Y}_{1}\) and \(p_{2}:X_{2}\to\tilde{Y}_{2}\), the pullback map_
\[\Pi^{*}:C_{c}(R(p_{2}))\to C_{c}(R(p_{1})),\]
_along the proper groupoid homomorphism \((x,x^{\prime})\mapsto(\Pi(x),\Pi(x^{\prime}))\), is a well defined \(*\)-homomorphism that induces a \(*\)-homomorphism_
\[\Pi^{*}:C^{*}(R(p_{2}))\to C^{*}(R(p_{1})).\]
Recall the notion of a groupoid correspondence from Definition 1.23.
**Definition 6.4**.: For Hausdorff resolutions \(p_{1}:X_{1}\to\tilde{Y}_{1}\) and \(p_{2}:X_{2}\to\tilde{Y}_{2}\), we consider a diagram \(\mathfrak{X}\) of the form
\[X_{1}\xrightarrow{\;p_{1}\;}\tilde{Y}_{1}\xrightarrow{\;\pi\;}\tilde{Y}_{2}\xleftarrow{\;p_{2}\;}X_{2}\]
where \(\pi\) is a continuous map. From the diagram \(\mathfrak{X}\), we define the subspace \(Z_{\mathfrak{X}}\subseteq X_{1}\times X_{2}\) by
\[Z_{\mathfrak{X}}=\{(x,y)\in X_{1}\times X_{2}:\pi\circ p_{1}(x)=p_{2}(y)\}.\]
**Lemma 6.5**.: _The space \(Z_{\mathfrak{X}}\) defined above is a groupoid correspondence from \(R(p_{1})\) to \(R(p_{2})\)._
Proof.: The moment map for the right action will be defined as \(\sigma:Z_{\mathfrak{X}}\to R(p_{2})^{(0)}\), \((x,y)\mapsto(y,y)\). The map \(\sigma\) is an open map. Define the right action of \(R(p_{2})\) on \(Z_{\mathfrak{X}}\) by \((x,y)\cdot(y,y^{\prime})=(x,y^{\prime})\) for \((x,y)\in Z_{\mathfrak{X}}\) and \((y,y^{\prime})\in R(p_{2})\). This action is free and proper.
Define the moment map \(\rho:Z_{\mathfrak{X}}\to R(p_{1})^{(0)}\) for the left action by \((x,y)\mapsto(x,x)\), which is open and surjective. Define the action of \(R(p_{1})\) on \(Z_{\mathfrak{X}}\) by \((x_{1},x_{2})\cdot(x_{2},y)=(x_{1},y)\), for \((x_{1},x_{2})\in R(p_{1}),(x_{2},y)\in Z_{\mathfrak{X}}\), which is free and proper. Moreover, \(\rho\) induces a homeomorphism \(\overline{\rho}:Z_{\mathfrak{X}}/R(p_{2})\to R(p_{1})^{(0)}\).
The following proposition follows from Theorem 1.24.
**Proposition 6.6**.: _Let \(Z_{\mathfrak{X}}\) be the groupoid correspondence defined from a diagram \(\mathfrak{X}\) as in Definition 6.4. Then, the space \(C_{c}(Z_{\mathfrak{X}})\) becomes a pre-correspondence from \(C_{c}(R(p_{2}))\) to \(C_{c}(R(p_{1}))\) with the operations given as in Theorem 1.24._
**Definition 6.7**.: Given a diagram \(\mathfrak{X}\) as in Definition 6.4, we define the correspondence \(\operatorname{Corr}(\pi,p_{1},p_{2})\) from \(C^{*}(R(p_{2}))\) to \(C^{*}(R(p_{1}))\) as the completion of the pre-correspondence \(C_{c}(Z_{\mathfrak{X}})\). In other words \(\operatorname{Corr}(\pi,p_{1},p_{2})\) is a \(C^{*}(R(p_{2}))-C^{*}(R(p_{1}))\)-Hilbert \(C^{*}\)-module.
**Lemma 6.8**.: _Let \(Z_{\mathfrak{X}}\) be the groupoid correspondence defined from a diagram \(\mathfrak{X}\) as in Definition 6.4. If \(\pi\) is a continuous bijection then \(\operatorname{Corr}(\pi,p_{1},p_{2})\) is an imprimitivity bimodule from \(C^{*}(R(p_{2}))\) to \(C^{*}(R(p_{1}))\) with the left \(C^{*}(R(p_{2}))\)-valued inner product defined as follows. The \(C_{c}(R(p_{2}))\)-valued inner product \(\langle\langle\xi_{1},\xi_{2}\rangle\rangle\in C_{c}(R(p_{2}))\) of \(\xi_{1},\xi_{2}\in C_{c}(Z_{\mathfrak{X}})\) is defined as_
\[\langle\langle\xi_{1},\xi_{2}\rangle\rangle(y,y^{\prime})=\sum_{\begin{subarray}{c}x\in X\text{ with}\\ (x,x^{\prime})\in R(p_{1})\end{subarray}}\xi_{1}(x,y^{\prime})\overline{\xi_{2}(x,y)},\quad(y,y^{\prime})\in R(p_{2}), \tag{6.1}\]
_for \(x^{\prime}\in X\) with \((x^{\prime},y)\in Z_{\mathfrak{X}}\), and this left inner product is extended to a left \(C^{*}(R(p_{2}))\)-valued inner product on \(\operatorname{Corr}(\pi,p_{1},p_{2})\) by continuity._
Proof.: Notice that \(\sigma:Z_{\mathfrak{X}}\to R(p_{2})^{(0)}\) is an open map; and since \(\pi\) is a bijection, the map \(\sigma\) induces a bijection from \(R(p_{1})\backslash Z_{\mathfrak{X}}\) onto \(R(p_{2})^{(0)}\). Moreover, \(Z_{\mathfrak{X}}\) is a left principal \(R(p_{1})\)-space and a right principal \(R(p_{2})\)-space. Now, as in [27], the operation (6.1) defines a left semi-inner product, and the correspondence \({}_{C_{c}(R(p_{2}))}C_{c}(Z_{\mathfrak{X}})_{C_{c}(R(p_{1}))}\) becomes a pre-imprimitivity bimodule.
_Example 6.9_.: Let \(X_{1},X_{2}\) be compact Hausdorff spaces and let \(\pi:X_{1}\to X_{2}\) be a continuous map. The map \(\pi\) induces a unital homomorphism via pullback
\[\pi^{*}:C(X_{2})\to C(X_{1}),\qquad\pi^{*}(g)(x)=g\left(\pi(x)\right),\]
for \(g\in C(X_{2})\), \(x\in X_{1}\). The homomorphism \(\pi^{*}\) allows one to view \(C(X_{1})\) as a \(C(X_{2})-C(X_{1})\) correspondence. We show that this correspondence is isomorphic to the correspondence \({}_{C(X_{2})}C(Z_{\mathfrak{X}})_{C(X_{1})}\) associated to the following diagram.
\[X_{1}\xrightarrow{\;\operatorname{id}_{X_{1}}\;}X_{1}\xrightarrow{\;\pi\;}X_{2}\xleftarrow{\;\operatorname{id}_{X_{2}}\;}X_{2}\]
We have \(Z_{\mathfrak{X}}=\{(x,\pi(x))\in X_{1}\times X_{2}:x\in X_{1}\}\). Notice that \(R(\operatorname{id}_{X_{j}})=\{(x,x):x\in X_{j}\}\) for \(j=1,2\). Therefore, we may identify the groupoid \(R(\operatorname{id}_{X_{j}})\) with the trivial groupoid \(X_{j}\), for \(j=1,2\). The diagram above gives us the correspondence \({}_{C(X_{2})}C(Z_{\mathfrak{X}})_{C(X_{1})}\) with the operations
\[\xi\cdot f\left(x,\pi(x)\right) =\xi\left(x,\pi(x)\right)f(x)\] \[g\cdot\xi(x,\pi(x)) =g\left(\pi(x)\right)\xi\left(x,\pi(x)\right)\] \[\left\langle\xi_{1},\xi_{2}\right\rangle(x) =\overline{\xi_{1}\left(x,\pi(x)\right)}\xi_{2}\left(x,\pi(x)\right)\]
for \(\xi,\xi_{1},\xi_{2}\in C(Z_{\mathfrak{X}})\), \(g\in C(X_{2})\), and \(f\in C(X_{1})\). We can define the isomorphism of correspondences \(\phi:C(Z_{\mathfrak{X}})\to C(X_{1})\) by
\[\phi(\xi)(x)=\xi(x,\pi(x)).\]
Indeed, it is clear that \(\phi\) has the inverse \(\phi^{-1}(f)(x,\pi(x))=f(x)\) and a short computation shows that \(\phi\) is compatible with the correspondence structure.
**Lemma 6.10**.: _Assume we have Hausdorff resolutions \(p_{1}:X_{1}\to\tilde{Y}_{1}\), \(p_{2}:X_{2}\to\tilde{Y}_{2}\) and \(p_{3}:X_{3}\to\tilde{Y}_{3}\), and continuous maps \(\pi_{1}:\tilde{Y}_{1}\to\tilde{Y}_{2}\) and \(\pi_{2}:\tilde{Y}_{2}\to\tilde{Y}_{3}\). Write \(\mathfrak{X}_{1}\) for the diagram \(X_{1}\xrightarrow{p_{1}}\tilde{Y}_{1}\xrightarrow{\pi_{1}}\tilde{Y}_{2}\xleftarrow{p_{2}}X_{2}\), write \(\mathfrak{X}_{2}\) for the diagram \(X_{2}\xrightarrow{p_{2}}\tilde{Y}_{2}\xrightarrow{\pi_{2}}\tilde{Y}_{3}\xleftarrow{p_{3}}X_{3}\), and write \(\mathfrak{X}_{3}\) for the diagram \(X_{1}\xrightarrow{p_{1}}\tilde{Y}_{1}\xrightarrow{\pi_{2}\circ\pi_{1}}\tilde{Y}_{3}\xleftarrow{p_{3}}X_{3}\)._
_For \(f\in C_{c}(Z_{\mathfrak{X}_{2}})\), \(\xi\in C_{c}(Z_{\mathfrak{X}_{1}})\), the map \(F(f,\xi)\) defined by_
\[F(f,\xi)(x,r)=\sum_{\begin{subarray}{c}y\in X_{2}\text{ with}\\ \pi_{1}\circ p_{1}(x)=p_{2}(y)\end{subarray}}\xi(x,y)f(y,r)\]
_is an element of \(C_{c}(Z_{\mathfrak{X}_{3}})\)._
Proof.: Consider the closed subset
\[Z_{\mathfrak{X}_{1}}*Z_{\mathfrak{X}_{2}}=\left\{\left((x,y),(y,r)\right):(x,y)\in Z_{\mathfrak{X}_{1}}\text{ and }(y,r)\in Z_{\mathfrak{X}_{2}}\right\}\]
of \(Z_{\mathfrak{X}_{1}}\times Z_{\mathfrak{X}_{2}}\), and the continuous surjective map
\[V:Z_{\mathfrak{X}_{1}}*Z_{\mathfrak{X}_{2}}\to Z_{\mathfrak{X}_{3}},\;\;\;\; \;((x,y),(y,r))\mapsto(x,r).\]
Given \(f\) and \(\xi\) as in the statement of the lemma, we define the map \(g:Z_{\mathfrak{X}_{1}}*Z_{\mathfrak{X}_{2}}\to\mathbb{C}\) by
\[g\left((x,y),(y,r)\right)=\xi(x,y)f(y,r).\]
Then \(\operatorname{supp}(g)\) is contained in the compact set \(\operatorname{supp}(\xi)\times\operatorname{supp}(f)\), and thus \(g\) is compactly supported. We now show that \(\operatorname{supp}(F(f,\xi))\subset V(\operatorname{supp}g)\). Assume \((x,r)\notin V(\operatorname{supp}g)\). Then we have \(g(m)=0\) for any \(m\in V^{-1}\left((x,r)\right).\) Since
\[F(f,\xi)(x,r)=\sum_{m\in V^{-1}((x,r))}g(m),\]
we have \(F(f,\xi)(x,r)=0\), which completes the proof.
**Theorem 6.11**.: _In the notation of Lemma 6.10, the map \(\Phi\) defined by_
\[C_{c}(Z_{\mathfrak{X}_{2}})\otimes C_{c}(Z_{\mathfrak{X}_{1}})\to C_{c}(Z_{ \mathfrak{X}_{3}}),\ \ \ \ f\otimes\xi\mapsto F(f,\xi)\]
_is a \(C_{c}(R(p_{3}))-C_{c}(R(p_{1}))\) pre-correspondence isomorphism. The map \(\Phi\) induces an isomorphism of \(C^{*}(R(p_{3}))-C^{*}(R(p_{1}))\)-correspondences_
\[\operatorname{Corr}(\pi_{2},p_{2},p_{3})\otimes_{C^{*}(R(p_{2}))} \operatorname{Corr}(\pi_{1},p_{1},p_{2})\cong\operatorname{Corr}(\pi_{2}\circ \pi_{1},p_{1},p_{3}).\]
The proof is omitted as it only consists of long computations verifying that the module structure and inner products are respected by \(\Phi\). The following immediate corollary of Theorem 6.11 shows that the correspondence \(\operatorname{Corr}(\pi,p_{1},p_{2})\) only depends on \(\pi\) up to canonical Morita equivalence.
**Corollary 6.12**.: _Assume that \(\pi:\tilde{Y}\to\tilde{Y}^{\prime}\) is a continuous mapping between compact, locally Hausdorff spaces. Let \(p_{j}:X_{j}\to\tilde{Y}\) and \(p_{j}^{\prime}:X_{j}^{\prime}\to\tilde{Y}^{\prime}\) be surjective local homeomorphisms from locally compact, Hausdorff spaces, for \(j=1,2\). Then, for \(j=1,2\), \(\operatorname{Corr}(\pi,p_{1},p_{j}^{\prime})\) and \(\operatorname{Corr}(\pi,p_{2},p_{j}^{\prime})\) are Morita equivalent via \(\operatorname{Corr}(\operatorname{id}_{\tilde{Y}},p_{1},p_{2})\), and \(\operatorname{Corr}(\pi,p_{j},p_{1}^{\prime})\) and \(\operatorname{Corr}(\pi,p_{j},p_{2}^{\prime})\) are Morita equivalent via \(\operatorname{Corr}(\operatorname{id}_{\tilde{Y}^{\prime}},p_{1}^{\prime},p_{2}^{\prime})\)._
**Lemma 6.13**.: _In the notation of Lemma 6.10, if \(\pi_{1}\) lifts to a proper morphism \((\Pi_{1},\pi_{1}):p_{1}\to p_{2}\) then_
\[\operatorname{Corr}(\pi_{2},p_{2},p_{3})\otimes_{\Pi_{1}^{*}}C^{*}(R(p_{2})) \cong\operatorname{Corr}(\pi_{2}\circ\pi_{1},p_{1},p_{3}).\]
_Similarly, if \(\pi_{2}\) lifts to a proper morphism \((\Pi_{2},\pi_{2}):p_{2}\to p_{3}\) then_
\[C^{*}(R(p_{3}))\otimes_{\Pi_{2}^{*}}\operatorname{Corr}(\pi_{1},p_{1},p_{2}) \cong\operatorname{Corr}(\pi_{2}\circ\pi_{1},p_{1},p_{3}).\]
_In particular, if \((\Pi,\pi):p_{1}\to p_{2}\) is a proper morphism then there is a Morita equivalence of correspondences from \(\operatorname{Corr}(\pi,p_{1},p_{2})\) to the \(C^{*}(R(p_{2}))-C^{*}(R(p_{1}))\)-correspondence \(\Pi_{*}C^{*}(R(p_{1}))\), i.e. the right \(C^{*}(R(p_{1}))\)-Hilbert module \(C^{*}(R(p_{1}))\) with left action defined from \(\Pi^{*}\)._
Proof.: The proof of the first two isomorphisms is analogous to Lemma 6.10 and Theorem 6.11 and is omitted. To prove the final statement, we note that \(\operatorname{Corr}(\operatorname{id}_{\tilde{Y}_{1}},p_{1},p_{1})=C^{*}(R(p_ {1}))\) and so
\[\Pi_{*}C^{*}(R(p_{1}))=C^{*}(R(p_{2}))\otimes_{\Pi^{*}}\operatorname{Corr}( \operatorname{id}_{\tilde{Y}_{1}},p_{1},p_{1})\cong\operatorname{Corr}(\pi,p_ {1},p_{2}).\]
**Lemma 6.14**.: _Consider a diagram as in Definition 6.4. The left action of \(C^{*}(R(p_{2}))\) on \(\operatorname{Corr}(\pi,p_{1},p_{2})\) is via \(C^{*}(R(p_{1}))\)-compact operators if \(\pi\) lifts to a proper morphism for some Hausdorff resolutions of \(\tilde{Y}_{1}\) and \(\tilde{Y}_{2}\)._
Proof.: The statement that the left action of \(C^{*}(R(p_{2}))\) on \(\operatorname{Corr}(\pi,p_{1},p_{2})\) is via \(C^{*}(R(p_{1}))\)-compact operators is Morita invariant, so by Lemma 6.8 it suffices to prove the lemma for particular choices of Hausdorff resolutions \(p_{1}\) and \(p_{2}\). If it is the case that \(\pi\) lifts to a proper morphism for some Hausdorff resolutions of \(\tilde{Y}_{1}\) and \(\tilde{Y}_{2}\), we can therefore assume that \(\pi\) lifts to a proper morphism \(p_{1}\to p_{2}\). In this case, the left action of \(C^{*}(R(p_{2}))\) on \(\operatorname{Corr}(\pi,p_{1},p_{2})\) is via \(C^{*}(R(p_{1}))\)-compact operators by the final statement of Lemma 6.13.
### Functoriality in \(K\)-theory of compact, locally Hausdorff spaces
In this section we consider compact, locally Hausdorff spaces. We are interested in their \(K\)-theory. First we show that the \(K\)-theory of compact, locally Hausdorff spaces is well-defined.
**Proposition 6.15**.: _Consider a compact, locally Hausdorff space \(\tilde{Y}\). Defining the \(K\)-theory of \(\tilde{Y}\) as_
\[K^{*}(\tilde{Y}):=K_{*}(C^{*}(R(\psi))),\]
_for some Hausdorff resolution \(\psi:X\to\tilde{Y}\), produces a group uniquely determined up to canonical isomorphism._
Proof.: If \(\psi_{1}\) and \(\psi_{2}\) are two different Hausdorff resolutions of \(\tilde{Y}\), Lemma 6.8 shows that \(\operatorname{Corr}(\operatorname{id}_{\tilde{Y}},\psi_{1},\psi_{2})\) is a Morita equivalence producing the sought after isomorphism \(K_{*}(C^{*}(R(\psi_{1})))\cong K_{*}(C^{*}(R(\psi_{2})))\).
Next, we study the contravariant properties of \(K\)-theory for a sub-class of continuous mappings. In compliance with the results of the last subsection, we say that a continuous map
\[\pi:\tilde{Y}_{1}\to\tilde{Y}_{2},\]
is HRP (Hausdorff Resolution Proper) if \(\pi\) lifts to a proper morphism \(p_{1}\to p_{2}\) for some Hausdorff resolutions \(p_{1}:X_{1}\to\tilde{Y}_{1}\) and \(p_{2}:X_{2}\to\tilde{Y}_{2}\). We note the following consequence of Lemmas 6.13 and 6.14.
**Theorem 6.16**.: _If the continuous map of compact, locally Hausdorff spaces_
\[\pi:\tilde{Y}_{1}\to\tilde{Y}_{2},\]
_is HRP, there is an associated class \([\pi]\in KK_{0}(C^{*}(R(p_{2})),C^{*}(R(p_{1})))\) for any Hausdorff resolutions \(p_{1}:X_{1}\to\tilde{Y}_{1}\) and \(p_{2}:X_{2}\to\tilde{Y}_{2}\). Moreover, if \(\pi^{\prime}:\tilde{Y}_{2}\to\tilde{Y}_{3}\) is another HRP-map, then we have the following Kasparov product_
\[[\pi^{\prime}]\otimes_{C^{*}(R(p_{2}))}[\pi]=[\pi^{\prime}\circ\pi]\in KK_{0}( C^{*}(R(p_{3})),C^{*}(R(p_{1}))),\]
_for any Hausdorff resolution \(p_{3}:X_{3}\to\tilde{Y}_{3}\)._
Contravariance of \(K\)-theory of compact, locally Hausdorff spaces under HRP-maps is now immediate.
**Corollary 6.17**.: _Taking \(K\)-theory of compact, locally Hausdorff spaces defines a contravariant functor from the category of compact, locally Hausdorff spaces with morphisms being HRP-maps to the category of \(\mathbb{Z}/2\)-graded abelian groups._
We now turn to wrong way functoriality. Assume that \(\tilde{g}:\tilde{Y}_{1}\to\tilde{Y}_{2}\) is a surjective local homeomorphism. Our goal is the definition of a wrong way map from the \(K\)-theory of \(\tilde{Y}_{1}\) to that of \(\tilde{Y}_{2}\). Take a Hausdorff resolution \(p:X\to\tilde{Y}_{1}\). The following diagram is clearly commutative:
\[\begin{CD}X@>{\operatorname{id}_{X}}>{}>X\\ @V{p}V{}V@V{}V{\tilde{g}\circ p}V\\ \tilde{Y}_{1}@>{\tilde{g}}>{}>\tilde{Y}_{2}\end{CD}\]
Since the vertical maps are both surjective local homeomorphisms, \(\tilde{g}\circ p:X\to\tilde{Y}_{2}\) is a Hausdorff resolution. We can form the etale groupoids \(R(p)\) and \(R(\tilde{g}\circ p)\) over \(X\), whose associated groupoid algebras have spectrum \(\tilde{Y}_{1}\) and \(\tilde{Y}_{2}\), respectively. Furthermore, \(R(p)\subseteq R(\tilde{g}\circ p)\) as an open subgroupoid. This inclusion at the groupoid level leads to an inclusion of \(C^{*}\)-algebras, \(C^{*}(R(p))\subseteq C^{*}(R(\tilde{g}\circ p))\). The induced map on \(K\)-theory will be denoted by
\[\tilde{g}!:K^{*}(\tilde{Y}_{1})=K_{*}(C^{*}(R(p)))\to K_{*}(C^{*}(R(\tilde{g} \circ p)))=K^{*}(\tilde{Y}_{2}).\]
Following the same argument as in the proof of Proposition 6.15, we arrive at the following.
**Proposition 6.18**.: _If \(\tilde{g}:\tilde{Y}_{1}\to\tilde{Y}_{2}\) is a surjective local homeomorphism of compact, locally Hausdorff spaces, then the map_
\[\tilde{g}!:K^{*}(\tilde{Y}_{1})\to K^{*}(\tilde{Y}_{2}).\]
_is well-defined, i.e. independent of Hausdorff resolution._
The next proposition shows that we indeed have wrong way functoriality for surjective local homeomorphisms of compact, locally Hausdorff spaces.
**Proposition 6.19**.: _If \(\tilde{g}_{1}:\tilde{Y}_{1}\to\tilde{Y}_{2}\) and \(\tilde{g}_{2}:\tilde{Y}_{2}\to\tilde{Y}_{3}\) are surjective local homeomorphisms of compact, locally Hausdorff spaces, then_
\[\tilde{g}_{2}!\circ\tilde{g}_{1}!=(\tilde{g}_{2}\circ\tilde{g}_{1})!:K^{*}( \tilde{Y}_{1})\to K^{*}(\tilde{Y}_{3}).\]
Proof.: The claim follows from that both sides can be defined from the chain of inclusions
\[C^{*}(R(p))\subseteq C^{*}(R(\tilde{g}_{1}\circ p))\subseteq C^{*}(R(\tilde{g} _{2}\circ\tilde{g}_{1}\circ p))\]
where \(p:X\to\tilde{Y}_{1}\) is a Hausdorff resolution. The first inclusion defines \(\tilde{g}_{1}!\), the second inclusion defines \(\tilde{g}_{2}!\) and the composed inclusion defines \((\tilde{g}_{2}\circ\tilde{g}_{1})!\).
_Remark 6.20_.: We note that for wrong way maps we could also work at the level of \(KK\)-theory. Associated with a surjective local homeomorphism \(\tilde{g}:\tilde{Y}_{1}\to\tilde{Y}_{2}\), we can for any Hausdorff resolutions \(p_{1}:X_{1}\to\tilde{Y}_{1}\) and \(p_{2}:X_{2}\to\tilde{Y}_{2}\) define a wrong way class
\[[g!]:=\iota^{*}\operatorname{Corr}(\operatorname{id}_{\tilde{Y}_{2}},p_{2}, \tilde{g}\circ p_{1})\in KK_{0}(C^{*}(R(p_{1})),C^{*}(R(p_{2}))),\]
where \(\iota\) denotes the ideal inclusion \(C^{*}(R(p_{1}))\subseteq C^{*}(R(\tilde{g}\circ p_{1}))\). We can then identify \(g!\) with the Kasparov product from the right
\[\cdot\otimes_{C^{*}(R(p_{1}))}[g!]:KK_{*}(\mathbb{C},C^{*}(R(p_{1})))\to KK_{ *}(\mathbb{C},C^{*}(R(p_{2}))),\]
under the natural isomorphisms
\[KK_{*}(\mathbb{C},C^{*}(R(p_{1})))\cong K_{*}(C^{*}(R(p_{1})))\quad\text{and} \quad KK_{*}(\mathbb{C},C^{*}(R(p_{2})))\cong K_{*}(C^{*}(R(p_{2}))).\]
In this setting, the analogue of Proposition 6.19 is the Kasparov product statement that
\[[g_{1}!]\otimes_{C^{*}(R(p_{2}))}[g_{2}!]=[(g_{2}\circ g_{1})!]\in KK_{0}(C^{ *}(R(p_{1})),C^{*}(R(p_{3}))),\]
which can be verified using Theorem 6.11 and Lemma 6.13.
_Example 6.21_.: If we consider the non-Hausdorff dynamical system \((X^{u}(\mathbf{P})/{\sim_{0}},\tilde{g})\) associated with a Wieler solenoid, and the Hausdorff resolution \(q:X^{u}(\mathbf{P})\to X^{u}(\mathbf{P})/{\sim_{0}}\), then as an element of \(KK_{0}(C^{*}(G_{0}(\mathbf{P})),C^{*}(G_{0}(\mathbf{P})))\) we have that
\[[g!]:=\iota^{*}\operatorname{Corr}(\operatorname{id}_{X^{u}(\mathbf{P})/{ \sim_{0}}},q,\tilde{g}\circ q)=[E],\]
where \(E\) is the \(C^{*}(G_{0}(\mathbf{P}))-C^{*}(G_{0}(\mathbf{P}))\)-correspondence defined as \(E:=C^{*}(G_{0}(\mathbf{P}))\) as a right Hilbert module with left action defined from \(\alpha\). The correspondence \(E\) was considered in Corollary 5.3 and will play a role in \(K\)-theory computations below.
_Example 6.22_.: Let \(g:Y_{1}\to Y_{2}\) be a surjective local homeomorphism of compact, Hausdorff spaces. Write \(E_{g}\) for the finitely generated projective \(C(Y_{2})\)-module \(C(Y_{1})\), with right action defined from \(g^{*}\). We can equip \(E_{g}\) with the \(C(Y_{2})\)-valued right inner product
\[\langle f_{1},f_{2}\rangle(x):=\sum_{y\in g^{-1}(x)}\overline{f_{1}(y)}f_{2}( y),\quad x\in Y_{2},\ f_{1},f_{2}\in E_{g}=C(Y_{1}).\]
The algebra \(C(Y_{1})\) acts adjointably from the left on \(E_{g}=C(Y_{1})\) by pointwise multiplication. With these structures, \(E_{g}\) is a \(C(Y_{1})-C(Y_{2})\)-correspondence in which \(C(Y_{1})\) acts as \(C(Y_{2})\)-compact operators.
It is immediate from the definition of the inner product on \(E_{g}\) that \(\mathbb{K}_{C(Y_{2})}(E_{g})=C^{*}(R(g))\). By taking \(p=\operatorname{id}_{Y_{1}}\) in the constructions above, it follows that
\[[g!]=[E_{g}]\in KK_{0}(C(Y_{1}),C(Y_{2})).\]
In the case that \(Y=Y_{1}=Y_{2}\), the class \([E_{g}]\in KK_{0}(C(Y),C(Y))\) plays an important role in the \(K\)-theory, or more generally \(KK\)-theory, of the Cuntz-Pimsner algebra associated with the module \(E_{g}\)[16, 30]. For Wieler solenoids this is of interest due to the results of the next subsection.
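As a concrete instance (our own illustration, anticipating Example 6.25 below): take \(Y_{1}=Y_{2}=S^{1}\) and \(g(z)=z^{2}\). Every \(w\in S^{1}\) has exactly two preimages, so for \(f_{1},f_{2}\in E_{g}=C(S^{1})\),
\[\langle f_{1},f_{2}\rangle(w)=\sum_{v^{2}=w}\overline{f_{1}(v)}f_{2}(v),\qquad w\in S^{1},\]
and the class \([E_{g}]=[g!]\in KK_{0}(C(S^{1}),C(S^{1}))\) acts on \(K^{0}(S^{1})\cong\mathbb{Z}\) by multiplication by \(2\) and on \(K^{1}(S^{1})\cong\mathbb{Z}\) by the identity, in agreement with the description of the transfer map recalled in Example 6.25.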
### Wrong way maps and \(K\)-theory of the stable and unstable Ruelle algebra
In Corollary 5.3 we saw that the stable Ruelle algebra \(C^{*}(G^{s}(\mathbf{P}))\rtimes\mathbb{Z}\) of a Wieler solenoid is a Cuntz-Pimsner algebra over \(C^{*}(G_{0}(\mathbf{P}))\). From the general theory of Cuntz-Pimsner algebras, the \(K\)-theory of the stable Ruelle algebra can therefore be computed from \(K^{*}(X^{u}(\mathbf{P})/{\sim_{0}})\) - the \(K\)-theory of \(C^{*}(G_{0}(\mathbf{P}))\). Using the Kaminker-Putnam-Whittaker duality result [21], the \(K\)-theory of the unstable Ruelle algebra can similarly be computed from the \(K\)-homology group \(K_{*}(X^{u}(\mathbf{P})/{\sim_{0}}):=K^{*}(C^{*}(G_{0}(\mathbf{P})))\). Throughout the subsection, we tacitly identify the stable Ruelle algebra \(C^{*}(G^{s}(\mathbf{P}))\rtimes\mathbb{Z}\) with the Cuntz-Pimsner algebra \(O_{E}\) over \(C^{*}(G_{0}(\mathbf{P}))\) in Corollary 5.3.
**Theorem 6.23**.: _Assume that \((X,\phi)\) is a Wieler solenoid and write \((X^{u}(\mathbf{P})/{\sim_{0}},\tilde{g})\) for the associated non-Hausdorff dynamical system. The \(K\)-theory of the stable Ruelle algebra \(K_{*}(C^{*}(G^{s}(\mathbf{P}))\rtimes\mathbb{Z})\) fits into a six term exact sequence with the \(K\)-theory of \(X^{u}(\mathbf{P})/{\sim_{0}}\):_
\[\begin{CD}K^{0}(X^{u}(\mathbf{P})/{\sim_{0}})@>{1-\tilde{g}^{!}}>{}>K^{0}(X^{u}(\mathbf{P})/{\sim_{0}})@>{j_{S}}>{}>K_{0}(C^{*}(G^{s}(\mathbf{P}))\rtimes\mathbb{Z})\\ @A{\beta_{S}}A{}A@.@V{}V{\beta_{S}}V\\ K_{1}(C^{*}(G^{s}(\mathbf{P}))\rtimes\mathbb{Z})@<{j_{S}}<{}<K^{1}(X^{u}(\mathbf{P})/{\sim_{0}})@<{1-\tilde{g}^{!}}<{}<K^{1}(X^{u}(\mathbf{P})/{\sim_{0}})\end{CD}\]
_where \(j_{S}:C^{*}(G_{0}(\mathbf{P}))\hookrightarrow C^{*}(G^{s}(\mathbf{P}))\rtimes \mathbb{Z}\) denotes the inclusion and \(\beta_{S}\) is the Pimsner boundary map in \(KK_{1}(C^{*}(G^{s}(\mathbf{P}))\rtimes\mathbb{Z},C^{*}(G_{0}(\mathbf{P})))\), cf. [3, 16, 30]._
Proof.: By standard results for Cuntz-Pimsner algebras [3, 16, 30], we have a six term exact sequence
\[\begin{CD}K_{0}(C^{*}(G_{0}(\mathbf{P})))@>{1-\otimes_{C^{*}(G_{0}(\mathbf{P}))}[E]}>{}>K_{0}(C^{*}(G_{0}(\mathbf{P})))@>{j_{S}}>{}>K_{0}(C^{*}(G^{s}(\mathbf{P}))\rtimes\mathbb{Z})\\ @A{\beta_{S}}A{}A@.@V{}V{\beta_{S}}V\\ K_{1}(C^{*}(G^{s}(\mathbf{P}))\rtimes\mathbb{Z})@<{j_{S}}<{}<K_{1}(C^{*}(G_{0}(\mathbf{P})))@<{1-\otimes_{C^{*}(G_{0}(\mathbf{P}))}[E]}<{}<K_{1}(C^{*}(G_{0}(\mathbf{P})))\end{CD}\]
where \(E\) is the \(C^{*}(G_{0}(\mathbf{P}))-C^{*}(G_{0}(\mathbf{P}))\)-correspondence from Corollary 5.3. The theorem now follows from the identity \([\tilde{g}^{!}]=[E]\in KK_{0}(C^{*}(G_{0}(\mathbf{P})),C^{*}(G_{0}(\mathbf{P})))\) in Example 6.21.
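For later use in the examples below, we record a standard consequence of exactness (added here for convenience): the six term sequence of Theorem 6.23 splits into the two extensions
\[0\to\operatorname{coker}\Big(1-\tilde{g}^{!}\big|_{K^{i}(X^{u}(\mathbf{P})/{\sim_{0}})}\Big)\to K_{i}(C^{*}(G^{s}(\mathbf{P}))\rtimes\mathbb{Z})\to\ker\Big(1-\tilde{g}^{!}\big|_{K^{i+1}(X^{u}(\mathbf{P})/{\sim_{0}})}\Big)\to 0,\]
for \(i\in\{0,1\}\), with the superscript \(i+1\) read modulo \(2\).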
Using the Kaminker-Putnam-Whittaker duality result [21], we can derive an analogous result for the \(K\)-theory of the unstable Ruelle algebra \(C^{*}(G^{u}(\mathbf{P}))\rtimes\mathbb{Z}\). Here one uses \(K\)-homology of \(X^{u}(\mathbf{P})/{\sim_{0}}\) instead of \(K\)-theory, and the theory for \(K\)-homology of compact, locally Hausdorff spaces can be developed in the same way as that of \(K\)-theory in Subsection 6.2.
**Theorem 6.24**.: _Assume that \((X,\phi)\) is a Wieler solenoid and write \((X^{u}(\mathbf{P})/{\sim_{0}},\tilde{g})\) for the associated non-Hausdorff dynamical system. The \(K\)-theory of the unstable Ruelle algebra \(K_{*}(C^{*}(G^{u}(\mathbf{P}))\rtimes\mathbb{Z})\) fits into a six term exact sequence with the \(K\)-homology of \(X^{u}(\mathbf{P})/{\sim_{0}}\):_
\[\begin{CD}K_{0}(X^{u}(\mathbf{P})/{\sim_{0}})@>{1-[\tilde{g}^{!}]\otimes}>{}>K_{0}(X^{u}(\mathbf{P})/{\sim_{0}})@>{\beta_{U}}>{}>K_{0}(C^{*}(G^{u}(\mathbf{P}))\rtimes\mathbb{Z})\\ @A{j_{U}}A{}A@.@V{}V{j_{U}}V\\ K_{1}(C^{*}(G^{u}(\mathbf{P}))\rtimes\mathbb{Z})@<{\beta_{U}}<{}<K_{1}(X^{u}(\mathbf{P})/{\sim_{0}})@<{1-[\tilde{g}^{!}]\otimes}<{}<K_{1}(X^{u}(\mathbf{P})/{\sim_{0}})\end{CD}\]
_where \(j_{U}\) is Kaminker-Putnam-Whittaker dual to \(j_{S}\) and \(\beta_{U}\) is Kaminker-Putnam-Whittaker dual to the Pimsner boundary map in \(KK_{1}(C^{*}(G^{s}(\mathbf{P}))\rtimes\mathbb{Z},C^{*}(G_{0}(\mathbf{P})))\), cf. [3, 16, 30]._
### Groupoid homology
Although we have only discussed the \(K\)-theory of the stable and stable Ruelle groupoid \(C^{*}\)-algebras, one can also use the results in this paper to compute the homology of these groupoids. For the stable groupoid, one uses the main result of [11] (stated above in Theorem 2.4) and [15, Proposition 4.7]. Again, determining the map (now on groupoid homology) associated to the open inclusion \(G_{0}(\mathbf{P})\subseteq G_{1}(\mathbf{P})\) is the key to explicit computations. For the stable Ruelle groupoid, one uses the homology of the stable groupoid and [29, Lemma 1.3] (also see [7, Theorem 3.8]).
### Examples
In this section of the paper we discuss a few explicit examples. In the first few, the original map \(g\) is a local homeomorphism and hence \(X^{u}(\mathbf{P})/{\sim_{0}}=Y\) and \(\tilde{g}=g\). Nevertheless, these examples illustrate the importance of computing the wrong-way map in \(K\)-theory computation. We also summarize the case of the aab/ab-solenoid where \(X^{u}(\mathbf{P})/{\sim_{0}}\neq Y\) and \(\tilde{g}\neq g\), which is discussed in much more detail in [11]. In fact, typically \(g:Y\to Y\) is not a local homeomorphism, so that \(X^{u}(\mathbf{P})/{\sim_{0}}\neq Y\) and \(\tilde{g}\neq g\), see for example [8, Page 14] where tiling spaces are discussed.
_Example 6.25_.: Suppose \(Y\) is a manifold and \(g:Y\to Y\) is an expanding endomorphism as defined by Shub [35]. In this case, \(g\) is a covering map and hence a local homeomorphism. In [5] it is shown that the map \(g!\) is the transfer map associated to the cover, compare to Example 6.22 above. In particular, \(g!\) is a rational isomorphism in the case of \(K\)-theory and in the case of homology has even better properties, see [5] for details.
The case when \(Y\) is the circle and \(g\) is the two-fold cover from the circle to itself is a special case of this situation. It is well known, or one can check directly, that \(g!\) is given by multiplication by \(2\) in degree zero and the identity in degree one. Hence,
\[K_{*}(C^{*}(G^{s}(\mathbf{P})))\cong\left\{\begin{array}{cc}\mathbb{Z}\left[ \frac{1}{2}\right]&*=0\\ \mathbb{Z}&*=1\end{array}\right.\]
and
\[K_{*}(C^{*}(G^{s}(\mathbf{P}))\rtimes\mathbb{Z})\cong\left\{\begin{array}{cc} \mathbb{Z}&*=0\\ \mathbb{Z}&*=1\end{array}\right.\]
The reader can see [5] for other explicit computations of the transfer map in the case of flat manifolds.
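To illustrate how these groups arise (a verification we add, combining Example 6.21, the inductive limit description \(C^{*}(G^{s}(\mathbf{P}))\cong\varinjlim(C^{*}(G_{0}(\mathbf{P})),\alpha)\) from Section 5, and the extensions recorded after the proof of Theorem 6.23): since \(g\) is a local homeomorphism, \(K_{*}(C^{*}(G_{0}(\mathbf{P})))\cong K^{*}(S^{1})\cong\mathbb{Z}\) in both degrees, and by continuity of \(K\)-theory
\[K_{0}(C^{*}(G^{s}(\mathbf{P})))\cong\varinjlim\big(\mathbb{Z}\xrightarrow{\times 2}\mathbb{Z}\xrightarrow{\times 2}\cdots\big)\cong\mathbb{Z}\Big[\frac{1}{2}\Big],\qquad K_{1}(C^{*}(G^{s}(\mathbf{P})))\cong\varinjlim\big(\mathbb{Z}\xrightarrow{\operatorname{id}}\mathbb{Z}\xrightarrow{\operatorname{id}}\cdots\big)\cong\mathbb{Z}.\]
For the stable Ruelle algebra, \(1-g!\) is multiplication by \(-1\) on \(K^{0}(S^{1})\), so its kernel and cokernel vanish, and it is \(0\) on \(K^{1}(S^{1})\); the extensions above then reproduce \(K_{0}\cong K_{1}\cong\mathbb{Z}\) as displayed.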
The unstable and unstable Ruelle algebras associated to a Wieler solenoid are relevant in the study of the HK-conjecture. However, the stable and stable Ruelle algebras are not relevant for the HK-conjecture because the unit spaces of the relevant groupoids are not totally disconnected (except for the case of shifts of finite type). For example, in the case of a flat manifold \(Y\), we have that the unit space is \(\mathbb{R}^{\dim(Y)}\).
_Example 6.26_.: When the relevant Smale space is a two-sided shift of finite type, the relevant Wieler pre-solenoid is the one-sided shift of finite type, see [39]. In this case, the map \(g\) is the shift map and \(g!\) is not a rational isomorphism even though \(g\) is a local homeomorphism. This fact is in contrast with the previous example.
_Example 6.27_.: We will discuss the case of the \(p/q\)-solenoid [4] briefly. The details will be published elsewhere and were obtained at the same time as [5]. As such, we give a short summary of the results. Let \(S^{1}\) be the unit circle in the complex plane and \(1<q<p\) be positive integers with \(\gcd(p,q)=1\). The \(p/q\)-solenoid can be realized as an inverse limit where \(Y\) is the \(q\)-solenoid. That is,
\[Y=S_{q}=\{(z_{0},z_{1},z_{2},\ldots)\mid z_{i}\in S^{1}\text{ and }z_{i+1}^{q}=z_{i}\}\]
The map is defined via
\[g(z_{0},z_{1},z_{2},\ldots)=(z_{1}^{p},z_{2}^{p},\ldots)\]
where \((z_{0},z_{1},z_{2},\ldots)\in Y=S_{q}\).
Since \(g\) is a local homeomorphism in this example, \(K_{*}(C^{*}(G_{0}(\mathbf{P})))\cong K^{*}(S_{q})\). Furthermore, the \(K\)-theory of \(S_{q}\) is known and given by
\[K^{*}(S_{q})\cong\left\{\begin{array}{cc}\mathbb{Z}&*=0\\ \mathbb{Z}\left[\frac{1}{q}\right]&*=1\end{array}\right.\]
Then one computes that \(g!\) is multiplication by \(p\) in degree zero and the identity in degree one. Hence,
\[K_{*}(C^{*}(G^{s}(\mathbf{P})))\cong\left\{\begin{array}{ll}\mathbb{Z}\left[ \frac{1}{p}\right]&*=0\\ \mathbb{Z}\left[\frac{1}{q}\right]&*=1\end{array}\right.\]
and
\[K_{*}(C^{*}(G^{s}(\mathbf{P}))\rtimes\mathbb{Z})\cong\left\{\begin{array}{ll}\mathbb{Z}\left[\frac{1}{q}\right]&*=0\\ \mathbb{Z}\left[\frac{1}{q}\right]\oplus\mathbb{Z}/(p-1)\mathbb{Z}&*=1\end{array}\right.\]
_Example 6.28_.: When \((Y,g)\) is the aab/ab-solenoid the map \(g\) is not a local homeomorphism. The \(K\)-theory of \(C^{*}(G_{0}(\mathbf{P}))\) and the induced map \(\tilde{g}!\) were computed in [11]. A summary is as follows. We have that
\[K_{*}(C^{*}(G_{0}(\mathbf{P})))\cong\left\{\begin{array}{ll}\mathbb{Z} \oplus\mathbb{Z}&*=0\\ \mathbb{Z}&*=1\end{array}\right.\]
with \(\tilde{g}!\) given by \(\begin{bmatrix}2&1\\ 1&1\end{bmatrix}\) in degree zero and the identity in degree one. Hence,
\[K_{*}(C^{*}(G^{s}(\mathbf{P})))\cong\left\{\begin{array}{ll}\mathbb{Z} \oplus\mathbb{Z}&*=0\\ \mathbb{Z}&*=1\end{array}\right.\]
and
\[K_{*}(C^{*}(G^{s}(\mathbf{P}))\rtimes\mathbb{Z})\cong\left\{\begin{array}{ ll}\mathbb{Z}&*=0\\ \mathbb{Z}&*=1\end{array}\right.\]
Other similar computations, both for one dimensional solenoids and more general constructions from tiling spaces, can be found in [17, 18, 33, 37, 38, 42].
|
2309.04076 | Greening Large Language Models of Code | Large language models of code have shown remarkable effectiveness across
various software engineering tasks. Despite the availability of many cloud
services built upon these powerful models, there remain several scenarios where
developers cannot take full advantage of them, stemming from factors such as
restricted or unreliable internet access, institutional privacy policies that
prohibit external transmission of code to third-party vendors, and more.
Therefore, developing a compact, efficient, and yet energy-saving model for
deployment on developers' devices becomes essential.
To this aim, we propose Avatar, a novel approach that crafts a deployable
model from a large language model of code by optimizing it in terms of model
size, inference latency, energy consumption, and carbon footprint while
maintaining a comparable level of effectiveness. The key idea of Avatar is to
formulate the optimization of language models as a multi-objective
configuration tuning problem and solve it with the help of a Satisfiability
Modulo Theories (SMT) solver and a tailored optimization algorithm. The SMT
solver is used to form an appropriate configuration space, while the
optimization algorithm identifies the Pareto-optimal set of configurations for
training the optimized models using knowledge distillation. We evaluate Avatar
with two popular language models of code, i.e., CodeBERT and GraphCodeBERT, on
two popular tasks, i.e., vulnerability prediction and clone detection. We use
Avatar to produce optimized models with a small size (3 MB), which is
160$\times$ smaller than the original large models. On the two tasks, the
optimized models significantly reduce the energy consumption (up to 184$\times$
less), carbon footprint (up to 157$\times$ less), and inference latency (up to
76$\times$ faster), with only a negligible loss in effectiveness (1.67\% on
average). | Jieke Shi, Zhou Yang, Hong Jin Kang, Bowen Xu, Junda He, David Lo | 2023-09-08T02:20:44Z | http://arxiv.org/abs/2309.04076v3 | # Towards Smaller, Faster, and Greener Language Models of Code
###### Abstract.
Large language models of code have shown remarkable effectiveness across various software engineering tasks. Despite the availability of many cloud services built upon these powerful models, there remain several scenarios where developers cannot take full advantage of them, stemming from factors such as restricted or unreliable internet access, institutional privacy policies that prohibit external transmission of code to third-party vendors, and more. Therefore, developing a compact, efficient, and yet energy-saving model for deployment on developers' devices becomes essential.
To this aim, we propose Avatar, a novel approach that crafts a deployable model from a large language model of code by optimizing it in terms of model size, inference latency, energy consumption, and carbon footprint while maintaining a comparable level of effectiveness (e.g., prediction accuracy on downstream tasks). The key idea of Avatar is to formulate the optimization of language models as a multi-objective configuration tuning problem and solve it with the help of a Satisfiability Modulo Theories (SMT) solver and a tailored optimization algorithm. The SMT solver is used to form an appropriate configuration space, while the optimization algorithm identifies the Pareto-optimal set of configurations for training the optimized models using knowledge distillation. We evaluate Avatar with two popular language models of code, i.e., CodeBERT and GraphCodeBERT, on two popular tasks, i.e., vulnerability prediction and clone detection. We use Avatar to produce optimized models with a small size (3 MB), which is 160\(\times\) smaller than the original large models. On the two tasks, the optimized models significantly reduce the energy consumption (up to 184\(\times\) less), carbon footprint (up to 157\(\times\) less), and inference latency (up to 76\(\times\) faster), with only a negligible loss in effectiveness (1.67% on average). Compared to the state-of-the-art approach, Avatar also optimizes language models of code more effectively in all metrics.
Language Models of Code, Configuration Tuning, Multi-Objective Optimization
## 1. Lay abstract
Large language models of code have proven highly effective in various software engineering tasks, such as spotting program defects and helping developers write code. While many cloud services built on these models (e.g., GitHub Copilot) are now accessible, several factors, such as unreliable internet access (e.g., over 20% of GitHub Copilot's issues are related to network connectivity1) and privacy concerns (e.g., Apple has banned internal use of external AI tools to protect confidential data2), hinder developers from fully utilizing these services. Therefore, deploying language models of code on developers' devices like laptops appears promising. However, local deployment faces challenges: (1) Consumer-grade personal devices typically lack sufficient memory and the high-performance CPUs/GPUs required for efficient model execution; (2) Even if the hardware requirements are met, deploying the models on many devices can result in considerable energy consumption and carbon emissions, negatively impacting environmental sustainability.
Footnote 1: [https://github.com/org/community/discussions/categories/copilot?](https://github.com/org/community/discussions/categories/copilot?)
Footnote 2: [https://techcrunch.com/2023/05/19/apple-reportedly-limits-internal-use-of-ai-powered-tools-like-chatgpt-and-github-copilot](https://techcrunch.com/2023/05/19/apple-reportedly-limits-internal-use-of-ai-powered-tools-like-chatgpt-and-github-copilot)
To address these challenges, we present Avatar, an innovative approach that optimizes large language models of code and enables their deployment on consumer-grade devices. Avatar can optimize two popular models from a large size of 481 MB to a compact size of 3 MB, resulting in significant reductions in execution time, energy consumption, and carbon emissions by hundreds of times.
Our technique effectively lowers the entry barrier for leveraging large language models of code, making them available to ordinary developers without the need for high-performance computing equipment. Furthermore, it also contributes to a more sustainable and user-friendly software development environment.
## 1. Introduction
Recent years have seen a remarkable surge in Artificial Intelligence (AI)-powered services for software engineering, such as GitHub Copilot (Krishnan et al., 2017) and GitLab Auto DevOps (Krishnan et al., 2017). This surge has brought a new level of automation to the software development process, significantly improving developers' productivity and the quality of software products. According to an economic analysis report released by GitHub, AI-powered services for software development could boost the global GDP by over $1.5 trillion by 2030 (Krishnan et al., 2017).
The foundation of these AI-powered services lies in large language models of code (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2018; Krishnan et al., 2019). These models have shown superior performance in various software engineering tasks such as vulnerability detection (Krishnan et al., 2017; Krishnan et al., 2019) and code completion (Krishnan et al., 2019; Krishnan et al., 2020). However, the services that utilize language models of code are typically hosted in the cloud, giving rise to several issues such as data leakage concerns (Krishnan et al., 2017; Krishnan et al., 2019; Krishnan et al., 2019) and poor user experience due to network fluctuations.1 Therefore, there is a growing need for deploying these models within the integrated development environments (IDEs) on developers' local machines. However, recent studies (Krishnan et al., 2019; Krishnan et al., 2019) have highlighted several challenges associated with deploying language models of code, including their large size, long inference latency, high energy consumption, and considerable carbon footprint.
Footnote 1: All of these calculations on energy consumption and carbon footprint are based on the Machine Learning Emissions Calculator. [https://mlco2.github.io/impact/compute](https://mlco2.github.io/impact/compute).
Typically, language models of code are large-sized with numerous parameters. For example, CodeBERT (Krishnan et al., 2019) and GraphCodeBERT (Krishnan et al., 2019), two popular language models of code, both have 125 million parameters, resulting in a file size of about 500 megabytes (MB). The recently released Code Llama model is even larger at over 130 gigabytes (GB) (Krishnan et al., 2019). However, real-world deployment experiences, as observed by the Visual Studio team in deploying IDEs, have emphasized a preference for compact models, which are typically around 3 MB in size and can seamlessly function as IDE components or editor plug-ins even on low-end hardware devices (Krishnan et al., 2019). Meanwhile, language models perform billions of floating-point operations (FLOPs) during inference. These massive computations cause long inference latency, often taking over 1.5 seconds to return a prediction (Krishnan et al., 2019). Such delays can disrupt developers' workflow, ultimately resulting in a suboptimal user experience. Previous studies (Krishnan et al., 2019; Krishnan et al., 2019) suggest that for a model deployed in IDEs to offer developers instantaneous assistance, its inference latency should ideally be within a few tens of milliseconds at most. The inability of language models of code to meet the above requirements gives rise to usability issues, consequently impeding their widespread deployment within developers' IDEs.
Furthermore, and perhaps even more importantly, the billions of FLOPs during inference entail significant energy consumption and carbon footprint, raising concerns about environmental and climate sustainability. Considering a CodeBERT deployed in IDEs, a developer typically needs to run it thousands of times per day, which is a common usage amount (Krishnan et al., 2019). Such intensive usage results in an energy consumption of 0.32 kilowatt-hours (kWh), while a typical consumer-grade laptop has a battery capacity of around 70 watt-hours (Bahdan et al., 2017), i.e., 0.07 kWh. Consequently, a laptop's battery can only support a developer running CodeBERT for 0.22 hours, which is far from sufficient for a typical workday. This would frustrate developers and also hinder their ability to work flexibly in mobile environments. Moreover, the above energy cost of 0.32 kWh can translate into a considerable carbon footprint, amounting to approximately 0.14 kilograms of CO2 emissions. This carbon footprint is comparable to the emissions generated by driving a car for 0.6 miles.3 With the expected widespread adoption of language models of code by many software developers in the near future, the cumulative carbon footprint stemming from model inference will become an increasingly pressing issue.
Footnote 3: All of these calculations on energy consumption and carbon footprint are based on the Machine Learning Emissions Calculator. [https://mlco2.github.io/impact/compute](https://mlco2.github.io/impact/compute).
To date, few approaches have emerged to address the above issues (Krishnan et al., 2019; Krishnan et al., 2019). Shi et al. (Shi et al., 2019) propose Compressor, the state-of-the-art approach that can compress language models of code down to 3 MB and thereby improve their inference latency. Compressor adopts the knowledge distillation technique (Shi et al., 2019) to transfer knowledge from a large model to a tiny one with a well-crafted architecture searched by their proposed genetic algorithm. However, while Compressor excels at optimizing the model size and inference latency, it does not encompass the optimization of two other critical aspects, i.e., energy consumption and carbon footprint. Additionally, Compressor's search space for small model architectures is limited solely to hyperparameters related to model size, like the number of network layers. This limited scope excludes configurations that can significantly affect a model's effectiveness, like the choice of tokenizer (Krishnan et al., 2019). Consequently, it falls short of identifying the optimal small model. These limitations necessitate our work. Our work still follows the idea of using knowledge distillation to optimize language models for the sake of size and inference latency, but offers a novel take on simultaneously addressing the issues of energy consumption and carbon footprint.
This paper proposes Avatar, a novel approach aimed at optimizing language models of code for real-world deployment. Avatar accomplishes this by formulating the seeking of an optimal model as a multi-objective configuration tuning problem, where the optimization objectives include the simultaneous minimization of model size, inference latency, energy consumption, and carbon footprint, while maintaining effectiveness (e.g., prediction accuracy) on downstream tasks.
Avatar starts by identifying the key configurations within language models that impact the above objectives. It then innovatively combines a Satisfiability Modulo Theories (SMT) solver with a tailored multi-objective optimization algorithm to solve the configuration tuning problem. The SMT solver is used to construct a configuration space that adheres to the 3 MB model size constraint, while the multi-objective optimization algorithm identifies the Pareto-optimal set of configurations, i.e., the set of configurations that cannot be improved in one objective without making sacrifices in another, thereby achieving the best trade-off among all objectives. To efficiently obtain the effectiveness of models during optimization without the need for expensive training and evaluation processes,
Avatar builds a regression model serving as an effectiveness indicator. This indicator estimates a model's effectiveness solely based on its configurations, facilitating the quick identification of the Pareto-optimal configurations. Finally, Avatar leverages knowledge distillation to train a compact and environmentally-friendly model using the configurations from the Pareto-optimal set.
We evaluate Avatar using the same settings as the baseline method (Vaswani et al., 2017). Our evaluation focuses on optimizing two representative language models of code: CodeBERT (Krishnan et al., 2017) and GraphCodeBERT (Krishnan et al., 2017). We utilize two datasets for popular automated software engineering tasks: vulnerability prediction and clone detection. With Avatar, we produce optimized models with a compact size of 3 MB, a reduction of 160\(\times\) compared to the original large language models. Across both tasks, these optimized models show a remarkable improvement in various aspects. They reduce inference latency by up to 76\(\times\) compared to the original models, optimize energy consumption by up to 184\(\times\) less, and reduce carbon footprint by up to 157\(\times\) less. Importantly, these optimizations incur only a negligible loss in effectiveness, averaging 1.67%. Notably, Avatar outperforms the baseline method, Compressor, across all metrics. On average, Avatar achieves a 0.75% higher prediction accuracy. Additionally, it exhibits significant improvements in terms of inference latency (44\(\times\) faster on average), energy consumption (up to 8\(\times\) less), and carbon footprint (up to 7\(\times\) less). Moreover, we also highlight the benefits of Avatar in the context of cloud deployment, showing that the optimized models can process up to 9.7\(\times\) more queries per second than the original large language models of code.
The contributions of this paper are summarized as follows:
* **Insight:** We are the first to propose optimizing language models of code in terms of the energy consumption and carbon footprint by tuning their configurations.
* **Technique:** We propose and implement Avatar, a novel approach that uses an SMT solver and a tailored multi-objective optimization algorithm to optimize language models of code in terms of model size, inference latency, energy consumption, and carbon footprint, while maintaining effectiveness.
* **Evaluation:** We perform a thorough evaluation of Avatar, and the results show that Avatar effectively optimizes language models of code, greatly outperforming the state-of-the-art approach.
## 2. Preliminaries
**Language Models of Code and Their Configurations.** The recent development and adoption of language models of code have enabled state-of-the-art results to be achieved on code-related tasks (Vaswani et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019). These powerful models are mainly built upon the Transformer architecture (Vaswani et al., 2017) and trained on large datasets of source code from various programming languages. Among these models, a notable category is encoder-only models such as CodeBERT (Krishnan et al., 2017) and GraphCodeBERT (Krishnan et al., 2017), which utilize solely the encoder component of Transformer and are specialized for program understanding tasks such as vulnerability detection (Wang et al., 2018) and code search (Wang et al., 2019). These encoder-only models represent the software engineering community's early efforts at language models of code (Vaswani et al., 2017). Due to their pioneering status, these models have long been used in various real-world applications like the Akvelcon code search engine (Brock et al., 2018). This has led to widespread popularity and social impact and thus motivated our study to focus on these models.
Typically, encoder-only language models of code have a number of configurations that can be tuned to achieve varying levels of model performance. Listing 1 shows an example of tunable configurations from the Hugging Face's implementation (Hugging Face, 2018), with a total number of 13. Six of these configurations directly impact model size and inference latency, including the number of hidden layers, hidden size (i.e., the dimension of hidden layers), number of attention heads, vocabulary size, intermediate size (i.e., the dimension of feed-forward layers), and maximum sequence length. Larger values in these configurations tend to result in larger model sizes and longer inference latency, while smaller values may compromise model effectiveness (e.g., prediction accuracy). Compressor(Vaswani et al., 2017) focuses solely on tuning these configurations to optimize model size and inference latency at the cost of effectiveness.
However, there exist seven additional configurations that also contribute to model effectiveness. These include the choice of tokenizer, activation function for hidden layers, type of position embeddings, dropout rates for hidden layers and attention heads, learning rate, and batch size. For example, the choice of a tokenizer can affect a model's ability to capture the semantics of source code (Wang et al., 2018; Wang et al., 2019; Wang et al., 2019), thus impacting its overall effectiveness. In this study, we aim to tune all 13 configurations to achieve the best trade-off between model effectiveness and efficiency. We discuss the tuning space of these configurations and how to tune them in Section 3.
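As a rough illustration of how such a set of configurations translates into an actual model object, the sketch below uses Hugging Face Transformers; CodeBERT follows the RoBERTa architecture, so `RobertaConfig` exposes most of the knobs discussed above. The concrete values are illustrative assumptions, not the defaults of Listing 1.

```python
# A minimal sketch: build a small RoBERTa-style model from tuned configurations.
from transformers import RobertaConfig, RobertaForSequenceClassification

config = RobertaConfig(
    vocab_size=6000,                      # illustrative values throughout
    num_hidden_layers=3,
    hidden_size=96,                       # must be divisible by num_attention_heads
    num_attention_heads=4,
    intermediate_size=192,
    max_position_embeddings=130,          # max sequence length plus special positions
    hidden_act="gelu",
    hidden_dropout_prob=0.1,
    attention_probs_dropout_prob=0.1,
    position_embedding_type="absolute",
    num_labels=2,                         # e.g., vulnerable vs. non-vulnerable
)
student = RobertaForSequenceClassification(config)
print(sum(p.numel() for p in student.parameters()))  # parameter count of the small model
```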
**Knowledge Distillation.** Knowledge distillation has proven to be an effective technique for optimizing large language models in terms of model size (Wang et al., 2018; Wang et al., 2019; Wang et al., 2019). It compresses a large model (referred to as the teacher model) by training a small model (the student model) to mimic the behaviors of the large one (i.e., produces the same output given the same input) (Vaswani et al., 2017; Wang et al., 2019; Wang et al., 2019).
In line with recent work (Vaswani et al., 2017), our study leverages a task-specific distillation method introduced by Hinton et al. (2017) to optimize language models of code. The algorithm of this method is shown in Listing 2. Specifically, given a language model of code that is fine-tuned for a specific task and a small model to be trained, we input training data into both models, collect the resulting output
probability values (line 15), and then update the parameters of the small model (line 8) to minimize the training loss computed by the function shown in line 7. The intuition behind minimizing this loss function is to bring the outputs of the language and small models closer together. \(p_{i}\) and \(q_{i}\) in this function denote the outputs of the large and small models, respectively. \(T\) is the softmax function's temperature parameter, as Hinton et al. (2011) introduced. Note that the language model producing \(p_{i}\) is fixed during the distillation process, while the small model producing \(q_{i}\) is trained.
Note that the above loss function does not necessitate ground-truth labels, only requiring the model's outputs. Thus, we follow Compressor(Ghezani et al., 2017) to use unlabeled data for training. This choice is driven by the practical consideration that obtaining labeled data is typically costly and challenging, while ample unlabeled data can be readily collected from open-source software platforms like GitHub.
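To make this step concrete, the following is a minimal PyTorch-style sketch of the temperature-scaled soft-target loss described above. It is not the paper's Listing 2; the temperature value and the commented-out training-step calls (teacher, student, optimizer, batch) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(teacher_logits: torch.Tensor,
                      student_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    # Soften both output distributions with the temperature T
    p = F.softmax(teacher_logits / temperature, dim=-1)          # teacher probabilities p_i
    log_q = F.log_softmax(student_logits / temperature, dim=-1)  # log of student probabilities q_i
    # Cross-entropy between the two soft distributions, rescaled by T^2
    return -(p * log_q).sum(dim=-1).mean() * temperature ** 2

# One training step on an unlabeled batch: the fine-tuned teacher stays frozen,
# only the student's parameters receive gradient updates.
# with torch.no_grad():
#     teacher_logits = teacher(**batch).logits
# loss = distillation_loss(teacher_logits, student(**batch).logits)
# loss.backward(); optimizer.step(); optimizer.zero_grad()
```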
## 3. Methodology
### Problem Formulation
As introduced in Section 1, we aim to optimize the model size, inference latency, energy consumption, and carbon footprint of language models of code while maintaining their effectiveness (e.g., prediction accuracy on downstream tasks). Among these objectives, the inference latency, energy consumption, and carbon footprint are all related to the model's computational cost during inference. We use floating-point operations (FLOPs) to measure computational cost, following prior studies (Zhou et al., 2018; Zhou et al., 2018; Ghezani et al., 2018). FLOPs count how many multiply and accumulate operations the model performs for each prediction. The more FLOPs a model has, the more time it will take to make a prediction, the more energy it will consume, and the more CO\({}_{2}\) it will emit (Zhou et al., 2018). Therefore, we use FLOPs as the proxy for these three objectives. Then, combined with the model size and effectiveness, we formulate our optimization problem as follows:
\[\begin{array}{ll}\min_{c}&\{\texttt{size}(c),\texttt{FLOPs}(c),-\texttt{effectiveness}(c)\}\\ \texttt{s.t.}&c\in C\end{array} \tag{1}\]
where \(c\) is a set of configurations, and \(C\) defines the configuration space, as illustrated in Listing 3. Most of these configurations offer a range of adjustable integer or decimal values. For instance, the vocabulary size is adjustable to any integer value ranging from 1,000 to 50,265. Some others involve selecting from predefined options. The tokenizer requires a choice among four popular tokenization methods: Byte-Pair Encoding (Zhou et al., 2018), WordPiece (Wu et al., 2018), Unigram (Wu et al., 2018), and Word (Wu et al., 2018). Additionally, we set the hidden activation function and position embedding type as tunable configurations following the Hugging Face's implementation (Hugging Face, 2018), which includes a few more advanced options than the original implementation of language models. The hidden activation function requires a choice from four options: Gaussian Error Linear Unit (GELU) (Zhou et al., 2018), Rectified Linear Unit (ReLU) (Ghezani et al., 2018), Sigmoid Linear Unit (SILU) (Ghezani et al., 2018), and a new GELU implementation (GELU_new) (Hugging Face, 2018). The position embedding type offers three choices: absolute, relative_key(Zhou et al., 2018), and relative_key_query(Ghezani et al., 2018). In total, the configuration space contains about \(4.5\times 10^{19}\) possible sets of configurations, which is much larger than the one used by Compressor that only tunes 5 configurations. Our configuration space is also extensible to include more configurations or more options for existing configurations, such as more tokenizer choices. Here we focus on the configuration space shown in Listing 3 as studies (Zhou et al., 2018; Ghezani et al., 2018) and Hugging Face's implementation (Hugging Face, 2018) have explicitly shown that these configurations and options have a significant impact on model effectiveness.
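For illustration, the configuration space could be written down in code roughly as below. Only the categorical options spelled out above (tokenizer, hidden activation, position-embedding type) and the vocabulary-size bounds come from the text; the remaining ranges are placeholder assumptions, since Listing 3 is not reproduced here.

```python
# Illustrative sketch of the 13 tunable configurations and their domains.
CONFIG_SPACE = {
    "tokenizer": ["BPE", "WordPiece", "Unigram", "Word"],
    "vocab_size": range(1000, 50266),
    "num_hidden_layers": range(1, 13),             # placeholder range
    "hidden_size": range(16, 769),                 # placeholder range
    "num_attention_heads": range(1, 13),           # placeholder range
    "intermediate_size": range(32, 3073),          # placeholder range
    "max_sequence_length": range(128, 513),        # placeholder range
    "hidden_act": ["gelu", "relu", "silu", "gelu_new"],
    "position_embedding_type": ["absolute", "relative_key", "relative_key_query"],
    "hidden_dropout_prob": [0.1, 0.2, 0.3, 0.4, 0.5],           # placeholder options
    "attention_probs_dropout_prob": [0.1, 0.2, 0.3, 0.4, 0.5],  # placeholder options
    "learning_rate": [1e-5, 5e-5, 1e-4, 5e-4],                  # placeholder options
    "batch_size": [8, 16, 32, 64],                              # placeholder options
}
```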
Solving the problem posed by Equation 1 is challenging for three reasons: (1) the tuning space of configurations is quite huge, which makes brute force impractical since evaluating all configurations is computationally infeasible; (2) utilizing off-the-shelf Satisfiability Modulo Theories (SMT) solvers that support solving constrained optimization problems is not a viable approach for solving this problem. This is because obtaining model effectiveness necessitates training and testing the model. Such a process cannot be formulated as a mathematical function of configurations that SMT solvers can handle; (3) this multi-objective optimization problem comes with objectives that conflict with others. For example, a larger model typically has better effectiveness on downstream tasks but incurs higher FLOPs. Thus, solving Equation 1 involves finding a Pareto-optimal solution set, i.e., a set of trade-off solutions where no solution can be improved in one objective without degrading other objectives (Ghezani et al., 2018), rather than finding a single, unique solution.
### Approach Overview
Pursuant to the above challenges, our approach, Avatar, is designed to solve the problem through a multi-step process outlined in Figure 1. First, we prune the configuration space using an SMT solver, with the 3 MB model size constraint suggested by prior studies (Sundar et al., 2017; Sundar et al., 2018) as the pruning criterion (Section 3.3). This initial step removes configurations that are irrelevant to our objectives, thereby facilitating the subsequent identification of Pareto-optimal configurations. Next, we sample a small number of configurations from the pruned space and use them to train a regression model that can predict the effectiveness of a model initialized by a given set of configurations, i.e., build an effectiveness indicator (Section 3.4). Subsequently, we use a multi-objective optimization algorithm, assisted by the effectiveness indicator, to identify the set of Pareto-optimal configurations within the pruned space (Section 3.5). Finally, we train a compact and environmentally-friendly model with the configurations from the Pareto-optimal set using the knowledge distillation technique that we have introduced in Section 2. We describe these steps in detail below.
### Pruning Configuration Space
The predefined configuration space shown in Listing 3 is incredibly large, with quintillions of possible configuration sets. However, only a fraction of them adhere to the constraints outlined in Section 1. For example, setting the vocabulary size to its maximum value of 50,265 will result in a model size that exceeds the 3 MB constraint, even with all other configurations minimized. Such configurations are thus considered irrelevant to our objectives and should be omitted from the configuration space to facilitate the subsequent process of identifying Pareto-optimal configurations.
We prune the configuration space by formulating and solving a constraint satisfaction problem using Microsoft Z3 (Marcos et al., 2018), a state-of-the-art SMT solver known for efficiently handling nonlinear constrained optimization problems (Beng et al., 2017; Chen et al., 2018). While Z3 cannot directly solve our primary optimization problem, it performs well at identifying and excluding configurations that violate specified constraints. One crucial constraint is related to model size, as introduced in Section 1, which specifies that the model size cannot exceed 3 MB. This constraint is the only explicit one suggested by prior studies (Sundar et al., 2017; Sundar et al., 2018), while acceptable standards for the other objectives have not been empirically specified. We formulate the constraint satisfaction problem as follows, where \(\mathcal{C}\) represents the configuration space, and \(c\) denotes a set of configurations:
\[\texttt{size}(c)\leq 3\ \text{MB}\quad\text{s.t.}\quad c\in\mathcal{C} \tag{2}\]
Solving this constraint satisfaction problem yields multiple sets of configurations that satisfy the model size constraint, which can then be merged to craft a new configuration space.
As pointed out in Section 2, a language model typically offers a handful of tunable configurations that directly determine the model size. Let \(t\) denote the vocabulary size, \(l\) denote the number of hidden layers, \(h\) denote the hidden size, \(i\) denote the intermediate size, \(a\) denote the number of attention heads, and \(s\) denote the maximum sequence length. Then the model size can be calculated as follows:
\[\texttt{size}(c)=\underbrace{\frac{4(t+s+3)h}{1024\times 1024}}_{\text{embedding layer}}+\underbrace{\frac{4(4h^{2}+(9+2i)h+i)l}{1024\times 1024}}_{\text{transformer layers}}+\underbrace{\frac{2h^{2}+4h+2}{1024\times 1024}}_{\text{classifier layer}} \tag{3}\]
The above formula follows the official implementation of Compressor(Sundar et al., 2017) to calculate the actual file size of a model in MB. It breaks down a language model of code into three components: the embedding, transformer, and classifier layers. By summing these components, the formula calculates the total model size. Note that this formula only considers the six configurations that directly affect model size, while excluding other configurations like the tokenizer from our constraint satisfaction problem-solving process.
Figure 1. The workflow of Avatar.

We then use the above formula and the raw configuration space as inputs to Z3, to find the configurations for which the formula evaluates to a value less than 3 MB. Considering that solving with Z3 can slow down significantly when dealing with an overly large configuration space (Chen et al., 2018; Chen et al., 2018), we run Z3 by partitioning the configuration space into several smaller subspaces and processing them in parallel. Taking the vocabulary size as an example, we can partition the original range of 1,000 to 50,265 into 50 subranges, i.e., 1,000 to 2,000, 2,000 to 3,000, etc. These 50 subranges are then combined with the tuning ranges of other configurations, forming 50 subspaces. Each subspace's constraint satisfaction problem is treated as an independent task and solved in parallel using separate Z3 threads. Once all tasks are completed, we aggregate the results to form a new, pruned configuration space, as shown in Listing 4. The underlined entries, i.e., the vocabulary size, hidden size, and intermediate size, have been pruned. This process significantly reduces the configuration space from \(4.5\times 10^{19}\) to \(1.3\times 10^{19}\), which accounts for only 28.9% of the original space. Notably, the pruned configuration space still contains a broad and diverse range of configurations, providing sufficient space to identify Pareto-optimal solutions.
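As a rough sketch of the per-subspace task handed to a single Z3 thread (not the authors' implementation; the bounds below are illustrative stand-ins for one subspace), Equation 3 and the 3 MB bound of Equation 2 can be expressed with Z3's Python API as follows. The number of attention heads is omitted because it does not enter the size formula.

```python
from z3 import Ints, Solver, And, sat

# Vocabulary size, hidden layers, hidden size, intermediate size, max sequence length
t, l, h, i, s = Ints("t l h i s")

# Numerator of Equation 3, i.e. the model file size in bytes; the 3 MB bound is
# likewise expressed in bytes on the right-hand side of the constraint below.
size_bytes = (4 * (t + s + 3) * h                          # embedding layer
              + 4 * (4 * h * h + (9 + 2 * i) * h + i) * l  # transformer layers
              + (2 * h * h + 4 * h + 2))                   # classifier layer

solver = Solver()
solver.add(And(1000 <= t, t <= 2000),   # the vocabulary-size subrange handled by this thread
           And(1 <= l, l <= 12),        # illustrative bounds for the other configurations
           And(16 <= h, h <= 256),
           And(32 <= i, i <= 1024),
           And(128 <= s, s <= 512),
           size_bytes <= 3 * 1024 * 1024)

if solver.check() == sat:
    print(solver.model())               # one configuration satisfying the 3 MB bound
```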
### Effectiveness Indicator
When tuning configurations, assessing the effectiveness of a model that has a given set of configurations is essential to determine whether it qualifies as a Pareto-optimal solution. However, obtaining model effectiveness through training and testing is computationally expensive. Inspired by recent work in leveraging machine learning techniques to predict the runtime performance of software (Zhu et al., 2017; Zhang et al., 2018; Zhang et al., 2018), we propose to construct a regression model as a proxy for the training and testing process. Specifically, the regression model builds a computationally efficient function that maps a model's configurations to its effectiveness, enabling us to estimate a model's effectiveness using only the provided configuration as input. Consequently, this approach eliminates the need for resource-intensive model training and testing. We consider this regression model as an effectiveness indicator.
We follow the procedures outlined in Listing 5 to develop an effectiveness indicator. First, we randomly sample a set of configurations from the pruned configuration space (line 7). Next, we utilize the knowledge distillation technique introduced in Section 2 to train a model for each of these sampled configurations (line 11). We then evaluate the effectiveness of these models on the validation dataset (line 12), which has a similar distribution to the test dataset, but remains distinct and is not used for training. Subsequently, we use the sampled configurations and the corresponding effectiveness values to train a regression model that serves as our effectiveness indicator (line 13). For this purpose, we employ Bayesian Ridge Regression (BRR) (Zhu et al., 2017). BRR is a statistical regression method that combines Bayesian principles (Zhu et al., 2017) with linear regression techniques (Zhu et al., 2018). It trains regression models by minimizing the squared difference between predicted and actual target values. BRR is particularly valuable when dealing with limited data points, which is the case for our effectiveness indicator since we have only a few sampled configurations. Note that the regression model usually takes numbers as inputs, while some of our configurations are strings. For these configurations, we use their corresponding indices in the tuning range as inputs to the regression model. For example, the tokenizer has four options, so we use 0, 1, 2, and 3 to represent them.
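A minimal scikit-learn sketch of such an indicator is given below. The encoded configuration rows and accuracy values are made-up placeholders; in the setting described above, roughly 20 sampled configurations and their measured validation accuracies would be used.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

# Each row encodes one sampled configuration; string-valued options such as the
# tokenizer are replaced by their index in the tuning range.
sampled_configs = np.array([
    [0, 6000, 3, 96, 4, 192, 128, 1, 0, 0.1, 0.1, 5e-5, 32],
    [1, 8000, 2, 64, 2, 128, 256, 0, 2, 0.2, 0.1, 1e-4, 16],
    [2, 4000, 4, 128, 4, 256, 128, 3, 1, 0.1, 0.2, 5e-5, 8],
])                                                   # in practice, ~20 sampled configurations
measured_accuracy = np.array([0.58, 0.55, 0.60])     # validation accuracy of the distilled models

indicator = BayesianRidge()
indicator.fit(sampled_configs, measured_accuracy)

candidate = np.array([[0, 5000, 3, 128, 4, 192, 128, 1, 1, 0.1, 0.1, 5e-5, 32]])
print(indicator.predict(candidate))  # cheap effectiveness estimate, no model training needed
```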
### Multi-Objective Configuration Tuning
With the pruned configuration space and effectiveness indicator, we are now ready to introduce our innovative multi-objective configuration tuning algorithm, which is specifically designed to identify the set of Pareto-optimal configurations in terms of size, FLOPs, and effectiveness for optimizing large language models of code.
As presented in Listing 6, our algorithm takes the pruned configuration space, the effectiveness indicator, and the number of generations as inputs. It starts by generating an initial population of configuration sets by an adaptive random initialization method (line 5). These configurations are then assessed in terms of the three objectives (line 6): the size and FLOPs are calculated with the implementation of Compressor(Zhu et al., 2017), while the effectiveness indicator predicts the effectiveness. The algorithm maintains an archive to store the Pareto-optimal configurations (line 7). This archive is initialized as an empty set and is updated throughout the algorithm's execution. Subsequently, it enters an iterative loop that runs for a specified number of generations. At each iteration, the algorithm applies three operators, i.e., two-point crossover, boundary random mutation, and correction, to generate new offspring from the population (lines 9 to 11). These offspring are then evaluated, and the archive of Pareto-optimal configurations is updated accordingly (lines 12 to 13). The next generation of population is selected from the current population and the offspring by a tournament selection method (line 14). After the loop terminates, the algorithm returns the archive of Pareto-optimal configurations (line 15). The main operators and steps are described in detail below.
**Adaptive Random Initialization.** We aim to assemble an initial population of highly diverse configuration sets, which can facilitate
more efficient exploration of the configuration space. To achieve this, we employ adaptive random initialization (Beng et al., 2019; Wang et al., 2020), an extension of naive random search that attempts to maximize the Euclidean distance between the selected configurations in the population. Concretely, this method first randomly selects a configuration set \(c\) from the configuration space. It then randomly selects another configuration set \(c^{\prime}\) and compares the Euclidean distance between \(c\) and \(c^{\prime}\) with the distance between \(c\) and the other configuration sets already present in the population. If the distance between \(c\) and \(c^{\prime}\) exceeds those between \(c\) and other configuration sets, \(c^{\prime}\) is added to the population. Otherwise, \(c^{\prime}\) is discarded. This process continues until the population reaches the desired size. Importantly, when calculating the Euclidean distance, as when training the effectiveness indicator, we replace the configuration in the form of strings with its corresponding numerical index within the tuning range.
**Two-Point Crossover.** This operator, commonly used in metaheuristic algorithms such as genetic algorithms to solve optimization problems (Zhu et al., 2019; Wang et al., 2020), aims to combine two parent configurations to generate new offspring configurations. It begins by randomly selecting two parent configurations and two crossover points. Subsequently, it swaps the values of the two parent configurations between these two crossover points to create two offspring configurations. For instance, if the two parent configurations are denoted as \(c_{1}\) and \(c_{2}\), and the selected crossover points are \(p_{1}\) and \(p_{2}\), the resulting offspring configurations are computed as follows: \(c_{1}[0:p_{1}]+c_{2}[p_{1}:p_{2}]+c_{1}[p_{2}:]\) and \(c_{2}[0:p_{1}]+c_{1}[p_{1}:p_{2}]+c_{2}[p_{2}:]\). Here, \(c_{1}[0:p_{1}]\) represents the values of \(c_{1}\) before \(p_{1}\), and \(c_{1}[p_{2}:]\) represents the values of \(c_{1}\) from \(p_{2}\) to the end. The generated offspring configurations are then added to the population.
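A small sketch of this operator, assuming a configuration set is represented as a flat list of 13 values:

```python
import random

def two_point_crossover(c1: list, c2: list) -> tuple:
    """Swap the segment between two random cut points of the two parents."""
    p1, p2 = sorted(random.sample(range(1, len(c1)), 2))
    child_a = c1[:p1] + c2[p1:p2] + c1[p2:]   # c1 outside the cut points, c2 inside
    child_b = c2[:p1] + c1[p1:p2] + c2[p2:]   # c2 outside the cut points, c1 inside
    return child_a, child_b
```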
**Boundary Random Mutation.** This operator introduces random modifications to the values of a configuration set, resulting in a new offspring configuration. Following recent work utilizing genetic algorithms for optimization problems (Zhu et al., 2019; Wang et al., 2020), we employ the boundary random mutation operator to generate offspring configurations. The process begins by randomly selecting a configuration from the population. Subsequently, for each configuration value within this selected configuration, a mutation rate \(r\) is randomly chosen from the range of \([0,1]\). If \(r\) falls below a predefined threshold, the selected configuration value is set to a random value within its tuning range, while ensuring that the modified solution remains within the feasible configuration space, i.e., the boundary. The resulting offspring configuration is then incorporated into the population.
**Correction.** The above crossover and mutation operators may produce invalid offspring configurations that are unusable for initializing models. For example, according to the implementation of Hugging Face (Hugging Face, 2019), a model's hidden size must be divisible by the number of attention heads; otherwise, the model will fail to initialize due to dimension misalignment errors. To address such cases and rectify them, our tuning algorithm employs correction operators. When it encounters invalid offspring configurations, it discards their values and proceeds to randomly select new values until the offspring configuration becomes valid.
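The sketch below combines the boundary random mutation and correction operators just described. The dictionary representation, the `CONFIG_SPACE` structure from the earlier sketch, and the mutation threshold are illustrative assumptions; the divisibility check is the example constraint mentioned above.

```python
import random

MUTATION_RATE = 0.1  # assumed threshold

def mutate_and_correct(config: dict, space: dict) -> dict:
    """Boundary random mutation followed by the correction step."""
    child = dict(config)
    for key, domain in space.items():
        if random.random() < MUTATION_RATE:           # mutate this configuration value
            child[key] = random.choice(list(domain))  # stay within its tuning range
    # Correction: re-draw values until the offspring can actually initialize a model,
    # e.g. the hidden size must be divisible by the number of attention heads.
    while child["hidden_size"] % child["num_attention_heads"] != 0:
        child["num_attention_heads"] = random.choice(list(space["num_attention_heads"]))
    return child
```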
**Tournament Selection.** The selection operator plays a key role in constructing the next generation from the existing population and the newly generated offspring. Using the tournament selection method (Kang et al., 2020), a well-established technique in metaheuristic algorithms, a fixed number of configurations are randomly selected from the combined pool of the current population and offspring. Then, the Pareto-optimal ones are selected from these configurations and added to the next generation, ensuring that the most promising candidates are retained for the next iteration.
As mentioned above, the algorithm manages and continuously updates an archive of Pareto-optimal configurations throughout its execution. When evaluating a configuration set, the algorithm compares it with the configurations already present in the archive. If the evaluated configuration set is not dominated by any other configuration set in the archive, it secures its place within the archive. Additionally, if any configuration set in the archive is found to be dominated by the new configuration set, it will be excluded from the archive. This process ensures the archive contains only non-dominated configurations, i.e., Pareto-optimal solutions. The algorithm terminates when the specified number of generations is reached, at which point it returns the archive of Pareto-optimal configurations. We then select a configuration set from the archive to train a compact and green model using knowledge distillation.
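A compact sketch of the dominance test and archive update used above; each solution is scored by the objective tuple (size, FLOPs, -effectiveness) of Equation 1, with all entries to be minimized.

```python
def dominates(a: tuple, b: tuple) -> bool:
    """True if `a` is no worse than `b` in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive: list, candidate, objectives: tuple) -> list:
    """Keep the archive restricted to mutually non-dominated (configuration, objectives) pairs."""
    kept = [(c, o) for c, o in archive if not dominates(objectives, o)]  # drop dominated entries
    if not any(dominates(o, objectives) for _, o in kept):
        kept.append((candidate, objectives))  # the candidate itself is non-dominated
    return kept
```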
## 4. Empirical Evaluation
Our evaluation aims to answer the following research questions:
* **RQ1 (Effectiveness):** How effective is Avatar in optimizing language models of code?
* **RQ2 (Comparison):** How does Avatar compare to the state-of-the-art method in optimizing language models of code?
### Experimental Setup
**Tasks and Datasets.** Following the evaluation settings in the prior work (Zhu et al., 2019), we assess the performance of Avatar on two popular software engineering tasks: vulnerability prediction and clone detection. Table 1 provides an overview of the datasets used in our experiments. These datasets encompass different programming languages and sizes, allowing for a thorough evaluation of Avatar. More details on the tasks and datasets are provided below.
The vulnerability prediction task involves determining whether a given code snippet is vulnerable or not. Integrating vulnerability prediction models into an IDE can significantly assist developers in identifying critical program defects early, thus enhancing software quality and reducing maintenance costs. For our experiment, we utilize the Devign dataset (Wang et al., 2020), which was released by Zhou et al. It contains 27,318 functions from two popular open-source C libraries, i.e., FFmpeg4 and Qemu5. The dataset was constructed by manually
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{Dataset} & Labeled/Unlabeled & \multirow{2}{*}{Language} & \multirow{2}{*}{Source} \\ & Val/Test & & \\ \hline \multirow{2}{*}{Devign (Wang et al., 2020)} & 10,927/10,927 & \multirow{2}{*}{C} & FFmpeg \\ & 2,732/2,732 & & Qemu \\ \hline \multirow{2}{*}{BigCloneBench (Wang et al., 2020)} & 45,051/45,051 & \multirow{2}{*}{Java} & SourceForge \\ & 4,000/4,000 & & Google Code \\ \hline \hline \end{tabular}
\end{table}
Table 1. Overview of datasets used in our experiments.
annotating whether these functions contain vulnerabilities or not. We first follow the CodeXGLUE [48] benchmark for dataset splitting, allocating 80% for training, 10% for validation, and 10% for testing. To facilitate knowledge distillation, which requires unlabeled data, we follow Compressor[63] to evenly divide the training set into two mutually exclusive halves. One half is used for fine-tuning the language models, while the other, with erased labels, serves to train the model with configurations generated by Avatar.
The clone detection task aims to identify whether two given functions are code clones, assisting in recognizing redundant implementations of the same functionalities during software maintenance. For evaluating Avatar's effectiveness in clone detection, we select the widely-used BigCloneBench dataset [67]. This dataset is collected by mining the clones of specific functionalities in 25,000 Java projects sourced from SourceForge6 and Google Code7. It includes over 6,000,000 pairs of cloned Java methods, along with 260,000 non-clone pairs. To keep the experiments computationally manageable, we adopt the settings from recent studies [63, 77]. We randomly select 90,102 examples (i.e., 10% of the original training dataset) for training and reserve 4,000 for validation and testing. Then, same as in the vulnerability prediction task, we divide the training data into labeled and unlabeled portions of equal size, which are for fine-tuning large models and training optimized models, respectively.
Footnote 6: [https://sourceforge.net](https://sourceforge.net)
Footnote 7: [https://code.google.com](https://code.google.com)
**Language Models of Code.** To evaluate Avatar, we follow Shi et al. [63] to use two popular encoder-only language models of code: CodeBERT [19] and GraphCodeBERT [26]. These two models share the same architecture and have been pre-trained on the CodeSearchNet dataset [38]. CodeBERT undergoes pre-training with two tasks: masked language modeling, which predicts masked tokens in input texts, and replaced token detection, which identifies whether a token in a given input has been replaced. GraphCodeBERT also uses masked language modeling, but additionally incorporates code graph structure information by predicting masked nodes in data flow graphs during pre-training. After pre-training, both CodeBERT and GraphCodeBERT can be fine-tuned on downstream tasks, enabling them to achieve state-of-the-art performance [48, 54, 80].
To fine-tune CodeBERT, we use the hyperparameter settings from the CodeXGLUE benchmark [48]. In the case of GraphCodeBERT, we follow the hyperparameter settings described in the GraphCodeBERT paper [26]. Despite using only half of the labeled training data compared to the full training set, all models deliver results comparable to those reported in the previous study [80].
**Evaluation Metrics.** After obtaining the model trained with configurations tuned by Avatar, we compare it with the language model and the model generated by our baseline method, Compressor, using six metrics: effectiveness, model size, inference latency, energy consumption, carbon footprint, and Giga floating-point operations (GFLOPs). Effectiveness is evaluated by prediction accuracy on the two downstream tasks, following prior studies [63, 77]. Model size is quantified in megabytes (MB). For inference latency, which is measured in milliseconds (ms), we standardize experimental conditions by limiting all models to use only 8 CPU cores, simulating running on a typical consumer-grade laptop. The testing datasets are used to query the models, and the average inference latency is calculated for each data example. Note that we use a batch size of 1 to replicate real-world scenarios where models are deployed on laptops and only process a single input at a time.
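A rough sketch of this measurement setup is shown below; the `model` and `test_loader` objects and the forward-call signature are assumptions rather than the actual evaluation harness.

```python
import time
import torch

def mean_latency_ms(model, test_loader) -> float:
    """Average per-example CPU inference latency with batch size 1."""
    torch.set_num_threads(8)          # cap the model at 8 CPU threads, as in the setup above
    model.eval()
    latencies = []
    with torch.no_grad():
        for batch in test_loader:     # DataLoader built with batch_size=1
            start = time.perf_counter()
            model(**batch)            # single-example forward pass
            latencies.append((time.perf_counter() - start) * 1000)
    return sum(latencies) / len(latencies)
```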
To evaluate energy consumption and carbon footprint, we use the Machine Learning Emissions Calculator8, developed by Lacoste et al. [45], to compute these metrics for the models on a specific machine. The tool requires the total running time of a model as input and outputs the energy consumption and carbon footprint, measured in kilowatt-hours (kWh) and kilograms (kg), respectively. We record the total running time of the models on the testing datasets as input to the tool, and consistent with our inference latency evaluation, we use a batch size of 1. Additionally, as mentioned in Section 3, GFLOPs are commonly used to quantify the computational cost of a model, which is closely related to energy consumption and carbon footprint. Thus, we also report GFLOPs to illustrate how Avatar contributes to environmental sustainability by reducing the computational cost of language models of code.
Footnote 8: [https://mlco2.github.io/impact/compute](https://mlco2.github.io/impact/compute)
**Implementation.** We run all experiments on an Ubuntu 18.04 server equipped with an Intel Xeon E5-2698 CPU, 504 GB of RAM, and 8 Tesla V100 GPUs. To prune the configuration space with Z3, we partition it into 25,600 subspaces and execute Z3 in parallel across 80 CPU cores. For training the effectiveness indicator, we sample 20 sets of configurations from the pruned configuration space and implement the Bayesian Ridge Regression model using scikit-learn.9 All the settings are the same as the default values in the library. In the multi-objective tuning algorithm, we configure the population size to be 20, with 50 generations. The crossover and mutation rates were set to 0.6 and 0.1, respectively.
\begin{table}
\begin{tabular}{c|c c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{5}{c}{Vulnerability Prediction} & \multicolumn{5}{c}{Clone Detection} \\ \cline{2-11} & ACC (\%) & LAT (ms) & E (kWh) & CO\({}_{2}\) (kg) & GFLOPs & ACC (\%) & LAT (ms) & E (kWh) & CO\({}_{2}\) (kg) & GFLOPs \\ \hline CB (481 MB) & 61.82 & 1394 & 0.32 & 0.14 & 138.4 & 96.10 & 1963 & 0.65 & 0.28 & 138.4 \\ CB-Avatar (**3 MB**) & 60.87 (-0.95) & **29 (48\(\times\))** & **0.006 (53\(\times\))** & **0.003 (47\(\times\))** & **0.64 (216\(\times\))** & 93.69 (-2.41) & **19 (103\(\times\))** & **0.006 (108\(\times\))** & **0.003 (93\(\times\))** & **1.14 (121\(\times\))** \\ \hline GCB (481 MB) & 61.57 & 1139 & 0.26 & 0.11 & 138.4 & 96.85 & 1539 & 0.52 & 0.22 & 138.4 \\ GCB-Avatar (**3 MB**) & 61.12 (-0.45) & **15 (76\(\times\))** & **0.005 (52\(\times\))** & **0.002 (55\(\times\))** & **0.67 (207\(\times\))** & 94.00 (-2.85) & **10 (154\(\times\))** & **0.002 (260\(\times\))** & **0.001 (220\(\times\))** & **0.80 (173\(\times\))** \\ \hline Average Loss/Gain & -0.70 & **62\(\times\)** & **53\(\times\)** & **51\(\times\)** & **212\(\times\)** & -2.63 & **129\(\times\)** & **184\(\times\)** & **157\(\times\)** & **147\(\times\)** \\ \hline \hline \end{tabular}
\end{table}
Table 2. Results of Avatar and the original language models on the two tasks. “CB” and “GCB” denote CodeBERT and GraphCodeBERT, respectively. “ACC” is the prediction accuracy. “LAT” is the inference latency. “E” is the energy consumption. “CO\({}_{2}\)” is the CO\({}_{2}\) emission, i.e., the carbon footprint.
### Effectiveness of Avatar (RQ1)
After obtaining the Pareto-optimal configurations using Avatar, we select the configuration with a model size closest to 3 MB for training the optimized model. This results in a model that is approximately 160x smaller than the original language model of code for each task. Table 2 shows the experimental results comparing the optimized models with the original ones. On the two tasks, the optimized models exhibit an average decrease in accuracy of only \(1.67\%\) (\(\approx(0.70\%+2.63\%)/2\)) compared to the original large models. This accuracy result illustrates that Avatar significantly optimizes model size with only a negligible loss in effectiveness on downstream tasks. Furthermore, the inference latency of the optimized models sees a substantial reduction on both tasks, with an average reduction of 62x for vulnerability detection and 129x for clone detection. Prior research (Wang et al., 2019) has suggested that software practitioners are willing to accept a small sacrifice in effectiveness in exchange for a significant improvement in usability. Therefore, we consider the reduced accuracy of the optimized models to be acceptable in practical applications.
Table 2 also presents results of optimizing language models in terms of environmental sustainability. We employ the Machine Learning Emissions Calculator (Wang et al., 2019) to calculate the energy consumption and carbon footprint of the optimized models, comparing them to the original ones. Note that these results are calculated using a single NVIDIA Tesla V100 GPU and encompass the cost of running the entire testing dataset rather than a single query. On both tasks, the energy consumption of the optimized models sees a significant reduction, averaging 53x and 184x less, respectively. This reduction extends to a corresponding decrease in carbon footprint, ranging from 51x to 157x less. Additionally, we observe a notable reduction in GFLOPs for the optimized models, with an average reduction of 212x and 147x on the two tasks, respectively. These results underscore the sustainability benefits that the optimized models can offer in real-world deployments.
**Answers to RQ1:** Avatar effectively optimizes language models of code in terms of model size (160x smaller), inference latency (up to 76x faster), energy consumption (up to 184x less), and carbon footprint (up to 157x less), with only a negligible loss in effectiveness (1.67% on average).
### Avatar vs. Compressor (RQ2)
As the baseline for our experiments, we employ the approach, Compressor, proposed by Shi et al. (Shi et al., 2019). To ensure a fair comparison, we directly utilize the models available in the official repository of Compressor. The models produced using Compressor and Avatar have a similar size at 3 MB. The evaluation results comparing these approaches are presented in Table 3.
Compared to the models optimized by Compressor, the models produced by Avatar exhibit a slightly higher accuracy, with an average improvement of 0.75% (\(\approx(1.45\%+0.07\%)/2\)) on the two tasks. These results suggest that Avatar can optimize language models of code more effectively without compromising effectiveness as much as Compressor. More importantly, the models optimized by Avatar demonstrate significant improvements in inference latency on both tasks. Compressor produces models with an inference latency in the hundreds of milliseconds range, while the optimized models obtained by our approach have a maximum latency of 29 ms. On average, the inference latency of the models optimized by Avatar is 44x (\(\approx(33+54)/2\)) faster than that of the ones produced by Compressor, which highlights the effectiveness of Avatar in enhancing the usability of language models compared to the state-of-the-art approach.
Avatar also improves the energy consumption of the optimized models by 3x and 8x compared to Compressor on vulnerability prediction and clone detection, respectively. These reductions also translate into a corresponding decrease in carbon footprint, with reductions of 4x and 7x on the two tasks. Overall, except for model size, the models optimized by Avatar outperform the ones optimized by Compressor across all metrics.
**Answers to RQ2:** Avatar significantly outperforms Compressor (i.e., the state-of-the-art approach) in terms of prediction accuracy (0.75% on average), inference latency (44x faster on average), energy consumption (up to 8x less), and carbon footprint (up to 7x less).
## 5. Discussions
### Efficiency of Avatar
We investigate the time taken by Avatar to optimize language models of code, breaking it down into four parts: pruning the configuration space, building the effectiveness indicator, executing the configuration tuning algorithm, and training optimized models.
In our experimental setup, the parallel execution of pruning the configuration space takes just 5 minutes to complete. After that, Avatar uses a single 16 GB Tesla V100 GPU to train 20 models for constructing the effectiveness indicator, consuming approximately 10 hours. Note that this overhead is only rarely incurred, e.g., the
\begin{table}
\begin{tabular}{c|c c c c c|c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{5}{c|}{Vulnerability Prediction} & \multicolumn{5}{c}{Clone Detection} \\ \cline{2-11} & ACC (\%) & LAT (ms) & E (kWh) & CO\({}_{2}\) (kg) & GFLOPs & ACC (\%) & LAT (ms) & E (kWh) & CO\({}_{2}\) (kg) & GFLOPs \\ \hline CB-Compressor (3 MB) & 59.11 & 521 & 0.012 & 0.006 & 2.25 & 95.40 & 601 & 0.02 & 0.01 & 2.25 \\ CB-Avatar (3 MB) & **60.87 (+1.76)** & **29 (18\(\times\))** & **0.006 (2\(\times\))** & **0.003 (2\(\times\))** & **0.64 (4\(\times\))** & **93.69 (-1.71)** & **19 (32\(\times\))** & **0.006 (3\(\times\))** & **0.003 (3\(\times\))** & **1.14 (2\(\times\))** \\ \hline GCB-Compressor (3 MB) & 59.99 & 702 & 0.016 & 0.007 & 2.25 & 92.15 & 747 & 0.025 & 0.011 & 2.25 \\ GCB-Avatar (3 MB) & **61.12 (+1.13)** & **15 (47\(\times\))** & **0.005 (3\(\times\))** & **0.002 (4\(\times\))** & **0.67 (3\(\times\))** & **94.00 (+1.85)** & **10 (75\(\times\))** & **0.002 (13\(\times\))** & **0.001 (11\(\times\))** & **0.80 (3\(\times\))** \\ \hline Average Loss/Gain & **+1.45** & **33\(\times\)** & **3\(\times\)** & **4\(\times\)** & **4\(\times\)** & **+0.07** & **54\(\times\)** & **8\(\times\)** & **7\(\times\)** & **3\(\times\)** \\ \hline \hline \end{tabular}
\end{table}
Table 3. Results of Avatar and Compressor on the two tasks. “CB” and “GCB” denote CodeBERT and GraphCodeBERT, respectively. “ACC” is the prediction accuracy. “LAT” is the inference latency. “E” is the energy consumption. “CO\({}_{2}\)” is the CO\({}_{2}\) emission, i.e., the carbon footprint.
first time optimizing a language model for deployment, which may occur only on a monthly or yearly basis. Because of the carefully pruned configuration space and the specialized optimization algorithm, Avatar efficiently returns Pareto-optimal configurations in about 2 minutes. Subsequently, the knowledge distillation phase requires more time, with Avatar taking an average of 14.9 and 18.3 minutes to train an optimized model for the vulnerability prediction and clone detection tasks, respectively. These results underscore the fact that Avatar can produce well-performing optimized models with much less time cost than fine-tuning or pre-training large language models, which often takes a few hours or days (Kumar et al., 2020).
### Usefulness in Cloud Deployment
The primary goal of Avatar is to optimize language models of code for deployment on developers' personal devices like laptops. As mentioned in Section 1, we hold this perspective due to privacy concerns (Kumar et al., 2020; Kumar et al., 2020; Kumar et al., 2020; Kumar et al., 2020) and the need for use under poor network conditions. Deploying models on cloud servers may not be a viable option because it requires sending code to third-party vendors, which is prohibited by some companies that consider their code bases to be valuable intellectual property. Also, cloud deployment may result in higher inference latency for developers in regions with poor bandwidth or Internet coverage. However, we acknowledge that cloud deployment is a common practice today, offering more computing resources and scalability to support a larger user base. Therefore, it would be worthwhile to also discuss the benefits of optimized models in the context of cloud deployments.
We run experiments assuming that the models process queries in batch mode with a batch size of 100. These experiments are run on a server equipped with a Tesla V100 GPU. We send the queries directly from the GPU's host machine to eliminate any potential impact from network fluctuations, and then measure how many queries the models can process per second. The experimental results, presented in Table 4, show that compared to the original language models of code, the optimized models can process on average 3.9\(\times\) and 9.7\(\times\) more queries per second on the two tasks, respectively. These results highlight the advantages of using Avatar for deploying large language models of code in cloud servers.
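A throughput measurement along these lines can be sketched as follows; the batch construction and model interface are placeholders, and the Table 4 numbers come from the authors' own harness.

```python
import time
import torch

def queries_per_second(model, batches, device="cuda"):
    """Measure batched inference throughput; `batches` is a list of dicts of input tensors
    (e.g., 100 queries per batch, mirroring the setup described above)."""
    model.eval().to(device)
    n_queries = sum(batch["input_ids"].shape[0] for batch in batches)
    torch.cuda.synchronize()
    start = time.perf_counter()
    with torch.no_grad():
        for batch in batches:
            model(**{k: v.to(device) for k, v in batch.items()})
    torch.cuda.synchronize()
    return n_queries / (time.perf_counter() - start)
```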
### Threats to Validity
One potential threat to _internal validity_ is the randomness inherent in the configuration tuning algorithms used in our experiments. To address this concern, we have run each experiment 10 times and reported the average results, as recommended by Arcuri and Briand (Arcuri and Briand, 2020). Regarding _external validity_, a potential threat is that our results may not be generalizable to other models and tasks beyond the ones we have studied. To ensure the generalizability of our work, we have carefully selected two representative encoder-only language models of code and two popular downstream tasks with different characteristics for our evaluation. This ensures that our results are unbiased and our method potentially applies to a broad context. While we have not yet applied our method to other types of language models, such as decoder-only models, which have also recently gained popularity, we plan to extend our study on those models to further validate our work's generalizability in the future. One threat to _construct_ validity is that the evaluation metrics may not fully capture the performance of our Avatar and the baseline in enhancing the usability and sustainability of language models of code. To mitigate it, we use a total of five widely-used evaluation metrics to compare the effectiveness of Avatar and the baseline from a comprehensive set of perspectives.
## 6. Related Work
In recent years, both the natural language processing and software engineering communities have dedicated their efforts to optimizing language models. However, unlike our work, which seeks to simultaneously optimize multiple aspects of language models of code, most existing studies focus on reducing model size only, thereby indirectly mitigating other related issues such as inference latency. These existing studies typically fall into three main categories: model pruning, model quantization, and knowledge distillation.
Model pruning and quantization involve directly altering model parameters to reduce model size. Model pruning replaces certain parameters with zeros, or removes network components like hidden layers (Kumar et al., 2020; Kumar et al., 2020). Model quantization converts a model's 32-bit floating-point parameters into lower-bit fixed-point values (Kumar et al., 2020; Kumar et al., 2020; Kumar et al., 2020). These techniques have proven effective in reducing model size to a level suitable for deployment in scenarios with less stringent requirements. A recent study has also demonstrated their potential to reduce the computational cost and carbon footprint of language models of code (Kumar et al., 2020), offering a promising avenue for future research. However, these techniques fall short of meeting the 3 MB model size recommendation put forth by Svyatkovskiy et al. (Svyatkovskiy et al., 2020) within the context of software engineering. As a result, we have chosen not to include them in our pipeline and comparison experiments.
We have introduced knowledge distillation in Section 2, an essential step in Avatar and the baseline. While several knowledge distillation methods have been proposed, most of them typically result in models ranging from 100 to 200 MB (Kumar et al., 2020; Kumar et al., 2020; Kumar et al., 2020; Kumar et al., 2020). Some studies (Kumar et al., 2020; Kumar et al., 2020; Kumar et al., 2020) have successfully optimized language models into sizes ranging from 20 to 40 MB. Notably, only Compressor(Kumar et al., 2020) has achieved the remarkable feat of optimizing a large language model of around 500 MB into a compact 3 MB model. Therefore, we only compare Avatar with Compressor in our experiments.
The software engineering research community has also explored alternative methods for optimizing language models of code. For example, Grishina et al. (Grishina et al., 2020) propose using only the initial layers of language models during inference to reduce resource consumption. Additionally, Zhang et al. (Zhang et al., 2020) introduce a technique to simplify the input programs for CodeBERT, significantly reducing computational cost without compromising model performance. Despite
\begin{table}
\begin{tabular}{c c c} Model & Vulnerability Prediction & Clone Detection \\ CodeBERT & 58 & 64 \\ CodeBERT-Avatar & **171 (2.9\(\times\))** & **476 (7.4\(\times\))** \\ \hline GraphCodeBERT & 79 & 48 \\ GraphCodeBERT-Avatar & **390 (4.9\(\times\))** & **570 (11.9\(\times\))** \\ \hline Average Improvements & **3.9\(\times\)** & **9.7\(\times\)** \\ \end{tabular}
\end{table}
Table 4. Usefulness of Avatar in cloud deployment. The results show how many queries the models can process per second when deployed on a cloud server.
these efforts, there are still gaps in optimizing language models of code to simultaneously improve usability and environmental sustainability. To the best of our knowledge, our study is the first to address both aspects concurrently.
## 7. Conclusion and Future Work
This paper proposes Avatar, a novel approach that can optimize large language models of code in terms of model size, inference latency, energy consumption, and carbon footprint without sacrificing effectiveness (e.g., prediction accuracy on downstream tasks) by much, thereby improving the usability and environmental sustainability of language models of code. The key idea of Avatar is to formulate the optimization of language models as a multi-objective configuration tuning problem and solve it with the help of SMT solvers and a tailored optimization algorithm. We evaluate Avatar with two state-of-the-art language models, i.e., CodeBERT and GraphCodeBERT, on two popular tasks, i.e., vulnerability prediction and clone detection. We use Avatar to produce optimized models with a small size (3 MB), which is 160\(\times\) smaller than the original large models. On the two tasks, the optimized models can significantly reduce the energy consumption (up to 184\(\times\) less), carbon footprint (up to 157\(\times\) less), and inference latency (up to 76\(\times\) faster), with only a negligible loss in effectiveness (1.67% on average). Compared with the state-of-the-art approach, Avatar optimizes language models of code more effectively in all metrics.
In the future, we plan to further investigate the effectiveness and efficiency of our proposed approach Avatar by experimenting with more large language models of code beyond those considered in this paper, such as the generative language models of code.
|
2309.12614 | Characterizing Smooth Safety Filters via the Implicit Function Theorem | Optimization-based safety filters, such as control barrier function (CBF)
based quadratic programs (QPs), have demonstrated success in controlling
autonomous systems to achieve complex goals. These CBF-QPs can be shown to be
continuous, but are generally not smooth, let alone continuously
differentiable. In this paper, we present a general characterization of smooth
safety filters -- smooth controllers that guarantee safety in a minimally
invasive fashion -- based on the Implicit Function Theorem. This
characterization leads to families of smooth universal formulas for
safety-critical controllers that quantify the conservatism of the resulting
safety filter, the utility of which is demonstrated through illustrative
examples. | Max H. Cohen, Pio Ong, Gilbert Bahati, Aaron D. Ames | 2023-09-22T04:20:16Z | http://arxiv.org/abs/2309.12614v1 | # Characterizing Smooth Safety Filters via the Implicit Function Theorem
###### Abstract
Optimization-based safety filters, such as control barrier function (CBF) based quadratic programs (QPs), have demonstrated success in controlling autonomous systems to achieve complex goals. These CBF-QPs can be shown to be continuous, but are generally not smooth, let alone continuously differentiable. In this paper, we present a general characterization of smooth safety filters - smooth controllers that guarantee safety in a minimally invasive fashion - based on the Implicit Function Theorem. This characterization leads to families of smooth universal formulas for safety-critical controllers that quantify the conservatism of the resulting safety filter, the utility of which is demonstrated through illustrative examples.
## I Introduction
Over the past decade, control barrier functions (CBFs) [1] have proven to be a powerful tool for designing controllers enforcing safety on nonlinear systems. The properties of CBFs naturally lead to their use as _safety filters_ for nominal controllers that may not have been designed a priori to ensure safety. Most often, such safety filters are instantiated via optimization problems - typically a quadratic program (QP) - to minimize the deviation from a nominal controller while satisfying Lyapunov-like conditions that ensure forward invariance of a designated safe set [2, 3, 4]. Under certain regularity conditions, these optimization-based safety filters are locally Lipschitz functions of the system state [5, 6], allowing one to leverage set-theoretic tools, such as Nagumo's Theorem [7], to conclude forward invariance of safe sets. Although such controllers are pointwise optimal, they are typically not smooth even if the problem data is.
The lack of smoothness exhibited by optimization-based safety filters has not been overly detrimental to the development of safety-critical controllers to date; however, more recent developments in the literature motivate the consideration of smooth (or, at least, sufficiently differentiable) safety filters. For example, [8] recently proposed a barrier-backstepping methodology that enables the systematic construction of CBFs for systems in strict-feedback form. As in Lyapunov backstepping [9], such an approach requires differentiating through virtual CBF controllers at intermediate layers to construct a composite CBF for the overall system. Similar ideas are leveraged in [10, 11] to construct CBFs for robotic systems based on reduced-order models - an approach successfully used to safely control complex robotic systems, e.g., walking robots and drones.
Given the similarities between control Lyapunov functions (CLFs) and CBFs, one may wonder if it is possible to adapt smooth universal formulas for CLFs [12] to CBFs. The answer is affirmative - with some slight modifications. Sontag's Universal Formula for stabilization [12] can be applied to safety as noted in [13, 14, 15, 16]. Despite this, Sontag's formula is scarcely used as a safety filter since, in its most common form, such a controller tends to be overly invasive, overriding inputs from the nominal controller even when not necessary to ensure safety. Alternative smooth universal formulas have been proposed in [17] based on computing weighted centroids of the set of all control values satisfying CBF and/or CLF conditions using the probability density function of a Gaussian distribution. In a different approach, the authors of [18] leverage Sontag's formula to combine stabilization and safety objectives. Yet questions remain around the connections between smoothness and safety filters.
The main objective of this paper is to provide a general characterization of smooth safety filters - smooth controllers that guarantee safety in a minimally invasive fashion. Our characterization is motivated by the original development of Sontag's formula in [12]. Sontag's formula is derived by computing the roots of an algebraic equation parameterized by the Lie derivatives of a CLF/CBF. When certain regularity conditions are met, the smoothness of such roots as a function of the Lie derivatives follows directly from the Implicit Function Theorem. In this paper, we seek a deeper understanding of the properties of this equation. In particular: What properties should this equation satisfy so that one of its solutions produces a smooth minimally invasive controller that guarantees safety? We answer this question by constructing an ordinary differential equation (ODE) from a given
Fig. 1: Illustration of methodology for generating smooth safety filters.
algebraic one such that the trajectories of this ODE coincide with the solutions of the algebraic equation. Leveraging invariance-like tools, we introduce sufficient conditions for this ODE so that its trajectories produce a smooth safety filter. This characterization leads to various smooth universal formulas for safety-critical control that allow one to assess the conservatism of the resulting safety filter, the utilities of which we illustrate through their application to safety-critical control based on reduced order models [10, 11].
## II Preliminaries and Problem Formulation
Consider the nonlinear control affine system:
\[\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})+\mathbf{g}(\mathbf{x})\mathbf{u}, \tag{1}\]
where \(\mathbf{x}\in\mathbb{R}^{n}\) is the system state, \(\mathbf{u}\in\mathbb{R}^{m}\) is the control input, \(\mathbf{f}:\,\mathbb{R}^{n}\to\mathbb{R}^{n}\) is the drift vector field, and \(\mathbf{g}:\mathbb{R}^{n}\to\mathbb{R}^{n\times m}\) captures the control directions. Throughout this paper, we assume that \(\mathbf{f}\) and \(\mathbf{g}\) are smooth functions of the state. Applying a smooth feedback controller \(\mathbf{k}:\,\mathbb{R}^{n}\to\mathbb{R}^{m}\) to (1) produces the smooth closed-loop system:
\[\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x})+\mathbf{g}(\mathbf{x})\mathbf{k}( \mathbf{x}), \tag{2}\]
which, for each initial condition \(\mathbf{x}_{0}\in\mathbb{R}^{n}\), generates a unique smooth solution \(\mathbf{x}:\,I(\mathbf{x}_{0})\to\mathbb{R}^{n}\) satisfying (2) on some maximal interval of existence \(I(\mathbf{x}_{0})\subseteq\mathbb{R}_{\geq 0}\).
### _CLFs and Sontag's Universal Formula_
Before discussing smooth safety filters, we recount the main ideas behind CLFs as presented in [12]. Recall that a smooth, proper, positive definite function \(V:\,\mathbb{R}^{n}\to\mathbb{R}_{\geq 0}\) is a CLF for (1) if for all \(\mathbf{x}\in\mathbb{R}^{n}\setminus\{0\}\)
\[\inf_{\mathbf{u}\in\mathbb{R}^{m}}\underbrace{\nabla V(\mathbf{x})\cdot\mathbf{f}(\mathbf{x})}_{L_{\mathbf{f}}V(\mathbf{x})}+\underbrace{\nabla V(\mathbf{x})\cdot\mathbf{g}(\mathbf{x})}_{L_{\mathbf{g}}V(\mathbf{x})}\mathbf{u}<0.\]
The existence of a CLF implies that for each \(\mathbf{x}\in\mathbb{R}^{n}\) there exists a \(\mathbf{u}\in\mathbb{R}^{m}\) that enforces \(V\) to decrease, and allows for constructing a feedback controller \(\mathbf{k}:\,\mathbb{R}^{n}\to\mathbb{R}^{m}\) that renders the origin asymptotically stable by ensuring that:
\[\forall\mathbf{x}\in\mathbb{R}^{n}\setminus\{0\}:\,L_{\mathbf{f}}V(\mathbf{x })+L_{\mathbf{g}}V(\mathbf{x})\mathbf{k}(\mathbf{x})<0. \tag{3}\]
In this paper, we are concerned with designing _smooth_ feedback controllers satisfying a general class of affine inequalities, such as the one in (3). In [12] Sontag provides one example of such a controller, now known as Sontag's Universal Formula for stabilization, which is given by:
\[\mathbf{k}(\mathbf{x})=\lambda_{\mathrm{CLF}}(L_{\mathbf{f}}V( \mathbf{x}),\|L_{\mathbf{g}}V(\mathbf{x})\|^{2})L_{\mathbf{g}}V(\mathbf{x})^{ \top}, \tag{4}\] \[\lambda_{\mathrm{CLF}}(a,b):=\begin{cases}0&b=0\\ \frac{-a-\sqrt{a^{2}+q(b)b}}{b}&b\neq 0,\end{cases} \tag{5}\]
where \(q:\,\mathbb{R}\to\mathbb{R}\) is smooth and satisfies \(q(0)=0\) and \(q(b)>0\) for all \(b\neq 0\). In [12], the smoothness of Sontag's formula (5) is proven using an argument based on the Implicit Function Theorem [19, Thm. 11.2]. Specifically, the following result [19, Thm. 11.1], which is related to the Implicit Function Theorem, is useful for establishing smoothness of (5).
**Theorem 1** ([19]).: _Let \((a,b,p)\mapsto F(a,b,p)\) be a smooth function. If a continuous function \(\lambda:\,\mathcal{S}\to\mathbb{R}\) satisfies:_
\[F(a,b,\lambda(a,b))=0, \tag{6}\] \[\frac{\partial F}{\partial p}(a,b,\lambda(a,b))\neq 0, \tag{7}\]
_for all \((a,b)\in\mathcal{S}\), then \(\lambda\) is smooth for all \((a,b)\in\mathcal{S}\) and its derivative is given by:_
\[\left[\begin{matrix}\frac{\partial\lambda}{\partial a}(a,b)\\ \frac{\partial\lambda}{\partial b}(a,b)\end{matrix}\right]=-\frac{1}{\frac{\partial F}{\partial p}(a,b,\lambda(a,b))}\left[\begin{matrix}\frac{\partial F}{\partial a}(a,b,\lambda(a,b))\\ \frac{\partial F}{\partial b}(a,b,\lambda(a,b))\end{matrix}\right]. \tag{8}\]
To apply this theorem and show smoothness of the function \(\lambda_{\mathrm{CLF}}\) in (5), one considers the smooth function:
\[F(a,b,p)=bp^{2}+2ap-q(b), \tag{9}\]
noting that \(\lambda_{\mathrm{CLF}}\) is continuous and satisfies (6) and (7) for each \((a,b)\) in \(\mathcal{S}_{\mathrm{CLF}}\coloneqq\{(a,b)\in\mathbb{R}\times\mathbb{R}_{\geq 0 }:\,a<0\lor b>0\}\). In addition to being smooth, one can also verify that the controller (4) constructed from \(\lambda_{\mathrm{CLF}}\) satisfies inequality (3). Indeed, this is a consequence of picking an appropriate function for \(F\), a point which we will expand on later.
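For concreteness, the construction above admits the following minimal numerical sketch, taking \(q(b)=b\) as an assumed illustrative choice; the final assertion checks that \(\lambda_{\mathrm{CLF}}\) is indeed a root of \(F\) in (9).

```python
import numpy as np

def lambda_clf(a, b, q=lambda b: b):
    """Sontag's gain (5); q must be smooth with q(0) = 0 and q(b) > 0 for b != 0."""
    if b == 0.0:
        return 0.0
    return (-a - np.sqrt(a**2 + q(b) * b)) / b

def sontag_clf_controller(x, grad_V, f, g, q=lambda b: b):
    """Universal formula (4): k(x) = lambda_CLF(LfV, ||LgV||^2) * LgV^T."""
    LfV = grad_V(x) @ f(x)
    LgV = grad_V(x) @ g(x)                     # input-direction Lie derivatives
    return lambda_clf(LfV, float(LgV @ LgV), q) * LgV

# Sanity check: lambda_CLF is a root of F(a, b, p) = b p^2 + 2 a p - q(b) from (9).
a, b = -0.3, 1.7
p = lambda_clf(a, b)
assert abs(b * p**2 + 2 * a * p - b) < 1e-12
```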
### _CBFs and Safety Filters_
The main objective of this paper is to leverage Theorem 1 for constructing smooth controllers that will render the resulting closed-loop system _safe_, a property often formalized using the framework of set invariance [7]. Formally, we say that (2) is safe on a set \(\mathcal{C}\subset\mathbb{R}^{n}\) if \(\mathcal{C}\) is forward invariant. By considering sets of the form:
\[\mathcal{C}=\{\mathbf{x}\in\mathbb{R}^{n}\,:\,h(\mathbf{x})\geq 0\}, \tag{10}\]
where \(h:\,\mathbb{R}^{n}\to\mathbb{R}\) is a smooth function, the existence of a safe feedback controller can be characterized using the concept of a control barrier function (CBF) [1].
**Definition 1**.: A smooth function \(h:\mathbb{R}^{n}\to\mathbb{R}\) defining a set \(\mathcal{C}\subset\mathbb{R}^{n}\) as in (10) is said to be a _control barrier function_ (CBF) for (1) on \(\mathcal{C}\) if zero is a regular value of \(h\) and there exists a smooth1\(\alpha\in\mathcal{K}_{\infty}^{c}\) such that for all \(\mathbf{x}\in\mathbb{R}^{n}\): \[\sup_{\mathbf{u}\in\mathbb{R}^{m}}L_{\mathbf{f}}h(\mathbf{x})+L_{\mathbf{g}}h(\mathbf{x})\mathbf{u}>-\alpha(h(\mathbf{x})).\]
Footnote 1: A continuous function \(\alpha:\,\mathbb{R}\to\mathbb{R}\) is an extended class \(\mathcal{K}_{\infty}\) function (\(\alpha\in\mathcal{K}_{\infty}^{c}\)) if \(\alpha(0)=0\), \(\alpha\) is increasing, and \(\lim_{r\to\pm\infty}\alpha(r)=\pm\infty\).
Similar to CLFs, any feedback controller \(\mathbf{k}:\,\mathbb{R}^{n}\to\mathbb{R}^{m}\) satisfying the inequality \(L_{\mathbf{f}}h(\mathbf{x})+L_{\mathbf{g}}h(\mathbf{x})\mathbf{k}(\mathbf{x}) \geq-\alpha(h(\mathbf{x}))\) ensures that \(h\) remains positive along closed-loop trajectories, and therefore renders \(\mathcal{C}\) forward invariant [1]. Perhaps the greatest utility of CBFs is their ability to act as a _safety filter_ for a nominal feedback controller \(\mathbf{k}_{\mathrm{d}}:\mathbb{R}^{n}\to\mathbb{R}^{m}\). A safety filter is a controller that modifies \(\mathbf{k}_{\mathrm{d}}\) - preferably, in a minimally invasive fashion - so that the resulting closed-loop system is safe. The most common examples of such safety filters are instantiated via the QP:
\[\begin{split}\mathbf{k}(\mathbf{x})=\operatorname*{arg\,min}_{ \mathbf{u}\in\mathbb{R}^{m}}&\frac{1}{2}\|\mathbf{u}-\mathbf{k}_{ \mathrm{d}}(\mathbf{x})\|^{2}\\ &\text{subject to}& L_{\mathbf{f}}h(\mathbf{x})+L_{ \mathbf{g}}h(\mathbf{x})\mathbf{u}\geq-\alpha(h(\mathbf{x})),\end{split} \tag{11}\]
which modifies \(\mathbf{k}_{\mathrm{d}}\) in the \(L_{\mathbf{g}}h^{\top}\) direction:
\[\mathbf{k}(\mathbf{x})=\mathbf{k}_{\mathrm{d}}(\mathbf{x})+\lambda(a(\mathbf{x}),b(\mathbf{x}))L_{\mathbf{g}}h(\mathbf{x})^{\top}, \tag{12}\]
with a scalar function \(\lambda:\mathbb{R}\times\mathbb{R}\to\mathbb{R}\), where:
\[\begin{split} a(\mathbf{x})\coloneqq& L_{\mathbf{f}}h( \mathbf{x})+L_{\mathbf{g}}h(\mathbf{x})\mathbf{k}_{\mathrm{d}}(\mathbf{x})+ \alpha(h(\mathbf{x}))\\ b(\mathbf{x})\coloneqq&\|L_{\mathbf{g}}h(\mathbf{ x})\|^{2}.\end{split} \tag{13}\]
For the QP in (11), \(\lambda\) is the Lagrange multiplier associated with the constraint and is given by:
\[\lambda(a,b)=\lambda_{\mathrm{QP}}(a,b)\coloneqq\begin{cases}0&b=0\\ \mathrm{ReLU}(-a/b)&b>0,\end{cases} \tag{14}\]
where \(\mathrm{ReLU}(y)\coloneqq\max\{0,y\}\). This QP-based controller has the advantage of being pointwise optimal, but is not smooth even if the problem data itself is smooth. Before proceeding, we make precise the notion of a smooth safety filter.
**Definition 2**.: Given a CBF \(h:\mathbb{R}^{n}\to\mathbb{R}\) and nominal controller \(\mathbf{k}_{\mathrm{d}}:\mathbb{R}^{n}\to\mathbb{R}^{m}\) for (1), a controller \(\mathbf{k}:\mathbb{R}^{n}\to\mathbb{R}^{m}\) of the form (12) is said to be a _smooth safety filter_ for (1) with respect to \(\mathcal{C}\) if \(\mathbf{k}\) is smooth and for all \(\mathbf{x}\in\mathbb{R}^{n}\)
\[\underbrace{a(\mathbf{x})+b(\mathbf{x})\lambda(a(\mathbf{x}),b(\mathbf{x}))}_{L_{\mathbf{f}}h(\mathbf{x})+L_{\mathbf{g}}h(\mathbf{x})\mathbf{k}(\mathbf{x})+\alpha(h(\mathbf{x}))}\geq 0. \tag{15}\]
In the following section, we present our characterization of smooth safety filters.
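Before moving to that characterization, a minimal numerical sketch of the pointwise-optimal filter (11)-(14) is given below as a reference implementation; the CBF \(h\), its gradient, the dynamics \(\mathbf{f},\mathbf{g}\), and the nominal controller are supplied by the caller, and the linear choice \(\alpha(s)=s\) is an assumption made here for illustration.

```python
import numpy as np

def qp_safety_filter(x, k_d, h, grad_h, f, g, alpha=lambda s: s):
    """Closed-form solution (12)-(14) of the CBF-QP (11) with a single affine constraint."""
    Lfh = grad_h(x) @ f(x)
    Lgh = grad_h(x) @ g(x)
    a = Lfh + Lgh @ k_d(x) + alpha(h(x))          # a(x) in (13)
    b = float(Lgh @ Lgh)                          # b(x) in (13)
    lam = 0.0 if b == 0.0 else max(0.0, -a / b)   # lambda_QP in (14)
    return k_d(x) + lam * Lgh
```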
## III Towards Smooth Safety Filters
A Sontag-like safety filter can also be derived using Theorem 1. Here, we use the same function as in (9), taking the other root to the quadratic function:
\[\lambda_{\mathrm{S}}(a,b)\coloneqq\begin{cases}0&b=0\\ \frac{-a+\sqrt{a^{2}+q(b)b}}{b}&b>0.\end{cases} \tag{16}\]
One can verify \(\lambda_{\mathrm{S}}\) is continuous and satisfies (6) and (7) on
\[\mathcal{S}=\{(a,b)\in\mathbb{R}\times\mathbb{R}_{\geq 0}\::a>0\:\lor\:b>0\}, \tag{17}\]
implying \(\lambda_{\mathrm{S}}\) is smooth on (17) by Theorem 1 and therefore produces a smooth safety filter via (12) as one can verify \(a+b\lambda_{\mathrm{S}}(a,b)\geq 0\). Nevertheless, the resulting safety filter is overly conservative as illustrated by the following example.
**Example 1**.: Consider a single integrator \(\dot{\mathbf{x}}=\mathbf{u}\) tasked with reaching the origin while avoiding a circular obstacle of radius \(r_{o}\in\mathbb{R}_{>0}\) located at \(\mathbf{x}_{o}\in\mathbb{R}^{2}\). This task can be accomplished by designing a safety filter for the nominal controller \(\mathbf{k}_{\mathrm{d}}(\mathbf{x})=-\mathbf{x}\) using \(h(\mathbf{x})=\|\mathbf{x}-\mathbf{x}_{o}\|^{2}-r_{o}^{2}\) as a CBF, which we construct using the QP in (11) and the Sontag safety filter defined by (12) and (16) with \(q(b)=\sigma b\), \(\sigma\in\mathbb{R}_{>0}\). The trajectory of the single integrator under each safety filter is provided in Fig. 2(a) for different \(\sigma\). Decreasing \(\sigma\) decreases the conservatism of the controller, but it is still overly invasive even for arbitrarily small \(\sigma\).
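A minimal simulation sketch of this example is given below. The obstacle center, radius, integration step, and the linear choice \(\alpha(s)=2s\) are assumptions for illustration, and the Sontag-type gain uses \(q(b)=\sigma b\) as in the example.

```python
import numpy as np

def lam_sontag(a, b, sigma=0.2):
    """Sontag-type gain (16) with q(b) = sigma * b."""
    return 0.0 if b == 0.0 else (-a + np.sqrt(a**2 + sigma * b * b)) / b

def simulate(lam_fn, x0=(-4.0, 3.9), x_obs=(-2.0, 2.0), r_obs=1.0,
             alpha_gain=2.0, dt=1e-3, T=10.0):
    """Single integrator xdot = u with nominal k_d(x) = -x filtered through (12)."""
    x, x_obs = np.array(x0, float), np.array(x_obs, float)   # obstacle location is assumed
    traj = [x.copy()]
    for _ in range(int(T / dt)):
        k_d = -x                                   # nominal go-to-origin controller
        grad_h = 2.0 * (x - x_obs)                 # gradient of h(x) = ||x - x_obs||^2 - r_obs^2
        h = float((x - x_obs) @ (x - x_obs)) - r_obs**2
        a = grad_h @ k_d + alpha_gain * h          # Lfh = 0 for the single integrator
        b = float(grad_h @ grad_h)
        x = x + dt * (k_d + lam_fn(a, b) * grad_h)
        traj.append(x.copy())
    return np.array(traj)

traj = simulate(lam_sontag)   # the trajectory stays outside the obstacle, cf. Fig. 2(a)
```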
Motivated by the previous example, we set out to develop a general class of smooth safety filters that are less invasive by analyzing how the choice of \(F\) impacts the behavior of the resulting safety filter. Towards this development, we aim to answer the following: What properties should a function \((a,b,p)\mapsto F(a,b,p)\) satisfy so that one of its roots produces a smooth safety filter, and how may this function be selected such that the resulting safety filter is minimally invasive?
### _Minimally Invasive Smooth Safety Filters_
To answer the question posed in the previous subsection, let \((a,b,p)\mapsto F(a,b,p)\) be a smooth function and suppose there exists a continuous function \(\lambda:\mathcal{S}\to\mathbb{R}\) satisfying the conditions of Theorem 1, which implies that \(\lambda\) is smooth on \(\mathcal{S}\). We are also interested in ensuring that:
\[a+b\lambda(a,b)\geq 0, \tag{18}\]
for all \((a,b)\in\mathcal{S}\) so that \(\lambda\) may be used to construct a smooth safety filter satisfying (15). Note that \(F\) in (9) is constructed such that \(\frac{\partial F}{\partial p}=2(a+b\lambda_{\mathrm{S}}(a,b))>0\) directly implies the satisfaction of (18). However, we demonstrate that this is not the only path towards certifying that one of \(F\)'s roots produces a smooth safety filter. In what follows, we introduce more general sufficient conditions on \(F\) so that one of its roots produces a smooth safety filter.
Our starting point is one of the direct consequences of Theorem 1 - the function \(\lambda\) must satisfy:
\[\frac{\partial\lambda}{\partial a}(a,b)=-\frac{\frac{\partial F}{\partial a}(a,b,\lambda(a,b))}{\frac{\partial F}{\partial p}(a,b,\lambda(a,b))}, \tag{19}\]
for all \((a,b)\in\mathcal{S}\), implying it is a solution to the ODE:
\[\frac{\mathrm{d}p}{\mathrm{d}a}=-\frac{\frac{\partial F}{\partial a}(a,b,p)}{ \frac{\partial F}{\partial p}(a,b,p)}, \tag{20}\]
from an appropriate initial condition2. For a fixed \(b=b^{*}\geq 0\), the trajectory \(a\mapsto\lambda(a,b^{*})\) defines a curve \(\{(a,b,p)\in\mathcal{S}\times\mathbb{R}\,:\,p=\lambda(a,b),\ b=b^{*}\}\).
Fig. 2: (a) Comparison between the trajectory of a single integrator generated by the QP controller (blue) in (11) and Sontag safety filter (red) from (16) for Example 1. Each safety filter is implemented from the initial condition \(\mathbf{x}_{0}=(-4,3.9)\) with \(\alpha=2\). The values of \(\sigma\) are varied from 0.2 to 0.001. (b) Illustration of the sets \(\mathcal{H}_{b}\) (dotted black line) and \(\mathcal{H}_{\mathrm{f}}^{c}\) (dashed black line) from (21) and (28), respectively, for a fixed \(b\). (c-d) Trajectories of the single integrator under the influence of the smooth safety filters from (27a) (c) and (27b) (d). The trajectories are generated with \(\sigma\) varying from 1 to 0.01 for (c) and from 0.7 to 0.01 for (d). In each plot, more transparent curves correspond to smaller values of \(\sigma\).
As \(b\) is varied, this curve defines an entire surface \(\{(a,b,p)\in\mathcal{S}\times\mathbb{R}\,:\,p=\lambda(a,b)\}\), thereby recovering \((a,b)\mapsto\lambda(a,b)\) as a function of both \(a\) and \(b\) (cf. Fig. 1). Although working with an ODE is generally more challenging than working with an algebraic equation, this new perspective allows us to reformulate the goal of satisfying inequality (18) as a set invariance problem. To do this, we define, for each \(b\geq 0\), the set-valued map3
Footnote 3: One may think of (21) as a collection of “time-varying” safe sets (recall that \(a\) is our “time” variable) for the dynamics in (19).
\[\mathcal{H}_{b}(a)\coloneqq\{p\in\mathbb{R}\,:\,a+bp\geq 0\}. \tag{21}\]
Ultimately, we would like to ensure that for each fixed \(b=b^{*}\geq 0\), the solution to (19) satisfies \(\lambda(a,b^{*})\in\mathcal{H}_{b^{*}}(a)\) for all \((a,b^{*})\in\mathcal{S}\) so that we can conclude that \(\lambda\) satisfies (18) for all \((a,b)\in\mathcal{S}\). The following proposition constitutes the first result of this paper, establishing conditions on \(F\) such that the flow of (20) satisfies inequality (18).
**Proposition 1**.: _Let \(F:\mathbb{R}^{3}\to\mathbb{R}\) be a smooth function that defines the dynamics in (20). Suppose that for all \(b>0\):_
\[\frac{\partial F}{\partial p}\bigg{|}_{p=-\frac{a}{b}}=b\frac{\partial F}{\partial a}\bigg{|}_{p=-\frac{a}{b}}, \tag{22}\]
_Then, for all \(b>0\), the set \(\mathcal{H}_{b}\) is invariant for (20)._
Proof.: We begin by noting that the dynamics (20) may not be well-defined for a given initial condition \(p_{0}(a_{0})\in\mathcal{H}_{b}(a_{0})\) since we may have \(\frac{\partial F}{\partial p}(a_{0},b,p_{0})=0\). Nevertheless, there exists no solution4 from such initial conditions, and the empty solution is contained in \(\mathcal{H}_{b}\).
Footnote 4: We consider solutions in the classical sense.
For other initial conditions \(p_{0}(a_{0})\in\mathcal{H}_{b}(a_{0})\) where the dynamics are well-defined, there exists an interval of existence \(I(p_{0}(a_{0}))\subset\mathbb{R}\) for the solution \(\lambda(\cdot,b):I(p_{0}(a_{0}))\to\mathbb{R}\) from each initial condition. We now show that when \(b>0\), the condition \(a+b\lambda(a,b)\geq 0\) holds for all \(a\in I(p_{0}(a_{0}))\). To this end, we define:
\[h_{b}(p,a)=a+bp. \tag{23}\]
Because \(\frac{\partial h_{b}}{\partial p}(p,a)=b>0\), zero is a regular value of \(h_{b}\); hence, the condition \(h_{b}(\lambda(a,b),a)\geq 0\) holds along the trajectory provided that the total derivative \(\frac{\mathrm{d}h_{b}}{\mathrm{d}a}=0\) whenever \(h_{b}(p,a)=0\), i.e., when \(p=-a/b\). Note that we use equality in the above (rather than an inequality) since we must show invariance of \(\mathcal{H}_{b}\), not just forward invariance, since \(a\) may take any value in \(\mathbb{R}\). Computing the derivative of \(h_{b}\) along the trajectory of (20) yields:
\[\frac{\mathrm{d}h_{b}}{\mathrm{d}a}=\frac{\partial h_{b}}{\partial a}+\frac{\partial h_{b}}{\partial p}\frac{\mathrm{d}p}{\mathrm{d}a}=1-b\bigg{[}\frac{\partial F}{\partial p}\bigg{]}^{-1}\frac{\partial F}{\partial a}.\]
It follows from (22) that the above evaluates to zero when \(h_{b}(p,a)=0\) so long as \(\frac{\partial F}{\partial p}\neq 0\), which must be true for the solution to exist, and implies \(\mathcal{H}_{b}\) is invariant, as desired.
Proposition 1 provides a simple condition, (22), that one can use to assess whether an implicit function \(\lambda\) associated with a given smooth function \(F\) will satisfy inequality (18). The key idea is that, with Proposition 1, if \(\lambda\) satisfies (18) for some \((a^{*},b^{*})\in\mathcal{S}\), it will satisfy (18) for all \(a\) in its interval of existence for the same \(b=b^{*}\). The following lemma establishes this idea formally and gives additional conditions on \(\lambda\) so that it satisfies (18) for all \((a,b)\in\mathcal{S}\).
**Lemma 1**.: _Let \(F:\mathbb{R}^{3}\to\mathbb{R}\) be a smooth function satisfying (22) for all \(b>0\), and suppose there exists a continuous function \(\lambda:\mathcal{S}\to\mathbb{R}\) satisfying (6) and (7) for all \((a,b)\in\mathcal{S}\) as in (17). Then, if \(\lambda(0,b)>0\) for all \(b>0\), \(\lambda\) is smooth on \(\mathcal{S}\) and satisfies inequality (18) for all \((a,b)\in\mathcal{S}\)._
Proof.: The proof is divided into two cases: i) \(b>0\) and ii) \(b=0\). For each \(b>0\), consider an initial condition of (20), with \(a_{0}=0\), satisfying \(p_{0}^{b}=\lambda(0,b)>0\). Since \(F\) and \(\lambda\) satisfy (6) and (7), the conditions of Theorem 1 hold, which implies \(\lambda\) is smooth on \(\mathcal{S}\). Moreover, \(\lambda(a,b)\) is the unique solution to the ODE (20) by definition since, by Theorem 1, it must satisfy (8) for all \((a,b)\in\mathcal{S}\). Thus, the maximal interval of existence of this solution must be the domain of \(\lambda\), which is equal to \(\mathbb{R}\) when \(b>0\). Moreover, since \(p_{0}^{b}\in\mathcal{H}_{b}(0)\) by definition and \(F\) satisfies (22), the conditions of Proposition 1 hold, which implies \(\lambda(a,b)\in\mathcal{H}_{b}(a)\) for all \(a\in\mathbb{R}\) for each \(b>0\). When \(b=0\) any value of \(\lambda(a,b)\) satisfies \(\lambda(a,b)\in\mathcal{H}_{b}(a)\) for all \(a>0\). Thus, \(\lambda(a,b)\in\mathcal{H}_{b}(a)\) for all \((a,b)\in\mathcal{S}\), which implies \(\lambda\) satisfies (18), as desired.
Lemma 1 suggests that, with condition (22) from Proposition 1, \(F\) only needs to be constructed so that \(\lambda\) is positive when \(a=0\) for it to generate a smooth safety filter. Note that the above result does not guarantee the existence of a continuous \(\lambda:\mathcal{S}\to\mathbb{R}\) satisfying (6) and (7), but states that, if such a function exists, then it is smooth on \(\mathcal{S}\) and satisfies inequality (18). We will provide examples of \(F\) satisfying these conditions shortly. First, we combine Proposition 1 and Lemma 1 to establish the main result of this paper, which formalizes the construction of smooth safety filters.
**Theorem 2**.: _Consider system (1) with a smooth nominal controller \(\mathbf{k}_{\mathrm{d}}:\mathbb{R}^{n}\to\mathbb{R}^{m}\) and let \(h:\mathbb{R}^{n}\to\mathbb{R}\) be a CBF for (1) on a set \(\mathcal{C}\subset\mathbb{R}^{n}\) as in (10). Let \(\lambda:\mathcal{S}\to\mathbb{R}\) satisfy the conditions of Lemma 1 for some smooth function \(F:\mathbb{R}^{3}\to\mathbb{R}\). Then, the controller:_
\[\mathbf{k}_{\mathrm{s}}(\mathbf{x})=\mathbf{k}_{\mathrm{d}}(\mathbf{x})+\lambda (a(\mathbf{x}),b(\mathbf{x}))L_{\mathrm{g}}h(\mathbf{x})^{\top}, \tag{24}\]
_where \(a:\mathbb{R}^{n}\to\mathbb{R}\) and \(b:\mathbb{R}^{n}\to\mathbb{R}\) are as in (13), is a smooth safety filter for (1)._
Proof.: The proof follows directly from Lemma 1 since \(\lambda\) is smooth and satisfies (18), implying (24) satisfies (15).
**Example 2**.: We use our results to construct the functions:
\[F_{1}(a,b,p)=bp^{2}+ap-\tfrac{1}{4}q(b), \tag{25a}\] \[F_{2}(a,b,p)=e^{\frac{p}{\sigma}}-e^{-\frac{a}{\sigma b}}-1, \tag{25b}\]
where \(q\) is as in (9) and \(\sigma\in\mathbb{R}_{>0}\). These functions satisfy (22) and the conditions of Lemma 1 where:
\[F_{1}(0,b,p)=bp^{2}-\tfrac{1}{4}q(b)=0\implies\lambda(0,b)=\pm\tfrac{1}{2} \sqrt{\tfrac{q(b)}{b}},\]
\[F_{2}(0,b,p)=e^{\frac{p}{\sigma}}-2=0\implies\lambda(0,b)=\sigma\log(2).\]
Hence, for each \(F\) there exists a continuous \(\lambda\) satisfying \(\lambda(0,b)>0\) for all \(b>0\) (recall that \(q(b)>0\) for all \(b>0\)). After verifying these conditions, we proceed to compute the implicit functions satisfying (6):
\[\lambda_{1}(a,b)=\tfrac{1}{2}\lambda_{\mathrm{S}}(a,b)=\begin{cases}0&b=0\\ \frac{-a+\sqrt{a^{2}+bq(b)}}{2b}&b>0\end{cases} \tag{27a}\] \[\lambda_{2}(a,b)=\begin{cases}0&b=0\\ \sigma\log\left(1+e^{-\frac{a}{\sigma b}}\right)&b>0,\end{cases} \tag{27b}\]
which are continuous on \(\mathcal{S}\), and therefore smooth on \(\mathcal{S}\) as one can verify that \(\frac{\partial F}{\partial p}(a,b,\lambda(a,b))\neq 0\) for all \((a,b)\in\mathcal{S}\). These functions are plotted in Fig. 2(b) for a fixed \(b\). As guaranteed by Lemma 1, each \(\lambda\) satisfies inequality (18). Moreover, when \(q(b)=\sigma b\) both (27a) and (27b) approach \(\lambda_{\mathrm{QP}}\) in (14) in the limit as \(\sigma\to 0\). Indeed, when \(b>0\) both (27a) and (27b) are smooth approximations of the \(\mathrm{ReLU}\) function, corresponding to the Squareplus approximation (yielding a "Half-Sontag" formula) and the Softplus approximation, respectively [20]. The functions in (27) are used to construct smooth safety filters via Theorem 2, and are applied to the scenario from Example 1, cf. Fig. 2(c-d).
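The following sketch implements (27a)-(27b) with the assumed choice \(q(b)=\sigma b\), numerically checks that both gains satisfy (18) and approach \(\lambda_{\mathrm{QP}}\) as \(\sigma\to 0\), and symbolically verifies condition (22) for \(F_{1}\) and \(F_{2}\).

```python
import numpy as np
import sympy as sp

def lam_qp(a, b):
    return 0.0 if b == 0.0 else max(0.0, -a / b)

def lam_half_sontag(a, b, sigma=0.1):              # (27a) with q(b) = sigma * b
    return 0.0 if b == 0.0 else (-a + np.sqrt(a**2 + sigma * b * b)) / (2.0 * b)

def lam_softplus(a, b, sigma=0.1):                 # (27b), written with a stable softplus
    return 0.0 if b == 0.0 else sigma * np.logaddexp(0.0, -a / (sigma * b))

# Numerical check of (18) and of convergence to lambda_QP as sigma -> 0.
rng = np.random.default_rng(0)
for a, b in rng.uniform([-5.0, 0.1], [5.0, 5.0], size=(1000, 2)):
    for lam in (lam_half_sontag, lam_softplus):
        assert a + b * lam(a, b) >= -1e-9
        assert abs(lam(a, b, sigma=1e-4) - lam_qp(a, b)) < 1e-2

# Symbolic check that F_1 and F_2 in (25) satisfy condition (22) when q(b) = sigma * b.
aa, bb, pp, ss = sp.symbols("a b p sigma", positive=True)
F1 = bb * pp**2 + aa * pp - sp.Rational(1, 4) * ss * bb
F2 = sp.exp(pp / ss) - sp.exp(-aa / (ss * bb)) - 1
for F in (F1, F2):
    lhs = sp.diff(F, pp).subs(pp, -aa / bb)
    rhs = bb * sp.diff(F, aa).subs(pp, -aa / bb)
    assert sp.simplify(lhs - rhs) == 0
```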
### _Robust Smooth Safety Filters_
The results in the previous subsection provide sufficient conditions under which the solution \((a,b)\mapsto\lambda(a,b)\) of (20) satisfies inequality (18) for all \((a,b)\in\mathcal{S}\). Such conditions require \(\mathcal{H}_{b}\) to be invariant for (20), which precludes the consideration of safety filters that remain in a strict subset of \(\mathcal{H}_{b}\). For example, even though Sontag's formula in (16) satisfies inequality (18) for all \((a,b)\in\mathcal{S}\), one can verify that the \(F\) producing this formula (9) does not satisfy (22). This is because the formula is contained within the set:
\[\mathcal{H}_{b}^{\varepsilon}(a)\coloneqq\{p\in\mathbb{R}\,:\,a+\tfrac{1}{ \varepsilon}bp\geq 0\}, \tag{28}\]
for \(\varepsilon=2\), cf. Fig 2(b). The above set satisfies \(\mathcal{H}_{b}^{\varepsilon}(a)\subseteq\mathcal{H}_{b}(a)\) for all \(a\leq 0\) and \(\varepsilon\geq 1\). When \(a>0\), \(\mathcal{H}_{b}^{\varepsilon}(a)\nsubseteq\mathcal{H}_{b}(a)\) for any \(\varepsilon\geq 1\). In this situation, the fact that Sontag's formula satisfies inequality (18) relies on the fact that it is always positive, \(\lambda(a,b)\geq 0\). Motivated by this observation, in this subsection we study the invariance of \(\mathcal{H}_{b}^{\varepsilon}\cap\mathbb{R}_{\geq 0}\), noting that:
\[\varepsilon\geq 1\implies\mathcal{H}_{b}^{\varepsilon}(a)\cap\mathbb{R}_{ \geq 0}\subset\mathcal{H}_{b}(a),\]
for all \((a,b)\in\mathcal{S}\). Taking this intersection imposes the additional requirement that \(\lambda(a,b)\geq 0\) for all \((a,b)\in\mathcal{S}\), which is not restrictive since negative values of \(\lambda\) imply the resulting controller is pushing in the wrong direction (i.e., toward the constraint boundary) and is attempting to violate (18). As illustrated in Fig. 1 and Fig. 2(b), increasing \(\varepsilon\) lifts the boundary of \(\mathcal{H}_{b}^{\varepsilon}\) off that of \(\mathcal{H}_{b}\), leading to a more restricted set of values that \(\lambda(a,b)\) may achieve. Although increasing \(\varepsilon\) imposes a more conservative condition on \(\lambda\), it adds an additional robustness margin to the resulting safety filter, which, as demonstrated in Sec. IV, may be useful in practice. The following proposition establishes conditions on \(F\) such that the flow of (19) satisfies the tightened condition in (28).
**Proposition 2**.: _Let \(F:\mathbb{R}^{3}\to\mathbb{R}\) be a smooth function that defines the dynamics in (20). Suppose that for all \(b>0\):_
\[\varepsilon\frac{\partial F}{\partial p}\Big{|}_{p=-\frac{a}{b}}=b\frac{ \partial F}{\partial a}\Big{|}_{p=-\frac{a}{b}},\quad\frac{\partial F}{ \partial a}\Big{|}_{p=0}=0, \tag{29}\]
_Then, for each \(b>0\), the set \(\mathcal{H}_{b}^{\varepsilon}\cap\mathbb{R}_{\geq 0}\) is invariant for (20)._
Proof.: Showing the invariance of \(\mathcal{H}_{b}^{\varepsilon}\) follows the same steps as that of Proposition 1 by replacing \(h_{b}\) from (23) with \(h_{b}^{\varepsilon}(p,a)\coloneqq a+\tfrac{1}{\varepsilon}bp\), which defines \(\mathcal{H}_{b}^{\varepsilon}(a)\) as its zero superlevel set. To show that \(\mathbb{R}_{\geq 0}=\{p\in\mathbb{R}\,:\,p\geq 0\}\) is invariant, we define \(h_{p}(p)\coloneqq p\), which defines \(\mathbb{R}_{\geq 0}\) as its zero superlevel set and satisfies \(\frac{\partial h_{p}}{\partial p}\neq 0\), implying zero is a regular value of \(h_{p}\). Hence, \(\mathbb{R}_{\geq 0}\) is invariant provided that:
\[\frac{\mathrm{d}h_{p}}{\mathrm{d}a}=\frac{\mathrm{d}p}{\mathrm{d}a}=-\frac{ \frac{\partial F}{\partial a}(a,b,p)}{\frac{\partial F}{\partial p}(a,b,p)},\]
evaluates to zero when \(p=0\), which follows from (29) provided \(\frac{\partial F}{\partial p}\neq 0\) when \(p=0\). Using a similar argument to that in the proof of Proposition 1, we may exclude points satisfying \(\frac{\partial F}{\partial p}=0\) in the above, since such points cannot lie along any trajectory produced by the dynamics in (20). Thus, since both \(\mathcal{H}_{b}^{\varepsilon}\) and \(\mathbb{R}_{\geq 0}\) are invariant for (20) and the intersection of invariant sets is invariant [7, Prop. 4.13], it follows that \(\mathcal{H}_{b}^{\varepsilon}\cap\mathbb{R}_{\geq 0}\) is invariant for (20), as desired.
Similar to Proposition 1, the above result provides a simple condition, (29), that one may use to help determine if an implicit function \(\lambda\) satisfying (6) and (7) will satisfy inequality (18). Note that results similar to Lemma 1 and Theorem 2 can be stated for Proposition 2, the formal statements of which we omit here in the interest of space. As noted earlier, Sontag's \(F\) in (9) satisfies the conditions of Proposition 2, with \(\varepsilon=2\). The following example introduces a Sontag-like safety filter that satisfies such conditions for any \(\varepsilon\geq 1\).
**Example 3**.: The smooth safety filters from Example 2 can approximate the QP-based safety filter (12) arbitrarily closely. Generally, this is desirable; however, in certain situations, such as when handling uncertainty, one may wish to modulate how conservative the resulting safety filter is by tuning the value of \(\varepsilon\) (cf. Fig. 1). For this, we introduce:
\[F(a,b,p)=bp^{2}+\varepsilon ap-\tfrac{\varepsilon^{2}}{4}q(b), \tag{30}\]
which generalizes both (9) and (25a) so that \(F\) satisfies Proposition 2 for any \(\varepsilon\geq 1\) and produces a robust version of Sontag's formula, \(\lambda(a,b)=\tfrac{\varepsilon}{2}\lambda_{\mathrm{S}}(a,b)\).
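A small sketch of the resulting gain, again assuming \(q(b)=\sigma b\), shows how \(\varepsilon\) scales the filter while maintaining the tightened margin of (28).

```python
import numpy as np

def lam_robust_sontag(a, b, eps=2.0, sigma=0.1):
    """Positive root of (30) with q(b) = sigma * b; equals (eps/2) * lambda_S(a, b)."""
    if b == 0.0:
        return 0.0
    return eps * (-a + np.sqrt(a**2 + sigma * b * b)) / (2.0 * b)

a, b = -1.0, 2.0
for eps in (1.0, 2.0, 4.0):     # eps = 1 recovers (27a); eps = 2 recovers Sontag's formula (16)
    lam = lam_robust_sontag(a, b, eps)
    print(eps, lam, a + b * lam / eps)   # the tightened margin a + (1/eps) b lam stays >= 0
```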
**Remark 1**.: Our approach allows for characterizing safety filters via \(\varepsilon\). Although various choices of \(F\) may contain tuning parameters, the behavior of such a safety filter is limited by the value of \(\varepsilon\). When \(\varepsilon=1\), \(\lambda(a,b)\) may approach the boundary \(a+b\lambda(a,b)=0\) of constraint (18), whereas when \(\varepsilon>1\), \(\lambda(a,b)\) may only approach the boundary of the tightened constraint \(a+\tfrac{1}{\varepsilon}b\lambda(a,b)\geq 0\), resulting in a more conservative but also more robust safety filter (cf. Fig 2(b)).
## IV Numerical Examples
This section illustrates the practical benefits of smooth safety filters by applying them to the model-free safety-critical control paradigm introduced in [10]. Here, we design a safety filter for a reduced-order model, the safe trajectory of which is tracked by the full-order dynamics in a model-free fashion. We consider the same setting as in [10, Ex. 2], which involves designing a controller for a planar Segway with configuration \((x,\varphi)\in\mathbb{R}^{2}\), where \(x\) is the position and \(\varphi\) the pitch angle, with the objective of driving forward at a desired velocity \(\dot{x}_{\rm d}\) and stopping before colliding with a wall located at \(x_{\rm max}\). This leads to the safety constraint \(h(x)=x_{\rm max}-x\), which is used as a CBF for a one-dimensional single integrator to construct a safety filter \(k_{0}:\mathbb{R}\to\mathbb{R}\) that produces a safe velocity for the Segway. On the full-order dynamics, this velocity is tracked by the PD controller:
\[\mathbf{k}(\mathbf{x})=K_{\rm p}(\dot{x}-k_{0}(x))+K_{\varphi}\varphi+K_{\dot{\varphi}}\dot{\varphi}\]
that also attempts to keep the Segway upright, where \(\mathbf{x}=(x,\varphi,\dot{x},\dot{\varphi})\in\mathbb{R}^{4}\) is the state and \(K_{\rm p},K_{\varphi},K_{\dot{\varphi}}\in\mathbb{R}_{>0}\) are gains. Implementation of this controller does not require knowledge of the full-order dynamics, which may be uncertain or difficult to compute, and allows for enforcing _input-to-state_ safety [13] of the closed-loop system [10, Prop. 1].
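A hedged sketch of this layered design is given below: the reduced-order model is the single integrator \(\dot{x}=u\) with CBF \(h(x)=x_{\max}-x\), for which \(L_{\mathbf{f}}h=0\), \(L_{\mathbf{g}}h=-1\), and \(b=1\) in (13); the gains, wall location, desired velocity, and the linear choice of \(\alpha\) are placeholder values rather than those used in [10].

```python
def k0_safe_velocity(x, lam_fn, xdot_des=1.0, x_max=2.0, alpha_gain=1.0):
    """Safety filter (12) on the reduced-order model xdot = u with h(x) = x_max - x."""
    h = x_max - x
    a = -xdot_des + alpha_gain * h      # a(x) = Lfh + Lgh*k_d + alpha(h), with Lgh = -1
    b = 1.0                             # b(x) = |Lgh|^2
    return xdot_des - lam_fn(a, b)      # k_0 = k_d + lam * Lgh = k_d - lam

def tracking_controller(state, k0, Kp=10.0, Kphi=50.0, Kdphi=5.0):
    """PD layer tracking the safe velocity while keeping the Segway upright (assumed gains)."""
    x, phi, xdot, dphi = state
    return Kp * (xdot - k0(x)) + Kphi * phi + Kdphi * dphi
```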
We now compare the response of the Segway when the safe velocity is generated by a QP-based safety filter and the robust Sontag safety filter from (30) for different values of \(\varepsilon\). We begin by using the same parameters for the controller as in [10, Ex. 2], the results of which are shown in Fig. 3(a). Here, both controllers safely track the reference velocity, and the response of the smooth controller approaches that of the QP controller as \(\varepsilon\to 1\). Although this approach does not directly rely on model knowledge, it relies on tuning the gains of the tracking controller to achieve safety. In general, safety can be achieved by increasing the proportional gain \(K_{\rm p}\) to track the reference velocity more aggressively. When increasing \(K_{\rm p}\) too much, however, we observe that the controller attempting to track a non-differentiable reference signal causes instabilities and safety violation5. In contrast, the same controller attempting to track a smooth reference velocity is more oscillatory, but maintains safety. On the other extreme, taking \(K_{\rm p}\) too low (see Fig. 3(c)) results in safety violation for the QP controller and smooth controller with \(\varepsilon=1\), whereas controllers with \(\varepsilon>1\) maintain safety due to their increased robustness.
Footnote 5: Note that the results in [10] rely on differentiability of \(k_{0}\).
## V Conclusion
We presented a general characterization of smooth safety filters based on the Implicit Function Theorem, leading to smooth universal formulas for safety-critical control that enable quantifying the conservatism of the resulting safety filter. The practical benefits of such smooth safety filters were showcased through their application to safety-critical control with reduced-order models [10]. Future efforts will focus on extending our approach to multiple safety constraints and showcasing the benefits of smooth safety filters on hardware.
|
2309.06661 | Sound field decomposition based on two-stage neural networks | A method for sound field decomposition based on neural networks is proposed.
The method comprises two stages: a sound field separation stage and a
single-source localization stage. In the first stage, the sound pressure at
microphones synthesized by multiple sources is separated into one excited by
each sound source. In the second stage, the source location is obtained as a
regression from the sound pressure at microphones consisting of a single sound
source. The estimated location is not affected by discretization because the
second stage is designed as a regression rather than a classification. Datasets
are generated by simulation using Green's function, and the neural network is
trained for each frequency. Numerical experiments reveal that, compared with
conventional methods, the proposed method can achieve higher
source-localization accuracy and higher sound-field-reconstruction accuracy. | Ryo Matsuda, Makoto Otani | 2023-09-13T01:32:46Z | http://arxiv.org/abs/2309.06661v1 | # Sound field decomposition based on two-stage neural networks
###### Abstract
A method for sound field decomposition based on neural networks is proposed. The method comprises two stages: a sound field separation stage and a single-source localization stage. In the first stage, the sound pressure at microphones synthesized by multiple sources is separated into one excited by each sound source. In the second stage, the source location is obtained as a regression from the sound pressure at microphones consisting of a single sound source. The estimated location is not affected by discretization because the second stage is designed as a regression rather than a classification. Datasets are generated by simulation using Green's function, and the neural network is trained for each frequency. Numerical experiments reveal that, compared with conventional methods, the proposed method can achieve higher source-localization accuracy and higher sound-field-reconstruction accuracy.
Introduction
Sound field recording (i.e., recording the spatio-temporal distribution of sound pressures) is useful for better understanding the sound field through visualization and auralization of wave phenomena over a wide area. Sound field recording is an inverse problem that estimates the sound pressure at an arbitrary location in a region of interest from the sound pressure at discrete observation locations in space, such as at a single microphone array [1; 2; 3; 4], distributed microphones [5], or distributed microphone arrays [6; 7]. In a three-dimensional sound field, an arbitrary sound field can be represented by a linear combination of bases such as spherical harmonics and plane waves; therefore, we consider estimating the coefficients for these bases by regression. Once those coefficients are obtained, the sound field can be reproduced for the listener using a loudspeaker array [8; 9; 10; 11] or a set of headphones [12; 13].
When a sound field in a region is recorded, the representation of the sound field differs depending on whether the target region includes a sound source [14; 15]. In a region without a sound source (i.e., a region subject to the homogeneous Helmholtz equation), the sound field is represented through a straightforward spherical harmonic expansion or plane-wave expansion. However, in a region that includes a sound source, the sound field follows the inhomogeneous Helmholtz equation, which is an ill-posed problem, and cannot be directly expanded using those bases.
Therefore, a method has been proposed to decompose the sound field into a superposition of a small number of point sources by imposing sparsity on the distribution of sound sources as a constraint on the acoustical environment [16; 17]. The sparsity constraint improves the
estimation accuracy even in frequencies greater than the spatial Nyquist frequency. However, these methods require discretization of candidate positions for the sound source location onto a grid in advance and thus cannot accurately estimate the sound source locations when sound sources do not exist at the pre-assumed grid points. In addition, although reducing the grid interval improves the estimation accuracy, it also leads to an increase in computational complexity and memory because of the larger number of grid points.
In contrast to the aforementioned methods that discretize a priori assumed sound source positions, sound field decomposition methods based on the reciprocity gap functional (RGF) [18] and the RGF in the spherical harmonic domain [19] have been proposed as methods for gridless sound field decomposition. Because these methods can directly estimate sound source positions in closed form, they are not affected by grid discretization. However, because of the effect of spatial aliasing, the frequency band with high reproduction accuracy is limited by the number of microphones and their arrangement.
Many neural-network-based methods have been proposed in the fields of sound source localization and direction-of-arrival estimation in recent years [20]. Neural-network-based methods estimate the sound source positions either by classification or regression. Classification requires the prior discretization of candidate sound source positions and has the same off-grid problem encountered in the case of sparse sound field decomposition. By contrast, regression does not have the off-grid problem because the source positions can be obtained as the output of the network. In addition, the regression model has also shown better performance than classification for a single-source situation [21; 22]. However, the performance of source localization based solely on single-frequency sound field information is unclear because most regression models have been considered in the time domain [23] or time-frequency domain [24; 21; 22; 25] and are limited to specific sound source signals (e.g., speech).
Therefore, in the present study, we propose a sound field decomposition method that uses a regression-type neural network based solely on sound field information in a single frequency independent of the source information. The proposed method consists of two stages: In the first stage, the sound pressure at the microphones synthesized from multiple sound sources is separated into the sound pressure excited by each source. Then, in the second stage, the sound source position is obtained as a regression from the sound pressure at the microphone consisting of a single sound source. The strength of each sound source is obtained from the sound pressure in the microphone array after separation and the sound source position by linear regression. The structure of the neural networks is similar to that of the source-splitting proposed in [24]. However, the proposed method explicitly separates the contributions of the sound sources in the first stage using a loss function proposed in the present study. The proposed method also limits the number of sources in advance, which can impose sparsity constraints.
This paper is organized as follows: Section II defines the problem setting of sound field decomposition based on the sparsity of the source distribution. Section III describes our proposed method using neural networks; datasets and loss functions for training networks are also described. Section IV presents the numerical experiments and their results. Section. V concludes this study.
## II Sound field decomposition
### Preliminaries
Throughout this paper, the following notations are used: matrices and vectors are denoted by uppercase and lowercase boldface, respectively. The imaginary unit \(\sqrt{-1}\) is denoted by j. Wave number is denoted by \(k=\omega/c\), where \(\omega\) is the angular frequency and \(c\) is the sound velocity. The position vector is denoted by \(\mathbf{r}=(x,y,z)\in\mathbb{R}^{3}\) in the Cartesian coordinate system. The time dependency is assumed as \(\exp\left(\mathrm{j}\omega t\right)\) and is hereafter omitted for simplicity.
### Problem setting
Figure 1: Overview of the problem setting. Sound sources exist in the target region and microphones are distributed around the region.

Consider the reconstruction of the sound field in the region \(\Omega\) in \(\mathbb{R}^{3}\) including sound sources from the sound pressure measured by microphone sets \(\mathcal{M}\) discretely placed on the boundary \(\partial\Omega\) (Fig. 1). Because \(\Omega\) includes sources (i.e., singular points), the sound pressure in \(\Omega\) satisfies the following inhomogeneous Helmholtz equation:
\[(\nabla^{2}+k^{2})p(\mathbf{r},k)=-Q(\mathbf{r},k). \tag{1}\]
Here, \(p(\mathbf{r},k)\) represents the sound pressure of \(k\) at \(\mathbf{r}\in\Omega\), \(Q(\mathbf{r},k)\) denotes the source distribution in \(\Omega\), and \(\nabla^{2}\) is the Laplace operator. The solution satisfying Eq. (1) can be expressed in terms of the volume integral of the three-dimensional free field Green's function \(G(\mathbf{r}|\mathbf{r}^{\prime},k)\) and \(Q(\mathbf{r}^{\prime},k)\) as
\[p(\mathbf{r},k)=\int_{\Omega}Q(\mathbf{r}^{\prime},k)G(\mathbf{r}|\mathbf{r}^{ \prime},k)\mathrm{d}\Omega, \tag{2}\]
where
\[G(\mathbf{r}|\mathbf{r}^{\prime},k)=\frac{\exp{(-\mathrm{j}k\|\mathbf{r}- \mathbf{r}^{\prime}\|_{2})}}{4\pi\|\mathbf{r}-\mathbf{r}^{\prime}\|_{2}}. \tag{3}\]
If all sources in the \(\Omega\) region are assumed to be point sources, Eq. (2) can be approximated as
\[p(\mathbf{r},k)\approx\sum_{s=1}^{S}a_{s}G(\mathbf{r}|\mathbf{r}_{s},k), \tag{4}\]
where \(S\) denotes the number of sound sources, and \(a_{s}\in\mathbb{C}\) and \(\mathbf{r}_{s}\in\Omega\) represent the amplitude and position of the \(s\)-th source, respectively. Therefore, the sound field reconstruction in \(\Omega\) can be considered a sound field decomposition problem to estimate \(S\), \(\{a_{s}\}_{s\in\mathcal{S}}\), and \(\{\mathbf{r}_{s}\}_{s\in\mathcal{S}}\) from the set of observed sound pressure \(\{p(\mathbf{r}_{m},k)\}_{m\in\mathcal{M}}\), where \(\mathcal{S}\) denotes the set of the sound sources. Hereafter, the number of sound sources is assumed to be known in advance.
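To make the forward model concrete, the following minimal NumPy sketch evaluates the free-field Green's function of Eq. (3) and synthesizes the microphone pressures according to Eq. (4). The array geometry, source positions, amplitudes, and frequency used here are illustrative placeholders, not the configuration used later in the experiments.

```python
import numpy as np

def green(r, r_src, k):
    """Free-field Green's function G(r|r', k) of Eq. (3)."""
    d = np.linalg.norm(r - r_src, axis=-1)
    return np.exp(-1j * k * d) / (4.0 * np.pi * d)

def synthesize_pressure(mic_pos, src_pos, src_amp, k):
    """Point-source model of Eq. (4): p(r_m) = sum_s a_s G(r_m | r_s, k)."""
    p = np.zeros(len(mic_pos), dtype=complex)
    for a, rs in zip(src_amp, src_pos):
        p += a * green(mic_pos, rs, k)
    return p

# toy example: 64 microphone positions on a unit sphere and two sources
rng = np.random.default_rng(0)
mic = rng.normal(size=(64, 3))
mic /= np.linalg.norm(mic, axis=1, keepdims=True)      # radius 1.0 m
src = np.array([[0.2, -0.1, 0.3], [-0.4, 0.5, 0.0]])   # inside radius 0.8 m
amp = np.array([1.0 + 0.0j, 0.5 - 0.5j])
k = 2 * np.pi * 500 / 343.0                            # 500 Hz, c = 343 m/s
p_mic = synthesize_pressure(mic, src, amp, k)
```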
## III Sound field decomposition based on neural networks
Sound field decomposition based on neural networks is proposed in this section. The proposed method consists of two stages: a sound field separator (SFS) stage and a sound source localizer (SSL) stage. Figure 2 shows a schematic of the proposed model.
### Model architecture
#### iii.1.1 Sound field separator
The SFS aims to separate a sound field generated by multiple sound sources observed at the microphones into multiple sound fields generated by each source.
The sound pressure observed at the microphones is normalized before being input to the neural network, so that the neural network learns scale-independently:
\[\bar{p}(\mathbf{r}_{m},k)=\frac{p(\mathbf{r}_{m},k)}{p_{\max}}, \tag{5}\]
Figure 2: Overview of the proposed sound field decomposition method based on neural networks.
where
\[p_{\max}=\max_{m\in\mathcal{M}}{(|p(\mathbf{r}_{m},k)|)}. \tag{6}\]
Here, \(|\cdot|\) and \(\max(\cdot)\) denote the operations of taking the absolute value and the maximum value, respectively. Because the neural network operates on real values, the complex sound pressure vector \(\bar{\mathbf{p}}\in\mathbb{C}^{M}\), which is a column vector of \(\{\bar{p}(\mathbf{r}_{m},k)\}_{m\in\mathcal{M}}\), is transformed into a real-valued tensor \(\bar{\mathbf{P}}^{\mathbb{R}}\in\mathbb{R}^{2\times M}\) as
\[\begin{split}\left[\bar{\mathbf{P}}^{\mathbb{R}}\right]_{0,:}& =\Re\left[\bar{\mathbf{p}}^{\top}\right],\\ &\left[\bar{\mathbf{P}}^{\mathbb{R}}\right]_{1,:}&= \Im\left[\bar{\mathbf{p}}^{\top}\right],\end{split} \tag{7}\]
where \(\Re[\cdot]\) and \(\Im[\cdot]\) represent the operations of taking the real and imaginary parts, respectively.
The neural network of the SFS is defined as a one-dimensional U-net [26] (Fig. 3). Each convolution layer consists of a one-dimensional (1D) convolution followed by layer normalization and activation, except for the final layer, which has only a 1D convolution. The kernel size for convolution is 5, with stride size 1 and padding size 2. Transposed convolution is defined as 1D transposed convolution with kernel size 3, stride size 2, padding size 2, and output-padding size 2. Max pooling and rectified linear unit (ReLU) functions are used for all pooling layers and activation functions, respectively.

Figure 3: Schematic of the neural network architecture of SFS in the case of \(M=64\). (Color online)
The output of the neural network corresponds to a tensor of the separated sound pressure denoted by \(\bar{\mathbf{P}}_{\text{sep}}^{\mathbb{R}}\in\mathbb{R}^{2S\times M}\). The sound pressure vector corresponding to the \(s\)-th sound source, \(\bar{\mathbf{p}}_{\text{sep},s}\in\mathbb{C}^{\mathbb{M}}\), is represented as
\[\bar{\mathbf{p}}_{\text{sep},s}=\left(\left[\bar{\mathbf{P}}_{\text{sep}}^{\mathbb{R} }\right]_{2(s-1),:}+\text{j}\left[\bar{\mathbf{P}}_{\text{sep}}^{\mathbb{R}} \right]_{2(s-1)+1,:}\right)^{\top} \tag{8}\]
and then unnormalized as
\[\hat{\mathbf{p}}_{\text{sep},s}=\bar{\mathbf{p}}_{\text{sep},s}\times p_{\text{max}}. \tag{9}\]
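The following PyTorch sketch illustrates the pre- and post-processing of Eqs. (5)-(9) around the SFS network. The network shown here is a simplified stand-in (a plain stack of 1D convolutions with illustrative channel widths), not the full U-net with max pooling and transposed convolutions described above.

```python
import torch
import torch.nn as nn

class SimpleSFS(nn.Module):
    """Simplified stand-in for the 1D U-net SFS: input (B, 2, M) real/imag
    sound pressure, output (B, 2*S, M) separated real/imag pressures."""
    def __init__(self, n_src=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=5, stride=1, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, stride=1, padding=2), nn.ReLU(),
            nn.Conv1d(32, 2 * n_src, kernel_size=5, stride=1, padding=2),
        )
    def forward(self, x):
        return self.net(x)

def separate(p_mic, model, n_src=2):
    """Pre/post-processing of Eqs. (5)-(9) around the SFS network."""
    p_max = torch.max(torch.abs(p_mic))                        # Eq. (6)
    p_bar = p_mic / p_max                                      # Eq. (5)
    x = torch.stack([p_bar.real, p_bar.imag], dim=0)[None]     # Eq. (7), (1, 2, M)
    y = model(x)[0]                                            # (2*S, M)
    return [(y[2 * s] + 1j * y[2 * s + 1]) * p_max             # Eqs. (8)-(9)
            for s in range(n_src)]

model = SimpleSFS()
p_mic = torch.randn(64, dtype=torch.cfloat)   # placeholder measured pressure
p_sep = separate(p_mic, model)
```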
Figure 4: Schematic of the neural network architecture of SSL in the case of \(M=64\). (Color online)
#### iii.1.2 Single source localizer
In the SSL, the sound source position is located from the sound pressure at the microphones corresponding to each source as described in Sec. III.1.1. Therefore, the SSL is repeated \(S\) times. We denote \(u(\mathbf{r}_{m},k)\) as the separated pressure \(\hat{p}_{\text{sep},s}(\mathbf{r}_{m},k)\), and consider the \(s\)-th source hereafter.
The sound pressure is also normalized before the neural network for scale-independent learning as
\[\bar{u}(\mathbf{r}_{m},k)=\frac{u(\mathbf{r}_{m},k)}{\max_{m\in\mathcal{M}}\left(|u( \mathbf{r}_{m},k)|\right)}. \tag{10}\]
The normalized spatial covariance matrix of sound pressure vectors \(\mathbf{\Sigma}=\bar{\mathbf{u}}\bar{\mathbf{u}}^{\text{H}}\in\mathbb{C}^{M\times M}\) is used as the input of the neural network. Here, \(\bar{\mathbf{u}}\in\mathbb{C}^{M}\) is a column vector of \(\{\bar{u}(\mathbf{r}_{m},k)\}_{m\in\mathcal{M}}\). The spatial covariance matrix is transformed into a real-valued tensor \(\mathbf{\Sigma}^{\mathbb{R}}\in\mathbb{R}^{2\times M\times M}\) to represent it in a format compatible with the network as follows:
\[\begin{split}\left[\mathbf{\Sigma}^{\mathbb{R}}\right]_{0,:,:}& =\Re\left[\bar{\mathbf{u}}\bar{\mathbf{u}}^{\text{H}}\right],\\ \left[\mathbf{\Sigma}^{\mathbb{R}}\right]_{1,:,:}&= \Im\left[\bar{\mathbf{u}}\bar{\mathbf{u}}^{\text{H}}\right].\end{split} \tag{11}\]
The neural network consists of a feature extractor composed of four convolution layers and a multilayer perceptron (MLP) composed of four linear transformation layers (Fig. 4). Each convolution layer consists of a 2D convolution layer, layer normalization, and activation, in that order. The kernel size for convolution is \(5\times 5\), the stride is 2, and the padding is 1. Each linear transformation layer except the final layer consists of a linear transformation, layer normalization, and activation, in that order. Layer normalization and activation are not used in the final layer. ReLU functions are used for all activation functions, and bias
is added in all layers. The output of the neural network corresponds to the sound source position \(\hat{\mathbf{r}}=(\hat{x},\hat{y},\hat{z})\in\mathbb{R}^{3}\) in the Cartesian coordinate system.
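As an illustration, the sketch below builds the real-valued covariance input of Eqs. (10)-(11). The resulting tensor would then be passed to the CNN feature extractor and MLP described above, whose final layer regresses the 3-D source position; those network layers are not reproduced here.

```python
import torch

def ssl_input(u_mic):
    """SSL network input of Eqs. (10)-(11): normalize the separated pressure
    and stack Re/Im of the spatial covariance matrix into a (2, M, M) tensor."""
    u_bar = u_mic / torch.max(torch.abs(u_mic))                 # Eq. (10)
    sigma = u_bar.unsqueeze(1) * u_bar.conj().unsqueeze(0)      # u_bar u_bar^H
    return torch.stack([sigma.real, sigma.imag], dim=0)         # Eq. (11)

# the (2, M, M) tensor is then fed to the CNN + MLP, which outputs (x, y, z)
x_in = ssl_input(torch.randn(64, dtype=torch.cfloat))
```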
The signal of the sound source can be obtained by linear regression from the estimated source position \(\hat{\mathbf{r}}\) and the sound pressure at microphones \(\{u(\mathbf{r}_{m},k)\}_{m\in\mathcal{M}}\) as
\[\left\{\begin{array}{ll}\hat{a}(k)=\frac{\sum_{m=1}^{M}\{(u(\mathbf{r}_{m},k)-\mu _{u})(G(\mathbf{r}_{m}|\hat{\mathbf{r}},k)-\mu_{g})\}}{\sum_{m=1}^{M}(G(\mathbf{r}_{m}| \hat{\mathbf{r}},k)-\mu_{g})^{2}}&\text{if }S=1\\ \hat{\mathbf{a}}=\mathbf{G}^{\dagger}\mathbf{p}&\text{if }S>1\end{array}\right. \tag{12}\]
where \(\hat{\mathbf{a}}\in\mathbb{C}^{S}\) denotes the estimated source-signal vector; \(\dagger\) denotes the Moore-Penrose pseudo-inverse; \(\mathbf{G}\in\mathbb{C}^{M\times S}\) denotes the transfer-function matrix whose \((m,s)\) entry is the transfer function between the \(s\)-th source and the \(m\)-th microphone; and \(\mathbf{p}\in\mathbb{C}^{M}\) denotes the vector of recorded sound pressure at the microphones;
\[\mu_{u}=\frac{1}{M}\sum_{m=1}^{M}u(\mathbf{r}_{m},k);\quad\mu_{g}=\frac{1}{M}\sum _{m=1}^{M}G(\mathbf{r}_{m}|\hat{\mathbf{r}},k). \tag{13}\]
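A minimal sketch of the source-signal estimation of Eqs. (12)-(13) is given below. It assumes the estimated source positions are already available and mirrors the regression formula of the paper; the helper names are placeholders.

```python
import numpy as np

def green(r, r_src, k):
    """Free-field Green's function of Eq. (3)."""
    d = np.linalg.norm(r - r_src, axis=-1)
    return np.exp(-1j * k * d) / (4.0 * np.pi * d)

def estimate_amplitudes(p_mic, mic_pos, src_pos_est, k):
    """Source-signal estimation of Eq. (12): single-source regression,
    otherwise least squares with the pseudo-inverse of G."""
    G = np.stack([green(mic_pos, rs, k) for rs in src_pos_est], axis=1)  # (M, S)
    if G.shape[1] == 1:
        u, g = p_mic, G[:, 0]
        mu_u, mu_g = u.mean(), g.mean()                                  # Eq. (13)
        a = np.sum((u - mu_u) * (g - mu_g)) / np.sum((g - mu_g) ** 2)
        return np.array([a])
    return np.linalg.pinv(G) @ p_mic                                     # a = G^+ p
```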
### Dataset
We assume that the sound field is a three-dimensional free field, that \(\Omega\) is a spherical region of radius 1.0 m with the free-field condition, that microphones are located on \(\partial\Omega\) with \(M=64\) using a spherical _t_-design [27], and that the sound sources exist inside a spherical region \(\Omega_{\text{S}}\) of radius 0.8 m. Datasets for training the SFS and SSL are prepared separately.
Pairs of a sound source position and simulated sound pressure at the microphones are used as the dataset for the SSL. If the sound field is assumed to be excited by a single point source, the sound pressure observed at each microphone can be obtained by
\[u(\mathbf{r}_{m},k)=a(k)G(\mathbf{r}_{m}|\mathbf{r}_{\text{src}},k)+n(\mathbf{r}_{m},k), \tag{14}\]
where \(a(k)\in\mathbb{C}\) denotes the source signal, \(\mathbf{r}_{\rm src}\) denotes the single source position, and \(n(\mathbf{r}_{m},k)\) denotes the noise component.
The SSL dataset comprises 10,000 pairs of \(\mathbf{r}_{\rm src}\) and \(\{u(\mathbf{r}_{m},k)\}_{m\in\mathcal{M}}\) for each frequency. The positions of the sound source are randomly generated from a uniform distribution in \(\Omega_{S}\). We use 90% of the dataset for training and the remaining 10% for validation. The amplitude of the sound source \(|a(k)|\) is set to one, and the phase \(\angle a(k)\) is randomly varied from batch to batch following a uniform distribution \(\mathcal{U}(-\pi,\pi)\) for phase-independent learning. The noise is drawn from a Gaussian distribution such that the signal-to-noise ratio (SNR) is in the range of \([20,60]\) dB.
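One possible way to generate an SSL training pair according to Eq. (14) is sketched below. The rejection sampling of the source position and the exact noise-scaling convention are assumptions made for illustration only.

```python
import numpy as np

def make_ssl_sample(mic_pos, k, rng, snr_db_range=(20.0, 60.0)):
    """One SSL training pair following Eq. (14)."""
    # rejection-sample a source position uniformly inside the 0.8 m sphere
    while True:
        r_src = rng.uniform(-0.8, 0.8, size=3)
        if np.linalg.norm(r_src) <= 0.8:
            break
    a = np.exp(1j * rng.uniform(-np.pi, np.pi))        # |a| = 1, random phase
    d = np.linalg.norm(mic_pos - r_src, axis=1)
    u = a * np.exp(-1j * k * d) / (4 * np.pi * d)      # a(k) G(r_m | r_src, k)
    snr_db = rng.uniform(*snr_db_range)
    noise_pow = np.mean(np.abs(u) ** 2) / 10 ** (snr_db / 10)
    noise = np.sqrt(noise_pow / 2) * (rng.standard_normal(d.size)
                                      + 1j * rng.standard_normal(d.size))
    return r_src, u + noise
```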
The number of sources \(S\) that exist in \(\Omega\) is assumed to be two for the SFS. The SFS dataset consists of pairs of the sound pressure observed at the microphones \(\{p(\mathbf{r}_{m},k)\}_{m\in\mathcal{M}}\) and the separated sound pressure corresponding to each sound source denoted by \(\{p_{{\rm sep},s}(\mathbf{r}_{m},k)\}_{s\in\mathcal{S},m\in\mathcal{M}}\). The sound pressure observed at each microphone can be expressed as,
\[p(\mathbf{r}_{m},k)=\sum_{s=1}^{S}\underbrace{a_{s}(k)G(\mathbf{r}_{m}|\mathbf{r}_{s},k)}_ {p_{{\rm sep},s}(\mathbf{r}_{m},k)}+n(\mathbf{r}_{m},k). \tag{15}\]
Here, \(a_{s}(k)\in\mathbb{C}\) denotes the signal of the \(s\)-th source.
The source locations \(\{\mathbf{r}_{s}\}_{s\in\mathcal{S}}\) used to generate the training dataset are selected from a random combination of 45,000 source locations used in the SSL training. Similarly, a random combination of 5,000 points from the SSL validation dataset is chosen for the source positions for validation. The source signal \(\{a_{s}(k)\}_{s\in\mathcal{S}}\) is randomly varied from batch to batch following \(\Re\left[a_{s}(k)\right]\sim\mathcal{U}(-1,1)\), \(\Im\left[a_{s}(k)\right]\sim\mathcal{U}(-1,1)\) for inter-amplitude-independent and inter-phase-independent learning. The noise is drawn from a Gaussian distribution such that the SNR is in the range of \([20,60]\) dB.
### Loss function and training procedure
The mean squared error (MSE) with respect to the source position is used as a loss function of the SSL:
\[\mathcal{L}_{\mathrm{SSL}}=\frac{1}{3}\|\mathbf{r}_{\mathrm{src}}-\hat{\mathbf{r}}_{ \mathrm{src}}\|_{2}^{2}. \tag{16}\]
Here, \(\hat{\mathbf{r}}_{\mathrm{src}}\) denotes the estimated source position. The Adam optimizer [28] with a learning rate of \(5\times 10^{-4}\) is used for training, and the batch size is 100. The model is trained during 1,000 epochs.
As a loss function of the SFS, we propose the permutation-invariant MSE [29] with respect to the separated sound pressure. The loss function in the case of \(S=2\) is defined as,
\[\mathcal{L}_{\mathrm{SFS}}=\frac{1}{S}\min\Bigl{(}\mathrm{MSE}_{11}+\mathrm{ MSE}_{22},\mathrm{MSE}_{12}+\mathrm{MSE}_{21}\Bigr{)}, \tag{17}\]
where
\[\mathrm{MSE}_{ij}=\frac{1}{M}\left(\sum_{M}|p_{\mathrm{sep},i}(\mathbf{r}_{m},k)- \hat{p}_{\mathrm{sep},j}(\mathbf{r}_{m},k)|^{2}\right). \tag{18}\]
The Adam optimizer with a learning rate of \(1\times 10^{-3}\) is used for training, and the batch size is 100. The model is trained during 10,000 epochs.
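For concreteness, the permutation-invariant loss of Eqs. (17)-(18) for \(S=2\) can be written as the following sketch; complex pressure tensors of shape \((S, M)\) are assumed.

```python
import torch

def pit_mse_loss(p_true, p_est):
    """Permutation-invariant MSE of Eqs. (17)-(18) for S = 2.
    p_true, p_est: complex tensors of shape (S, M)."""
    mse = lambda a, b: torch.mean(torch.abs(a - b) ** 2)
    straight = mse(p_true[0], p_est[0]) + mse(p_true[1], p_est[1])
    swapped = mse(p_true[0], p_est[1]) + mse(p_true[1], p_est[0])
    return torch.minimum(straight, swapped) / 2
```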
## IV Numerical experiments
Numerical simulations were conducted to compare the performance of the proposed method with that of conventional methods (i.e., the sparse sound field decomposition method [16] and the spherical-harmonic-domain RGF [19]). Hereafter, these methods are denoted as **Proposed**, **Sparse**, and **SHD-RGF**, respectively.
The arrangement of microphones was the same as that defined in Sec. III.3. Numerical simulations were performed for one and two sound sources, respectively.
To validate the effectiveness of SFS in the case of two sources, we used a neural-network-based model in which the output layer of the SSL model was changed to two source positions; this model was used as a baseline model, hereafter referred to as **Baseline**. The MSE of the permutation-invariant source positions was used as a loss function to learn the baseline, defined as
\[\mathcal{L}_{\text{base}}=\frac{1}{3S}\min\Bigl{(}\|\mathbf{r}_{1}-\hat{\mathbf{r}}_{1 }\|_{2}^{2}+\|\mathbf{r}_{2}-\hat{\mathbf{r}}_{2}\|_{2}^{2},\|\mathbf{r}_{1}-\hat{\mathbf{r}}_ {2}\|_{2}^{2}+\|\mathbf{r}_{2}-\hat{\mathbf{r}}_{1}\|_{2}^{2}\Bigr{)}. \tag{19}\]
The dataset and noise addition used for training **Baseline** were the same as those used for training the SFS neural network. The Adam optimizer with a learning rate of \(5\times 10^{-4}\) was used for optimization. The model was trained during 10,000 epochs with a batch size of 100.
In addition, a neural network model with the same layers as the proposed SFS, but trained using Eq. (19), was used to validate the effectiveness of the loss function in Eq. (17) in the case of two sources. The SSL of the model was pre-trained, and the weights of each SSL layer were fixed when training the SFS. The model was optimized by the Adam optimizer with a learning
rate of \(1\times 10^{-3}\) and trained during 10,000 epochs with a batch size of 100. Hereafter, the model is denoted by **Proposed** (\(\mathcal{L}_{\text{base}}\)).
In **Sparse**, it is necessary to discretize \(\Omega\) in advance in order to set up the candidate source positions. In this experiment, \(\Omega\) was discretized in the \(x,y,z\) directions into grids with \(\delta\) intervals and the grid points were used as source-position candidates. Therefore, we discretized the source-included region \(\Omega_{s}\) by \(\delta=0.1\) m and \(\delta=0.2\) m; the total number of candidate points was 2,109 and 257, respectively. Hereafter, they are denoted by **Sparse** (\(\delta=0.1\)) and **Sparse** (\(\delta=0.2\)), respectively. The OMP [30; 31] algorithm was used for sparse decomposition.
Because **SHD-RGF** requires truncation of the order of the spherical harmonic expansion, three truncation orders (i.e., 5, 6, and 7) were used; they are denoted by **SHD-RGF** (\(N=5\)), **SHD-RGF** (\(N=6\)), and **SHD-RGF** (\(N=7\)), respectively.
In this experiment, we compare and evaluate each method in terms of the accuracy of sound source localization and sound field reconstruction. To evaluate the accuracy of the sound source localization, we define the root-mean-square error (RMSE) as
\[\text{RMSE}=\left\{\begin{array}{ll}\sqrt{\|\mathbf{r}_{1}-\hat{\mathbf{r}}_{1}\|_ {2}^{2}}&\text{if }\,S=1\\ \sqrt{\frac{1}{S}\min\Bigl{(}\|\mathbf{r}_{1}-\hat{\mathbf{r}}_{1}\|_{2}^{2}+\|\mathbf{r}_ {2}-\hat{\mathbf{r}}_{2}\|_{2}^{2},\|\mathbf{r}_{1}-\hat{\mathbf{r}}_{2}\|_{2}^{2}+\|\mathbf{r }_{2}-\hat{\mathbf{r}}_{1}\|_{2}^{2}\Bigr{)}}&\text{if }\,S=2.\end{array}\right. \tag{20}\]
To evaluate the accuracy of the sound field reconstruction, we define the signal-to-distortion ratio (SDR) as
\[\text{SDR}=10\log_{10}\frac{\int_{\Omega}|p(\mathbf{r},k)|^{2}\text{d}\mathbf{r}}{\int_{\Omega}|p_{\text{rec}}(\mathbf{r},k)-p(\mathbf{r},k)|^{2}\text{d}\mathbf{r}}\ (\text{dB}). \tag{21}\]
\(\Omega\) was discretized at 0.1 m intervals to calculate the integral in Eq. (21).
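The two evaluation metrics can be computed, for example, as in the following sketch; the integrals in Eq. (21) are approximated here by sums over the sampled grid points.

```python
import numpy as np

def rmse_two_sources(r_true, r_est):
    """Permutation-minimized RMSE of Eq. (20) for S = 2."""
    d1 = np.sum((r_true[0] - r_est[0]) ** 2) + np.sum((r_true[1] - r_est[1]) ** 2)
    d2 = np.sum((r_true[0] - r_est[1]) ** 2) + np.sum((r_true[1] - r_est[0]) ** 2)
    return np.sqrt(min(d1, d2) / 2)

def sdr_db(p_true, p_rec):
    """SDR of Eq. (21) on sampled field points (higher is better)."""
    return 10 * np.log10(np.sum(np.abs(p_true) ** 2)
                         / np.sum(np.abs(p_rec - p_true) ** 2))
```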
### Training results
All neural networks were trained using a single GPU (GeForce RTX 3090, NVIDIA).
Figure 5 shows the training and validation loss of the SSL as a function of the epoch number. The training loss was found to decrease with each additional epoch at all frequencies. However, the difference between the validation loss and the training loss increased slightly with increasing frequency. The computation time for SSL training was ~0.7 h for each frequency.
Figure 6 shows the training and validation loss of the SFS for **Proposed** as a function of the epoch number. Unlike the SSL learning, little difference was observed between the training loss and the validation loss at all frequencies. However, the loss converged to a larger value as the frequency increased. The computation time for SFS training in **Proposed** was approximately 16.7 h for each frequency.
Figure 7 shows the training and validation loss of **Baseline** as a function of the epoch number. Although the converged loss values were dependent on the frequency, the training loss and validation loss converged similarly for all of the investigated frequencies. The computation time for training in **Baseline** was approximately 14.0 h for each frequency.
Figure 8 shows the training and validation loss of **Proposed** (\(\mathcal{L}_{\text{base}}\)) as a function of the epoch number. The learning trend was similar to that of **Baseline**; however, the loss value was slightly greater. The computation time for training SFS in **Proposed** (\(\mathcal{L}_{\text{base}}\)) was approximately 20.7 h for each frequency.
### Experiments for a single source
In the case of a single sound source, **Proposed** consists of the SSL only. The source position sets used in this simulation were the entire SSL validation dataset described in Sec. III.3. The amplitude of the source signal was set to one, and the phase was randomly chosen from \(\mathcal{U}(-\pi,\pi)\) for each condition. We compared each method with an SNR of 40 dB and 20 dB. The results were averaged for all conditions.

Figure 5: Training and validation loss of SSL plotted against the epoch number at each frequency. (Color online)

Figure 6: Training and validation loss of SFS for **Proposed** plotted against the epoch number at each frequency. (Color online)
Figure 9 shows the RMSE plotted against frequency for the frequency range 100-900 Hz at intervals of 100 Hz. The RMSEs of **Sparse** were nearly constant at all of the investigated frequencies and SNRs for each \(\delta\). Comparing **Sparse** (\(\delta=0.1\)) and **Sparse** (\(\delta=0.2\)) reveals that a smaller discretization interval resulted in smaller RMSEs, although the RMSEs remained at almost \(\delta/2\). Figure 9(a) shows that the RMSEs of **SHD-RGF** were smaller than those of **Sparse** at frequencies less than 200 Hz. However, the RMSEs of **SHD-RGF** increased with increasing frequency because of spatial aliasing. In addition, Fig. 9(b) shows that the RMSEs of **SHD-RGF** were larger than those of **Sparse** because noise increased at all of the investigated frequencies. However, **Proposed** achieved much smaller RMSEs than the other methods at all frequencies under both investigated SNRs.

Figure 7: Training and validation loss of **Baseline** plotted against the epoch number at each frequency. (Color online)

Figure 8: Training and validation loss of SFS for **Proposed** (\(\mathcal{L}_{\text{base}}\)) plotted against the epoch number at each frequency. (Color online)
Figure 10 shows the SDR plotted against frequency for the frequency range 100-900 Hz at intervals of 100 Hz. The SDRs of **SHD-RGF** were higher than those of **Sparse** for high SNRs (Fig. 10(a)) and at frequencies less than 200 Hz. The **Proposed** SDRs were the highest among the SDRs of the investigated methods for all of the considered frequencies and SNRs.
Figure 9: RMSE as a function of frequency in the case of a single source with (a) an SNR of 40 dB and (b) an SNR of 20 dB. (Color online)
Figures 11 and 12 show the reconstructed sound pressure distribution and the normalized error distribution at 500 Hz on the \(x\)-\(y\) plane for a single source with an SNR of 20 dB. The true position of the source was at \((-0.05,-0.17,-0.45)\), which was chosen randomly from the validation dataset. The amplitude was set to unity. For **Sparse** and **SHD-RGF**, only the results of **Sparse** (\(\delta=0.1\)) and **SHD-RGF** (\(N=7\)) are shown. The red and green crosses correspond to the true and estimated sound source positions, respectively. The black lines represent the sphere where microphones exist. Figure 12 shows that **Proposed** achieved the lowest normalized error distribution among the investigated methods. The SDRs in **Proposed**, **Sparse** (\(\delta=0.1\)), and **SHD-RGF** (\(N=7\)) were 26.3, 5.8, and 1.8 dB, respectively.
Figure 10: SDR as a function of frequency in the case of a single source with (a) an SNR of 40 dB and (b) an SNR of 20 dB. (Color online)
Figure 11: Real part of true and reconstructed sound pressure distribution at 500 Hz on the \(x\)-\(y\) plane in the case of a single source with an SNR of 20 dB. The red and green crosses represent the true and estimated sound source positions, respectively. The black lines represent the sphere where microphones exist. (Color online)
Figure 12: Normalized error distribution at 500 Hz on the \(x\)-\(y\) plane in the case of a single source with an SNR of 20 dB. The SDRs in (a), (b), and (c) were 26.3, 5.8, and 1.8 dB, respectively. (Color online)
### Experiments for two sources
In this experiment, we randomly chose 1,000 validation samples from the SFS validation dataset described in Sec. III.3. The source signal \(\{a_{s}(k)\}_{s\in\mathcal{S}}\) was randomized for each condition following \(\Re\left[a_{s}(k)\right]\sim\mathcal{U}(-1,1),\ \Im\left[a_{s}(k)\right]\sim \mathcal{U}(-1,1)\). The experiments were conducted for SNRs of 40 dB and 20 dB. The results were averaged for all conditions.
Figure 13 shows the RMSE plotted against frequency for frequencies ranging from 100 Hz to 900 Hz at intervals of 100 Hz in the case of two sources. In **Sparse**, the RMSE increased with increasing frequency; this trend differs somewhat from that in the case with a single sound source (Fig. 9). Comparing **Sparse** (\(\delta=0.1\)) and **Sparse** (\(\delta=0.2\)) reveals that a smaller discretization interval resulted in smaller RMSEs even in the case of two sound sources. Unlike the case of a single sound source shown in Fig. 9, the RMSEs of **SHD-RGF** were largest at all frequencies under both investigated SNR conditions. Comparing **Proposed** (\(\mathcal{L}_{\text{base}}\)) and **Baseline** reveals that the RMSEs of **Baseline** were smaller than those of **Proposed** (\(\mathcal{L}_{\text{base}}\)). However, **Proposed** achieved much smaller RMSEs than the other investigated methods at all frequencies under both SNR conditions, except for 100 Hz with an SNR of 40 dB. These results not only demonstrate the effectiveness of the proposed method but also the effectiveness of the proposed loss function.

Figure 13: RMSE as a function of frequency in the case of two sources with (a) an SNR of 40 dB and (b) an SNR of 20 dB. (Color online)
Figure 14 shows the SDR plotted against frequency for frequencies in the range of 100-900 Hz at intervals of 100 Hz. **Proposed** achieved the highest SDRs among the investigated methods for all frequencies and SNRs; it also achieved the lowest RMSEs. Remarkably, at 500 Hz, the SDR was more than 10 dB greater than those of the other methods.
Figure 14: SDR as a function of frequency in the case of two sources with (a) an SNR of 40 dB and (b) an SNR of 20 dB. (Color online)
Figures 15 and 16 show the reconstructed sound pressure distribution and the normalized error distribution at 500 Hz on the \(x\)-\(y\) plane for an SNR of 20 dB. The true positions of the sources were at \((-0.54,\ 0.04,\ -0.04)\) and \((0.44,\ 0.61,\ 0.06)\), which were chosen randomly from the validation dataset. The amplitudes of the sources were set to one. For **Sparse** and **SHD-RGF**, only the results of **Sparse** (\(\delta=0.1\)) and **SHD-RGF** (\(N=7\)) are shown. Figure 16 shows that **Proposed** achieved the lowest normalized error distribution among the investigated methods. The SDRs in **Proposed**, **Proposed** (\(\mathcal{L}_{\rm base}\)), **Baseline**, **Sparse** (\(\delta=0.1\)), and **SHD-RGF** (\(N=7\)) were 19.5, 8.0, 11.9, 7.0, and 0.1 dB, respectively.

Figure 15: Real part of true and reconstructed sound pressure distribution at 500 Hz on the \(x\)-\(y\) plane in the case of two sources with an SNR of 20 dB. (Color online)
Figure 16: Normalized error distribution at 500 Hz on the \(x\)-\(y\) plane in the case of two sources with an SNR of 20 dB. The SDRs in (a), (b), (c), (d), and (e) were 19.5, 8.0, 11.9, 7.0, and 0.1 dB, respectively. (Color online)
## V Conclusion
A neural-network-based method for sound field decomposition was proposed. To reconstruct a sound field in a source-included region, some constraints are necessary to address the ill-posed problem. Conventional methods that use sparsity in the number of sound sources have the disadvantage that source-position candidates must be determined in advance, which results in a loss of accuracy when sound sources exist at locations other than the candidate locations. In other conventional methods that use the reciprocity of the transfer function, the accuracy of sound field reconstruction is limited by the spatial Nyquist frequency. To overcome these problems, we proposed two-stage neural networks for sound field decomposition. In the first stage, the sound pressure at the microphones is separated into the sound pressure corresponding to each source. In the second stage, the position of each source is localized. For training the first stage, a loss function that explicitly separates the measured sound pressure into the sound pressure corresponding to each source was also proposed. Numerical experiments showed that the proposed method with the proposed loss function achieved more accurate sound source localization and sound field reconstruction than the investigated conventional methods. Future work will consider non-anechoic conditions where room reflections exist.
###### Acknowledgements.
This research was partially supported by JSPS Grants (Nos. JP19H04153 and JP22H00523). |
2309.08894 | Charged spherically symmetric black holes in scalar-tensor Gauss-Bonnet
gravity | We derive a novel class of four-dimensional black hole solutions in
Gauss-Bonnet gravity coupled with a scalar field in presence of Maxwell
electrodynamics. In order to derive such solutions, we assume the ansatz $
g_{tt}\neq g_{rr}{}^{-1}$ for metric potentials. Due to the ansatz for the
metric, the Reissner Nordstr\"om gauge potential cannot be recovered because of
the presence of higher-order terms ${\cal O}\left(\frac{1}{r}\right)$ which are
not allowed to be vanishing. Moreover, the scalar field is not allowed to
vanish. If it vanishes, a function of the solution results undefined. For this
reason, the solution cannot be reduced to a Reissner Nordstr\"om space-time in
any limit. Furthermore, it is possible to show that the electric field is of
higher-order in the monopole expansion: this fact explicitly comes from the
contribution of the scalar field. Therefore, we can conclude that the
Gauss-Bonnet scalar field acts as non-linear electrodynamics creating
monopoles, quadrupoles, etc. in the metric potentials. We compute the
invariants associated with the black holes and show that, when compared to
Schwarzschild or Reissner-Nordstr\"om space-times, they have a soft
singularity. Also, it is possible to demonstrate that these black holes give
rise to three horizons in AdS space-time and two horizons in dS space-time.
Finally, thermodynamic quantities can be derived and we show that the solution
can be stable or unstable depending on a critical value of the temperature. | Salvatore Capozziello, G. G. L. Nashed | 2023-09-16T06:25:06Z | http://arxiv.org/abs/2309.08894v1 | # Charged spherically symmetric black holes in scalar-tensor Gauss-Bonnet gravity
###### Abstract
We derive a novel class of four-dimensional black hole solutions in Gauss-Bonnet gravity coupled with a scalar field in presence of Maxwell electrodynamics. In order to derive such solutions, we assume the ansatz \(g_{tt}\neq g_{rr}{}^{-1}\) for metric potentials. Due to the ansatz for the metric, the Reissner Nordstrom gauge potential cannot be recovered because of the presence of higher-order terms \(\mathcal{O}\left(\frac{1}{r}\right)\), which are not allowed to be vanishing. Moreover, the scalar field is not allowed to vanish: if it vanishes, a function in the solution becomes undefined. Furthermore, it is possible to show that the electric field is of higher order in the monopole expansion: this fact explicitly comes from the contribution of the scalar field. Therefore, we can conclude that the Gauss-Bonnet scalar field acts as non-linear electrodynamics creating monopoles, quadrupoles, etc. in the metric potentials. We compute the invariants associated with the black holes and show that, when compared to Schwarzschild or Reissner-Nordstrom space-times, they have a soft singularity. Also, it is possible to demonstrate that these black holes give rise to three horizons in AdS space-time and two horizons in dS space-time. Finally, thermodynamic quantities can be derived and we show that the solution can be stable or unstable depending on a critical value of the temperature.
pacs: 04.50.Kd, 04.25.Nx, 04.40.Nr
## I Introduction
A large amount of observational data indicates that our universe is experiencing an accelerated expansion. There are two basic approaches to explain this cosmic acceleration. The first considers the issue of acceleration in the framework of Einstein's general relativity (GR) and then needs the existence of an odd type of energy called "dark energy," which exerts repulsive gravity and constitutes an unclustered ingredient of the universe's components. The second approach proposes extensions of GR by including functions of the curvature invariants like the Ricci scalar \(R\), the Riemann and Ricci tensors or their derivatives in the Lagrangian formulation, or modifications of the Einstein paradigm involving the torsion tensor or non-metricity. Among these proposals to modify Einstein's theory there are \(f(R)\) gravity, braneworld cosmology, Lovelock gravity, Brans-Dicke-like scalar-tensor theories, etc. [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41]. Among these extensions, a particular role is the one recently assumed by the so-called Gauss-Bonnet (GB) gravity, where the topological invariant
\[\mathcal{G}\equiv R^{2}-4R^{\alpha\beta}R_{\alpha\beta}+R^{\alpha\beta\mu\nu}R _{\alpha\beta\mu\nu}\,, \tag{1}\]
is considered in the dynamics [42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59]. Specifically, the Lovelock theory is a well-known higher-derivative theory of gravity that is a natural extension of GR where GB contributions are taken into account [60; 61; 62]. Interestingly, there are no ghost terms if GB curvature-squared terms are present, and the associated field equations include only second derivatives of the metric [63; 64; 65; 66; 67; 68]. Another interesting feature of GB gravity is the fact that it can arise from the low-energy limit of heterotic string theory [69; 70; 71; 72; 73; 74]. It is found that the low-energy expansion of a closed heterotic string effective action possesses a GB term and a scalar field [74]; however, the functional form of the scalar field is still a debated topic in view of some possible observational signature.
In the context of GB gravity, several black hole (BH) solutions have been discussed in the literature [75; 76; 77; 78; 79; 90; 91]. In Refs. [92; 93; 94], holographic thermalization in GB gravity has been studied via Wilson loops and holographic entanglement entropy.
If we are dealing with GR corrected by a GB term, we are considering the so-called _Einstein-Gauss-Bonnet_ (EGB) gravity. It has been demonstrated that conserved charges of EGB-AdS gravity emerge from the electric part of the Weyl tensor: this feature broadens the notion of conformal mass [95]. Four-dimensional EGB gravity possesses many interesting properties. It could, for example, solve some singularity issues. In particular, by taking into account four-dimensional EGB gravity and deriving spherically symmetric BHs, the gravitational field becomes repulsive as \(r\to 0\), and therefore an infalling object cannot reach the singularity. Additionally, spherically symmetric solutions in EGB are distinct from solutions of GR such as the Schwarzschild one. Investigations into compact objects and their physical characteristics have also been conducted within the context of four-dimensional Einstein-Gauss-Bonnet (EGB) gravity. These studies encompass aspects such as black hole stability, quasi-normal modes, shadow phenomena, and the concept of strong cosmic censorship [96; 97; 98]. A wide range of further investigations has been carried out, including: the analysis of the shadow of a rotating black hole [99]; the innermost stable circular orbit and shadow [100]; the thermodynamics, phase transitions, and Joule-Thomson expansion of (un)charged Anti-de Sitter black holes [101; 102; 103]; Bardeen solutions [104]; rotating black holes [105; 106]; solutions for relativistic stars [107]; Born-Infeld black holes [108]; spinning test particles orbiting around static spherically symmetric black holes [109]; the thermodynamics and critical behavior of Anti-de Sitter black holes [110]; gravitational lensing phenomena [116]; the thermodynamic geometry of Anti-de Sitter black holes [117]; Hayward black holes [118]; thin accretion disks around black holes [119]; and superradiance and stability in charged Einstein-Gauss-Bonnet black holes [120]. These studies collectively contribute to our understanding of various aspects of gravitational physics.
This paper delves into the realm of Einstein-Scalar-Gauss-Bonnet (ESGB) gravity involving a scalar field. The emergence of the scalar field can occur through various means, including gravitational processes that arise from induced or spontaneous scalarization, or it can manifest as a gauge field through charged solutions. The presence of a scalar field has a notable impact on the gravitational backdrop and the structure of geodesics, which encompasses the trajectories of photons and the dimensions of a BH's shadow. Specifically, we will employ the field equations of ESGB theory that incorporate charge, alongside the cosmological constant. It is widely recognized that when the Gauss-Bonnet (GB) term is coupled in a non-minimal manner with a scalar field denoted as \(\zeta\), the ensuing dynamics exhibit intricate and non-trivial characteristics; see [121; 122; 123; 124; 125; 126; 127; 128; 129; 130; 131; 132; 133; 134; 135; 136; 137; 138; 139; 140; 141; 142; 143; 144; 145; 146; 147; 148; 149; 150; 151; 152; 153; 154; 155; 156; 157; 158; 159; 160; 161] and references therein. Nevertheless, there has been no prior exploration of charged spherically symmetric BH solutions within ESGB gravity when adopting the ansatz \(g_{tt}\neq g_{rr}{}^{-1}\) for the metric potentials. This paper endeavors to rectify this gap by obtaining exact spherically symmetric charged black hole solutions in the ESGB theory and subsequently delving into their physical attributes. The structure of the paper is the following. In Section II, we discuss the ghost-free EGB theory to investigate the formation of BHs. Furthermore, we study the charged field equations to derive exact spherically symmetric space-times with \(g_{tt}\neq g_{rr}{}^{-1}\). In Sec. III, we discuss the physics related to the charged BH solutions, in particular their thermodynamical properties. In Sec. IV, we discuss results and draw conclusions\({}^{1}\).
Footnote 1: The full length solutions are reported in a supplementary notebook.
## II Charged solutions in Einstein-scalar-Gauss-Bonnet gravity
Within this section, we provide a succinct overview of the construction of ghost-free \(f\left(\mathcal{G}\right)\) gravity employing Lagrange multipliers. Subsequently, we derive a charged, spherically symmetric solution.
The Lagrangian for a ghost-free formulation can be expressed as follows, as detailed in [168]:
\[\mathcal{L}=\int d^{4}x\sqrt{-g}\left[\frac{1}{2\kappa^{2}}R+\lambda\left( \frac{1}{2}\omega(\zeta)\partial_{\mu}\zeta\partial^{\mu}\zeta+\frac{\mu^{4}}{ 2}\right)+h\left(\zeta\right)\mathcal{G}-\tilde{V}\left(\zeta\right)+ \mathcal{L}_{\rm matter}-\Lambda+\mathcal{L}_{\rm em}\right]\,. \tag{2}\]
In this equation, \(\mathcal{G}\) represents the Gauss-Bonnet (GB) topological invariant, \(V(\zeta)\) stands for the field potential, and \(h(\zeta)\) is a function associated with the auxiliary field. Additionally, the constant \(\mu\) carries a mass dimension.
The Lagrangian for the electromagnetic field, denoted as \(\mathcal{L}_{\rm em}\), is defined as follows:
\[\mathcal{L}_{\rm em}=-\frac{1}{2}\mathcal{F}\,\left(\mathcal{F}\equiv \mathcal{F}_{\mu\nu}\mathcal{F}^{\mu\nu}\right)\,. \tag{3}\]
Here \(\mathcal{F}=d\xi\) and \(\xi=\xi_{\beta}dx^{\beta}\) is the electromagnetic potential [168; 169]. See, in particular, Ref.[170]. Variations of
the action (2) concerning \(\zeta\), \(\lambda\), and \(g_{\mu\nu}\) result in the following equations:
\[0= -\frac{1}{\sqrt{-g}}\partial_{\mu}\left(\lambda\omega(\zeta)g^{\mu \nu}\sqrt{-g}\partial_{\nu}\zeta\right)+h^{\prime}\left(\zeta\right)\mathcal{G} -\tilde{V}^{\prime}\left(\zeta\right)+\frac{1}{2}\lambda\omega^{\prime}(\zeta)g ^{\mu\nu}\partial_{\mu}\zeta\partial_{\nu}\zeta\,, \tag{4}\] \[0= \frac{1}{2}\omega(\zeta)\partial_{\mu}\zeta\partial^{\mu}\zeta+ \frac{\mu^{4}}{2}\,,\] (5) \[0= \frac{1}{2\kappa^{2}}\left(-R_{\mu\nu}+\frac{1}{2}g_{\mu\nu}[R-4 \Lambda]\right)+\frac{1}{2}T_{\mu\nu}^{\rm matter}-\frac{1}{2}\lambda\omega( \zeta)\partial_{\mu}\zeta\partial_{\nu}\zeta-\frac{1}{2}g_{\mu\nu}\tilde{V} \left(\zeta\right)+D_{\mu\nu}{}^{\tau\eta}\nabla_{\tau}\nabla_{\eta}h\left( \zeta\right)+T_{\mu\nu}^{\rm em}\,. \tag{6}\]
The energy-momentum tensor of the electromagnetic field, \(T_{\mu\nu}^{\rm em}\), is built from the Maxwell field as:
\[T_{\mu}^{\rm em\,\nu}=\mathcal{F}_{\mu\alpha}\mathcal{F}^{\nu\alpha}-\frac{1}{ 4}\delta_{\mu}{}^{\nu}\mathcal{F}\,. \tag{7}\]
Additionally, the variation of Lagrangian (2) w.r.t. the gauge potential 1-form \(\xi_{\mu}\) gives:
\[L^{\nu}\equiv\partial_{\mu}\left(\sqrt{-g}\mathcal{F}\mathcal{F}^{\mu\nu} \right)=0\,. \tag{8}\]
In this paper, we will not take into account any more the energy-momentum tensor of perfect fluid matter \(\mathcal{L}_{\rm matter}\) because we are interested only in vacuum solutions.
Now, let's examine the field equations (4), (5), (6), and (8) within a spherically symmetric spacetime characterized by non-equal metric potentials, where \(g_{tt}\neq g_{rr}{}^{-1}\). Our goal is to solve these resulting equations and obtain an analytical solution for the model. The metric can be supposed of the form:
\[ds^{2}=f(r)dt^{2}-\frac{dr^{2}}{f_{1}(r)}-r^{2}\left[d\theta^{2}+\sin^{2} \theta\,d\phi^{2}\right]\,. \tag{9}\]
Here, we have two unspecified functions of the radial coordinate denoted as \(f(r)\) and \(f_{1}(r)\). In our investigation, we make an assumption regarding the vector potential of the Maxwell field, as follows [171]:
\[\xi=q(r)dt\,, \tag{10}\]
where \(q(r)\) is the electric potential, a function of the radial coordinate.
From Eqs. (4), (5), (6), and (8) we obtain:
* the \((0,0)\) component of Eq. (6) is: \[0=-\frac{1}{fr^{2}}\bigg{[}f+12\,h^{\prime}f_{1}^{\prime}f_{1}f-f\tilde{V}r^{ 2}+q^{\prime 2}f_{1}r^{2}-8h^{\prime\prime}ff_{1}+8h^{\prime\prime}ff_{1}{}^{2}-4h^{ \prime}ff_{1}^{\prime}-ff_{1}^{\prime}r-ff_{1}\bigg{]}\,,\] (11)
* the \((r,r)\) component of Eq. (6) is: \[0=\frac{1}{fr^{2}\sin^{2}\theta}\bigg{[}\lambda\omega\zeta^{ \prime 2}ff_{1}r^{2}\sin^{2}\theta+f\tilde{V}r^{2}\sin^{2}\theta+4\,h^{\prime}f^{ \prime}f_{1}\sin^{2}\theta-12\,h^{\prime}f^{\prime}f_{1}{}^{2}\sin^{2}\theta+ f_{1}f^{\prime}r\sin^{2}\theta-q^{\prime 2}f_{1}r^{2}\sin^{2}\theta\] \[+ff_{1}\sin^{2}\theta-f\sin^{2}\theta\bigg{]}\,,\] (12)
* the \((\theta,\theta)=(\phi,\phi)\) component of Eq. (6) is: \[0=-\frac{1}{4r^{2}f^{2}\sin^{2}\theta}\bigg{[}16\,h^{\prime}f_{1} {}^{2}ff^{\prime\prime}r-2\,f_{1}f^{\prime\prime}fr^{2}-8\,h^{\prime}f_{1}{}^{ 2}f^{\prime 2}r-f_{1}^{\prime}f^{\prime}fr^{2}+16\,h^{\prime\prime}f_{1}{}^{2}f^{ \prime}fr-2\,f_{1}f^{\prime}fr+24\,h^{\prime}f_{1}^{\prime}f_{1}f^{\prime}fr\] \[-16\,h^{\prime}f_{1}{}^{2}ff^{\prime\prime}r\cos^{2}\theta-16\,h ^{\prime\prime}f_{1}{}^{2}f^{\prime}fr\cos^{2}\theta-24\,h^{\prime}f_{1}^{ \prime}f_{1}f^{\prime}fr\cos^{2}\theta+2\,f_{1}f^{\prime\prime}fr^{2}\cos^{2 }\theta+8\,h^{\prime}f_{1}{}^{2}f^{\prime 2}r\cos^{2}\theta\] \[+2\,f_{1}f^{\prime}fr\cos^{2}\theta+4\,ff_{1}r^{2}q^{\prime 2}\cos^{2 }\theta+f_{1}^{\prime}f^{\prime}fr^{2}\cos^{2}\theta-f_{1}f^{\prime 2}r^{2}\cos^{2} \theta+4\,\tilde{V}f^{\prime 2}r^{2}\cos^{2}\theta+2\,f_{1}^{\prime}f^{2}r\cos^{2}\theta\] \[-4\,ff_{1}r^{2}q^{\prime 2}-4\,\tilde{V}f^{2}r^{2}-2\,f_{1}^{ \prime}f^{2}r+f_{1}f^{\prime 2}r^{2}\bigg{]}\,,\] (13) where \(q^{\prime}=\frac{\partial q(r)}{\partial r}\).
The field equations of the scalar field (4) and (5) take the following forms:
\[0=\frac{1}{2r^{2}f^{2}\zeta^{\prime}}\left[12\,h^{\prime}f_{1}^{ \prime}ff^{\prime}f_{1}-4\,h^{\prime}f_{1}{}^{2}f^{\prime 2}-2\,f^{2}r^{2}\zeta^{ \prime 2}\lambda^{\prime}\omega f_{1}-f^{2}r^{2}\zeta^{\prime 2}\lambda\omega f _{1}^{\prime}-4\,f^{2}r\zeta^{\prime 2}\lambda\omega f_{1}-2\,\tilde{V}^{\prime}f^{2}r^{2 }-4\,h^{\prime}f_{1}^{\prime}ff^{\prime}\right.\] \[-8\,h^{\prime}f_{1}f^{\prime\prime}f+8\,h^{\prime}f_{1}{}^{2}ff^{ \prime\prime}+4\,h^{\prime}f_{1}f^{\prime 2}-\lambda\omega^{\prime}f_{1}\zeta^{ \prime 2}f^{2}r^{2}-fr^{2}\zeta^{\prime 2}\lambda\omega f_{1}f^{\prime}-2\,f^{2}r^{2} \zeta^{\prime}\lambda\omega f_{1}\zeta^{\prime\prime}\right], \tag{14}\]
\[0=\frac{2\,\lambda\omega f_{1}fr\zeta^{\prime\prime}+2\zeta^{\prime}\left\{ \lambda^{\prime}\omega f_{1}fr+\left[\lambda\omega^{\prime}ff_{1}r+1/2\,\left( rff_{1}^{\prime}+f_{1}\left(4\,b+rf^{\prime}\right)\right)\omega\right]\right\}}{2br}\,. \tag{15}\]
Ultimately, the component of the field equations (8) that does not equate to zero yields the following result:
\[L^{t}\equiv\frac{rf_{1}q^{\prime}f^{\prime}-rf_{1}^{\prime}q^{\prime}f-2\,rf_ {1}q^{\prime\prime}f-4\,f_{1}q^{\prime}f}{2f^{2}r}=0\,, \tag{16}\]
where \({}^{\prime}=\frac{d}{dr}\), and \({}^{\prime\prime}=\frac{d^{2}}{dr^{2}}\).
Eqs. (11)-(16) are six non-linear differential equations for eight unknown functions \(f\), \(f_{1}\), \(h\), \(\tilde{V}\), \(\lambda\), \(\omega\), \(\zeta\), and \(q\); therefore, we are going to fix some of these unknown functions to derive the other ones. First, we solve Eq. (5) and obtain\({}^{2}\)
Footnote 2: The special form of the scalar-field solution given by Eq. (17) does not allow us to set the constant \(c_{1}\) equal to zero, because in that case the function \(\omega\) would be undefined. Moreover, if the scalar field were set to zero, the function \(\omega\) would also vanish; the expression multiplied by the Lagrange multiplier would then vanish, and the theory would lose one of its main merits, namely being ghost-free.
\[\zeta=c_{1}\,r\,,\qquad\qquad\omega(r)=-\frac{\mu^{4}}{c_{1}{}^{2}f_{1}(r)}\,. \tag{17}\]
Using the above assumptions in Eqs. (11), (12), (13), (14) and (15), we get:
\[f(r)=\Lambda r^{2}+1+\frac{c_{2}r}{c_{1}+r^{2}}\,,\qquad\qquad f_{1}(r)= \Lambda r^{2}+\frac{c_{2}}{r}+\frac{c_{1}{}^{5/2}}{r^{5}}\,, \tag{18}\]
while the forms of \(h\), \(\tilde{V}\) and \(\lambda\) are reported in the Supplementary Material. Let us analyze the above solution in which one can show that, from the metric ansatzs, \(f\) and \(f_{1}\), cannot be equal to each other in any way. Moreover, the dimensional constant \(c_{1}\), which has a dimension \(L^{2}\), is not allowed to be vanishing as Eq. (17) shows.
Due to the complicated forms of the electric field \(q(r)\), the Lagrangian multiplier \(\lambda\), the potential \(V\), and the arbitrary function \(h\), we are going to write here their asymptotic forms in view to understand analytically their behavior. The electric field \(q(r)\), for \(r\rightarrow\infty\), is:
\[q(r)\approx c_{3}-\frac{c_{4}}{r}+\frac{c_{4}c_{1}c_{2}}{12\Lambda r^{6}}+ \frac{c_{2}c_{4}(c_{1}+c_{1}c_{2}\Lambda-\Lambda)}{16\Lambda^{2}r^{8}}-\frac{ c_{4}c_{1}c_{2}{}^{2}}{18\Lambda^{2}r^{9}}\,. \tag{19}\]
Eq. (19) shows that if the constant \(c_{4}\) is vanishing, we get a constant value of the electric field. Moreover Eq. (19) shows that the electric field has more order than the monopole. These extra terms comes from the contributions due to the scalar field. The behavior of the potential \(\tilde{V}(r)\) as \(r\) approaches infinity can be expressed as follows:
\[\tilde{V}(r)\approx-3\Lambda-\frac{4\Lambda{c_{4}}^{2}}{3rc_{2}}+\frac{10c_{1} \Lambda}{3r^{2}}-\frac{20\Lambda{c_{1}}{c_{4}}^{2}}{9r^{3}c_{2}}-\frac{34\,{c _{1}}^{2}\Lambda}{9r^{4}}\,. \tag{20}\]
Moreover, if the constant \(c_{4}\), linked to the electric, is vanishing, we see that the value of the potential is \(O\left(\frac{1}{r^{2n}}\right)\). This implies that when there is no electric field present, the potential order follows a behavior of \(O(\frac{1}{r^{2n}})\), where \(n\) is a positive numerical value. Eq. (20) indicates that we have a constant potential as \(r\rightarrow\infty\). Eq. (20) shows that the dimensional parameter \(c_{2}\), which has the unit of \(L\), is not allowed to vanish to avoid undefined value of the potential \(\tilde{V}\). Moreover, from Eq. (19), we can not reproduce, in any case, the Reissner Nordstrom gauge potential.
Now, let's examine the trend exhibited by the function \(h\) as it approaches zero. In this limit, it assumes the following form:
\[h(r)\approx c_{5}+\frac{r^{7}}{56{c_{1}}^{7/2}}-\frac{c_{3}r^{10}}{60{c_{1}}^{5}}- \frac{r^{11}c_{2}({c_{3}}^{2}+105)}{9240{c_{1}}^{6}}\,. \tag{21}\]
Eq. (21) indicates that \(h\) remains finite as \(r\to 0\). Ultimately, \(\lambda(r)\), as \(r\to\infty\), behaves as:
\[\lambda(r)\approx-\frac{4}{3}\frac{{c_{4}}^{2}\Lambda}{{c_{2}}\,\mu^{4}r}+ \frac{10}{3}\frac{{c_{1}}\,\Lambda}{\mu^{4}r^{2}}-\frac{20}{9}\frac{{c_{4}}^{2 }\Lambda\,c_{1}}{{c_{2}}\,\mu^{4}r^{3}}-\frac{1}{486}\,\frac{648\,\Lambda^{3}{ c_{4}}^{2}+486\,\Lambda^{3}c_{1}+1836\,{c_{1}}^{2}\Lambda^{4}}{\mu^{4}\Lambda^{3}r^{4}}\,. \tag{22}\]
Eq. (22) means that, for \(r\to\infty\), the Lagrange multiplier vanishes. Eq. (22) also shows that if the constant \(c_{4}\), related to the electric field, vanishes, the Lagrange multiplier behaves as \(O(\frac{1}{r^{2n}})\) with \(n\) a positive number. In Figure 1, we show the behavior of \(\tilde{V}\), \(h\) and \(\lambda\) for some numerical values characterizing the solution. From these figures, we can see that all these quantities have positive values and approach zero as \(r\to\infty\).
## III Physical behavior of the charged black holes
Now, we will delve into the physical characteristics of the black hole derived in the preceding section. To facilitate this, we express the line element of this black hole in the following manner:
\[ds^{2}=\bigg{[}\Lambda r^{2}+1+\frac{c_{2}r}{c_{1}+r^{2}}\bigg{]}dt^{2}-\frac{dr^{2}}{\Lambda r^{2}+1+\frac{c_{2}}{r}+\frac{c_{1}{}^{5/2}}{r^{5}}}-r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2})\] \[\approx\bigg{[}\Lambda r^{2}+1-\frac{2M}{r}+\frac{2Mc_{1}}{r^{3}}-\frac{2Mc_{1}{}^{2}}{r^{5}}\bigg{]}dt^{2}-\frac{dr^{2}}{\Lambda r^{2}+1-\frac{2M}{r}+\frac{c_{1}{}^{5/2}}{r^{5}}}-r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2})\,. \tag{23}\]
Here we put \(c_{2}=-2M\). The above metric indicates that in no limit can we obtain a Reissner-Nordstrom BH.
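As a consistency check of the expansion quoted in Eq. (23), the following SymPy sketch expands \(g_{tt}\) with \(c_{2}=-2M\) in powers of \(1/r\); the symbol names are arbitrary and chosen only for this illustration.

```python
import sympy as sp

r, M, c1 = sp.symbols('r M c_1', positive=True)
Lam = sp.symbols('Lambda')

# g_tt of Eq. (23) with c_2 = -2M
f_tt = Lam * r**2 + 1 - 2 * M * r / (c1 + r**2)

# expand in powers of 1/r by substituting x = 1/r
x = sp.symbols('x', positive=True)
series = sp.series(f_tt.subs(r, 1 / x), x, 0, 6).removeO()
print(sp.simplify(series.subs(x, 1 / r)))
# expected (up to term ordering):
# Lambda*r**2 + 1 - 2*M/r + 2*M*c_1/r**3 - 2*M*c_1**2/r**5
```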
The curvature scalars, associated to the solution (23), as \(r\to\infty\) take the form:
\[R^{\alpha\beta\gamma\rho}R_{\alpha\beta\gamma\rho}\approx 24 \Lambda^{2}-\frac{40Mc_{1}\Lambda}{r^{5}}+\frac{48M^{2}}{r^{6}}\,,\] \[R^{\alpha\beta}R_{\alpha\beta}\approx 36\Lambda^{2}+\frac{60Mc_{ 1}}{r^{5}}+\frac{24Mc_{1}-\Lambda c_{1}[252Mc_{1}-108]}{2r^{7}}\,,\] \[R\approx-12\Lambda-\frac{10Mc_{1}}{r^{5}}-\frac{2Mc_{1}+3\Lambda c _{1}[3c_{1}+14Mc_{1}]}{\Lambda r^{7}}\] \[\mathcal{G}\approx 24\Lambda^{2}+\frac{40M\Lambda c_{1}}{r^{5}} +\frac{12c_{1}}{r^{6}}\,. \tag{24}\]
Figure 1: Plot (a) represents \(\tilde{V}\); Plot (b) shows the behavior of \(h\); Plot (c) shows the behavior of \(\lambda\). The numerical values of parameters are \(\Lambda=0.0001\), \(c_{1}=-5\), and \(c_{2}=c_{3}=c_{4}=c_{5}=1\). The dimensional constants are in length units.
For \(r\to 0\), they are:
\[R^{\alpha\beta\gamma\rho}R_{\alpha\beta\gamma\rho}\approx C\,r^{2}+ C_{1}\,r+C_{2}+\frac{C_{3}}{r}\,,\] \[R^{\alpha\beta}R_{\alpha\beta}\approx\frac{C_{4}}{r^{12}}+\frac{ C_{5}}{r^{13}}+\frac{C_{6}}{r^{14}}\,,\] \[R\approx C_{7}\,r^{2}+C_{8}r+C_{9}+\frac{C_{10}}{r}+\frac{C_{11}}{r^{2}}\,,\] \[\mathcal{G}\approx\frac{C_{12}}{c_{1}r^{11}}-\frac{C_{13}}{c_{1} r^{11}}-\frac{60Mc_{1}}{r^{13}}\,, \tag{25}\]
where \(C_{i}\), \(i=0\cdots 13\) are constants constructed from \(c_{1}\) and \(c_{2}\). Eqs. (24) and (25) show that the above invariants become infinite as \(r\to 0\) and have constant values as \(r\to\infty\). Moreover the invariants in Eq. (24) have a mild singularity compared with the Schwarzschild and Reissner-Nordstrom space-times [172], i.e., the leading term for \(r\to 0\) is \(\mathcal{O}\left(\frac{1}{r^{5}}\right)\) while, in Schwarzschild or in Reissner-Nordstrom space-time, it is \(\mathcal{O}\left(\frac{1}{r^{6}}\right)\). The main source of this mild singularity is the dimensional constant \(c_{1}\) which is related to the Gauss-Bonnet scalar field.
To derive the horizon radii of solution (23), we have to solve the equation \(f_{1}(r)=0\) which gives seven roots. It is hard to derive the analytic expressions of such solutions but we can deduce the asymptotic expressions and plot them using some numerical values characterizing the model. We discuss these solutions which depend on the sign of \(\Lambda\).
### The case \(\Lambda<0\)
Let us now discuss the horizons, i.e., the roots of \(g_{rr}{}^{-1}=f_{1}(r)=0\), numerically, since analytic considerations are very difficult, the algebraic equation for \(g_{rr}{}^{-1}=0\) being of seventh order. The solution (23) has up to three real roots, which represent the horizons of \(g_{rr}{}^{-1}=f_{1}(r)=0\). As shown in Figure 2 (a), these might constitute three, two, or one solutions depending on the relative values of \(M\), \(c_{1}\) and \(\Lambda\). It is well known that, for charged (A)dS solutions, one can derive two horizons. This model has three horizons thanks to the dimensional parameter \(c_{1}\), which depends on the Gauss-Bonnet scalar field. A numerical sketch for locating these roots is given after the following case-by-case discussion. Now we are going to discuss the formation of these three horizons separately:
i- When \(\Lambda=-0.01\), \(c_{1}=3\), and \(M=0.65\), the metric has one horizon in the \((t,r,\theta,\phi)\) coordinates. In this case, the horizon is the one produced by the cosmological constant, \(r_{c}\); if the cosmological constant were zero, we would have a naked singularity, where the coordinate \(t\) is always timelike and \(r\) is always spacelike. The singularity at \(r=0\) is then timelike.
ii- When \(\Lambda=-0.01\), \(c_{1}=3\), and \(M=1.01\), the metric has two horizons in the \((t,r,\theta,\phi)\) coordinates. In this case, \(r_{-}\) and \(r_{+}\) coincide to form the degenerate horizon \(r_{d}\), and the other horizon is the one produced by the cosmological constant, \(r_{c}\). When the cosmological constant vanishes, we get one horizon, which occurs at \(r=2M\). This represents an event horizon, but the \(r\) coordinate is never timelike: it becomes null at \(r=2M\), but it is spacelike on the other side. The singularity at \(r=0\) is timelike.
iii- When \(\Lambda=-0.01\), \(c_{1}=3\), and \(M=1.15\), the metric has three horizons in the \((t,r,\theta,\phi)\) coordinates. In this case, the metric function is positive for large \(r\) and small \(r\), and negative between the two vanishing points \(r_{\pm}\), as Figure 2 (a) shows. When the cosmological constant is zero, the metric has a coordinate singularity at both \(r_{-}\) and \(r_{+}\), which can be removed by a coordinate transformation. The surfaces defined by \(r=r_{\pm}\) are both null, and they are both event horizons. The singularity at \(r=0\) is timelike, not a spacelike surface as in Schwarzschild. For an observer falling into the black hole from far away, \(r_{+}\) is just like \(2M\) in the Schwarzschild metric; at this radius, \(r\) switches from being a spacelike coordinate to a timelike coordinate, and one necessarily moves in the direction of decreasing \(r\).
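As anticipated above, the horizon radii can be located numerically. The following sketch uses the form of \(g_{rr}{}^{-1}\) given in Eq. (23) with \(c_{2}=-2M\) and returns the real positive roots of the corresponding seventh-order polynomial for given \((M,c_{1},\Lambda)\); whether one, two, or three horizons are found depends on these parameter values, which are here the illustrative ones quoted in the text.

```python
import numpy as np

def horizons(M, c1, Lam):
    """Real positive roots of f_1(r) = 0, using the form of Eq. (23):
    Lambda r^2 + 1 - 2M/r + c1^(5/2)/r^5 = 0, i.e. the polynomial
    Lambda r^7 + r^5 - 2M r^4 + c1^(5/2) = 0."""
    coeffs = [Lam, 0.0, 1.0, -2.0 * M, 0.0, 0.0, 0.0, c1 ** 2.5]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    return np.sort(real[real > 0])

for M in (0.65, 1.01, 1.15):
    print(M, horizons(M, c1=3.0, Lam=-0.01))
```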
We can now discuss the thermodynamical properties of the BH solution (23). To this purpose, let us define some useful thermodynamic quantities. Figure 2 (b) shows the outer horizon \(r=r_{+}\) and the inner (Cauchy) horizon \(r=r_{-}\) when \(c_{1}=3\), \(\Lambda=-0.01\), and \(c_{2}=-2.3\) (trapped region). Furthermore, these horizons can merge into a degenerate horizon \(r_{*}\) (extreme BH) when \(c_{2}=-2.1\). If \(c_{2}>-2\), a naked singularity appears (untrapped region).
The temperature \(T\) associated with \(r_{+}\) is given by [173; 174; 175; 176; 177; 178]:
\[T(r_{+})=\frac{f^{\prime}\left(r_{+}\right)}{4\pi}\,, \tag{26}\]
and the entropy \(S\) is given by:
\[S\left(r_{+}\right)=\frac{1}{4}A\left(r_{+}\right)\,, \tag{27}\]
The stability of the BH depends on the sign of the heat capacity \(H\). To investigate this, we compute \(H_{+}\) as [179; 180],
\[H_{+}=\frac{\partial M}{\partial r_{+}}\left(\frac{\partial T}{\partial r_{+}} \right)^{-1}\,. \tag{28}\]
When \(H_{+}>0\left(H_{+}<0\right)\), the BH is stable (unstable). Additionally, we compute the Gibbs free energy \(\mathbb{G}\left(r_{+}\right)\) as [181; 182]:
\[\mathbb{G}\left(r_{+}\right)=M\left(r_{+}\right)-T\left(r_{+}\right)S\left(r_{ +}\right)\,, \tag{29}\]
with \(M\left(r_{+}\right)\) defined as:
\[M\left(r_{+}\right)=\frac{\Lambda{r_{+}}^{7}+{r_{+}}^{5}+c_{1}}{2{r_{+}}^{4}}\,. \tag{30}\]
From Eq. (26), we evaluate the Hawking temperature as:
\[T\left(r_{+}\right)=\frac{2\Lambda{c_{1}}^{2}{r_{+}}^{5}+3c_{1}\Lambda{r_{+}}^{7}+3\Lambda{r_{+}}^{9}-c_{1}{r_{+}}^{5}+{r_{+}}^{7}-{c_{1}}^{2}+c_{1}{r_{+}}^{2}}{4\pi{r_{+}}^{4}(c_{1}+{r_{+}}^{2})^{2}}\,. \tag{31}\]
The behavior of \(T_{+}\) is represented in Figure 2 (c), which indicates that \(T\left(r_{+}\right)>0\) when \(r_{+}>r_{*}\). Figure 2 (c) also indicates that \(T_{+}\) vanishes at \(r_{+}=r_{*}\) and that, if \(r_{+}<r_{*}\), the temperature is negative. The radius at which the temperature vanishes thus defines a critical value: the temperature is positive above it and negative below it. We can calculate the heat capacity of the BH (23), obtaining:
\[H\left(r_{+}\right)=-\frac{2\pi(3\Lambda{r_{+}}^{7}+{r_{+}}^{5}-4c_{1})(c_{1} +{r_{+}}^{2})^{3}}{2{c_{1}}^{3}\Lambda{r_{+}}^{5}+3{c_{1}}^{2}\Lambda{r_{+}}^ {7}+12c_{1}\Lambda{r_{+}}^{9}+3\Lambda{r_{+}}^{11}-{c_{1}}^{2}{r_{+}}^{5}+6c_{ 1}{r_{+}}^{7}-{r_{+}}^{9}+6{c_{1}}^{2}{r_{+}}^{2}-6{c_{1}}^{2}{r_{+}}^{4}+4{c_ {1}}^{3}}}\,. \tag{32}\]
Using the above thermodynamical quantities, we can finally calculate the Gibbs function as:
\[\mathbb{G}\left(r_{+}\right)=\frac{-\Lambda{c_{1}}^{2}{r_{+}}^{7}-11c_{1}\Lambda{r_{+}}^{9}-7\Lambda{r_{+}}^{11}-4{c_{1}}^{2}{r_{+}}^{5}-7c_{1}{r_{+}}^{7}-5{r_{+}}^{9}-4{c_{1}}^{3}-7{c_{1}}^{2}{r_{+}}^{2}-5{c_{1}}{r_{+}}^{4}}{4{r_{+}}^{4}(c_{1}+{r_{+}}^{2})^{2}}\,. \tag{33}\]
The behavior of \(\mathbb{G}\left(r_{+}\right)\) is shown in Figure 2 (e), which indicates that it is positive.
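The sign analysis above can also be reproduced numerically from the defining relations. The following symbolic-computation sketch (not part of the original derivation; the variable names are ours) builds \(M(r_{+})\) from Eq. (30) and \(T(r_{+})\) from Eq. (31), obtains the heat capacity through Eq. (28), and evaluates the signs for the representative values \(\Lambda=-0.01\) and \(c_{1}=3\) used in Figure 2.

```python
# Hedged sketch (not from the paper): evaluate T(r_+) of Eq. (31) and the heat
# capacity via Eq. (28) with M(r_+) from Eq. (30), for Lambda = -0.01 and c_1 = 3.
import sympy as sp

rp, c1, Lam = sp.symbols('rp c1 Lam', real=True)

M = (Lam*rp**7 + rp**5 + c1) / (2*rp**4)                                    # Eq. (30)
T = (2*Lam*c1**2*rp**5 + 3*c1*Lam*rp**7 + 3*Lam*rp**9
     - c1*rp**5 + rp**7 - c1**2 + c1*rp**2) / (4*sp.pi*rp**4*(c1 + rp**2)**2)  # Eq. (31)
H = sp.diff(M, rp) / sp.diff(T, rp)                                          # Eq. (28)

numeric = {c1: 3, Lam: sp.Rational(-1, 100)}
for r_plus in (1.0, 2.0, 4.0):
    print(r_plus,
          float(T.subs(numeric).subs(rp, r_plus)),   # sign flips at r_+ = r_*
          float(H.subs(numeric).subs(rp, r_plus)))   # H_+ > 0 signals a stable BH
```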
### The case \(\Lambda>0\)
Following the same procedure as in the case \(\Lambda<0\), we obtain the behavior of the solutions of Eq. (23) for \(\Lambda>0\). The behavior of the thermodynamical quantities \(T_{+}\), \(H_{+}\) and \(\mathbb{G}\left(r_{+}\right)\) is also shown in Figure 3. From the figure, it is clear that a physically relevant model can be achieved for \(\Lambda>0\) as well.
## IV Discussion and conclusions
In four-dimensional AdS space-time, we investigated a class of GB gravitational equations coupled to a scalar field [162; 163] in the presence of the electromagnetic field. We obtained a charged BH solution, derived by choosing the condition \(g_{tt}\neq{g_{rr}}^{-1}\) for the metric potentials. It is worth noticing that such a solution cannot yield the standard Reissner-Nordstrom space-time. The BH presents extra terms coming from the presence of the GB field. Such a contribution cannot take a zero value, because one of the unknown functions characterizing the GB coupled scalar field would be undefined if it vanished. Therefore, this solution cannot, in general, reduce to solutions of Einstein's GR. Moreover, we have shown that the electric field of this solution is different from the Maxwell field, and this difference depends on the contribution of the GB scalar field; it cannot coincide with the electric field of Maxwell theory. Therefore, we can conclude that the GB scalar field considered in this study acts as a non-linear electrodynamics field reproducing higher multipole contributions in the metric potential. This is in contrast to EGB theory, in which one cannot obtain solutions different from those of GR, since the GB term acts there as a topological invariant constructed from the Riemann curvature tensor. Thus, in the framework of Einstein theory where the scalar field is coupled with the GB term, we succeeded in deriving a new solution.
We have to stress again the fact that the solution derived in this study can not be reduced, in any case, to the Reissner-Nordstrom solution. The reasons are the following:
\(i\)) If the dimensional parameter \(c_{2}\) vanishes, then we can reproduce the Reissner-Nordstrom gauge potential, but this gives rise to several issues in the solution, as the potential field \(\tilde{V}\) is then not defined (see Eq. (20)). Additionally, since \(c_{2}\) is the total mass of the system, it cannot go to zero, otherwise the gravitational field would have no source.
Furthermore, we investigated the relevant physics of the solution and showed that, as \(r\to\infty\), it approaches the AdS space-time. It is also possible to compute the associated curvature invariants and demonstrate that the solution has a soft singularity when compared to the Schwarzschild and Reissner-Nordstrom space-times. This is because the invariants of the Schwarzschild space-time behave as \(\{K,R^{\mu\nu}R_{\mu\nu},R\}\approx\{\mathcal{O}\left(\frac{1}{r^{6}}\right),0,0\}\) while the invariants of this BH behave as \(\{K,R^{\mu\nu}R_{\mu\nu},R\}\approx\{\mathcal{O}\left(\frac{1}{r^{6}}\right), \mathcal{O}\left(\frac{1}{r^{6}}\right),\mathcal{O}\left(\frac{1}{r^{6}}\right)\}\). The main source of this soft singularity is the Gauss-Bonnet scalar field. Also, it is possible to show that this BH has a multi-horizon structure: specifically, three horizons for the AdS space-time and two horizons for the dS space-time.
Finally, the thermodynamic behavior of this solution can be studied by calculating the Hawking temperature, heat capacity, and Gibbs free energy. The thermodynamics can be divided into two classes depending on the sign of the cosmological constant, i.e., \(\Lambda>0\) or \(\Lambda<0\). For both classes, we show that, below a critical temperature, the BH solution is unstable and undergoes a phase transition whose endpoint is the charged BH dressed with a scalar field. The solution shows a second-order phase transition in which the scalar field condenses below a critical temperature. The Gibbs function can also be evaluated, and we showed that, for both classes, it always possesses a positive value.
To summarize, we found a charged solution in the framework of GB gravity coupled to a scalar field and demonstrated that it approaches the AdS space-time asymptotically as \(r\to\infty\). Its singularity is soft compared to the Schwarzschild space-time, it has a multi-horizon structure, and it becomes unstable below the critical temperature.
Figure 2: Plot of the thermodynamic quantities of the BH solution (23) for \(\Lambda<0\); Figure (a) represents Eq. (23); Figure (b) shows the solutions of Eq. (23); Figure (c) represents \(T_{+}\); Figure (d) represents \(H_{+}\); Figure (e) represents \(\mathbb{G}_{+}\). Here we take the numerical values \(M=4.6\) and \(\Lambda=-0.01\).
A discussion similar to that reported here can be developed for GB gravity coupled to a scalar but in the framework of non-linear electrodynamics. We expect different BH solutions. This topic will be developed in a forthcoming paper.
###### Acknowledgements.
S.C. acknowledges the support of Istituto Nazionale di Fisica Nucleare, Sezione di Napoli, _iniziative specifiche_ QGSKY and MOONLIGHT2. This paper is based upon work from COST Action CA21136 - _Addressing observational tensions in cosmology with systematics and fundamental physics_ (CosmoVerse), supported by COST (European Cooperation in Science and Technology).
|
2309.12953 | Inter-vendor harmonization of Computed Tomography (CT) reconstruction
kernels using unpaired image translation | The reconstruction kernel in computed tomography (CT) generation determines
the texture of the image. Consistency in reconstruction kernels is important as
the underlying CT texture can impact measurements during quantitative image
analysis. Harmonization (i.e., kernel conversion) minimizes differences in
measurements due to inconsistent reconstruction kernels. Existing methods
investigate harmonization of CT scans in single or multiple manufacturers.
However, these methods require paired scans of hard and soft reconstruction
kernels that are spatially and anatomically aligned. Additionally, a large
number of models need to be trained across different kernel pairs within
manufacturers. In this study, we adopt an unpaired image translation approach
to investigate harmonization between and across reconstruction kernels from
different manufacturers by constructing a multipath cycle generative
adversarial network (GAN). We use hard and soft reconstruction kernels from the
Siemens and GE vendors from the National Lung Screening Trial dataset. We use
50 scans from each reconstruction kernel and train a multipath cycle GAN. To
evaluate the effect of harmonization on the reconstruction kernels, we
harmonize 50 scans each from Siemens hard kernel, GE soft kernel and GE hard
kernel to a reference Siemens soft kernel (B30f) and evaluate percent
emphysema. We fit a linear model by considering the age, smoking status, sex
and vendor and perform an analysis of variance (ANOVA) on the emphysema scores.
Our approach minimizes differences in emphysema measurement and highlights the
impact of age, sex, smoking status and vendor on emphysema quantification. | Aravind R. Krishnan, Kaiwen Xu, Thomas Li, Chenyu Gao, Lucas W. Remedios, Praitayini Kanakaraj, Ho Hin Lee, Shunxing Bao, Kim L. Sandler, Fabien Maldonado, Ivana Isgum, Bennett A. Landman | 2023-09-22T15:53:56Z | http://arxiv.org/abs/2309.12953v2 | Inter-vendor harmonization of Computed Tomography (CT) reconstruction kernels using unpaired image translation
###### Abstract
The reconstruction kernel in computed tomography (CT) generation determines the texture of the image. Consistency in reconstruction kernels is important as the underlying CT texture can impact measurements during quantitative image analysis. Harmonization (i.e., kernel conversion) minimizes differences in measurements due to inconsistent reconstruction kernels. Existing methods investigate harmonization of CT scans in single or multiple manufacturers. However, these methods require paired scans of hard and soft reconstruction kernels that are spatially and anatomically aligned. Additionally, a large number of models need to be trained across different kernel pairs within manufacturers. In this study, we adopt an unpaired image translation approach to investigate harmonization between and across reconstruction kernels from different manufacturers by constructing a multipath cycle generative adversarial network (GAN). We use hard and soft reconstruction kernels from the Siemens and GE vendors from the National Lung Screening Trial dataset. We use 50 scans from each reconstruction kernel and train a multipath cycle GAN. To evaluate the effect of harmonization on the reconstruction kernels, we harmonize 50 scans each from Siemens hard kernel, GE soft kernel and GE hard kernel to a reference Siemens soft kernel (B30f) and evaluate percent emphysema. We fit a linear model by considering the age, smoking status, sex and vendor and perform an analysis of variance (ANOVA) on the emphysema scores. Our approach minimizes differences in emphysema measurement and highlights the impact of age, sex, smoking status and vendor on emphysema quantification.
Deep learning, image translation, generative adversarial networks, harmonization, computed tomography
## 1 Introduction
Image resolution and noise in computed tomography (CT) scans are dependent on raw data acquisition parameters and reconstruction parameters[1]. In the context of lung imaging using CT, the reconstruction kernel has an impact on emphysema quantification[2] and the robustness of radiomic features for different lung diseases[3]. There exists a tradeoff between spatial resolution and noise based on the choice of reconstruction kernel[4]. A hard kernel has higher spatial resolution accompanied by noise while a soft kernel has lower spatial resolution with reduced noise[4]. This trend is observed within a vendor and across vendors **(Figure 1)**. The sharpness of the kernel affects the values of quantitative image features during image analysis, creating differences in measurements[5].
Kernel harmonization is a method that standardizes quantitative measurements across reconstruction kernels. Existing methods include physics-based and deep learning approaches. A physics-based harmonizer involving the modulation transfer function and global noise index was developed and its efficiency was evaluated on emphysema quantification [6]. Another physics-based approach implemented a generative deep learning model for harmonization and measured its performance on image similarity metrics and emphysema-based imaging biomarkers [7]. Deep learning approaches perform kernel conversion on paired data by learning the differences between high and low-resolution images using convolutional neural networks (CNN) [8]. Tanabe et al. [9] employed a harmonization method on paired hard and soft kernels using a CNN and evaluated the performance on emphysema, intramuscular adipose tissue and coronary artery calcification. Lee et al. [10] implemented kernel conversion from one kernel to various other kernels using CNNs. Fully convolutional networks (FCN) are also used for kernel harmonization. Bak et al. [11] implemented an FCN for image-to-image translation from a hard to soft kernel to study emphysema quantification. In addition to quantitative assessment, kernel conversion using CNN has shown to improve the reproducibility of radiomic features for pulmonary nodules and masses [12]. These methods explore kernel harmonization in CT scans reconstructed with pairs of hard and soft kernels that have one-to-one mapping.
Generative adversarial networks (GANs) can generate synthetic images from Gaussian noise [13]. Image-to-image translation involves mapping a source image to a target image, preserving the contents of the source and transferring the style of the target [14]. Conditional GANs [15] have been employed for image translation in the form of pix2pix GAN for paired data [16] and cycle GAN for unpaired data [17]. Advanced generative models have been developed that can perform unsupervised image translation [18], multimodal image translation [19] and multidomain image translation [20]. In medical imaging, GANs have been used in clinical applications that include image synthesis, image reconstruction, cross-modality synthesis, image analysis and pseudo-healthy synthesis [21]. When considering CT scans from different vendors reconstructed with different kernels, a one-to-one mapping does not exist. One approach to harmonize across unpaired kernels is to use a cycle GAN where the goal is to translate images from the source to target domain and back to the source domain, ensuring a cycle consistent translation.
We implemented a multipath cycle GAN for kernel harmonization between kernels from the same vendor and across kernels from different vendors. The generators were built using a combination of shared encoder-decoder architectures, creating multiple harmonization paths. This enables an encoder to share its latent space with the corresponding target decoders for multi-domain image-to-image translation (**Figure 2**). We evaluated our model on emphysema quantification by standardizing hard and soft kernels to a reference soft kernel. We studied the effect of age, sex, smoking status and vendor on emphysema scores before and after harmonization by fitting a linear regression model and carrying out an analysis of variance (ANOVA) statistical test.
Figure 1: Differences in reconstruction kernels can be minimized by harmonizing to a reference standard. Harmonizing between paired kernels (left) has been explored due to the presence of one-to-one pixel correspondence between scans. However, unpaired kernels (right) create additional difficulties due to the difference in the anatomical alignment of scans obtained for different subjects from different vendors.
## 2 Methodology
We use data from the National Lung Screening Trial (NLST), a randomized controlled trial that compared low-dose CT (LDCT) scans of the chest with chest X-Ray in lung cancer screening [22]. Participants included in the trial were former and current smokers between the ages of 55 and 74 years, having a smoking history of at least 30 pack years [22]. We chose LDCT scans in the following manner: for every participant, the CT scans were reconstructed using different reconstruction kernels. Within a vendor, a participant had a scan reconstructed with a soft kernel and a hard kernel, forming a pair of scans. We consider the Siemens vendor consisting of the B50f (hard) kernel and B30f (soft) kernel and the GE vendor consisting of the BONE (hard) kernel and STD (soft) kernel. The peak kilovoltage output (kVp) for the scans from B50f, B30f, GE BONE and GE STD ranged from 80-140 kVp. We choose 50 scans for every reconstruction kernel resulting in a total of 200 scans to train the model. While testing our model, we consider 50 withheld scans each from the B50f, GE BONE and GE STD kernel and harmonize to the B30f kernel.
### Pre-processing
We convert the CT scans from DICOM to NIFTI using the dcm2niix tool [23] (version 1.0.2). Before feeding the data to the model, the images are clipped to [-1024,3072] Hounsfield Units (HU) and normalized to [-1,1].
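A minimal sketch of this preprocessing step is given below (illustrative only, not the authors' code); it assumes the volumes have already been converted to NIfTI with dcm2niix and are loaded with nibabel.

```python
# Illustrative preprocessing sketch: clip to [-1024, 3072] HU and normalize to [-1, 1].
import nibabel as nib
import numpy as np

def preprocess(nifti_path):
    """Load a CT volume, clip it to the stated HU window, and map it to [-1, 1]."""
    volume = nib.load(nifti_path).get_fdata()          # voxel values in HU
    volume = np.clip(volume, -1024.0, 3072.0)          # clip to [-1024, 3072] HU
    volume = (volume + 1024.0) / (3072.0 + 1024.0)     # rescale to [0, 1]
    return volume * 2.0 - 1.0                          # rescale to [-1, 1]
```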
### Multipath cycle GAN model
We use the concept of a cycle GAN and incorporate multiple paths to perform harmonization across different reconstruction kernels. To build a multipath kernel harmonization model, we use multiple generators and discriminators. The generator is a U-Net [24] with skip connections. We deconstruct the U-Net model into its respective encoder and decoder architectures, initializing encoders and decoders for every kernel. Skip connections enable the decoder to learn low-level information (visual features) during the up-sampling process. A PatchGAN [16] is implemented as the discriminator model, which classifies patches of images as real or synthetic. For a given path, the latent space from a source encoder is of size (512,1,1), where 512 represents the number of features obtained from the encoding process and the last two dimensions represent the spatial dimensions. This latent space is utilized by three decoders from the respective target domains. In this fashion, each encoder shares its latent space with three other decoders depending on the path chosen for harmonization.
Figure 2: Kernel harmonization across four different reconstruction kernels can be performed using multiple cycle GANs operating across multiple paths. For a given source domain, a latent space is obtained from a source encoder that can be decoded by the corresponding target decoders. This approach enables harmonization between kernels from the same vendor and across kernels from different vendors using a high dimensional shared latent space (denoted as “L”).
For the four different kernel domains, there are six possible directions to carry out harmonization: B50f to B30f, B50f to GE BONE, B50f to GE STD, B30f to GE STD, B30f to GE BONE and GE BONE to GE STD. A total of 12 different paths are created with six forward paths and six backward paths (**Figure 2**). In every direction, we build the generator as follows: we treat one kernel as the source domain and use a source encoder to compress the image into a latent representation along with features from the down sampling process. We treat another kernel as the target domain and concatenate the features from the source encoder to the target decoder which decodes the latent representation, resulting in the generation of a synthetic image for the target domain (**Figure 3**). We stitch together all possible combinations of encoders and decoders, creating multiple generators for all paths. We use four different discriminators which are shared among all the reconstruction kernels depending on the direction of image translation.
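The sketch below illustrates this path construction (illustrative only; the toy encoder/decoder stand-ins omit the skip connections and do not reproduce the exact (512,1,1) latent size of our U-Net): one encoder and one decoder are instantiated per kernel domain, and the generator for a given direction is the composition of the source encoder with the target decoder.

```python
# Minimal sketch of the multipath generator idea (not the authors' implementation).
import torch
import torch.nn as nn

KERNELS = ["B50f", "B30f", "GE_BONE", "GE_STD"]

def make_encoder():
    # toy stand-in for the contracting half of a U-Net
    return nn.Sequential(nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.ReLU(),
                         nn.Conv2d(64, 512, 4, stride=2, padding=1), nn.ReLU())

def make_decoder():
    # toy stand-in for the expanding half of a U-Net (skip connections omitted)
    return nn.Sequential(nn.ConvTranspose2d(512, 64, 4, stride=2, padding=1), nn.ReLU(),
                         nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1), nn.Tanh())

encoders = {k: make_encoder() for k in KERNELS}
decoders = {k: make_decoder() for k in KERNELS}

def generate(x, source, target):
    """Translate an image from the `source` kernel style to the `target` kernel style."""
    latent = encoders[source](x)          # shared latent representation
    return decoders[target](latent)       # decode with the target-domain decoder

fake_b30f = generate(torch.randn(1, 1, 512, 512), "B50f", "B30f")
```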
Our model uses data from all the reconstruction kernels and is trained simultaneously on all paths. Our approach enables the model to harmonize from hard to soft kernels between and across vendors, hard to hard kernels and soft to soft kernels across vendors. We train our model on 2D grayscale axial slices of size \(512\times 512\) pixels from all domains. The images are loaded into the model at a size of \(572\times 572\) pixels and are cropped to \(512\times 512\) pixels. The model was trained in parallel on two Nvidia A6000 GPUs for a total of 30 epochs with a batch size of 8, with the Adam[25] optimizer and a learning rate of 0.0002. The learning rate remains constant for the first 15 epochs and begins to linearly decay for the next 15 epochs till it reaches 0. The generator and discriminator are governed by an adversarial loss which is implemented using the LSGAN[26] loss function. In addition to the adversarial loss, the generator is governed by an L1 cycle consistency loss. The weighting parameter, \(\lambda\) for the adversarial loss is set to 10 for the forward and backward cycle paths. We use the default cycle GAN configuration of random horizontal flipping for data augmentation. A total of 12 adversarial losses, cycle losses and discriminator losses are implemented.
Figure 3: A cycle GAN consists of a forward and backward path. In the forward path, the source encoder and target decoder combine together to form a U-Net that generates a synthetic image with the style of the target domain. The synthetic image and the real target domain image are passed as inputs to a discriminator \(\mathrm{D_{h}}\) which distinguishes whether the generated image is real or fake. In the backward path, a synthetic image with the style of the source domain is generated which is fed to discriminator \(\mathrm{D_{A}}\) along with the source domain image.
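A sketch of the loss bookkeeping for a single forward/backward direction is shown below (illustrative only; the exact placement of the weighting parameter in our implementation may differ). The adversarial terms follow the least-squares LSGAN criterion and the cycle-consistency term is an L1 penalty.

```python
# Illustrative LSGAN + cycle-consistency losses for one harmonization direction.
import torch
import torch.nn as nn

mse, l1 = nn.MSELoss(), nn.L1Loss()
lam = 10.0  # weighting parameter from the text; shown here on the cycle term,
            # as in standard cycle GAN implementations

def generator_objective(disc_tgt, real_src, fake_tgt, cycled_src):
    pred_fake = disc_tgt(fake_tgt)
    adv = mse(pred_fake, torch.ones_like(pred_fake))   # try to fool the PatchGAN
    cyc = l1(cycled_src, real_src)                     # source -> target -> source
    return adv + lam * cyc

def discriminator_objective(disc_tgt, real_tgt, fake_tgt):
    pred_real = disc_tgt(real_tgt)
    pred_fake = disc_tgt(fake_tgt.detach())            # do not backprop into the generator
    return 0.5 * (mse(pred_real, torch.ones_like(pred_real))
                  + mse(pred_fake, torch.zeros_like(pred_fake)))
```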
To validate our model, we estimate the ability of the model to minimize measurement differences in emphysema quantification. We compute lung masks for all the scans using an existing algorithm that automatically analyses the lung regions [27]. We compute the percentage of voxels that have a radiodensity less than -950 HU within the segmented lung masks to obtain the emphysema score. We fit a linear model to estimate the effect of age, vendor, sex and smoking status on the emphysema scores for the respective kernels before and after harmonization. The linear regression equation is given by:
\[Y\sim\beta_{0}+\beta_{1}*X_{1}+\beta_{2}*X_{2}+\ \beta_{3}*X_{3}\ +\beta_{4}*X_{4}+\ \varepsilon \tag{1}\]
where \(Y\) is the emphysema measurement, \(\beta_{0}\) is the intercept term, \(X_{1},X_{2},X_{3},X_{4}\) represent the age, sex, smoking status and vendor respectively, \(\beta_{1},\beta_{2},\beta_{3},\beta_{4}\) represent the regression coefficients of the independent variables and \(\varepsilon\) is the error term.
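The emphysema scoring and the statistical analysis of Eq. (1) can be sketched as follows (illustrative only; the data frame, column names and synthetic values are assumptions made purely to keep the snippet self-contained and runnable, not our actual data).

```python
# Illustrative sketch: percent-emphysema score and the linear model of Eq. (1)
# followed by a type-II ANOVA, analogous to the analysis reported in Table 1.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def percent_emphysema(ct_hu, lung_mask, threshold_hu=-950.0):
    """Percentage of lung-mask voxels with radiodensity below threshold_hu."""
    lung_voxels = ct_hu[lung_mask > 0]
    return 100.0 * float(np.mean(lung_voxels < threshold_hu))

rng = np.random.default_rng(0)                      # synthetic rows for demonstration
df = pd.DataFrame({
    "emphysema": rng.uniform(0, 30, 200),
    "age": rng.integers(55, 75, 200),
    "sex": rng.choice(["M", "F"], 200),
    "smoking": rng.choice(["current", "former"], 200),
    "vendor": rng.choice(["Siemens", "GE"], 200),
})
model = smf.ols("emphysema ~ age + C(sex) + C(smoking) + C(vendor)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))              # p-values analogous to Table 1
```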
## 3 Results
We harmonize the B50f, GE STD and GE BONE kernels to the reference B30f kernel using the trained model. Prior to harmonization, each kernel has a different appearance due to the difference in textures that occur as a result of the vendor specific reconstruction. Harmonization standardizes the noise level to the reference kernel across all the reconstruction kernels (**Figure 4**). The converted B50f kernel, GE BONE kernel and GE STD kernel are translated using the style of the B30f kernel. The anatomy of the lung scans is preserved in both kernels after harmonization. Although the GE STD and GE BONE kernel are harmonized to the B30f kernel, artefacts are introduced in regions outside the lung field of view.
We assess the effect of harmonization on emphysema quantification. The emphysema scores are computed for subjects from different reconstruction kernels. Before harmonization, the ranges of emphysema scores of the B50f kernel, the B30f kernel and the GE STD kernel are (2.16, 37.06), (0.27, 30.12) and (0.02, 20.02). After harmonization to the B30f kernel, the ranges of scores for the converted B50f kernel and the GE STD kernel are (0.15, 25.05) and (0.06, 25.4). The distribution of scores before and after harmonization is shown in violin plots (**Figure 5**). We also harmonized the GE BONE kernel to the B30f kernel but observed that the distribution of emphysema scores went from (1.02, 27.67) to (0.04, 43.84) (**Figure 5**). Although the GE BONE kernel resembles the B30f kernel in appearance, the emphysema scores are overestimated on the harmonized kernel compared to the original emphysema scores (**Figure 6**). For this reason, we exclude the GE BONE kernel from the regression analysis.
Figure 4: The noise in reconstruction kernels creates differences in the texture of underlying anatomical structures. The B50f and GE BONE hard kernels are noisy while the B30f and GE STD kernels are less noisy. Although the B30f and GE STD are soft kernels, their noise levels are different as these kernels belong to different vendors. Standardizing the GE soft kernel, GE BONE kernel and the B50f kernel to the reference B30f kernel (row 2) ensures consistent texture across all kernels for quantitative image analysis.
To study the effect of age, sex, vendor and smoking status on emphysema, we perform ANOVA after the models are fit to the data before and after harmonization. Age, sex and vendor had an impact on the emphysema scores for the different reconstruction kernels before harmonization. Once the kernels were harmonized, vendor and sex were no longer significantly (p\(>\)0.05) related to the emphysema score, while age remained significantly (p\(<\)0.05) related. Smoking status had no impact before or after harmonization. All variables that are significantly related to the emphysema score are highlighted in **Table 1**.
## 4 Discussion and Conclusion
In this study, we investigated kernel harmonization in a multi-vendor, multi-kernel scenario by considering hard and soft reconstruction kernels from the Siemens and GE vendors. We implemented a multipath cycle GAN that can harmonize across different kernels in six different directions. We assess the efficiency of harmonization by standardizing the B50f (hard) kernel, GE BONE (hard) kernel and GE STD (soft) kernel to the B30f soft kernel and further evaluate emphysema quantification on the converted kernels. For Siemens, we observed that the model is able to convert the B50f kernel to the B30f kernel in an unpaired fashion on paired data. Across the vendors, the model was able to harmonize the GE STD kernel by learning the style of the B30f. Prior to harmonization, there is variation in emphysema quantification between hard and soft kernels. This variation is minimized among all the kernels after harmonizing to the reference B30f kernel, as seen in **Figure 5**.
\begin{table}
\begin{tabular}{|c|c|c|}
\hline
**Parameters** & **Before harmonization** & **After harmonization** \\ \hline
Vendor & \(\mathbf{p<0.05}\) & \(p=0.92\) \\ \hline
Age & \(\mathbf{p<0.05}\) & \(\mathbf{p<0.05}\) \\ \hline
Sex & \(\mathbf{p=0.02}\) & \(p=0.24\) \\ \hline
Smoking status & \(p=0.11\) & \(p=0.11\) \\ \hline
\end{tabular}
\end{table}
Table 1: ANOVA is performed on the emphysema scores before and after kernel harmonization. The effect of age, vendor, sex and smoking status is studied. All p values less than 0.05 are significant.
Figure 5: Percentage emphysema scores are affected by the reconstruction kernel in a given vendor, resulting in differences in measurements. Hard kernels overestimate emphysema quantification. Harmonizing kernels from different vendors to a reference soft kernel minimizes measurement errors, leading to a consensus among vendors for emphysema measurement.
Our findings are consistent with previous studies that implemented kernel harmonization between paired kernels. Gallardo-Estrella et al.[28] showed that emphysema is sensitive to the reconstruction kernel. In their study, normalization of the kernel reduced the average differences in emphysema quantification for the Siemens and GE vendors. Additionally, Jin et al.[29] carried out a harmonization study on B50f and B30f kernels, showing that the lung density biomarkers for emphysema reduced considerably after harmonization. We also looked at the impact of age, vendor, smoking status and sex on emphysema quantification. Vendor had a high influence on emphysema prior to harmonization, suggesting variations in measurements as seen in **Table 1**. Once the kernels were harmonized, the influence of vendor and sex on emphysema was not significant, indicating that the differences in measurements were minimized across the kernels. Therefore, harmonizing kernels to a reference soft kernel mitigated the site effect on emphysema.
Our approach has several limitations. We chose 50 scans for each reconstruction kernel while training the model. Although the model was trained on slices, the size of the dataset is limited, preventing the model from learning better representations. Furthermore, the harmonization of the GE BONE kernel was poor compared to other kernels when studying emphysema quantification. There were very few subjects where the emphysema variation was minimal. In one such case, the percent emphysema score reduced from 11.59% to 6.31%. In most of the cases, emphysema was overestimated after harmonization. A representative case of overestimation of emphysema can be seen in **Figure 6**, where the emphysema score changed from 27.47% to 43.84%. Additionally, there were artefacts in the harmonized GE BONE and GE STD kernels that were observed outside the lung field of view. A possible explanation for this could be the lack of convergence of the model as a result of a small number of training epochs. In future studies, better initialization, additional data for the current vendors, inclusion of additional vendors with different reconstruction kernels, and longer training need to be investigated.
Figure 6: Emphysema quantification of GE BONE after kernel harmonization to B30f remained challenging. Although harmonization correctly reduced emphysema variation (left) in a few subjects, emphysema in the majority of subjects was overestimated (right).
## Acknowledgement
This research was funded by the National Cancer Institute (NCI) grant R01 CA253923. This work was also supported in part by the Integrated Training in Engineering and Diabetes grant number T32 DK101003. This research is also supported by the following awards: National Science Foundation CAREER 1452485; NCI grant U01 CA196405; grant UL1 RR024975-01 of the National Center for Research Resources and grant UL1 TR000445-06 of the National Center for Advancing Translational Sciences; Martineau Innovation Fund grant through the Vanderbilt-Ingram Cancer Center Thoracic Working Group; NCI Early Detection Research Network grant 2U01CA152662.
|
2309.04857 | Semilinear degenerate elliptic equation in the presence of singular
nonlinearity | Given $\Omega(\subseteq\;R^{1+m})$, a smooth bounded domain and a nonnegative
measurable function $f$ defined on $\Omega$ with suitable summability. In this
paper, we will study the existence and regularity of solutions to the
quasilinear degenerate elliptic equation with a singular nonlinearity given by:
\begin{align} -\Delta_\lambda u&=\frac{f}{u^{\nu}} \text{ in }\Omega\nonumber
&u>0 \text{ in } \Omega\nonumber &u=0 \text{ on } \partial\Omega\nonumber
\end{align} where the operator $\Delta_\lambda$ is given by
$$\Delta_\lambda{u}=u_{xx}+|x|^{2\lambda}\Delta_y{u};\,(x,y)\in \;R\times\;R^m
$$ is known as the Grushin operator. | Kaushik Bal, Sanjit Biswas | 2023-09-09T18:16:15Z | http://arxiv.org/abs/2309.04857v1 | # Semilinear degenerate elliptic equation in the presence of singular nonlinearity
###### Abstract
Given \(\Omega(\subseteq\mathbb{R}^{1+m})\), a smooth bounded domain and a nonnegative measurable function \(f\) defined on \(\Omega\) with suitable summability. In this paper, we will study the existence and regularity of solutions to the quasilinear degenerate elliptic equation with a singular nonlinearity given by:
\[-\Delta_{\lambda}u =\frac{f}{u^{\nu}}\text{ in }\Omega\] \[u>0\text{ in }\Omega\] \[u=0\text{ on }\partial\Omega\]
where the operator \(\Delta_{\lambda}\) is given by
\[\Delta_{\lambda}u=u_{xx}+|x|^{2\lambda}\Delta_{y}u;(x,y)\in\mathbb{R}\times \mathbb{R}^{m}\]
is known as the Grushin operator.
###### Contents
* 1 INTRODUCTION
* 2 PRELIMINARIES AND FEW USEFUL RESULTS
* 3 EXISTENCE AND REGULARITY RESULTS
* 3.1 The case \(\nu=1\)
* 3.2 The case \(\nu>1\)
* 3.3 The case \(\nu<1\)
* 4 APPROXIMATION OF THE EQUATION (1)
* 5 A FEW AUXILIARY RESULTS
* 6 PROOF OF MAIN RESULTS
* 6.1 The case \(\nu=1\)
* 6.2 The Case \(\nu>1\)
* 6.3 The Case \(\nu<1\)
## 7 Variable singular exponent
In this paper, we are interested in the semilinear elliptic problem, whose model is given by
\[-\Delta_{\lambda}u =\frac{f}{u^{\nu}}\text{ in }\Omega \tag{1}\] \[u>0\text{ in }\Omega\] (2) \[u=0\text{ on }\partial\Omega \tag{3}\]
where the operator \(\Delta_{\lambda}\) is given by
\[\Delta_{\lambda}u=u_{xx}+|x|^{2\lambda}\Delta_{y}u;\;\lambda\geq 0\]
is known as the Grushin operator. \(\Delta_{y}\) denotes the Laplacian operator w.r.t \(y\) variable. \(\Omega\subseteq\mathbb{R}^{1+m}\) is a \(\Lambda-\)connected bounded open set (definition provided in the next section) and \(X=(x,y)\in\Omega\), \(x\in\mathbb{R}\), \(y=(y_{1},y_{2},...,y_{m})\in\mathbb{R}^{m}\), \(m\geq 1\). Here \(\nu>0\) is a positive real number, and \(f\) is a nonnegative measurable function lying in some Lebesgue space.
To understand the context of our study, we start by looking at the available literature concerning (1). Starting with the now classical work by Crandall et al. [8], where the case \(\lambda=0\) was considered and shown to admit a unique solution in \(C^{2}(\Omega)\cap C(\bar{\Omega})\) that behaves like some power of the distance function near the boundary, a plethora of work followed, all assuming \(f\in C^{\alpha}(\Omega)\). Of particular significance is the work of Lazer-McKenna, where the solution was shown to belong to \(H^{1}_{0}(\Omega)\) if and only if \(0<\nu<3\). When \(f\in L^{1}(\Omega)\), Boccardo and Orsina [6] proved that if \(0<\nu\leq 1\) then there exists a solution of (1) in \(H^{1}_{0}(\Omega)\), and that for \(\nu>1\) there exists a solution \(u\in H^{1}_{loc}(\Omega)\) such that \(u^{\frac{\nu+1}{2}}\in H^{1}_{0}(\Omega)\), among other regularity results. The \(p\)-Laplacian case was settled in [7], where existence, uniqueness, and some regularity results were proved.
In this paper, we would like to relook at the equation (1) by replacing the Laplacian with a degenerate elliptic operator whose prototype is the Grushin Laplacian \(\Delta_{\lambda}\). We will prove existence and regularity results analogous to [6]. It is worth pointing out that several issues arise when the degeneracy is introduced. If the distance between the domain \(\Omega\) and the plane \(x=0\) is positive, then the Grushin operator becomes uniformly elliptic in \(\Omega\), and in this case the problem is settled in [6]. We assume that the domain \(\Omega\) intersects the plane \(x=0\), so that the operator degenerates in \(\Omega\). To handle this kind of degeneracy, assuming that \(\Delta_{\lambda}\) admits a uniformly elliptic direction, we discuss the solvability of (1) in the weighted degenerate Sobolev space \(H^{1,\lambda}(\Omega)\) which is defined in [9, 11]. We also need a notion of convergence of sequences in the space \(H^{1,\lambda}(\Omega)\), for which Monticelli-Payne [18] introduced the concept of a quasi-gradient, hence providing a proper representation of elements of \(H^{1,\lambda}(\Omega)\). Another issue is the lack of availability of the Strong Maximum Principle, which we show to hold using the weak Harnack inequality of Franchi-Lanconelli [12, Theorem 4.3], valid for the \(d-\)metric on \(\Omega\), provided \(\lambda\geq 1\) and assuming that \(\Omega\) is \(\Lambda-\)connected (the definition is provided in the next section). We conclude our study with a brief discussion of how a variable singular exponent for the Grushin Laplacian may be handled; the Laplacian counterpart can be found in Garain-Mukherjee [14]. For further reading into the topic, one may look at the papers [2, 3, 4, 5, 19] and the references therein.
**Notation 1.1**.: _Throughout the paper, if not explicitly stated, \(C\) will denote a positive real number depending only on \(\Omega\) and \(N\), whose value may change from line to line. We denote by \(\langle.,.\rangle\) the Euclidean inner product on \(\mathbb{R}^{n}\) and denote by \(|A|:=\sup_{|\xi|=1}\langle A\xi,\xi\rangle\) the norm of a real, symmetric \(N\times N\) matrix \(A\). The Lebesgue measure of \(S\subset\mathbb{R}^{N}\) is denoted by \(|S|\). The Holder conjugate of \(r\geq 1\) is denoted by \(r^{\prime}\)._
This paper is organized into seven sections. Section 2 discusses functional, analytical settings related to our problem and a few related results. We state our main results in section 3. Section 4 and 5 are devoted to proving a few auxiliary results. We prove our main results in section 6. Finally, in section 7, we consider the variable singular exponent case.
## 2 Preliminaries and Few Useful Results
We define a few crucial notions, and the metric introduced in Franchi-Lanconelli [12].
**Definition 2.1**.: _An open subset \(\Omega(\subset\mathbb{R}^{N})\) is said to be \(\Lambda\)-connected if for every \(X,Y\in\Omega\), there exists a continuous curve lying in \(\Omega\) which is piecewise an integral curve of the vector fields \(\pm\partial_{x},\pm|x|^{\lambda}\partial_{y_{1}},...,\pm|x|^{\lambda}\partial_ {y_{m}}\) connecting \(X\) and \(Y\)._
Note that every \(\Lambda\)-connected open set in \(\mathbb{R}^{N}\) is connected. We denote by \(P(\Lambda)\) the set of all continuous curves which are piecewise integral curves of the vector fields \(\pm\partial_{x},\pm|x|^{\lambda}\partial_{y_{1}},...,\pm|x|^{\lambda}\partial_ {y_{m}}\). Let \(\gamma:[0,T]\rightarrow\Omega\) is an element in \(P(\Lambda)\) and define \(l(\gamma)=T\).
**Definition 2.2**.: _Let \(X,Y\in\Omega\), we define a new metric \(d\) on \(\Omega\) by \(d(X,Y)=\inf\{l(\gamma):\gamma\in P(\Lambda)\) connecting \(X\) and \(Y\}\)._
The \(d-\)ball around \(X\in\Omega\) with radius \(r>0\) is denoted by \(S_{d}(X,r)\) and is given by \(S_{d}(X,r)=\{Y\in\Omega:d(X,Y)<r\}\). ([11, Proposition 2.9]) ensures that the usual metric is equivalent to the \(d\) in \(\Omega\).
Let \(N=k+m\) and \(\Omega\subseteq\mathbb{R}^{N}\) be a bounded domain. Let \(A=\left(\begin{array}{cc}I_{k}&O\\ O&|x|^{2\lambda}I_{m}\end{array}\right)\) and define the set
\[V_{A}(\Omega)=\{u\in C^{1}(\Omega)|\int_{\Omega}|u|^{p}\,dX+\int_{\Omega} \langle A\nabla u,\nabla u\rangle^{\frac{p}{2}}\,dX<\infty\}\]
Consider the normed linear spaces \((V_{A}(\Omega),\|.\|)\) and \((C^{1}_{0}(\Omega),\|.\|_{0})\) where
\[\|u\|=(\int_{\Omega}|u|^{p}\,dX+\int_{\Omega}\langle A\nabla u,\nabla u \rangle^{\frac{p}{2}}\,dX)^{\frac{1}{p}}\]
and
\[\|u\|_{0}=(\int_{\Omega}\langle A\nabla u,\nabla u\rangle^{\frac{p}{2}}\,dX)^ {\frac{1}{p}}\]
Now \(W^{1,\lambda,p}(\Omega)\) and \(W^{1,\lambda,p}_{0}(\Omega)\) are defined as the completions of \((V_{A}(\Omega),\|.\|)\) and \((C^{1}_{0}(\Omega),\|.\|_{0})\), respectively. Each element \([\{u_{n}\}]\) of the Banach space \(W^{1,\lambda,p}(\Omega)\) is an equivalence class of Cauchy sequences in \((V_{A}(\Omega),\|.\|)\), and \(\|[\{u_{n}\}]\|=\lim_{n\rightarrow\infty}\|u_{n}\|\). A function \(u\) is said to be in \(W^{1,\lambda,p}_{loc}(\Omega)\) if and only if \(u\in W^{1,\lambda,p}(\Omega^{\prime})\) for every \(\Omega^{\prime}\Subset\Omega\). For more information, one can look into Monticelli-Payne [18]. The following theorem proves that \(\|.\|_{0}\) and \(\|.\|\) are equivalent norms on \(W^{1,\lambda,p}_{0}(\Omega)\).
**Theorem 2.1**.: _(Poincare Inequality)(Monticelli-Payne [18, Theorem 2.1]) Let \(\Omega\subset\mathbb{R}^{N}\) be a bounded domain, and \(A\) is given as above. Then for any \(1\leq p<\infty\) there exists a constant \(C_{p}=C(N,p,\|A\|_{\infty},d(\Omega))>0\) such that_
\[\|u\|_{L^{p}(\Omega)}^{p}\leq C_{p}\int_{\Omega}\left\langle A\nabla u,\nabla u \right\rangle^{\frac{p}{2}}dX\;\;\text{for all}\;u\in C_{0}^{1}(\Omega)\]
_where \(d(\Omega)\) denotes the diameter of \(\Omega\)._
Now the suitable representation of an element of \(W^{1,\lambda,p}(\Omega)\) and \(W_{0}^{1,\lambda,p}(\Omega)\) is given by the following theorem, whose proof follows exactly that of Monticelli-Payne where it is done for \(p=2\).
**Theorem 2.2**.: _(Monticelli-Payne [18, Theorem 2.1]) Let \(\Omega\subset\mathbb{R}^{N}\) be a bounded open set, and \(A\) is given as above. Then for every \([\{u_{n}\}]\in W^{1,\lambda,p}(\Omega)\) there exists unique \(u\in L^{p}(\Omega)\) and \(U\in(L^{p}(\Omega))^{N}\) such that the following properties hold_
1. \(u_{n}\to u\) _in_ \(L^{p}(\Omega)\) _and_ \(\sqrt{A}\nabla u_{n}\to U\) _in_ \((L^{p}(\Omega))^{N}\)_._
2. \(\sqrt{A}^{-1}U\) _is the weak gradient of_ \(u\) _in each of the component of_ \(\Omega\setminus\Sigma\)__
3. _If_ \(|[\sqrt{A}]^{-1}|\in L^{p^{\prime}}(\Omega)\) _then_ \([\sqrt{A}]^{-1}U\) _is the weak gradient of_ \(u\) _in_ \(\Omega\)_._
4. _One has_ \[\|[u_{n}]\|^{p}=\|u\|_{L^{p}(\Omega)}^{p}+\|U\|_{(L^{p}(\Omega))^{N}}^{p}\]
_where \(\Sigma=\{X\in\Omega:det[A(X)]=0\}\), \(p^{\prime}=\frac{p}{p-1}\)._
Proof.: Let \([\{u_{n}\}]\in W^{1,\lambda,p}\). So \([\{u_{n}\}]\) is a Cauchy sequence in \((V_{A},\|.\|)\). Clearly \(\{u_{n}\}\) and \(\{\sqrt{A}\nabla u_{n}\}\) are Cauchy in \(L^{p}(\Omega)\) and \(L^{p}(\Omega)^{N}\). Hence there exists \(u\in L^{p}(\Omega)\) and \(U\in L^{p}(\Omega)^{N}\) such that \(u_{n}\to u\) in \(L^{p}(\Omega)\) and \(\{\sqrt{A}\nabla u_{n}\}\to U\) in \(L^{p}(\Omega)^{N}\) as \(n\to\infty\). If \([\{u_{n}\}]=[\{v_{n}\}]\) and \(\{\sqrt{A}\nabla u_{n}\}\to U\), \(\{\sqrt{A}\nabla v_{n}\}\to V\) in \(L^{p}(\Omega)^{N}\) as \(n\to\infty\). Then
\[\|U-V\|_{L^{p}(\Omega)^{N}} \leq\|\sqrt{A}\nabla u_{n}-U\|_{L^{p}(\Omega)^{N}}+\|\sqrt{A} \nabla u_{n}-\sqrt{A}\nabla v_{n}\|_{L^{p}(\Omega)^{N}}+\|\sqrt{A}\nabla v_{n }-V\|_{L^{p}(\Omega)^{N}}\] \[\to 0\text{ as }n\to\infty\]
which implies \(U=V\) a.e in \(\Omega\). So \(U\) does not depend on the representative of the class \([\{u_{n}\}]\). Let \(\phi\in C_{0}^{\infty}(\Omega)\). Since \(u_{n}\to u\) in \(L^{p}(\Omega)\) so \(u_{n}\) converges to \(u\) in the distributional sense as well. As \(u_{n}\in C^{1}(\Omega)\) so
\[\int_{\Omega}u_{n}\nabla\phi dx=-\int_{\Omega}\phi\nabla u_{n}dx\]
Taking limit \(n\to\infty\) we have
\[\int_{\Omega}u\nabla\phi dx=-\lim_{n\to\infty}\int_{\Omega}\phi\nabla u_{n} dx=-\lim_{n\to\infty}\int_{\Omega}\phi\sqrt{A}^{-1}\sqrt{A}\nabla u_{n}dx\]
Hence if \(|\phi\sqrt{A}^{-1}|\in L^{p^{\prime}}(\Omega)\) then
\[\int_{\Omega}u\nabla\phi dx=-\int_{\Omega}\phi\sqrt{A}^{-1}Udx \tag{4}\]
If support of \(\phi\) is contained in a component of \(\Omega\setminus\Sigma\) then \(|\phi\sqrt{A^{-1}}|\in L^{p^{\prime}}(\Omega)\). By using (4) we can conclude that \(\sqrt{A^{-1}}U\) is the weak gradient of \(u\) in that component of \(\Omega\setminus\Sigma\). Hence (ii) is proved. Also, if \(|\sqrt{A^{-1}}|\in L^{p^{\prime}}(\Omega)\) then (4) is true for every \(\phi\in C_{0}^{\infty}(\Omega)\). So \(\sqrt{A^{-1}}U\) is the weak gradient of \(u\) in \(\Omega\). Which proves (iii).
For \([\{u_{n}\}]\in W^{1,\lambda,p}(\Omega)\),
\[\|[\{u_{n}\}]\|^{p}=\lim_{n\to\infty}(\|u_{n}\|_{L^{p}(\Omega)}^{p}+\|\sqrt{A }\nabla u_{n}\|_{L^{p}(\Omega)^{N}}^{p})=(\|u\|_{L^{p}(\Omega)}^{p}+\|U\|_{L^{ p}(\Omega)^{N}}^{p})\]
Hence (iv) is proved.
Using the above theorem, we have the following embedding theorem.
**Corollary 2.3**.: _The space \(W^{1,\lambda,p}(\Omega)\) is continuously embedded into \(L^{p}(\Omega)\)._
Proof.: Define the map \(T:W^{1,\lambda,p}(\Omega)\to L^{p}(\Omega)\) by \(T([\{u_{n}\}])=u\). \(T\) is a bounded linear map.
Claim: \(T\) is injective. Let \(u=0\). If we can prove \(U=0\), then we are done. Since \(\Sigma\) has measure zero, it suffices to prove that \(U=0\) a.e. in each component of \(\Omega\setminus\Sigma\). Let \(\Omega^{\prime}\) be a component of \(\Omega\setminus\Sigma\). By the above theorem, for every \(\phi\in C_{0}^{\infty}(\Omega^{\prime})\)
\[\int_{\Omega^{\prime}}\phi\sqrt{A^{-1}}Udx=-\int_{\Omega^{\prime}}u\nabla\phi dx=0\]
which ensures us \(\sqrt{A^{-1}}U=0\) a.e in \(\Omega^{\prime}\). So \(U=0\) a.e in \(\Omega^{\prime}\).
Henceforth we use the notation \(u\) for the element \([\{u_{n}\}]\in W^{1,\lambda,p}(\Omega)\) or\([\{u_{n}\}]\in W^{1,\lambda,p}_{0}(\Omega)\) which is determined in Theorem (2.2). Using the properties of \(U\in(L^{p}(\Omega))^{N}\) in the theorem we introduce the following definition:
**Definition 2.3**.: _For \(u\in W^{1,\lambda,p}(\Omega)\) we denote the weak quasi gradient of \(u\) by \(\nabla^{*}u\) and defined by_
\[\nabla^{*}u:=(\sqrt{A})^{-1}U\]
_which is a vector-valued function defined almost everywhere in \(\Omega\)._
Also for \(u\in W^{1,\lambda,p}(\Omega)\),
\[\|u\|^{p} =\|u\|_{L^{p}(\Omega)}^{p}+\|\sqrt{A}\nabla^{*}u\|_{L^{p}(\Omega)}^ {p}\] \[=\int_{\Omega}|u|^{p}dx+\int_{\Omega}\langle A\nabla^{*}u,\nabla^ {*}u\rangle^{\frac{p}{2}}.\]
We define \(H^{1,\lambda}(\Omega):=W^{1,\lambda,2}(\Omega)\) and \(H^{1,\lambda}_{0}(\Omega):=W^{1,\lambda,2}_{0}(\Omega)\). \((H^{1,\lambda}(\Omega),\|.\|)\) and \((H^{1,\lambda}_{0}(\Omega),\|.\|_{0})\) are Hilbert spaces.
**Theorem 2.4**.: _(Embedding Theorem)([13, Theorem 2.6] and [16, Proposition 3.2]) Let \(\Omega\subset\mathbb{R}^{k+m}\) be an open set. The embedding_
\[H^{1,\lambda}_{0}(\Omega)\hookrightarrow L^{q}(\Omega)\]
_is continuous for every \(q\in[1,2^{*}_{\lambda}]\) and compact for \(q\in[1,2^{*}_{\lambda})\), where \(2^{*}_{\lambda}=\frac{2Q}{Q-2},\ Q=k+(\lambda+1)m\)._
**Theorem 2.5**.: _(Stampacchia-Kinderlehrer [15, lemma B.1]) Let \(\phi:[k_{0},\infty)\to\mathbb{R}\) be a nonnegative and nonincreasing such that for \(k_{0}\leq k\leq h\),_
\[\phi(h)\leq[C/(h-k)^{\alpha}]|\phi(k)|^{\beta}\]
_where \(C,\alpha,\beta\) are positive constant with \(\beta>1\). Then_
\[\phi(k_{0}+d)=0\]
_where \(d^{\alpha}=C2^{\frac{\alpha\beta}{\beta-1}}|\phi(k_{0})|^{(\beta-1)}\)_
Now we will prove the Strong Maximum Principle for super-solutions of \(-\Delta_{\lambda}u=0\). In this proof, we denote \(\rho\) and \(S_{\rho}\), which are defined in [11, Definition 2.6]. The constants \(a,c_{1}\) are introduced in [11, Theorem 4.3]. Also, \(c\) and \(\epsilon_{0}\) are defined in [11, Proposition 2.9].
**Theorem 2.6**.: _(Strong Maximum Principle) Let \(\Omega\subset\mathbb{R}^{1+m}\) be a \(\Lambda\)-connected, bounded open set and \(\lambda\geq 1\). Let \(u\) be a nonnegative (not identically zero) function in \(H_{0}^{1,\lambda}(\Omega)\) such that \(u\) is a super solution of \(-\Delta_{\lambda}u=0\), i.e., for every nonnegative \(v\in H_{0}^{1,\lambda}(\Omega)\),_
\[\int_{\Omega}\langle A\nabla^{*}u,\nabla^{*}v\rangle dX\geq 0.\]
_If there exists a ball \(B_{r}(x_{0})\Subset\Omega\) with \(\inf_{B_{r}(x_{0})}u=0\), then \(u\) is identically zero in \(\Omega\)._
Proof.: Let \(n_{0}\) be a natural number such that \(n_{0}^{\epsilon_{0}}>2c_{1}\). We can choose \(r>0\) such that \(B(X_{0},n_{0}r)\Subset\Omega\), \(\inf_{B_{r}(X_{0})}u=0\) and \(S_{\rho}(X,ac(n_{0}r)^{\epsilon_{0}})\subset\Omega\). By using ([11, Proposition 2.9]) and ([11, Theorem 2.7]) we have
\[B(X_{0},r)\subset B(X_{0},n_{0}r)\subset S_{d}(X_{0},c(n_{0}r)^{\epsilon_{0}}) \subset S_{\rho}(X_{0},ac(n_{0}r)^{\epsilon_{0}})\subset\Omega\]
Put \(R=\frac{ac(n_{0}r)^{\epsilon_{0}}}{\epsilon_{1}}\) and by [11, Theorem 4.3] with \(p=1\), we have
\[\inf_{S_{\rho}(X_{0},\frac{R}{2})}u\geq M|S_{\rho}(X_{0},R)|^{-1}\int_{S_{\rho}(X_{0},R)}|u|\;dX. \tag{5}\]
By using ([11, Proposition 2.9]) and ([11, Theorem 2.7]) we can easily show that \(B(X_{0},r)\subset S_{\rho}(X_{0},\frac{R}{2})\). Hence, \(\inf_{S_{\rho}(X_{0},\frac{R}{2})}u=0\). By (5) we have \(u=0\) a.e. in \(S_{\rho}(X_{0},R)\) and hence, in \(B(X_{0},r)\). Let \(Y\in\Omega\) and \(r_{0}=r\). Since \(\Omega\) is a bounded domain, we can find a finite collection of balls \(\{B(X_{i},r_{i})\}_{i=0}^{i=k}\) such that \(B(X_{i},n_{0}r_{i})\Subset\Omega\), \(S_{\rho}(X_{i},ac(n_{0}r_{i})^{\epsilon_{0}})\subset\Omega\), \(B(X_{i-1},r_{i-1})\cap B(X_{i},r_{i})\neq\emptyset\) for \(i=1,2,...,k\) and \(Y\in B(X_{k},r_{k})\). We can use the previous process to show that \(u=0\) a.e. in \(B(X_{1},r_{1})\). Iterating, we have \(u=0\) a.e. in \(B(X_{k},r_{k})\). Hence, \(u=0\) a.e. in \(\Omega\).
Now we are ready to define the notion of solution of (1).
**Definition 2.4**.: _A function \(u\in H_{loc}^{1,\lambda}(\Omega)\) is said to be a weak solution of (1) if for every \(\Omega^{\prime}\Subset\Omega\), there exists a positive constant \(C(\Omega^{\prime})\) such that_
\[u\geq C(\Omega^{\prime})>0\text{ a.e in }\Omega^{\prime},\]
\[\int_{\Omega}\langle A\nabla^{*}u,\nabla v\rangle\,dX=\int_{\Omega}\frac{fv}{u^{\nu}}\,dX\;\text{ for all }v\in C_{0}^{1}(\Omega)\]
_and_
* _if_ \(\nu\leq 1\) _then_ \(u\in H_{0}^{1,\lambda}(\Omega)\)_._
* _if_ \(\nu>1\) _then_ \(u^{\frac{\nu+1}{2}}\in H_{0}^{1,\lambda}(\Omega)\)_._
## 3 Existence and regularity results
Henceforth, we will assume \(N=1+m\), and \(\Omega\subset\mathbb{R}^{N}\) is a \(\Lambda-\)connected, bounded open set. We will also assume \(f\) is a nonnegative (not identically zero) function and \(\lambda\geq 1\). Our main results are the following:
### The case \(\nu=1\)
**Theorem 3.1**.: _Let \(\nu=1\) and \(f\in L^{1}(\Omega)\). Then the Dirichlet boundary value problem (1) has a unique solution in the sense of definition (2.4)._
**Theorem 3.2**.: _Let \(\nu=1\) and \(f\in L^{r}(\Omega),r\geq 1\). Then the solution given by Theorem 3.1 satisfies the following_
* _If_ \(r>\frac{Q}{2}\) _then_ \(u\in L^{\infty}(\Omega)\)_._
* _If_ \(1\leq r<\frac{Q}{2}\) _then_ \(u\in L^{s}(\Omega)\)_._
_where \(Q=(m+1)+\lambda m\) and \(s=\frac{2Qr}{Q-2r}\)._
### The case \(\nu>1\)
**Theorem 3.3**.: _Let \(\nu>1\) and \(f\in L^{1}(\Omega)\). Then there exists \(u\in H^{1,\lambda}_{loc}(\Omega)\) which satisfies equation (1) in sense of definition (2.4)._
**Theorem 3.4**.: _Let \(\nu>1\) and \(f\in L^{r}(\Omega),\ r\geq 1\). Then the solution \(u\) of (1) given by the above theorem is such that_
* _If_ \(r>\frac{Q}{2}\) _then_ \(u\in L^{\infty}(\Omega)\)_._
* _If_ \(1\leq r<\frac{Q}{2}\) _then_ \(u\in L^{s}(\Omega)\)_._
_where \(s=\frac{Qr(\nu+1)}{(Q-2r)}\) and \(Q=(m+1)+\lambda m\)._
### The case \(\nu<1\)
**Theorem 3.5**.: _Let \(\nu<1\) and \(f\in L^{r}(\Omega),\ r=(\frac{2^{*}_{\lambda}}{1-\nu})^{\prime}\). Then (1) has a unique solution in \(H^{1,\lambda}_{0}(\Omega)\)._
**Theorem 3.6**.: _Let \(\nu<1\) and \(f\in L^{r}(\Omega),\ r\geq(\frac{2^{*}_{\lambda}}{1-\nu})^{\prime}\). Then the solution \(u\) of (1) given by the above theorem is such that_
* _If_ \(r>\frac{Q}{2}\) _then_ \(u\in L^{\infty}(\Omega)\)_._
* _If_ \((\frac{2^{*}_{\lambda}}{1-\nu})^{\prime}\leq r<\frac{Q}{2}\) _then_ \(u\in L^{s}(\Omega)\)_._
_where \(s=\frac{Qr(\nu+1)}{(Q-2r)}\), \(Q=(m+1)+\lambda m\) and \(r^{\prime}\) denotes the Holder conjugate of \(r\)._
**Theorem 3.7**.: _Let \(\nu<1\) and \(f\in L^{r}(\Omega)\) for some \(1\leq r<\frac{2Q}{(Q+2)+\nu(Q-2)}\). Then there exists \(u\in W^{1,\lambda,q}_{0}(\Omega)\) which is a solution of (1) in the sense_
\[\int_{\Omega}\langle A\nabla^{*}u,\nabla v\rangle dX=\int_{\Omega}\frac{fv}{u^{\nu}}\ dX\mbox{ for all }v\in C^{1}_{0}(\Omega)\]
_where \(q=\frac{Qr(\nu+1)}{Q-r(1-\nu)}\)._
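For concreteness, the exponents appearing in the above theorems can be evaluated for sample parameter values; the short sketch below (not part of the paper) computes \(Q\), \(2^{*}_{\lambda}\) and the regularity exponents \(s\) and \(q\) from their defining formulas.

```python
# Quick numerical sketch (not from the paper): evaluate the exponents of Section 3
# for sample values of m, lambda, r and nu satisfying the stated restrictions.
def homogeneous_dimension(m, lam):
    """Q = (m + 1) + lam * m for the Grushin operator on R^{1+m}."""
    return (m + 1) + lam * m

def sobolev_exponent(Q):
    """Critical exponent 2*_lambda = 2Q/(Q-2) from Theorem 2.4."""
    return 2.0 * Q / (Q - 2.0)

m, lam, r = 2, 1.0, 1.1                     # here Q = 5 and r < Q/2
Q = homogeneous_dimension(m, lam)
print("Q =", Q, ", 2*_lambda =", sobolev_exponent(Q))
print("Theorem 3.2 (nu = 1):          s =", 2 * Q * r / (Q - 2 * r))
nu = 2.0                                    # Theorem 3.4 requires nu > 1
print("Theorem 3.4, nu = 2:           s =", Q * r * (nu + 1) / (Q - 2 * r))
nu = 0.5                                    # Theorem 3.7 requires nu < 1 and small r
print("Theorem 3.7, nu = 0.5:         q =", Q * r * (nu + 1) / (Q - r * (1 - nu)))
```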
## 4 Approximation of the Equation (1)
Let \(f\) be a nonnegative (not identically zero) measurable function and \(n\in N\). Let us consider the equation
\[-\Delta_{\lambda}u_{n}=\frac{f_{n}}{(u_{n}+\frac{1}{n})^{\nu}}\text { in }\Omega \tag{6}\] \[u=0\text{ on }\partial\Omega\]
where \(f_{n}:=\min\{f,n\}\).
**Lemma 4.1**.: _Equation (6) has a unique solution \(u_{n}\in H^{1,\lambda}_{0}(\Omega)\cap L^{\infty}(\Omega)\)._
Proof.: Let \(w\in L^{2}(\Omega)\) be a fixed element. Now consider the equation
\[-\Delta_{\lambda}u =g_{n}\text{ in }\Omega \tag{7}\] \[u =0\text{ on }\partial\Omega\]
where \(g_{n}=\frac{f_{n}}{(|w|+\frac{1}{n})^{\nu}}\). Since \(|g_{n}(x)|\leq n^{\nu+1}\) one has \(g_{n}\in L^{2}(\Omega)\). By [18, Theorem 4.4], we can say equation (7) has a unique solution \(u_{w}\in H^{1,\lambda}_{0}(\Omega)\) and the map \(T:L^{2}(\Omega)\to H^{1,\lambda}_{0}(\Omega)\) such that \(T(w)=u_{w}\) is continuous. By Theorem 2.4, we have the compact embedding
\[H^{1,\lambda}_{0}(\Omega)\hookrightarrow L^{2}(\Omega).\]
Hence, the map \(T:L^{2}(\Omega)\to L^{2}(\Omega)\) is continuous as well as compact.
Let \(S=\{w\in L^{2}(\Omega):w=\lambda Tw\) for some \(0\leq\lambda\leq 1\}\).
**Claim:** The set \(S\) is bounded.
Let \(w\in S\). By the Poincare inequality (see [18, Theorem 2.1]), there exists a constant \(C>0\) such that,
\[\|u_{w}\|_{L^{2}(\Omega)}^{2}\leq C\int_{\Omega}\langle A\nabla^{ *}u_{w},\nabla^{*}u_{w}\rangle\,dX=C\int_{\Omega}g_{n}(x)u_{w}\,dX\leq Cn^{ \nu+1}\int_{\Omega}u_{w}\,dX\leq Cn^{\nu+1}|\Omega|^{\frac{1}{2}}\|u_{w}\|_{L ^{2}(\Omega)}\]
Hence, we have
\[\|u_{w}\|_{L^{2}(\Omega)}\leq Cn^{\nu+1}|\Omega|^{\frac{1}{2}}\]
where \(C>0\) is independent of \(w\). This proves that \(S\) is bounded. Hence, by Schaefer's fixed point theorem, there exists \(u_{n}\in H^{1,\lambda}_{0}(\Omega)\) such that
\[-\Delta_{\lambda}u_{n}=\frac{f_{n}}{(|u_{n}|+\frac{1}{n})^{\nu}} \text{ in }\Omega \tag{8}\] \[u=0\text{ on }\partial\Omega\]
By Weak Maximum Principle (see [18, Theorem 4.4]), we have \(u_{n}\geq 0\) in \(\Omega\). So \(u_{n}\) is a solution of (6). Hence,
\[\int_{\Omega}\langle A\nabla^{*}u_{n},\nabla v\rangle dX=\int_{ \Omega}\frac{f_{n}v}{(u_{n}+\frac{1}{n})^{\nu}}dX\text{ for every }v\in C^{1}_{0}(\Omega) \tag{9}\]
Now, we want to prove \(u_{n}\in L^{\infty}(\Omega)\).
Let \(k>1\) and define \(S(k)=\{x\in\Omega:u_{n}(x)\geq k\}\). We can treat the function
\[v(x)=\begin{cases}u_{n}(x)-k&x\in S(k)\\ 0&\text{otherwise}\end{cases}\]
as a function in \(C^{1}_{0}(\Omega)\). By putting \(v\) in (9), we obtain
\[\int_{S(k)}\langle A\nabla^{*}v,\nabla^{*}v\rangle\,dX=\int_{S(k)}\frac{f_{n}v }{(v+k+\frac{1}{n})^{\nu}}\,dX\leq n^{\nu+1}\int_{S(k)}v\,dX\leq n^{\nu+1}\|v \|_{L^{2^{*}_{\lambda}}(\Omega)}|S(k)|^{1-\frac{1}{2^{*}_{\lambda}}}\]
Here, \(2^{*}_{\lambda}=\frac{2Q}{Q-2}\) and \(Q=(m+1)+\lambda m\). Now, by Theorem 2.4 there exists \(C>0\) such that
\[\|v\|_{L^{2^{*}_{\lambda}}(\Omega)}^{2}\leq C\int_{\Omega}\langle A\nabla^{*}v,\nabla^{*}v\rangle\,dX=C\int_{S(k)}\langle A\nabla^{*}v,\nabla^{*}v\rangle\, dX\leq Cn^{\nu+1}\|v\|_{L^{2^{*}_{\lambda}}(\Omega)}|S(k)|^{1-\frac{1}{2^{*}_{ \lambda}}}.\]
We have
\[\|v\|_{L^{2^{*}_{\lambda}}(\Omega)}\leq Cn^{\nu+1}|S(k)|^{1-\frac{1}{2^{*}_{ \lambda}}} \tag{10}\]
Assume \(1<k<h\) and using Inequality (10) we get
\[|S(h)|^{\frac{1}{2^{*}_{\lambda}}}(h-k)=(\int_{S(h)}(h-k)^{2^{*}_{ \lambda}}\,dX)^{\frac{1}{2^{*}_{\lambda}}}\leq(\int_{S(k)}(v(x))^{2^{*}_{ \lambda}}\,dX)^{\frac{1}{2^{*}_{\lambda}}}\leq\|v\|_{L^{2^{*}_{\lambda}}( \Omega)}\leq Cn^{\nu+1}|S(k)|^{1-\frac{1}{2^{*}_{\lambda}}}\]
The above two inequalities imply
\[|S(h)|\leq\left(\frac{Cn^{\nu+1}}{h-k}\right)^{2^{*}_{\lambda}}|S(k)|^{2^{*}_{\lambda}-1}\]
Let \(d^{2^{*}_{\lambda}}=(Cn^{\nu+1})^{2^{*}_{\lambda}}2^{\frac{2^{*}_{\lambda}(2^{*}_{\lambda}-1)}{2^{*}_{\lambda}-2}}|S(1)|^{2^{*}_{\lambda}-2}\). Then, applying Theorem 2.5 with \(\alpha=2^{*}_{\lambda}\), \(\beta=2^{*}_{\lambda}-1\) and \(k_{0}=1\), we get \(|S(1+d)|=0\). Hence, \(u_{n}(x)\leq 1+d\) a.e. in \(\Omega\). We thus obtain a positive constant \(C(n)\) such that \(u_{n}\leq C(n)\) a.e. in \(\Omega\). Consequently, \(u_{n}\in L^{\infty}(\Omega)\).
Let \(u_{n}\) and \(v_{n}\) be two solutions of (6). The function \(w=(u_{n}-v_{n})^{+}\in H^{1,\lambda}_{0}(\Omega)\) can be considered as a test function. It is clear that
\[[(v_{n}+\frac{1}{n})^{\nu}-(u_{n}+\frac{1}{n})^{\nu}\,]w\leq 0 \tag{11}\]
Since \(u_{n}\) and \(v_{n}\) are two solutions of (6) so by putting \(w\) in (9) we get
\[\int_{\Omega}\langle A\nabla^{*}u_{n},\nabla^{*}w\rangle dX =\int_{\Omega}\frac{f_{n}w}{(u_{n}+\frac{1}{n})^{\nu}}dX\] \[\text{and}\int_{\Omega}\langle A\nabla^{*}v_{n},\nabla^{*}w \rangle dX =\int_{\Omega}\frac{f_{n}w}{(v_{n}+\frac{1}{n})^{\nu}}dX\]
Therefore,
\[\int_{\Omega}\langle A\nabla^{*}(u_{n}-v_{n}),\nabla^{*}w\rangle\,dX =\int_{\Omega}\frac{f_{n}[(v_{n}+\frac{1}{n})^{\nu}-(u_{n}+\frac{1}{n})^{\nu} \,]}{(u_{n}+\frac{1}{n})^{\nu}(v_{n}+\frac{1}{n})^{\nu}}w\,dX\]
Using (11) we have
\[\int_{\Omega}\langle A\nabla^{*}w,\nabla^{*}w\rangle\,dX\leq 0\]
Hence, \(w=0\) and so \((u_{n}-v_{n})\leq 0\). By a similar argument, we can prove that \((v_{n}-u_{n})\leq 0\). Consequently, \(u_{n}=v_{n}\) a.e in \(\Omega\).
**Lemma 4.2**.: _Let for each \(n\in\mathbb{N}\), \(u_{n}\) be the solution of (6). Then the sequence \(\{u_{n}\}\) is an increasing sequence and for each \(\Omega^{\prime}\Subset\Omega\), there exists a constant \(C(\Omega^{\prime})>0\) such that_
\[u_{n}(x)\geq C(\Omega^{\prime})>0\ \ a.e\ x\in\Omega^{\prime}\ \ \text{and for all}\ n\in\mathbb{N}\]
Proof.: Let \(n\in\mathbb{N}\) be fixed. Define \(w=(u_{n}-u_{n+1})^{+}\). It is clear that
\[[(u_{n+1}+\frac{1}{n+1})^{\nu}-(u_{n}+\frac{1}{n})^{\nu}]w\leq 0.\]
\(w\) can be considered as a test function. Arguing as in the proof of the previous theorem, we obtain \(w=0\). Hence, \(u_{n}-u_{n+1}\leq 0\), i.e., \(u_{n}\leq u_{n+1}\) a.e. in \(\Omega\) for all \(n\in\mathbb{N}\). Since \(f\) is not identically zero, \(f_{i}\) is not identically zero for some \(i\in\mathbb{N}\). Without loss of generality, we may assume that \(f_{1}\) is not identically zero.
Consider the equation
\[-\Delta_{\lambda}u_{1} =\frac{f_{1}}{(u_{1}+1)^{\nu}}\text{ in }\Omega \tag{12}\] \[u_{1} =0\text{ on }\partial\Omega\]
Since \(f_{1}\) is not identically zero so \(u_{1}\) is not identically zero. So by Theorem 2.6, we have \(u_{1}>0\) in \(\Omega\). Hence, for every compact set \(\Omega^{\prime}\Subset\Omega\), there exists a constant \(C(\Omega^{\prime})>0\) such that \(u_{1}\geq C(\Omega^{\prime})\) a.e. in \(\Omega^{\prime}\). Monotonicity of the sequence implies that for every \(n\in N\),
\[u_{n}\geq C(\Omega^{\prime}).\]
## 5 A few auxiliary results
We start this section with the proof of a priori estimates on \(u_{n}\).
**Lemma 5.1**.: _Let \(u_{n}\) be the solution of equation (6) with \(\nu=1\) and assume \(f\in L^{1}(\Omega)\) is a nonnegative function (not identically zero). Then the sequence \(\{u_{n}\}\) is bounded in \(H_{0}^{1,\lambda}(\Omega)\)._
Proof.: Since \(u_{n}\in H_{0}^{1,\lambda}(\Omega)\) is a solution of (6) so from (9) we obtain
\[\int_{\Omega}\langle A\nabla^{*}u_{n},\nabla^{*}u_{n}\rangle\,dX=\int_{\Omega }\frac{f_{n}u_{n}}{(u_{n}+\frac{1}{n})}dX\leq\int_{\Omega}f\,dX=\|f\|_{L^{1}( \Omega)}\]
Hence, \(\{u_{n}\}\) is bounded in \(H_{0}^{1,\lambda}(\Omega)\).
**Lemma 5.2**.: _Let \(u_{n}\) be the solution of the equation (6) with \(\nu>1\) and \(f\in L^{1}(\Omega)\) is a nonnegative function (not identically zero). Then \(\{{u_{n}}^{\frac{\nu+1}{2}}\}\) is bounded in \(H^{1,\lambda}_{0}(\Omega)\) and \(\{u_{n}\}\) is bounded in \(H^{1,\lambda}_{loc}(\Omega)\) and in \(L^{s}(\Omega)\), where \(s=\frac{(\nu+1)Q}{(Q-2)}\)._
Proof.: Since \(\nu>1\) and \(u_{n}\in H^{1,\lambda}_{0}(\Omega)\), we may put \(v=u_{n}^{\nu}\) in (9) to obtain
\[\int_{\Omega}\langle A\nabla^{*}u_{n},\nabla^{*}u_{n}^{\nu}\rangle dX=\int_{ \Omega}\frac{f_{n}u_{n}^{\nu}}{(u_{n}+\frac{1}{n})^{\nu}}dX\leq\int_{\Omega}fdX.\]
Now,
\[\int_{\Omega}\langle A\nabla^{*}u_{n}^{\frac{\nu+1}{2}},\nabla^{* }u_{n}^{\frac{\nu+1}{2}}\rangle dX=\frac{(\nu+1)^{2}}{4\nu}\int_{\Omega}\nu u_{n}^{ \nu-1}\langle A\nabla^{*}u_{n},\nabla^{*}u_{n}\rangle dX=\frac{(\nu+1)^{2}}{ 4\nu}\int_{\Omega}\langle A\nabla^{*}u_{n},\nabla^{*}u_{n}^{\nu}\rangle dX\] \[\leq\frac{(\nu+1)^{2}}{4\nu}\int_{\Omega}fdX. \tag{13}\]
Hence, \(\{{u_{n}}^{\frac{\nu+1}{2}}\}\) is bounded in \(H^{1,\lambda}_{0}(\Omega)\). By Theorem 2.4, there exists a constant \(C>0\) such that
\[\|u_{n}^{\frac{\nu+1}{2}}\|_{L^{2^{*}_{\lambda}}(\Omega)}\leq C\|u_{n}^{\frac {\nu+1}{2}}\|_{H^{1,\lambda}_{0}(\Omega)}\]
By using (13), we have
\[(\int_{\Omega}u_{n}^{2^{*}_{\lambda}\frac{(\nu+1)}{2}}dX)^{\frac{2}{2^{*}_{ \lambda}}}\leq C\frac{(\nu+1)^{2}}{4\nu}\|f\|_{L^{1}(\Omega)}\]
Since \(s=2^{*}_{\lambda}\frac{(\nu+1)}{2}\) so
\[\int_{\Omega}u_{n}^{s}dX\leq(C\frac{(\nu+1)^{2}}{4\nu}\|f\|_{L^{1}(\Omega)})^ {\frac{2^{*}_{\lambda}}{2}}\]
Hence, \(\{u_{n}\}\) is bounded in \(L^{s}(\Omega)\). To prove \(\{u_{n}\}\) is bounded in \(H^{1,\lambda}_{\rm loc}(\Omega)\), let \(\Omega^{\prime}\Subset\Omega\) and \(\eta\in C^{\infty}_{0}(\Omega)\) such that \(0\leq\eta\leq 1\) and \(\eta=1\) in \(\Omega^{\prime}\). It is a test function as \(u_{n}\eta^{2}\in H^{1,\lambda}_{0}(\Omega)\). By Lemma 4.2, there exists a constant \(C>0\) such that \(u_{n}\geq C\) a.e in \(\operatorname{supp}(\eta)\). Put \(v=u_{n}\eta^{2}\) in (9) we have
\[\int_{\Omega}\langle A\nabla^{*}u_{n},\nabla^{*}(u_{n}\eta^{2}) \rangle dX=\int_{\Omega}\frac{f_{n}u_{n}\eta^{2}}{(u_{n}+\frac{1}{n})^{\nu}}dX \tag{14}\]
Also,
\[\int_{\Omega}\langle A\nabla^{*}u_{n},\nabla^{*}(u_{n}\eta^{2}) \rangle dX=\int_{\Omega}\{\eta^{2}\langle A\nabla^{*}u_{n},\nabla^{*}u_{n} \rangle+2\eta u_{n}\langle A\nabla^{*}u_{n},\nabla\eta\rangle\} \tag{15}\]
From (14), (15) and the bound \(u_{n}\geq C\) on \(\operatorname{supp}(\eta)\), we get

\[\int_{\Omega}\eta^{2}\langle A\nabla^{*}u_{n},\nabla^{*}u_{n}\rangle dX\leq\int_{\Omega}\frac{f_{n}\eta^{2}}{C^{(\nu-1)}}dX-\int_{\Omega}2\eta u_{n}\langle A\nabla^{*}u_{n},\nabla\eta\rangle dX \tag{16}\]
Choose \(\epsilon>0\) and use Young's inequality; one has
\[|\int_{\Omega}2\eta u_{n}\langle A\nabla^{*}u_{n},\nabla\eta\rangle dX| \leq\int_{\Omega}2|\langle\eta\sqrt{A}\nabla^{*}u_{n},u_{n}\sqrt{A} \nabla\eta\rangle|dX\] \[\leq\frac{1}{\epsilon}\int_{\Omega}\eta^{2}|\sqrt{A}\nabla^{*}u_{ n}|^{2}dX+\epsilon\int_{\Omega}u_{n}^{2}|\sqrt{A}\nabla\eta|^{2}dX, \tag{17}\]
Put \(\epsilon=2\) then we get
\[|\int_{\Omega}2\eta u_{n}\langle A\nabla^{*}u_{n},\nabla\eta \rangle dX| \leq\frac{1}{2}\int_{\Omega}\eta^{2}|\sqrt{A}\nabla^{*}u_{n}|^{2} dX+2\int_{\Omega}u_{n}^{2}|\sqrt{A}\nabla\eta|^{2}dX\] \[=\frac{1}{2}\int_{\Omega}\eta^{2}\langle A\nabla^{*}u_{n},\nabla ^{*}u_{n}\rangle dX+2\int_{\Omega}u_{n}^{2}\langle A\nabla\eta,\nabla\eta \rangle dX \tag{18}\]
Using (16) and (18), we have
\[\int_{\Omega}\eta^{2}\langle A\nabla^{*}u_{n},\nabla^{*}u_{n} \rangle dX \leq 2\int_{\Omega}\frac{f\eta^{2}}{C^{(\nu-1)}}dX+4\int_{\Omega}u _{n}^{2}\langle A\nabla\eta,\nabla\eta\rangle dX\] \[\leq\frac{2\|\eta\|_{\infty}^{2}\|f\|_{L^{1}(\Omega)}}{C^{\nu-1}} +4\|\langle A\nabla\eta,\nabla\eta\rangle\|_{\infty}\int_{\Omega}u_{n}^{2}dX\]
Since \(\{u_{n}\}\) is bounded in \(L^{s}(\Omega)\) and \(s>2\), the sequence \(\{u_{n}\}\) is also bounded in \(L^{2}(\Omega)\). Hence,
\[\int_{\Omega}\eta^{2}\langle A\nabla^{*}u_{n},\nabla^{*}u_{n} \rangle dX \leq\frac{2\|\eta\|_{\infty}^{2}\|f\|_{L^{1}(\Omega)}}{C^{\nu-1}} +4\|\langle A\nabla\eta,\nabla\eta\rangle\|_{\infty}\int_{\Omega}u_{n}^{2}dX\] \[\leq C(f,\eta)\]
Now,
\[\int_{\Omega^{\prime}}\langle A\nabla^{*}u_{n},\nabla^{*}u_{n} \rangle dX\leq\int_{\Omega}\eta^{2}\langle A\nabla^{*}u_{n},\nabla^{*}u_{n} \rangle dX\leq C(f,\eta)\]
Hence, \(\{u_{n}\}\) is bounded in \(H^{1,\lambda}_{\text{loc}}(\Omega)\).
**Lemma 5.3**.: _Let \(u_{n}\) be the solution of (6) with \(\nu<1\), and let \(f\in L^{r}(\Omega)\), \(r=(\frac{2^{*}_{\lambda}}{1-\nu})^{\prime}\), be a nonnegative (not identically zero) function. Then \(\{u_{n}\}\) is bounded in \(H^{1,\lambda}_{0}(\Omega)\)._
Proof.: Since \(r=(\frac{2^{*}_{\lambda}}{1-\nu})^{\prime}\), we can choose \(v=u_{n}\) in (9), and using Hölder's inequality, one has
\[\int_{\Omega}\langle A\nabla^{*}u_{n},\nabla^{*}u_{n}\rangle dX= \int_{\Omega}\frac{f_{n}u_{n}}{(u_{n}+\frac{1}{n})^{\nu}}\leq\int_{\Omega}fu_ {n}^{1-\nu}dX \leq\|f\|_{L^{r}(\Omega)}\Big{(}\int_{\Omega}u_{n}^{(1-\nu)r^{ \prime}}dX\Big{)}^{\frac{1}{r^{\prime}}}\] \[\leq\|f\|_{L^{r}(\Omega)}\Big{(}\int_{\Omega}u_{n}^{2^{*}_{ \lambda}}dX\Big{)}^{\frac{1-\nu}{2^{*}_{\lambda}}}. \tag{19}\]
By Theorem 2.4 and using the above inequality, we get
\[\int_{\Omega}u_{n}^{2^{*}_{\lambda}}dX\leq C\Big{(}\int_{\Omega}\langle A \nabla^{*}u_{n},\nabla^{*}u_{n}\rangle dX\Big{)}^{\frac{2^{*}_{\lambda}}{2}} \leq C(\|f\|_{L^{r}(\Omega)}\Big{(}\int_{\Omega}u_{n}^{2^{*}_{\lambda}}dX \Big{)}^{\frac{1-\nu}{2^{*}_{\lambda}}}\Big{)}^{\frac{2^{*}_{\lambda}}{2}}. \tag{20}\]
So we have
\[\int_{\Omega}u_{n}^{2^{\star}_{\lambda}}dX\leq C\|f\|_{L^{\prime}( \Omega)}^{\frac{2^{\star}_{\lambda}}{1+\nu}}. \tag{21}\]
Hence, \(\{u_{n}\}\) is bounded in \(L^{2^{*}_{\lambda}}(\Omega)\). Using (19) and (21), we conclude that \(\|u_{n}\|_{H^{1,\lambda}_{0}(\Omega)}\leq C\|f\|_{L^{r}(\Omega)}^{\frac{1}{1+\nu}}\), where \(C\) is independent of \(n\). Hence, \(\{u_{n}\}\) is bounded in \(H^{1,\lambda}_{0}(\Omega)\).
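For completeness, the passage from (19) and (21) to the stated bound can be spelled out (a short check using only the two displayed estimates, and assuming, as elsewhere in the paper, that \(\int_{\Omega}\langle A\nabla^{*}u_{n},\nabla^{*}u_{n}\rangle dX\) controls \(\|u_{n}\|^{2}_{H^{1,\lambda}_{0}(\Omega)}\)):

\[\int_{\Omega}\langle A\nabla^{*}u_{n},\nabla^{*}u_{n}\rangle dX\leq\|f\|_{L^{r}(\Omega)}\Big(\int_{\Omega}u_{n}^{2^{*}_{\lambda}}dX\Big)^{\frac{1-\nu}{2^{*}_{\lambda}}}\leq\|f\|_{L^{r}(\Omega)}\Big(C\|f\|_{L^{r}(\Omega)}^{\frac{2^{*}_{\lambda}}{1+\nu}}\Big)^{\frac{1-\nu}{2^{*}_{\lambda}}}=C^{\frac{1-\nu}{2^{*}_{\lambda}}}\|f\|_{L^{r}(\Omega)}^{\frac{2}{1+\nu}},\]

since \(1+\frac{1-\nu}{1+\nu}=\frac{2}{1+\nu}\), which yields \(\|u_{n}\|_{H^{1,\lambda}_{0}(\Omega)}\leq C\|f\|_{L^{r}(\Omega)}^{\frac{1}{1+\nu}}\) with a constant independent of \(n\).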
## 6 Proof of Main Results
### The case \(\nu=1\)
**Proof of Theorem 3.1:**
Proof.: Consider the above sequence \(\{u_{n}\}\) and define \(u\) as the pointwise limit of the sequence \(\{u_{n}\}\). Since \(H^{1,\lambda}_{0}(\Omega)\) is Hilbert space and \(\{u_{n}\}\) is bounded in \(H^{1,\lambda}_{0}(\Omega)\) so it admits a weakly convergent subsequence. Assume \(u_{n}\) weakly converges to \(v\) in \(H^{1,\lambda}_{0}(\Omega)\) and hence \(u_{n}\) converges to \(v\) in \(L^{2}(\Omega)\). So \(\{u_{n}\}\) has a subsequence that converges to \(v\) pointwise. Consequently, \(u=v\). So we may assume that the sequence \(\{u_{n}\}\) weakly converges to \(u\) in \(H^{1,\lambda}_{0}(\Omega)\). Choose \(v^{\prime}\in C^{1}_{0}(\Omega)\). By Lemma 4.2, there exists \(C>0\) such that \(u\geq u_{n}\geq C\) a.e in \(\operatorname{supp}(v^{\prime})\) and for all \(n\in\mathbb{N}\). So
\[|\frac{f_{n}v^{\prime}}{(u_{n}+\frac{1}{n})}|\leq\frac{\|v^{\prime}\|_{\infty }|f|}{C}\;\;\text{for all}n\in\mathbb{N}\]
By Dominated Convergence Theorem, we have
\[\lim_{n\to\infty}\int_{\Omega}\frac{f_{n}v^{\prime}}{(u_{n}+\frac {1}{n})}dX=\int_{\Omega}\lim_{n\to\infty}\frac{f_{n}v^{\prime}}{(u_{n}+\frac{ 1}{n})}dX=\int_{\Omega}\frac{fv^{\prime}}{u}dX. \tag{22}\]
As \(u_{n}\) is a solution of (6) so from (9) we get,
\[\int_{\Omega}\langle A\nabla^{*}u_{n},\nabla v^{\prime}\rangle dX =\int_{\Omega}\frac{f_{n}v^{\prime}}{(u_{n}+\frac{1}{n})}dX\]
Take \(n\to\infty\) and use (22) we obtain,
\[\int_{\Omega}\langle A\nabla^{*}u,\nabla v^{\prime}\rangle dX= \int_{\Omega}\frac{fv^{\prime}}{u}dX\]
Hence, \(u\in H^{1,\lambda}_{0}(\Omega)\) is a solution of (1).
Let \(u\) and \(v\) be two solutions of (1). The function \(w=(u-v)^{+}\in H^{1,\lambda}_{0}(\Omega)\) can be used as a test function. Since \(u\) and \(v\) are two solutions of (1), we have
\[\int_{\Omega}\langle A\nabla^{*}u,\nabla^{*}w\rangle dX =\int_{\Omega}\frac{fw}{u}dX\] and \[\int_{\Omega}\langle A\nabla^{*}v,\nabla^{*}w\rangle dX =\int_{\Omega}\frac{fw}{v}dX\]
By subtracting one from the other, we get
\[\int_{\Omega}\langle A\nabla^{*}(u-v),\nabla^{*}w\rangle\,dX=\int_{\Omega}\frac{f(v-u)}{uv}w\,dX\leq 0,\]

which ensures that
\[\int_{\Omega}\langle A\nabla^{*}w,\nabla^{*}w\rangle\,dX\leq 0.\]
Hence, \(w=0\) and so \((u-v)\leq 0\). By interchanging the role of \(u\) and \(v\), we get \((v-u)\leq 0\). Consequently, \(u=v\) a.e in \(\Omega\).
**Proof of Theorem 3.2:**
Proof.: \((i)\) Let \(k>1\) and define \(S(k)=\{x\in\Omega:u_{n}(x)\geq k\}\). We can treat the function
\[v(x)=\begin{cases}u_{n}(x)-k&x\in S(k)\\ 0&\text{otherwise}\end{cases}\]
as a function in \(C^{1}_{0}(\Omega)\). So by (5) we have
\[\int_{S(k)}\langle A\nabla^{*}v,\nabla^{*}v\rangle\,dX=\int_{S(k)}\frac{f_{n}v}{(v+k+\frac{1}{n})}dX\leq\int_{S(k)}fv\,dX\leq\|f\|_{L^{r}(\Omega)}\|v\|_{L^{2^{*}_{\lambda}}(\Omega)}|S(k)|^{1-\frac{1}{2^{*}_{\lambda}}-\frac{1}{r}} \tag{23}\]
where \(2^{*}_{\lambda}=\frac{2Q}{Q-2}\). By Theorem 2.4, there exists \(C>0\) such that
\[\|v\|_{L^{2^{*}_{\lambda}}(\Omega)}^{2^{*}_{\lambda}}\leq C\int_{\Omega}\langle A\nabla^{*}v,\nabla^{*}v\rangle dX=C\int_{S(k)}\langle A\nabla^{*}v,\nabla^{*}v\rangle\,dX\leq C\|f\|_{L^{r}(\Omega)}\|v\|_{L^{2^{*}_{\lambda}}(\Omega)}|S(k)|^{1-\frac{1}{2^{*}_{\lambda}}-\frac{1}{r}} \tag{24}\]
The last inequality follows from (23). Inequality (24) then gives

\[\|v\|_{L^{2^{*}_{\lambda}}(\Omega)}\leq C\|f\|_{L^{r}(\Omega)}|S(k)|^{1-\frac{1}{2^{*}_{\lambda}}-\frac{1}{r}}\]
Assume \(1<k<h\). Using last inequality, we obtain
\[|S(h)|^{\frac{1}{2^{*}_{\lambda}}}(h-k)=\Big(\int_{S(h)}(h-k)^{2^{*}_{\lambda}}\,dX\Big)^{\frac{1}{2^{*}_{\lambda}}}\leq\Big(\int_{S(k)}(v(x))^{2^{*}_{\lambda}}\,dX\Big)^{\frac{1}{2^{*}_{\lambda}}}\leq\|v\|_{L^{2^{*}_{\lambda}}(\Omega)}\leq C\|f\|_{L^{r}(\Omega)}|S(k)|^{1-\frac{1}{2^{*}_{\lambda}}-\frac{1}{r}}\]
So,
\[|S(h)|\leq\Big(\frac{C\|f\|_{L^{r}(\Omega)}}{(h-k)}\Big)^{2^{*}_{\lambda}}|S(k)|^{2^{*}_{\lambda}(1-\frac{1}{2^{*}_{\lambda}}-\frac{1}{r})}\]
As \(r>\frac{Q}{2}\) we have \(\sigma:=2^{*}_{\lambda}(1-\frac{1}{2^{*}_{\lambda}}-\frac{1}{r})>1\). Let

\[d^{2^{*}_{\lambda}}=\big(C\|f\|_{L^{r}(\Omega)}\big)^{2^{*}_{\lambda}}\,2^{\frac{2^{*}_{\lambda}\sigma}{\sigma-1}}\,|S(1)|^{\sigma-1}\]
By Theorem 2.5 we have \(|S(1+d)|=0\). Hence, \(u_{n}(x)\leq 1+d\) a.e. in \(\Omega\). We get a positive constant \(C\) independent of \(n\) such that \(u_{n}\leq C\|f\|_{L^{r}(\Omega)}\) a.e. in \(\Omega\) for all \(n\in\mathbb{N}\). Hence, \(\|u\|_{L^{\infty}(\Omega)}\leq C\|f\|_{L^{r}(\Omega)}\).
\((ii)\) If \(r=1\) then \(s=2^{*}_{\lambda}\). Since \(u\in H^{1,\lambda}_{0}(\Omega)\) so by Theorem 2.4, we have \(u\in L^{s}(\Omega)\).
Now let \(1<r<\frac{Q}{2}\). Choose \(\delta>1\) (to be determined later) and consider the function \(w=u_{n}^{2\delta-1}\). By a density argument, \(w\) can be treated as a test function. Putting \(w\) in (9), we have
\[\int_{\Omega}(2\delta-1)u_{n}^{(2\delta-2)}\langle A\nabla^{*}u_{n},\nabla^{*} u_{n}\rangle dX=\int_{\Omega}\frac{f_{n}w}{u_{n}+\frac{1}{n}}dX\leq\int_{\Omega}fu_{n} ^{2\delta-2}dX\]
By using Holder inequality on the RHS of the above inequality, we get
\[\int_{\Omega}\langle A\nabla^{*}u_{n}^{\delta},\nabla^{*}u_{n}^{\delta} \rangle dX=\int_{\Omega}\delta^{2}u_{n}^{(2\delta-2)}\langle A\nabla^{*}u_{n},\nabla^{*}u_{n}\rangle dX\leq\frac{\delta^{2}}{(2\delta-1)}\|f\|_{L^{r}( \Omega)}(\int_{\Omega}u_{n}^{(2\delta-2)r^{\prime}}dX)^{\frac{1}{r^{\prime}}} \tag{25}\]
where \(\frac{1}{r}+\frac{1}{r^{\prime}}=1\). By Theorem 2.4, we have
\[\int_{\Omega}u_{n}^{2^{*}_{\lambda}\delta} \leq C(\int_{\Omega}\langle A\nabla^{*}u_{n}^{\delta},\nabla^{*} u_{n}^{\delta}\rangle dX)^{\frac{2^{*}_{\lambda}}{2}}\] \[\leq C[\frac{\delta^{2}}{(2\delta-1)}\|f\|_{L^{r}(\Omega)}(\int_{ \Omega}u_{n}^{(2\delta-2)r^{\prime}}dX)^{\frac{1}{r^{\prime}}}]^{\frac{2^{*}_{ \lambda}}{2}},\,[\,\text{by (\ref{eq:25})}] \tag{26}\]
We choose \(\delta\) such that \(2^{*}_{\lambda}\delta=(2\delta-2)r^{\prime}\) so \(\delta=\frac{r(Q-2)}{(Q-2r)}\). Clearly, \(\delta>1\) and \(2^{*}_{\lambda}\delta=s\). By using (26), we have
\[(\int_{\Omega}u_{n}^{s}dX)^{(1-\frac{2^{*}_{\lambda}}{2r^{\prime}})}\leq C\]
Also, \((1-\frac{2^{*}_{\lambda}}{2r^{\prime}})>0\) as \(r<\frac{Q}{2}\). So we get
\[\int_{\Omega}u_{n}^{s}dX\leq C,\,\,C>0\,\,\text{is independent of $n$.}\]
By Dominated Convergence Theorem, we have
\[\int_{\Omega}u^{s}dX\leq C.\]
Hence we are done.
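As a quick consistency check on the choice of \(\delta\) above (using only \(2^{*}_{\lambda}=\frac{2Q}{Q-2}\) and \(r^{\prime}=\frac{r}{r-1}\)): the condition \(2^{*}_{\lambda}\delta=(2\delta-2)r^{\prime}\) gives \(\delta(2r^{\prime}-2^{*}_{\lambda})=2r^{\prime}\), and since

\[2r^{\prime}-2^{*}_{\lambda}=\frac{2r}{r-1}-\frac{2Q}{Q-2}=\frac{2(Q-2r)}{(r-1)(Q-2)},\]

we recover

\[\delta=\frac{2r^{\prime}}{2r^{\prime}-2^{*}_{\lambda}}=\frac{r(Q-2)}{Q-2r}>1,\qquad s=2^{*}_{\lambda}\delta=\frac{2Qr}{Q-2r},\]

which reduces to \(s=2^{*}_{\lambda}\) when \(r=1\), consistent with part (ii).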
### The Case \(\nu>1\)
**Proof of Theorem 3.3:**
Proof.: Define \(u\) as the pointwise limit of \(\{u_{n}\}\). By Lemma 5.2, \(\{u_{n}\}\) and \(\{u_{n}^{\frac{\nu+1}{2}}\}\) are bounded in \(H^{1,\lambda}_{loc}(\Omega)\) and \(H^{1,\lambda}_{0}(\Omega)\) respectively. So by the similar argument as the proof of Theorem 3.1 we can prove \(u\in H^{1,\lambda}_{loc}(\Omega)\) and \(u^{\frac{\nu+1}{2}}\in H^{1,\lambda}_{0}(\Omega)\).
Let \(v\in C^{1}_{0}(\Omega)\) and \(\Omega^{\prime}=\text{supp}(v)\). Without loss of generality we can assume that \(u_{n}\) weakly converges to \(u\) in \(H^{1,\lambda}(\Omega^{\prime})\). By Lemma 4.2, there exists \(C>0\) such that \(u_{n}(x)\geq C\) for a.e. \(x\in\Omega^{\prime}\) and for all \(n\in\mathbb{N}\). So, \(u\geq C>0\) a.e. in \(\Omega^{\prime}\). Also,
\[|\frac{f_{n}v}{(u_{n}+\frac{1}{n})^{\nu}}|\leq\frac{\|v\|_{\infty}|f|}{C^{\nu} },\,\,\text{for all}n\in\mathbb{N}\]
By the Dominated Convergence Theorem, we have
\[\lim_{n\to\infty}\int_{\Omega^{\prime}}\frac{f_{n}v}{(u_{n}+\frac{1}{n})^{\nu}}dX=\int_{\Omega^{\prime}}\lim_{n\to\infty}\frac{f_{n}v}{(u_{n}+\frac{1}{n})^{\nu}}dX=\int_{\Omega^{\prime}}\frac{fv}{u^{\nu}}dX. \tag{27}\]
As \(u_{n}\) is a solution of (6) so
\[\int_{\Omega^{\prime}}\langle A\nabla^{*}u_{n},\nabla v\rangle dX=\int_{\Omega ^{\prime}}\frac{f_{n}v}{(u_{n}+\frac{1}{n})^{v}}dX\]
Take \(n\to\infty\) and use (27), we get
\[\int_{\Omega}\langle A\nabla^{*}u,\nabla v\rangle dX=\int_{\Omega}\frac{fv}{u^ {v}}dX\]
Hence, \(u\in H^{1,\lambda}_{loc}(\Omega)\) is a solution of (1). \(\Box\)
**Proof of Theorem 3.4:**
Proof.: (i) The same argument as in the proof of Theorem 3.2 works.
(ii) If \(r=1\) then \(s=\frac{2^{*}_{\lambda}(\nu+1)}{2}\). Also, \(u^{\frac{\nu+1}{2}}\in H^{1,\lambda}_{0}(\Omega)\). By Theorem 2.4, we have \(u\in L^{s}(\Omega)\).
Now let \(1<r<\frac{Q}{2}\) and choose \(\delta>\frac{\nu+1}{2}\). By a density argument, \(v=u_{n}^{2\delta-1}\) can be considered as a test function. From (9), we have
\[\int_{\Omega}\langle A\nabla^{*}u_{n},\nabla^{*}u_{n}^{2\delta-1}\rangle\,dX= \int_{\Omega}\frac{f_{n}u_{n}^{2\delta-1}}{(u_{n}+\frac{1}{n})^{v}}\,dX\]
which gives us
\[\int_{\Omega}(2\delta-1)u_{n}^{2\delta-2}\langle A\nabla^{*}u_{n},\nabla^{*}u_ {n}\rangle dX\leq\int_{\Omega}fu_{n}^{2\delta-v-1}dX\leq\|f\|_{L^{r}(\Omega)}( \int_{\Omega}u_{n}^{(2\delta-v-1)r^{\prime}}dX)^{\frac{1}{r^{\prime}}} \tag{28}\]
By Theorem 2.4, there exists \(C>0\) such that
\[\int_{\Omega}u_{n}^{\delta 2^{*}_{\lambda}}dX\leq C(\int_{\Omega}\langle A \nabla^{*}u_{n}^{\delta},\nabla^{*}u_{n}^{\delta}\rangle dX)^{\frac{2^{*}_{ \lambda}}{2}}\leq C(\int_{\Omega}\delta^{2}u_{n}^{2\delta-2}\langle A\nabla^{* }u_{n},\nabla^{*}u_{n}\rangle dX)^{\frac{2^{*}_{\lambda}}{2}} \tag{29}\]
By using (28) and (29), we get
\[\int_{\Omega}u_{n}^{\delta 2^{*}_{\lambda}}dX\leq C\{\frac{\delta^{2}}{(2 \delta-1)}\|f\|_{L}^{r}(\Omega)\}^{\frac{2^{*}_{\lambda}}{2}}(\int_{\Omega}u_{ n}^{(2\delta-v-1)r^{\prime}}dX)^{\frac{2^{*}_{\lambda}}{2r^{\prime}}}\]
Choose \(\delta\) such that \(\delta 2^{*}_{\lambda}=(2\delta-\nu-1)r^{\prime}\); then \(2^{*}_{\lambda}\delta=s\). As \(r<\frac{Q}{2}\), we have \(1-\frac{2^{*}_{\lambda}}{2r^{\prime}}>0\), and hence \(\int_{\Omega}u_{n}^{s}dX\leq C\). By the Dominated Convergence Theorem we get \(u\in L^{s}(\Omega)\). \(\Box\)
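The same computation as in the case \(\nu=1\) identifies \(\delta\) and \(s\) explicitly here (a short check with \(2^{*}_{\lambda}=\frac{2Q}{Q-2}\) and \(r^{\prime}=\frac{r}{r-1}\)): from \(\delta 2^{*}_{\lambda}=(2\delta-\nu-1)r^{\prime}\) one gets \(\delta(2r^{\prime}-2^{*}_{\lambda})=(\nu+1)r^{\prime}\), hence

\[\delta=\frac{(\nu+1)r(Q-2)}{2(Q-2r)}>\frac{\nu+1}{2},\qquad s=2^{*}_{\lambda}\delta=\frac{Qr(\nu+1)}{Q-2r},\]

which reduces to the exponent \(s=\frac{(\nu+1)Q}{Q-2}\) of Lemma 5.2 when \(r=1\).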
### The Case \(\nu<1\)
**Proof of Theorem 3.5:**
Proof.: Since \(\{u_{n}\}\) is bounded in \(H^{1,\lambda}_{0}(\Omega)\) so it has a subsequence which converges to u weakly in \(H^{1,\lambda}_{0}(\Omega)\). Without loss of generality we can assume \(u_{n}\rightharpoonup u\)in \(H^{1,\lambda}_{0}(\Omega)\). Let \(v\in C^{1}_{0}(\Omega)\). By the Lemma 4.2, there exists \(C>0\) such that \(u_{n}(x)\geq C\) a.e \(x\in\operatorname{supp}(v)\) and for all \(n\in\mathbb{N}\). So
\[|\frac{f_{n}v}{(u_{n}+\frac{1}{n})^{v}}|\leq\frac{\|v\|_{\infty}|f|}{C^{v}} \text{ for all }n\in\mathbb{N}\]
By the Dominated Convergence Theorem, we have
\[\lim_{n\to\infty}\int_{\Omega}\frac{f_{n}v}{(u_{n}+\frac{1}{n})^{\nu}}dX=\int_{\Omega}\lim_{n\to\infty}\frac{f_{n}v}{(u_{n}+\frac{1}{n})^{\nu}}dX=\int_{\Omega}\frac{fv}{u^{\nu}}dX. \tag{30}\]
As \(u_{n}\) is a solution of (6) so,
\[\int_{\Omega}\langle A\nabla^{*}u_{n},\nabla v\rangle dX=\int_{\Omega}\frac{f_ {n}v}{(u_{n}+\frac{1}{n})^{v}}dX\]
Take \(n\to\infty\) and (30) we get
\[\int_{\Omega}\langle A\nabla^{*}u,\nabla v\rangle dX=\int_{\Omega}\frac{fv}{u^ {v}}dX\]
Hence, \(u\in H^{1,\lambda}_{0}(\Omega)\) is a solution of (1) with \(\nu<1\). The proof of uniqueness is similar to that of Theorem 3.1.
**Proof of Theorem 3.6:**
Proof.: (i) The proof is similar to the proof of Theorem 3.2.
(ii) If \(r=(\frac{2^{*}_{\lambda}}{1-v})^{\prime}\) then \(s=2^{*}_{\lambda}\). By the embedding theorem and (9), we have
\[(\int_{\Omega}u_{n}^{2^{*}_{\lambda}}dX)^{\frac{1}{2^{*}_{\lambda}}}\leq C(\int_{\Omega}\langle A\nabla^{*}u_{n},\nabla^{*}u_{n}\rangle dX)^{\frac{1}{2}}=C(\int_{\Omega}\frac{f_{n}u_{n}}{(u_{n}+\frac{1}{n})^{\nu}}dX)^{\frac{1}{2}}\leq C(\int_{\Omega}fu_{n}^{1-\nu}dX)^{\frac{1}{2}}\] \[\leq C\|f\|_{L^{r}(\Omega)}^{\frac{1}{2}}(\int_{\Omega}u_{n}^{(1-\nu)r^{\prime}}dX)^{\frac{1}{2r^{\prime}}}\]
Since \(r^{\prime}=\frac{2^{*}_{\lambda}}{1-\nu}\), using the above inequality we get

\[\int_{\Omega}u_{n}^{2^{*}_{\lambda}}dX\leq C\|f\|_{L^{r}(\Omega)}^{\frac{2^{*}_{\lambda}}{1+\nu}}\]
By Dominated Convergence Theorem we have \(u\in L^{2^{*}_{\lambda}}(\Omega)\).
Let \((\frac{2^{*}_{\lambda}}{1-\nu})^{\prime}<r<\frac{Q}{2}\). Choose \(\delta>1\) (to be determined later). We can treat the function \(v=u_{n}^{2\delta-1}\) as a test function; putting it in (9), we obtain
\[\int_{\Omega}\langle A\nabla^{*}u_{n},\nabla^{*}u_{n}^{2\delta-1}\rangle dX=\int_{\Omega}\frac{f_{n}u_{n}^{2\delta-1}}{(u_{n}+\frac{1}{n})^{\nu}}dX\leq\int_{\Omega}fu_{n}^{2\delta-\nu-1}dX\leq\|f\|_{L^{r}(\Omega)}(\int_{\Omega}u_{n}^{(2\delta-\nu-1)r^{\prime}}dX)^{\frac{1}{r^{\prime}}} \tag{31}\]
Also,
\[\int_{\Omega}\langle A\nabla^{*}u_{n},\nabla^{*}u_{n}^{2\delta-1}\rangle dX= \int_{\Omega}(2\delta-1)u_{n}^{2\delta-2}\langle A\nabla^{*}u_{n},\nabla^{*} u_{n}\rangle dX=\int_{\Omega}\frac{(2\delta-1)}{\delta^{2}}\langle A\nabla^{*}u_{n}, \nabla^{*}u_{n}^{\delta}\rangle dX \tag{32}\]
Using (31) and (32) we have
\[\int_{\Omega}\langle A\nabla^{*}u_{n}^{\delta},\nabla^{*}u_{n}^{\delta}\rangle dX\leq\frac{\delta^{2}}{(2\delta-1)}\|f\|_{L^{r}(\Omega)}(\int_{\Omega}u_{n}^{(2\delta-\nu-1)r^{\prime}}dX)^{\frac{1}{r^{\prime}}}\]
By Theorem 2.4, there exists \(C>0\) such that
\[\int_{\Omega}u_{n}^{\delta 2^{*}_{\lambda}}dX\leq C(\int_{\Omega}\langle A\nabla^{*}u_{n}^{\delta},\nabla^{*}u_{n}^{\delta}\rangle dX)^{\frac{2^{*}_{\lambda}}{2}}\] \[\leq C\Big\{\frac{\delta^{2}}{(2\delta-1)}\|f\|_{L^{r}(\Omega)}\Big\}^{\frac{2^{*}_{\lambda}}{2}}(\int_{\Omega}u_{n}^{(2\delta-\nu-1)r^{\prime}}dX)^{\frac{2^{*}_{\lambda}}{2r^{\prime}}}\]
Choose \(\delta\) such that \(\delta 2^{*}_{\lambda}=(2\delta-\nu-1)r^{\prime}\); then \(2^{*}_{\lambda}\delta=s\). As \((\frac{2^{*}_{\lambda}}{1-\nu})^{\prime}<r<\frac{Q}{2}\), we have \(\delta>1\) and \(\frac{2^{*}_{\lambda}}{2r^{\prime}}<1\), so \(\int_{\Omega}u_{n}^{s}dX\leq C\). By the Dominated Convergence Theorem, we get \(u\in L^{s}(\Omega)\).
**Proof of Theorem 3.7:**
Proof.: Let \(\epsilon<\frac{1}{n}\) and \(v=(u_{n}+\epsilon)^{2\delta-1}-\epsilon^{2\delta-1}\) with \(\frac{1+\nu}{2}\leq\delta<1\). We can treat \(v\) as a function in \(C^{1}_{0}(\Omega)\). Putting \(v\) in (9), we obtain
\[\int_{\Omega}\langle A\nabla^{*}u_{n},\nabla^{*}u_{n}\rangle(u_{n}+\epsilon)^ {2\delta-2}dX\leq\frac{1}{(2\delta-1)}\int_{\Omega}\frac{fv}{(u_{n}+\frac{1}{n })^{\nu}}\]
As \(\epsilon<\frac{1}{n}\) so we have
\[\int_{\Omega}\langle A\nabla^{*}u_{n},\nabla^{*}u_{n}\rangle(u_{n}+\epsilon)^ {2\delta-2}dX\leq\frac{1}{(2\delta-1)}\int_{\Omega}f(u_{n}+\epsilon)^{2\delta -1-\nu}\;dX \tag{33}\]
By some simple calculation, we get
\[\int_{\Omega}\langle A\nabla^{*}\nu,\nabla^{*}\nu\rangle dX\leq\frac{\delta^ {2}}{(2\delta-1)}\int_{\Omega}f(u_{n}+\epsilon)^{2\delta-1-\nu}dX\]
By Theorem 2.4, we have
\[(\int_{\Omega}v^{2^{*}_{\lambda}}dX)^{\frac{2}{2^{*}_{\lambda}}}\leq\frac{C\delta^{2}}{(2\delta-1)}\int_{\Omega}f(u_{n}+\epsilon)^{2\delta-1-\nu}dX\]
Take \(\epsilon\to 0\) and use Dominated convergence Theorem we have,
\[(\int_{\Omega}u_{n}^{2^{*}_{\lambda}\delta}dX)^{\frac{2}{2^{*}_{\lambda}}}\leq\frac{C\delta^{2}}{(2\delta-1)}\int_{\Omega}fu_{n}^{2\delta-1-\nu}dX \tag{34}\]
If \(r=1\) then choose \(\delta=\frac{\nu+1}{2}\) and from the previous inequality we have \(\{u_{n}\}\) is bounded in \(L^{s}(\Omega)\) with \(s=\frac{Q(\nu+1)}{(Q-2)}\).
If \(r>1\) then choose \(\delta\) in such a way that \((2\delta-1-\nu)r^{\prime}=2^{*}_{\lambda}\delta\). Now, applying Holder inequality on RHS of (34) we have,
\[(\int_{\Omega}u_{n}^{2^{*}_{\lambda}\delta})^{\frac{2}{2^{*}_{ \lambda}}} \leq\frac{C\delta^{2}}{(2\delta-1)}\|f\|_{L^{\prime}(\Omega)}(\int_{ \Omega}u_{n}^{(2\delta-1-\nu)r^{\prime}})^{\frac{1}{r^{\prime}}}\] \[=\frac{C\delta^{2}}{(2\delta-1)}\|f\|_{L^{\prime}(\Omega)}(\int_{ \Omega}u_{n}^{2^{*}_{\lambda}\delta})^{\frac{1}{r^{\prime}}}\]
As \(1\leq r<\frac{2Q}{(Q+2)+\nu(Q-2)}<\frac{Q}{2}\) so \(\frac{2}{2^{*}_{\lambda}}>\frac{1}{r^{\prime}}\). Hence, \(\{u_{n}\}\) is bounded in \(L^{s}(\Omega)\) with \(s=2^{*}_{\lambda}\delta=\frac{Qr(\nu+1)}{(Q-2r)}\). Using Holder inequality in (33), we have
\[\int_{\Omega}\langle A\nabla^{*}u_{n},\nabla^{*}u_{n}\rangle(u_{n}+\epsilon)^{ 2\delta-2}dX\leq\frac{1}{(2\delta-1)}\|f\|_{L^{r}(\Omega)}(\int_{\Omega}(u_{n} +\epsilon)^{2^{*}_{\lambda}\delta})^{\frac{1}{r^{\prime}}}\]
Since \(u_{n}\) is bounded in \(L^{s}(\Omega)\) so
\[\int_{\Omega}\langle A\nabla^{*}u_{n},\nabla^{*}u_{n}\rangle(u_{n}+\epsilon)^ {2\delta-2}dX\leq C.\]
For \(q=\frac{Qr(\nu+1)}{Q-r(1-\nu)}\), the above choice of \(\delta\) satisfies the condition \((2-2\delta)q=(2-q)s\).
So,
\[\int_{\Omega}\langle A\nabla^{*}u_{n},\nabla^{*}u_{n}\rangle^{\frac{q}{2}}dX=\int_{\Omega}\frac{|\sqrt{A}\nabla^{*}u_{n}|^{q}}{(u_{n}+\epsilon)^{q-q\delta}}(u_{n}+\epsilon)^{q-\delta q}dX\] \[\leq\Big(\int_{\Omega}\frac{|\sqrt{A}\nabla^{*}u_{n}|^{2}}{(u_{n}+\epsilon)^{2-2\delta}}dX\Big)^{\frac{q}{2}}\Big(\int_{\Omega}(u_{n}+\epsilon)^{s}dX\Big)^{1-\frac{q}{2}}\]
Since \(\{u_{n}\}\) is bounded in \(L^{s}(\Omega)\) and \(\epsilon<\frac{1}{n}\), the sequence \(\{u_{n}+\epsilon\}\) is bounded in \(L^{s}(\Omega)\). Consequently, \(\{u_{n}\}\) is bounded in \(W^{1,\lambda,q}_{0}(\Omega)\). Hence \(u\in W^{1,\lambda,q}_{0}(\Omega)\).
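For the record, the compatibility condition \((2-2\delta)q=(2-q)s\) used in the Hölder step can be verified directly; with \(\delta\) obtained from \((2\delta-1-\nu)r^{\prime}=2^{*}_{\lambda}\delta\) exactly as before, i.e. \(\delta=\frac{(\nu+1)r(Q-2)}{2(Q-2r)}\), together with \(s=\frac{Qr(\nu+1)}{Q-2r}\) and \(q=\frac{Qr(\nu+1)}{Q-r(1-\nu)}\), one has

\[2-2\delta=\frac{2Q-r\big[(Q+2)+\nu(Q-2)\big]}{Q-2r},\qquad 2-q=\frac{2Q-r\big[(Q+2)+\nu(Q-2)\big]}{Q-r(1-\nu)},\]

so that

\[(2-2\delta)\,q=\frac{\big(2Q-r[(Q+2)+\nu(Q-2)]\big)\,Qr(\nu+1)}{(Q-2r)\big(Q-r(1-\nu)\big)}=(2-q)\,s,\]

and the common numerator is positive precisely because \(r<\frac{2Q}{(Q+2)+\nu(Q-2)}\).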
## 7 Variable Singular Exponent
Consider the equation
\[-\Delta_{\lambda}u =\frac{f}{u^{\nu(x)}}\text{ in }\Omega\] \[u>0\text{ in }\Omega \tag{35}\] \[u =0\text{ on }\partial\Omega\]
where \(\nu\in C^{1}(\overline{\Omega})\) is a positive function.
**Theorem 7.1**.: _Let \(f\in L^{(2^{*}_{\lambda})^{\prime}}(\Omega)\) be a function. If there exists \(K\Subset\Omega\) such that \(0<\nu(x)\leq 1\) in \(K^{c}\) (the complement of \(K\)), then (35) has a unique solution in \(H^{1,\lambda}_{0}(\Omega)\) provided \(\lambda\geq 1\)._
Proof.: The same approximation used in the earlier sections yields the existence of a strictly positive function \(u\), which is the pointwise limit of the increasing sequence \(\{u_{n}\}\subset H^{1,\lambda}_{0}(\Omega)\cap L^{\infty}(\Omega)\). Moreover, Lemma 4.2 still applies. As \(K\Subset\Omega\), by Lemma 4.2 there exists \(C>0\) such that \(u_{n}(x)\geq C\) for a.e. \(x\in K\) and for all \(n\in\mathbb{N}\). For each \(n\in\mathbb{N}\), \(u_{n}\) solves
\[-\Delta_{\lambda}u_{n} =\frac{f_{n}}{(u_{n}+\frac{1}{n})^{\nu(x)}}\text{ in }\Omega\] \[u_{n} >0\text{ in }\Omega \tag{36}\] \[u_{n} =0\text{ on }\partial\Omega\]
By using Holder inequality and the embedding theorem, we have
\[\int_{\Omega}\langle A\nabla^{*}u_{n},\nabla^{*}u_{n}\rangle dx =\int_{\Omega}\frac{f_{n}u_{n}}{(u_{n}+\frac{1}{n})^{\nu(x)}}dx\] \[=\int_{K}\frac{f_{n}u_{n}}{(u_{n}+\frac{1}{n})^{\nu(x)}}dx+\int_{K^{c}\cap\Omega}\frac{f_{n}u_{n}}{(u_{n}+\frac{1}{n})^{\nu(x)}}dx\] \[\leq\Big\|\frac{1}{C^{\nu(x)}}\Big\|_{\infty}\int_{K}fu_{n}dx+\int_{\{x\in K^{c}\cap\Omega:u_{n}(x)\leq 1\}}fu_{n}^{1-\nu(x)}dx+\int_{\{x\in K^{c}\cap\Omega:u_{n}(x)\geq 1\}}fu_{n}^{1-\nu(x)}dx\] \[\leq\Big\|\frac{1}{C^{\nu(x)}}\Big\|_{\infty}\int_{K}fu_{n}dx+\int_{\{x\in K^{c}\cap\Omega:u_{n}(x)\leq 1\}}fdx+\int_{\{x\in K^{c}\cap\Omega:u_{n}(x)\geq 1\}}fu_{n}dx\] \[\leq\Big\|\frac{1}{C^{\nu(x)}}\Big\|_{\infty}\|f\|_{L^{(2^{*}_{\lambda})^{\prime}}(\Omega)}\|u_{n}\|_{L^{2^{*}_{\lambda}}(\Omega)}+\|f\|_{L^{1}(\Omega)}+\|f\|_{L^{(2^{*}_{\lambda})^{\prime}}(\Omega)}\|u_{n}\|_{L^{2^{*}_{\lambda}}(\Omega)}\] \[\leq C\|f\|_{L^{(2^{*}_{\lambda})^{\prime}}(\Omega)}\|u_{n}\|_{H^{1,\lambda}_{0}(\Omega)}+\|f\|_{L^{1}(\Omega)}\]
We obtained
\[\|u_{n}\|_{H^{1,\lambda}_{0}(\Omega)}^{2}\leq C\|f\|_{L^{(2^{*}_{\lambda})^{\prime}}(\Omega)}\|u_{n}\|_{H^{1,\lambda}_{0}(\Omega)}+\|f\|_{L^{1}(\Omega)}.\]
Hence, \(u_{n}\) is bounded in \(H^{1,\lambda}_{0}(\Omega)\). Without loss of generality we can assume that \(u_{n}\) weakly converges to \(u\) in \(H^{1,\lambda}_{0}(\Omega)\). Let \(w\in C^{1}_{c}(\Omega)\). Using Lemma 4.2, there exists \(c>0\) such that \(u_{n}\geq c\) for a.e \(x\) in \(\operatorname{supp}(w)\). Since \(u_{n}\) solves (36) so
\[\int_{\Omega}\langle A\nabla^{*}u_{n},\nabla w\rangle dx=\int_{\Omega}\frac{f_ {n}w}{(u_{n}+\frac{1}{n})^{\nu(x)}}dx\]
Taking \(n\to\infty\) and using the dominated convergence theorem, we get
\[\int_{\Omega}\langle A\nabla^{*}u,\nabla w\rangle dx=\int_{\Omega}\frac{fw}{u^ {\nu(x)}}dx\]
Hence, \(u\) is a solution of (35). The proof of the uniqueness part is identical to the one given in Theorem 3.1.
**Theorem 7.2**.: _Let \(u\) be the solution of equation (35) with \(f\in L^{r}(\Omega)\), \(r>\frac{Q}{2}\). Then \(u\in L^{\infty}(\Omega)\), where \(Q=(m+1)+\lambda m\)._
Proof.: The proof is similar to that of Theorem 3.2 and is omitted here.
|
2309.15782 | Joint-YODNet: A Light-weight Object Detector for UAVs to Achieve Above
100fps | Small object detection via UAV (Unmanned Aerial Vehicle) images captured from
drones and radar is a complex task with several formidable challenges. This
domain encompasses numerous complexities that impede the accurate detection and
localization of small objects. To address these challenges, we propose a novel
method called JointYODNet for UAVs to detect small objects, leveraging a joint
loss function specifically designed for this task. Our method revolves around
the development of a joint loss function tailored to enhance the detection
performance of small objects. Through extensive experimentation on a diverse
dataset of UAV images captured under varying environmental conditions, we
evaluated different variations of the loss function and determined the most
effective formulation. The results demonstrate that our proposed joint loss
function outperforms existing methods in accurately localizing small objects.
Specifically, our method achieves a recall of 0.971, and a F1Score of 0.975,
surpassing state-of-the-art techniques. Additionally, our method achieves a
[email protected](%) of 98.6, indicating its robustness in detecting small objects across
varying scales | Vipin Gautam, Shitala Prasad, Sharad Sinha | 2023-09-27T16:57:04Z | http://arxiv.org/abs/2309.15782v1 | # Joint-YODNet: A Light-weight Object Detector for UAVs to Achieve Above 100fps
###### Abstract
Small object detection via UAV (Unmanned Aerial Vehicle) images captured from drones and radar is a complex task with several formidable challenges. This domain encompasses numerous complexities that impede the accurate detection and localization of small objects. To address these challenges, we propose a novel method called Joint-YODNet for UAVs to detect small objects, leveraging a joint loss function specifically designed for this task. Our method revolves around the development of a joint loss function tailored to enhance the detection performance of small objects. Through extensive experimentation on a diverse dataset of UAV images captured under varying environmental conditions, we evaluated different variations of the loss function and determined the most effective formulation. The results demonstrate that our proposed joint loss function outperforms existing methods in accurately localizing small objects. Specifically, our method achieves a recall of 0.971, and a F1Score of 0.975, surpassing state-of-the-art techniques. Additionally, our method achieves a [email protected](%) of 98.6, indicating its robustness in detecting small objects across varying scales.
Keywords:Object Detection SAR Ship Detection SAR Small Object Detection Unmanned Aerial Vehicle Deep Neural Networks.
## 1 Introduction
Unmanned aerial vehicles (UAVs) have garnered significant popularity across various domains, encompassing surveillance, search and rescue operations, and environmental monitoring [21, 27, 1]. The detection of small objects, such as vehicles or pedestrians, within UAV images and radar scans holds the utmost importance for numerous applications. However, accomplishing this task is inherently challenging due to a multitude of factors including limitations in resolution, object occlusion, background clutter, low contrast, motion blur, complexities in data annotation, and real-time processing requirements.
Despite these challenges, there has been significant progress in recent years in the development of small object detection algorithms for UAV/aerial imaging [23, 12]. These algorithms typically use a combination of techniques, such as
image segmentation, feature extraction, and machine learning, to identify and classify small objects. Some of the most promising approaches include deep learning methods, which have been shown to be very effective at extracting features from aerial images.
Advancements in deep learning have improved object detection methods used in various applications, including aerial imaging [7, 9]. However, these methods face challenges in detecting small objects in high-resolution aerial images. Detecting objects in aerial images presents several challenges due to the high resolution and presence of tiny objects. Challenges include information loss from rescaling, low tolerance to bounding box shifts, and noisy feature representations [3]. A slight shift in the bounding box can lead to false positives and a decrease in Intersection over Union (IoU) [3]. In the existing literature, various object detection methods have been proposed with different strategies to improve system performance. These strategies include enhancing deep network architectures [20], introducing new loss functions [16], and proposing innovative learning approaches [17]. Among these, the use of dedicated loss functions has shown significant improvements in overall performance, motivating the focus of this paper on introducing a new loss function.
Specifically, we propose a joint loss function tailored for small object detection in aerial views. This joint loss is integrated into a deep model, enhancing the learning capability of the network compared to conventional training methods. The resulting gradient behaviour and faster convergence lead to higher detection accuracy. To further improve feature representation, we incorporate Omni-dimensional dynamic convolution (ODConv) [10], which enhances overall performance. Furthermore, we explore the training process using a smaller dataset, making our system more practical and easier to deploy in real-world scenarios.
In this study, we focus on detecting ships and vessels using drone imagery, see Fig. 1. These objects play crucial roles in maritime surveillance, security, and navigation applications. However, detecting these relatively large objects in UAV images and radar scans can be challenging due to varying scales, occlusion, and cluttered backgrounds [24, 26]. Small ships, in particular, can be easily confused with ocean streams due to ship motion. Additionally, occlusion from other ships,
structures, or environmental factors, along with background clutter such as waves or coastal features, further complicates the detection process.

Figure 1: Demonstration of Detection challenges in images obtained through UAV.
## 2 Methodology
This section presents a comprehensive and advanced overview of our proposed light-weight small object detector called Joint-YODNet (Joint-loss based YOLO Omni-dimensional Dynamic Convolution Network) and discusses its various modules in detail. Our innovative object detector enables us to efficiently extract and accurately classify multiple small objects.
### You Only Look Once (YOLO)
YOLO version 5 (YOLOv5), a lightweight CNN-based object detection model, is widely recognized for its efficiency [8]. The model comprises three main components: the Backbone, the Neck, and the Head. The Backbone utilizes a convolutional neural network (CNN) to extract high-quality features from the input image. Notably, it incorporates the CSP module (C3), inspired by the CSPNet design [22]. The Neck consists of additional convolutional layers that capture intricate details and spatial information in the feature maps. The Detection Head utilizes the processed feature maps to perform the final steps of object detection, including bounding box prediction and class probability estimation.
Figure 2: Joint-YODNet Structure

The loss function consists of three components: objectness loss, localization loss, and classification loss. These components are combined using weights to optimize the model for accurate object detection, precise bounding box prediction, and correct classification during training. The overall loss function can be expressed as:
\[L_{loss}=\lambda_{1}L_{cls}+\lambda_{2}L_{obj}+\lambda_{3}L_{loc} \tag{1}\]
Binary cross-entropy (BCE) loss is used for the classification loss \(L_{cls}\) and the objectness loss \(L_{obj}\), while the CIoU loss is used for the localization loss \(L_{loc}\). A detailed discussion is given in Section 2.3, which covers the different localization loss functions used for bounding box regression.
### ODConvNeXt
We present YOLOv5-ODConvNeXt, a variant of YOLOv5 that incorporates the CovNext [15] and ODConv [10] modules within the backbone network, as shown in Fig. 2. SAR image analysis poses several challenges such as speckle noise, complex texture, limited training data, scale/resolution variations, shadow/layover effects, and data interpretation. To address these challenges, we leverage the CovNext and ODConv modules to enhance the feature maps obtained by YOLOv5-ODConvNeXt [2].
Additionally, we introduce a "Joint Loss" function specifically designed to improve the localization accuracy of bounding boxes and accelerate convergence. The effectiveness of the joint loss function in enhancing localization ability is evaluated through a comprehensive set of experiments, both quantitative and qualitative, as discussed in Section 4.
### Loss Functions for BBox
Bounding box (BBox) regression is a key technique in object detection that predicts the location of target objects using rectangular BBoxes. It aims to improve the accuracy of the predicted BBox.
To achieve this, the regression process uses loss functions based on the Intersection over Union (IoU), which measures the overlap between the predicted BBox and the ground truth BBox. The IoU is calculated as the ratio of the intersection area between the ground truth and predicted BBoxes to their union:
\[IoU=\frac{|GT\cap PD|}{|GT\cup PD|} \tag{2}\]
The IoU loss function is effective when there is an overlap between the predicted (\(PD\)) and ground truth (\(GT\)) BBoxes. However, it struggles to produce meaningful gradients and slow convergence when there is no overlap between the BBoxes.
**Generalized Intersection over Union** (GIoU) [19] loss maximizes the overlap between the predicted and ground truth BBoxes by gradually adjusting the size of the predicted box. It is particularly effective when the boxes initially do not overlap. The GIoU formula is defined as follows:
\[GIoU=IoU-\frac{|C-(GT\cup PD)|}{|C|} \tag{3}\]
Where C represents the minimum bounding box that encompasses both the predicted (PD) and ground truth (GT) BBoxes. It acts as a penalty term, guiding the predicted BBox towards the target ground truth BBox. The GIoU loss outperforms Mean Squared Error (MSE) loss and IoU loss in terms of precision. While it addresses the issue of vanishing gradients in non-overlapping scenarios, it may have slower convergence and less accurate regression for boxes with extreme aspect ratios.
**Distance IoU** (DIoU) [31] is a measure of the normalized distance between the center points of the predicted and ground truth BBoxes. By incorporating distance information, it enables faster convergence and more precise regression.
\[DIoU=IoU-\frac{d^{2}}{c^{2}} \tag{4}\]
In this formula, '\(d\)' represents the Euclidean distance between the center points of the predicted and ground truth BBoxes, while '\(c\)' denotes the diagonal length of the smallest enclosing box that covers both BBoxes. The inclusion of distance information in the loss function enhances optimization by enabling faster convergence. It also improves the accuracy of regression, resulting in better localization of objects in object detection tasks.
**Complete Intersection over Union** (CIoU) [32] loss incorporates three essential geometric factors: overlap area, distance, and aspect ratio. CIoU loss is a versatile approach to BBox regression, surpassing both GIoU and DIoU. However, when the aspect ratio of the ground truth BBox matches that of the predicted BBox, CIoU degenerates to DIoU.
**Efficient IOU**[29] addresses limitations of traditional IOU loss by incorporating additional components to reflect the closeness between BBoxes and improve convergence. The Efficient IOU loss consists of three terms: IOU loss (\(L_{iou}\)), distance loss (\(L_{dis}\)), and aspect ratio loss (\(L_{asp}\)). By combining these terms, the Efficient IOU loss enhances the training efficiency and achieves improved performance. Retaining the positive effects of CIoU loss, Efficient IOU demonstrates the potential for further improvement in neural network training. Thus, it is defined as:
\[L_{eiou}=1-IOU+\frac{\rho^{2}(b,b^{GT})}{\left(w^{c}\right)^{2}+\left(h^{c} \right)^{2}}+\frac{\rho^{2}(w,w^{GT})}{\left(w^{c}\right)^{2}}+\frac{\rho^{2 }(h,h^{GT})}{\left(h^{c}\right)^{2}} \tag{5}\]
where \(h^{w}\) and \(h^{c}\) are the width and height of the smallest enclosing box covering the two boxes. Variables \(b\) and \(b^{GT}\) are the center of the predicted and ground truth BBox.
#### Proposed Joint Loss
We propose the "**Joint Loss**", an enhanced approach that combines multiple loss components to address existing limitations. The Joint Loss is computed using a formula involving coefficients \(\alpha,\beta,\gamma\), and \(\eta\), determined through empirical tests: \(\alpha=0.1\), \(\beta=0.1\), \(\gamma=0.1\), \(\eta=0.7\).
\[L_{joint}=\alpha L_{ciou}+\beta L_{diou}+\gamma L_{giou}+\eta L_{eiou} \tag{6}\]
Compared to a single loss, a joint loss function combines multiple components, enabling the model to optimize for multiple objectives simultaneously. It improves overall performance, captures different aspects of the problem, and addresses challenges through specific loss components. The joint loss helps overcome the gradient vanishing problem and facilitates generalization to unseen data. It allows customization and fine-tuning, incorporating domain-specific knowledge for better results. Mathematically, the joint loss is denoted as \(L_{joint}\) and consists of individual loss components \(L_{ciou}\), \(L_{diou}\), \(L_{giou}\), \(L_{eiou}\). The gradients of the joint loss are computed by summing the gradients of the individual components, as shown in Eq. 7. Binary cross entropy (BCE) is used for classification. In the following section, we present the experimental setup and evaluation of the proposed loss function.
\[\Delta L_{joint}=\alpha\Delta L_{ciou}+\beta\Delta L_{diou}+\gamma\Delta L_{giou }+\eta\Delta L_{eiou} \tag{7}\]
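To make Eqs. (6)–(7) concrete, the sketch below shows one way the joint loss could be implemented in the PyTorch setting used in this work. It is only an illustration, not the authors' released code: the (x1, y1, x2, y2) box layout, the `eps` stabiliser, the mean reduction, and the standard aspect-ratio trade-off term used for CIoU are assumptions on our part.

```python
import math
import torch

def joint_iou_loss(pred, target, alpha=0.1, beta=0.1, gamma=0.1, eta=0.7, eps=1e-7):
    """Sketch of Eq. (6): L_joint = a*L_CIoU + b*L_DIoU + g*L_GIoU + e*L_EIoU.

    pred, target: (N, 4) tensors of boxes in (x1, y1, x2, y2) format (assumed).
    """
    px1, py1, px2, py2 = pred.unbind(-1)
    tx1, ty1, tx2, ty2 = target.unbind(-1)
    pw, ph = (px2 - px1).clamp(min=eps), (py2 - py1).clamp(min=eps)
    tw, th = (tx2 - tx1).clamp(min=eps), (ty2 - ty1).clamp(min=eps)

    # IoU (Eq. 2)
    iw = (torch.min(px2, tx2) - torch.max(px1, tx1)).clamp(min=0)
    ih = (torch.min(py2, ty2) - torch.max(py1, ty1)).clamp(min=0)
    inter = iw * ih
    union = pw * ph + tw * th - inter + eps
    iou = inter / union

    # smallest enclosing box C
    cw = torch.max(px2, tx2) - torch.min(px1, tx1)
    ch = torch.max(py2, ty2) - torch.min(py1, ty1)
    c_area = cw * ch + eps                       # area of C (for GIoU)
    c2 = cw.pow(2) + ch.pow(2) + eps             # squared diagonal of C
    d2 = ((tx1 + tx2 - px1 - px2).pow(2)
          + (ty1 + ty2 - py1 - py2).pow(2)) / 4  # squared centre distance

    l_giou = 1 - (iou - (c_area - union) / c_area)   # Eq. (3)
    l_diou = 1 - (iou - d2 / c2)                     # Eq. (4)

    # CIoU adds the usual aspect-ratio penalty on top of DIoU
    v = (4 / math.pi ** 2) * (torch.atan(tw / th) - torch.atan(pw / ph)).pow(2)
    with torch.no_grad():
        a_tradeoff = v / (1 - iou + v + eps)
    l_ciou = l_diou + a_tradeoff * v

    # EIoU (Eq. 5): separate width and height penalties
    l_eiou = (1 - iou + d2 / c2
              + (pw - tw).pow(2) / (cw.pow(2) + eps)
              + (ph - th).pow(2) / (ch.pow(2) + eps))

    # Eq. (6); autograd combines the gradients exactly as in Eq. (7)
    return (alpha * l_ciou + beta * l_diou + gamma * l_giou + eta * l_eiou).mean()
```

Because the four terms are summed with fixed weights, backpropagation yields the gradient combination of Eq. (7); in training, \(L_{joint}\) would take the place of \(L_{loc}\) in Eq. (1), with BCE retained for the classification and objectness terms.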
## 3 Implementation and Datasets
The hardware setup consists of 2 x Intel Xeon Gold 6248 processors, each with 20 cores running at a clock speed of 2.5GHz, and an NVIDIA DGX A100 GPU. All experiments are conducted using the Python language and the PyTorch framework. The proposed model is optimized using Stochastic Gradient Descent (SGD) with an initial learning rate of 0.01, a momentum factor of 0.937, and weight decay set to 0.0005. During training, a batch size of 32 is used, and the training process is carried out for a total of 500 epochs. The network input size used in this work is 640x640. Additionally, the Intersection over Union (IoU) threshold is set to 0.5 for the experiments.
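The optimiser settings listed above translate directly into PyTorch. The snippet below is a hypothetical reconstruction of that configuration (the placeholder module stands in for the actual Joint-YODNet network, and the warm-up and learning-rate schedule of the YOLOv5 training loop are omitted):

```python
import torch
import torch.nn as nn

# Hyperparameters quoted in the text.
LR, MOMENTUM, WEIGHT_DECAY = 0.01, 0.937, 0.0005
BATCH_SIZE, EPOCHS, IMG_SIZE, IOU_THRES = 32, 500, 640, 0.5

model = nn.Conv2d(3, 16, 3)  # placeholder module; the actual Joint-YODNet network goes here

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=LR,
    momentum=MOMENTUM,
    weight_decay=WEIGHT_DECAY,
)
```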
### Evaluation Criterion
To evaluate the proposed method, we used standard performance metrics, including precision (P), recall (R), F1 measure (F1), and mean average precision (mAP). The mAP is calculated as the average of the average precision (AP) values for each category, which are obtained from the Precision-Recall (PR) curve. Precision and recall are computed using the formulas:
\[P=\frac{TP}{TP+FP},\ R=\frac{TP}{TP+FN},\ F1=2\cdot\frac{P\cdot R}{P+R} \tag{8}\]
where TP represents true positives, FP represents false positives, and FN represents false negatives. Frames per second (FPS) has been used to evaluate the real-time application of the model. We base FPS calculation on the formula provided by the official repository created by Ultralytics [8]. The formula used for FPS calculation can be defined as.
\[FPS=\frac{1000}{P+I+NMS} \tag{9}\]
where \(P\), \(I\), and \(NMS\) are preprocessing, inference, and Non Max suppression time, respectively.
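For reference, Eqs. (8)–(9) amount to a few lines of Python; the counts and per-image timings below are placeholders for illustration, not measured values:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall and F1 from Eq. (8)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def fps(pre_ms, inf_ms, nms_ms):
    """Frames per second from Eq. (9); all times are per-image in milliseconds."""
    return 1000.0 / (pre_ms + inf_ms + nms_ms)

# Example with placeholder numbers:
p, r, f1 = detection_metrics(tp=971, fp=21, fn=29)
throughput = fps(pre_ms=0.4, inf_ms=6.0, nms_ms=1.0)  # ~135 FPS
```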
### Datasets Description
For evaluating the proposed methods, we utilized two well-known benchmark datasets for Aerial Image Detection: SAR Ship Detection and Aerial Ship Detection datasets. These datasets are widely recognized and present significant challenges in the field of aerial image for small object detection.
#### 3.2.1 SAR Ship Detection Dataset
For the SAR Ship Detection (SAR-SD) dataset, we utilized the publicly available SSDD dataset [11]. This dataset contains 1160 SAR images with 2540 ship instances of varying resolutions (1 to 15 meters). The ships in this dataset are small, with some being only tens of pixels in size. We divided the SAR-SD dataset into a training set of 641 images, 271 for the validation set, and a separate test set of 232 images. Our focus was to assess the generalization capability of the proposed method using this dataset.
#### 3.2.2 Aerial Ship Detection Dataset
The Aerial Ship Detection (ASD) dataset, sourced from Roboflow [5], comprises 1395 JPEG aerial images. The images have a resolution of 600x400 pixels. We used an official split of a training set of 1224, a validation set of 113, and a test set of size 58 images. To ensure uniformity, the images underwent preprocessing steps including auto-orientation and resizing to 640x640 pixels. Data augmentation techniques were applied, including a noise augmentation called salt and pepper, which introduced up to 5% noise through pixel manipulation within the bounding boxes. These augmentations were implemented to enhance the robustness and generalizability of the ship detection model trained on this dataset.
## 4 Experimental Results
This section presents the experimental analysis conducted on the aforementioned public datasets. The results are comprehensively examined, encompassing both qualitative and quantitative aspects. We compare our method with existing state-of-the-art (SOTA) approaches, followed by conducting ablation studies to evaluate the impact of different components in our proposed method. Additionally, we assess the computational speed of our method to gauge its efficiency.
### Comparison with SOTA Methods:
Our proposed method is evaluated and compared to several SOTA networks for SAR-SD dataset, as summarized in Table 1. The results demonstrate that our method achieves competitive performance compared to existing networks. YOLOv5-ODConvNeXt [2] achieves remarkable results, with a precision of 0.971, recall of 0.96, and F1Score of 0.965. It also achieves an [email protected](%) of 98.10 and an [email protected]::95(%) of 72.70. However, FIERNet [24] and CR2A-Net [26] achieve the highest precision scores of 0.98. Our proposed Joint-YODNet achieves a precision of 0.979, demonstrating excellent performance comparable to FIERNet
and CR2A-Net. In terms of recall, F1Score, and [email protected](%), Joint-YODNet achieves a recall of 0.971, an F1Score of 0.975, and a [email protected](%) of 98.6, outperforming YOLOv5-ODConvNeXt and other compared methods. SOTA methods, including SSD [14], Faster R-CNN [18], YOLOv5 [8], RetinaNet [13], DDNet [30], Quad-FPN [28], SAR-ShipNet [4], and HA-SARSD [25], exhibit varying levels of performance in terms of precision, recall, F1Score, [email protected](%), and [email protected]:.95(%).
In summary, our proposed method shows competitive performance compared to existing state-of-the-art networks. It achieves high precision comparable to FIERNet and CR2A-Net, while surpassing YOLOv5-ODConvNeXt and other methods in terms of recall and F1Score. Furthermore, our method demonstrates strong performance in terms of [email protected](%) and [email protected]:.95(%).
### Ablation study
#### 4.2.1 Evaluation on SAR-SD
The Table 2 compares different loss variations for small object detection on the SAR-SD dataset. The baseline method, YOLOv5-ODConvNeXt, achieved a precision of 0.971, recall of 0.96, F1Score of 0.965, [email protected](%) of 98.10, and [email protected]:.95(%) of 72.70. The EIoU method improved precision (0.984) and F1Score (0.976), achieving the highest values in these metrics. The SIoU method also performed well, achieving a precision of 0.979 and an F1-score of 0.974.
The combination of EIoU and SIoU, denoted as EIoU + SIoU, resulted in a precision of 0.975, recall of 0.969, F1Score of 0.972, [email protected](%) of 98.50, and [email protected]:.95(%) of 72.70.
Our Joint-YODNet achieved a precision of 0.979, recall of 0.974, F1Score of 0.976, [email protected](%) of 98.60, and [email protected]:.95(%) of 72.60. These results show that our method outperforms the baseline YOLOv5-ODConvNeXt method and performs at par with the EIoU and SIoU methods.
\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline \hline
**Networks** & **P** & **R** & **F1** & **[email protected](\%)** & **[email protected]:.95(\%)** \\ \hline \hline SSD [14] & 0.846 & 0.811 & 0.828 & – & – \\ Faster R-CNN [18] & 0.871 & 0.856 & 0.863 & – & – \\ YOLOv5 [8] & 0.964 & 0.897 & 0.929 & 95.2 & – \\ RetinaNet [13] & 0.901 & 0.891 & 0.896 & – & – \\ DDNet [30] & 0.931 & 0.912 & 0.921 & – & – \\ Quad-FPN [28] & 0.90 & 0.957 & 0.925 & 95.20 & – \\ SAR-ShipNet [4] & 0.95 & 0.763 & 0.847 & 89.08 & – \\ FIERNet [24] & **0.98** & 0.879 & 0.927 & 94.14 & – \\ CR2A-Net [26] & **0.98** & 0.878 & 0.927 & 89.8 & – \\ HA-SARSD [25] & 0.97 & 0.920 & 0.944 & 97.0 & – \\ YOLOv5-ODConvNeXt [2] & 0.971 & 0.96 & 0.965 & 98.10 & **72.70** \\ \hline \hline _Joint-YODNet_ & 0.979 & **0.971** & **0.975** & **98.6** & 72.6 \\ \hline \hline \end{tabular}
\end{table}
Table 1: State-of-the-art comparison with existing networks on SAR-SD.
Overall, the results in Table 2 show that our proposed method is a promising approach for small object detection in SAR images. It achieves superior performance compared to the baseline method and performs at par with the SOTA methods. Additionally, our method is computationally efficient and has a lightweight architecture, making it well-suited for real-time applications and deployment on resource-constrained edge devices (discussed in 4.4).
#### Evaluation on ASD Dataset

The experimental results presented in Table 3 illustrate the performance comparison of different loss variations on the ASD dataset. Notably, our proposed method exhibits promising results, showcasing competitive performance across various metrics. Our method achieves a precision of 0.84, outperforming other loss variations, including YOLOv5-ODConvNeXt (0.61), EIoU (0.62), and SIoU (0.62). Furthermore, our approach achieves a recall of 0.63, surpassing all other variations.
In terms of F1-score, our method achieves a value of 0.720, indicating a balanced trade-off between precision and recall. This outperforms other loss variations, including YOLOv5-ODConvNeXt (0.705), EIoU (0.699) and SIoU (0.707). Moreover, our method demonstrates superior performance in terms of [email protected](%), and [email protected]:.95(%) metrics. Our method achieves a [email protected](%) of 70.30, surpassing other variations, including YOLOv5-ODConvNeXt (68.90), EIoU (66.60), and SIoU (68.10). Similarly, our method achieves a [email protected]:.95(%) of 31.20, showcasing competitive performance compared to other loss variations, such as YOLOv5-ODConvNeXt (31.40), EIoU (31.10), and SIoU (29.10).
\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline \hline
**Loss Variations** & **P** & **R** & **F1** & **[email protected](\%)** & **[email protected]:.95(\%)** \\ \hline \hline YOLOv5-ODConvNeXt [2] & 0.971 & 0.96 & 0.965 & 98.10 & 72.70 \\ EIoU [29] & **0.984** & 0.968 & **0.976** & **98.60** & **72.80** \\ SIoU [6] & 0.979 & 0.969 & 0.974 & 98.40 & 72.10 \\ EIoU + SIoU & 0.975 & 0.969 & 0.972 & 98.50 & 72.70 \\ \hline _Joint-YODNet_ & 0.979 & **0.974** & **0.976** & **98.60** & 72.60 \\ \hline \multicolumn{5}{c}{**Bold**: best; Underlined: second-best results.} \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of various bounding box regression losses on SAR-SD.
\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline \hline
**Loss Variations** & **P** & **R** & **F1** & **[email protected](\%)** & **[email protected]:.95(\%)** \\ \hline \hline YOLOv5-ODConvNeXt [2] & 0.85 & 0.61 & 0.705 & 68.90 & **31.40** \\ EIoU [29] & 0.81 & 0.62 & 0.699 & 66.60 & 31.10 \\ SIoU [6] & 0.83 & 0.62 & 0.707 & 68.10 & 29.10 \\ EIoU + SIoU- Combined & **0.88** & 0.60 & 0.716 & 68.20 & 29.90 \\ \hline _Joint-YODNet_ & 0.84 & **0.63** & **0.720** & **70.30** & 31.20 \\ \hline \multicolumn{5}{c}{**Bold**: best; Underlined: second-best results.} \\ \hline \end{tabular}
\end{table}
Table 3: Comparison of various bounding box regression losses on ASD dataset.
### Qualitative Study
A qualitative analysis was undertaken to compare the detection performance of the proposed Joint-YODNet with that of YOLOv5-ODConvNeXt, as depicted in Fig. 3. The analysis involved examining visual results derived from images showcasing both the ground truth ship instances and the detections generated by the two methods. Specifically, (a) denotes the ground truth bounding boxes, (b) represents the detection results obtained from YOLOv5-ODConvNeXt, and (c) corresponds to the bounding box detections obtained through the proposed method. The visual comparison vividly demonstrates the superior performance of our proposed method in accurately localizing and recognizing ship instances despite their varying small sizes.
Conversely, the detections generated by YOLOv5-ODConvNeXt (b) exhibit certain limitations. Some instances are either missed or incorrectly identified, resulting in lower precision and recall, as evidenced in Fig. 3. These disparities in detection quality further emphasize the advantages of our proposed method in real-life scenarios.
Figure 3: Comparing Object Detection Performance on the SAR-SD Dataset: (a) Ground Truth Labels, (b) Detections by YOLOv5-ODConvNeXt, and (c) Detections achieved through the proposed method.
### Computational Analysis
Joint-YODNet not only achieves superior performance but also demonstrates remarkable computational efficiency, with a high frame rate of 136 FPS (computed using Eq. 9). This makes it ideal for real-time applications that require timely and accurate ship detection. Additionally, our method features a lightweight architecture, making it compatible with resource-constrained edge devices. This enhances its versatility for deployment in various practical scenarios. Joint-YODNet has a parameter size of 6.99M and a memory size of 14.4MB.
## 5 Conclusion
In this study, we proposed a novel method for small object detection in UAV imagery, utilizing a tailored joint loss function. Our approach, Joint-YODNet outperforms existing methods, achieving superior precision, recall, F1Score, and [email protected] scores, demonstrating its robustness across object scales. The qualitative analysis further validates the effectiveness of our method in real-life scenarios. Our research contributes to the field by addressing the challenges of small object detection in UAV imagery and enabling accurate localization and recognition. The results highlight its potential in surveillance, object tracking, and environmental monitoring in UAV-based systems. Future research can focus on refining the joint loss function and integrating additional techniques for further enhancement.
|
2309.17195 | Fossil and present-day stromatolite ooids contain a meteoritic polymer
of glycine and iron | Hemoglycin, a space polymer of glycine and iron, has been identified in the
carbonaceous chondritic meteorites Allende, Acfer 086, Kaba, Sutters Mill and
Orgueil. Its core form has a mass of 1494Da and is basically an antiparallel
pair of polyglycine strands linked at each end by an iron atom. The polymer
forms two- and three- dimensional lattices with an inter-vertex distance of
4.9nm. Here the extraction technique for meteorites is applied to a 2.1Gya
fossil stromatolite to reveal the presence of hemoglycin by mass spectrometry.
Intact ooids from a recent (3,000Ya) stromatolite exhibited the same visible
hemoglycin fluorescence in response to x-rays as an intact crystal from the
Orgueil meteorite. X-ray analysis confirmed the existence in ooids of an
internal 3-dimensional lattice of 4.9nm inter-vertex spacing, matching the
spacing of lattices in meteoritic crystals. FTIR measurements of acid-treated
ooid and a Sutters Mill meteoritic crystal both show the presence, via the
splitting of the Amide I band, of an extended anti-parallel beta sheet
structure. It seems probable that the copious in-fall of carbonaceous
meteoritic material, from Archaean times onward, has left traces of hemoglycin
in sedimentary carbonates and potentially has influenced ooid formation. | Julie E M McGeoch, Anton J Frommelt, Robin L Owen, Gianfelice Cinque, Arthur McClelland, David Lageson, Malcolm W McGeoch | 2023-09-29T12:44:36Z | http://arxiv.org/abs/2309.17195v2 | # Fossil and present-day stromatolite ooids contain a meteoritic polymer of glycine and iron.
###### Abstract
Hemoglycin, a space polymer of glycine and iron, has been identified in the carbonaceous chondritic meteorites Allende, Acfer 086, Kaba, Sutter's Mill and Orgueil. Its core form has a mass of 1494Da and is basically an antiparallel pair of polyglycine strands linked at each end by an iron atom. The polymer forms two and three dimensional lattices with an inter-vertex distance of 4.9nm. Here the extraction technique for meteorites is applied to a 2.1Gya fossil stromatolite to reveal the presence of hemoglycin by mass spectrometry. Intact ooids from a recent (3,000Ya) stromatolite exhibited the same visible hemoglycin fluorescence in response to x-rays as an intact crystal from the Orgueil meteorite. X-ray analysis of these ooids at wavelengths above and below the iron K absorption edge yielded a set of high order diffraction rings that confirmed the existence and nature of a three dimensional lattice of 4.9nm inter-vertex spacing. The lattice is filled by micro-crystals of the aragonite and calcite forms of calcium carbonate. It seems probable that the copious in-fall of carbonaceous meteoritic material, from Archaean times onward, has left traces of hemoglycin in sedimentary carbonates and potentially has influenced ooid formation.
## Introduction
This research was preceded by a study of the subunit C protein from the mitochondrial ATP synthase of eukaryotes. The rugged nature of this highly conserved protein, combined with its ability to entrap water, suggested an exploration of its possible earlier roles, leading to a theoretical demonstration of the
possibility of exothermic peptide condensation in warm, dense molecular clouds. This was followed by a mass spectrometry search within carbonaceous meteorites, initially Allende, of type CV3, and Murchison, of type CM2. Evidence for a glycine/hydroxyglycine polymer was found in Allende, but polymer amide within Murchison could not be characterized [1]. More detailed mass spectrometry, principally in the CV3 meteorite Acfer 086, revealed the existence of a set of glycine/iron polymers in the mass range around 1500Da, with a core polymer at 1494Da, named hemoglycin [2]. X-ray analysis confirmed the mass spectrometry results in regard to the polymer length, revealing the existence of polymer lattices [3, 4], while theoretical calculations predicted that there should be a chiral 480nm absorption in hemoglycin associated with the iron-glycine bond region; the chirality was associated only with hydroxyl groups on glycine. This absorption was seen experimentally [4] and its strength implied an excess of R-chirality hydroxyglycine, which was consistent with template replication of hemoglycin. Because our technique of drilling into stony meteorites to create micron scale particles for solvent extraction was applicable to terrestrial rocks, and because it was likely that certain fossil stromatolites had not exceeded a temperature of 300C between initial formation [5] and the present day, mass spectrometry was performed on a 2.1Gya fossil stromatolite, with the result that the same 1494Da molecule was observed in a direct comparison with extracts of crystals from the Orgueil meteorite, comprising the first subject of the present paper.
Ooids, the primary mineral component of present-day stromatolites, are small ovoid particles (Figure 1) of mainly calcium carbonate that make up the oolitic sand that underlies and often overlies their formations. Fossil stromatolites have few intact ooids, probably due to pressure over thousands to billions of years since their initial formation. Ooids in present day stromatolites contain calcium carbonate in the aragonite form, but over geological time there can be a transition within ooids to the calcite form, which is more energetically favored. The formation of ooids, and whether they form during stromatolite growth or are merely pre-existing entities accreted into stromatolites, is still being researched. It is considered [6] that there is precipitation of calcium carbonate within stromatolite microbial mats via a matrix of extracellular polymeric substances, firstly in a calcium carbonate gel, followed by production of nanospheres and then the growth of aragonite crystals guided by an organic matrix. On the other hand, Trower et al. [7] have documented independent ooid growth that is faster in a turbulent shallow water shoal (Turks and Caicos islands) than in a more static lagoon with more extensive biofilm colonization. There is extensive evidence for organics within biologically produced aragonite, coming from a) crystal anisotropy [8, 9] and b) the presence in ooids of a blue fluorescence. In regard to b), Lin et al. [10] analyzed Holocene ooids (5,377\(\pm\)61Ya) from oolitic sand in the Western Qaidam Basin, Tibet. These were well-preserved, lacked microbial evidence, and contained 90-97% aragonite in fine crystals. The blue fluorescence under 365nm UV light was attributed to organic material within or between the crystals. In a study of present day stromatolites on Highborne Cay, Exuma, Bahamas, Paterson et al. [11] reported blue fluorescence of ooids under 405nm light, referring to it as "autofluorescence" to distinguish it from marker fluorescence. Elsewhere, Dravis et al. [12] reported strong fluorescence of ooids within a Pleistocene oolitic grainstone from West Caicos Island. The fluorescence was bright in well-preserved
aragonite grains but freshwater diagenesis, that replaces aragonite with more stable calcite, apparently destroyed organic material and removed the fluorescence. These reports did not give fluorescence spectra that would have allowed a detailed comparison with the x-ray induced fluorescence from meteorites [13] and ooids from a recent Shark Bay, Australia stromatolite, presented below as the second subject of this paper.
Figure 1: Shark Bay stromatolite sample with drill hole (left) and Ooids (right) released from the sample by gentle etching. Ooid size range (39 yellow ooids): Major ooid axis 199\(\pm\)42\(\upmu\)m; Waist diameter 164\(\pm\)39\(\upmu\)m.

In prior work on crystals of meteoritic hemoglycin we had observed two-dimensional lattices via the strong x-ray scattering of iron atoms at the lattice vertices, which are spaced by the hemoglycin polymer length of 4.9nm. In [3] a floating 3-dimensional lattice was reported in the interphase region of the solvent extraction vial, and its structure was proposed to be the diamond 2H form. Subsequently, the tetrahedral angles of this form were seen in x-ray diffraction, but the information was partial. As a matter of course the x-ray scattering of present day ooids was recorded, this time at wavelengths above and below the iron K-edge at 1.74 Angstroms, to determine whether iron was involved in the organic internal lattice. At the longer 2.066 Angstrom wavelength used, where iron scattering was high and yet absorption was low, a dramatic series of high order diffraction rings was seen that did not appear at 0.979 Angstroms. Additionally, the anticipated aragonite and calcite rings at low "d" spacings were present in both cases.
In summary, mass spectrometry, x-ray induced visible fluorescence and x-ray diffraction point to the organic lattice of ooids being a molecule with the optical and spatial properties of meteoritic hemoglycin, arranged in a 3D lattice of the diamond 2H type. The bulk of an ooid, filling this lattice, is a crystalline mixture of aragonite and calcite. Finally, the discussion will assess meteoritic in-fall as the source of this lattice material.
## 1 Mass spectrometry on fossil stromatolite compared to Orgueil meteorite
### Approach
Our decision to employ MALDI mass spectrometry, in this and all our prior analysis was impelled by:
1. Unknown phase mixtures can be handled.
2. A useful degree of laser fragmentation contributes to the structural analysis.
3. There are two stand-out matrix molecules, CHCA and SA, that have reliable, yet very different protonation rates [14]. When their results coincide, uncertainty is removed.
4. Even when collections of small crystals are studied, as in this work, it is not necessary to grind the crystals as partial solvation is achieved in MALDI matrix solutions after 1 hour at room temperature.
We came into the measurements knowing that the crystals likely contained hemoglycin because:
1. They all derive from the interphase of the relevant Folch extractions, where hemoglycin was the dominant chemical [1, 2, 3].
2. X-ray diffraction has not been possible on these crystals due to their small size. However, on several larger crystals from both Orgueil and Sutter's Mill there were diffraction rings that formed a family related to specific iron atom spaces in the hemoglycin polymer junctions.
To anticipate the results, we have obtained confirmation in both Orgueil and fossil stromatolite that the 1494Da hemoglycin core unit [2] comprises essentially 100% of the crystalline material that is able to be solubilized.
### Basic observations from Mass spectrometry.
A very consistent MS pattern was seen in all samples, indicating that the crystal preparation process had universally selected the same molecule, whether from Orgueil or Stromatolite. In every sample the MALDI spectrum contained only two main peaks, one at 1494 m/z, corresponding to the hemoglycin "core unit" [2, 3, 4], and the second at 760 m/z, which was typically five times higher in summed ion count and was the exclusive fragment of 1494 m/z observed in the present work. The 760 m/z fragment comprises a single polyglycine strand from the antiparallel pair that comprise the central body of hemoglycin. The 760 m/z spectrum does not show \({}^{54}\)Fe
side peaks [2] beside the "monoisotopic" peak, and hence the fragment does not carry iron.
The fragmentation occurs because core unit 1494 m/z rods form different 2-D and 3-D meshes in different circumstances [2, 3, 4]. In Figure 2 we illustrate the simplest triskelion hexagonal mesh that forms, with a junction geometry that has been confirmed by X-ray analysis [unpublished]. The bonding is consistently strong, requiring the highest MALDI laser intensity before both the 1494 m/z and 760 m/z peaks appear at once.
Figure 3 shows for Orgueil crystal sample (O1_2, run1, SA) the 760 m/z peak system on the left, and the 1494 m/z system on the right. They were each fitted to a global \({}^{2}\)H enhancement, giving via 760 m/z analysis \(\delta\)\({}^{2}\)H = 54,000 \(\pm\) 3,000 per mil, and via the 1494 m/z analysis \(\delta\)\({}^{2}\)H = 51,000 \(\pm\) 3,000 per mil.
In summary, the strong polymer rod interconnections of hemoglycin rendered MS analysis for the 1494 m/z subunit itself fairly difficult. This was an inevitable side-effect of having a strong space polymer. The difficulty persisted in spite of mixing in MALDI solvent for 1 hour, or 72 hrs (run 2). Earlier it had been established that 1 hour in the matrix with 50% acetonitrile and 0.1% TFA was necessary to get the polymer solvated. Immediate MALDI, approximately 15 minutes after applying the sample to the MALDI plate, yielded no peaks at all, the polymer not having left the crystals for the matrix.
Figure 2: **A hexagonal hemoglycin lattice of covalently bound 1494 m/z polymer core units. The unit length \(h\) is approximately 5nm. The lattice can have many superimposed layers.**
### Isotope analysis
The first identification of hemoglycin in meteorites via MALDI MS [2] showed that it carried heavy isotope ratios such as \({}^{2}\)H/H, \({}^{13}\)C/\({}^{12}\)C, \({}^{15}\)N/\({}^{14}\)N, etc. at levels far above terrestrial standards, at least in the cases of \({}^{2}\)H and \({}^{15}\)N. An isotope fitting routine was written [2] with no internal approximations, in which trial values of the isotope ratios are input and the output is compared to experimental MALDI peak strengths. Figure 4, curve A, shows a stromatolite hemoglycin molecular peak complex at m/z 1494 (sample S2_1 (CHCA)). The highest isotopologue occurs at the n = +1 location relative to the n = (0) "monoisotopic" peak. Curve B in Figure 4 is the calculated complex for a "global" (i.e. equivalent \({}^{2}\)H) enhancement of 52,000 per mil, but it is expected that in practice there would be a significant \({}^{15}\)N component, as seen in Acfer 086 [2, 15]. This would reduce the actual \({}^{2}\)H/H ratio in a predictable way. "Global \({}^{2}\)H" fitting to the stronger run2 data set, from both the 1494 and 760 components, is summarized in Table 1. In contrast, a simulation with wholly terrestrial isotope levels gives the very different curve (Figure 4 C), in which the highest peak is the (0) isotopologue.
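As a schematic illustration of this kind of fit (a rough sketch, not the routine of ref. [2]): trial heavy-isotope ratios can be converted into a predicted isotopologue envelope by convolving a binomial distribution per element. The atom counts in `composition` below are placeholders rather than the actual hemoglycin formula, each element is treated in a two-isotope approximation, and all +1 isotopologues are lumped together regardless of element.

```python
# Predict relative (0), +1, +2, ... isotopologue intensities from trial isotope ratios.
from math import comb
import numpy as np

def element_distribution(n_atoms, ratio, max_extra=6):
    """P(k heavy atoms among n_atoms) for a heavy/light ratio R (fraction = R/(1+R))."""
    p = ratio / (1.0 + ratio)
    return np.array([comb(n_atoms, k) * p**k * (1 - p)**(n_atoms - k)
                     for k in range(max_extra + 1)])

def envelope(composition, ratios, max_extra=6):
    """Convolve the per-element distributions into one isotopologue envelope."""
    env = np.zeros(max_extra + 1)
    env[0] = 1.0
    for element, n_atoms in composition.items():
        env = np.convolve(env, element_distribution(n_atoms, ratios[element],
                                                    max_extra))[:max_extra + 1]
    return env / env.max()

# Hypothetical atom counts; a trial delta-2H of +52,000 per mil means R = 53 * R_VSMOW.
composition = {"H": 60, "C": 44, "N": 22, "O": 24}
ratios = {"H": 155.76e-6 * (1 + 52.0), "C": 11237.2e-6, "N": 3612e-6, "O": 2005.2e-6}
print(np.round(envelope(composition, ratios), 3))
```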
Figure 3: **Data from sample 01_2, with sinapinic acid matrix. On the left is the 760 m/z fragment that is the sole product from the break-up of the 1494 molecules. On the right, the 1494 m/z peak system.**
The following isotope ratios (IAEA, Vienna 1995 [16]) are taken as terrestrial standards:
| Standard | Ratio | Value |
|---|---|---|
| VSMOW water | R\({}_{\rm H}\) = \({}^{2}\)H/\({}^{1}\)H | 155.76 \(\pm\) 0.05 \(\times\) 10\({}^{-6}\) |
| VSMOW water | R\({}_{\rm O}\) = \({}^{18}\)O/\({}^{16}\)O | 2,005.20 \(\pm\) 0.45 \(\times\) 10\({}^{-6}\) |
| V-PDB | R\({}_{\rm C}\) = \({}^{13}\)C/\({}^{12}\)C | 11,237.2 \(\times\) 10\({}^{-6}\) |
| Atmospheric nitrogen | R\({}_{\rm N}\) = \({}^{15}\)N/\({}^{14}\)N | 3,612 \(\pm\) 7 \(\times\) 10\({}^{-6}\) |
Figure 4: Vertical axis is intensity and horizontal axis m/z. A: sample S2_1 peak complex at 1494 mz. B: variation of \({}^{2}\)H to fit the isotopologue intensities in curve A (the fit required \(\delta^{2}\)H = 52,000 per mil). C: the same molecule simulated at terrestrial (Vienna) isotope values.
### Visible ooid fluorescence induced by x rays
Modern ooids were subjected to x ray diffraction at both the Advanced Photon Source, Argonne National Laboratory (APS) and the Diamond Light Source (Diamond), with diffraction results reported in Section 3. An ooid (sample LS2) from Hamlin Pool, Shark Bay, Western Australia, of estimated age 2,000 - 3,000 years, that had been treated with acetic acid as described in the Methods section to partially remove calcium carbonate, was the subject of x ray analysis at Diamond Light Source.
Sample LS2 gave strong x ray induced fluorescence in a 1.000 Angstrom beam, under cryo-flow cooling. The fluorescence was associated with low temperatures (100K) and was absent at 300K. The fluorescence peaked at 480nm and carried the same 465nm absorption "dip" seen previously in x ray induced fluorescence [13] in a crystal from the Orgueil meteorite, the data for both cases being shown in Figure 5.
Table 1: Isotope analysis for run2 data set. Enhancements in parts per mil. "xx" represents a saturated signal, or a signal too low relative to noise.

| | Sample | Global \({}^{2}\)H, 760 m/z | Global \({}^{2}\)H, 1494 m/z |
|---|---|---|---|
| Orgueil | O1_1 (CHCA) | xx | 55,000 \(\pm\) 2,000 |
| Orgueil | O1_2 (SA) | xx | 54,000 \(\pm\) 3,000 |
| Orgueil | O2_1 (CHCA) | 50,000 \(\pm\) 2,000 | xx |
| Orgueil | O2_2 (SA) | 51,000 \(\pm\) 2,000 | xx |
| Orgueil | O3_1 (CHCA) | 51,000 \(\pm\) 2,000 | 53,000 \(\pm\) 3,000 |
| Stromatolite | S1_1 (CHCA) | xx | 53,000 \(\pm\) 3,000 |
| Stromatolite | S1_2 (SA) | xx | 53,000 \(\pm\) 3,000 |
| Stromatolite | S2_1 (CHCA) | xx | 52,000 \(\pm\) 3,000 |
| Stromatolite | S2_2 (SA) | xx | 54,000 \(\pm\) 2,000 |
| Stromatolite | S3_1 (CHCA) | xx | 51,000 \(\pm\) 2,000 |
| Stromatolite | S3_2 (SA) | xx | xx |

Orgueil average: 52,333 (\(\sigma\) = 1,795, n = 6).

The fluorescence was analyzed as before [13] using five Gaussian components to obtain an exact comparison. Across the range of wavelengths the same five components appeared, with relatively minor changes to the center wavelength of any one component. The intensity distribution was different, however, with much reduced representation of the peaks at 505nm and 565nm relative to the corresponding ones in the Orgueil data at 489nm and 551nm. These distributions are compared in Table 2. Intensities are multiplied by widths and normalized to the 489nm Orgueil peak, which is assigned the value 100. In this ooid sample the dominant emission is still associated with a peak close to 480nm. However, the next strongest emissions are at 407nm and 477nm, rather than at 551 (or 565) nm. All of these emissions are linked to the interaction of iron with glycine residues [4,13].
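The five-component decomposition described above can be illustrated with a generic least-squares sketch (this is not the fitting code of ref. [13]; the synthetic spectrum is built from the ooid values in Table 2 purely to exercise the fit, and the commented data file name is hypothetical):

```python
# Decompose a fluorescence spectrum into five Gaussian components by least squares.
# Peak 4 may take a negative amplitude so that it can represent the 465nm absorption dip.
import numpy as np
from scipy.optimize import curve_fit

def five_gaussians(wl, *p):
    """Sum of five Gaussians; p = (A1, c1, w1, ..., A5, c5, w5); w is the 1/e half-width."""
    total = np.zeros_like(wl)
    for A, c, w in zip(p[0::3], p[1::3], p[2::3]):
        total += A * np.exp(-((wl - c) / w) ** 2)
    return total

# wl, intensity = np.loadtxt("ooid_spectrum.txt", unpack=True)   # hypothetical data file
wl = np.linspace(350.0, 700.0, 701)
true_p = [389, 407, 39, 357, 505, 79, 59, 565, 166, -202, 465, 12.7, 353, 477, 37]
intensity = five_gaussians(wl, *true_p) + np.random.normal(0, 5, wl.size)

p0 = [400, 410, 40, 350, 500, 80, 60, 560, 160, -200, 465, 13, 350, 480, 37]
popt, pcov = curve_fit(five_gaussians, wl, intensity, p0=p0)
print(popt.reshape(5, 3))   # fitted (amplitude, centre, 1/e half-width) per component
```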
Figure 5: X-ray induced blue-green fluorescence from (Top) Orgueil meteorite crystal; (Bottom) Ooid from present day Shark Bay Stromatolite. Experimental data black, fit curves in red.

The ultraviolet and visible light absorbance of the sample (not shown) is featureless, not showing the characteristic 480nm absorbance seen in the Orgueil crystal [13] that is indicative of the chiral 480nm absorbance of hemoglycin [4]. This suggests that in the ooid sample there is probably a low population of 'R' chirality hydroxy-glycine residues bonded to Fe at their C-termini. More specifically, reviewing the visible absorptions calculated in Table 2 of [4] we deduce that there appear to be low populations of {0,R}, {S,0}, {S,R}, {R,0}, {R,S}, {R,R} combinations of {N-terminus, C-terminus} chiralities of hydroxyglycine, with zero representing plain glycine. It is possible that the acetic acid treatment of this sample caused chemical reduction of most of the hydroxyl groups, leading to un-observable 480nm absorbance. Without the full complement of hydroxyl groups in the ooid, the fluorescence routes corresponding to the main Orgueil bands at 489nm and 551nm involving hydroxyglycine would be less able to function, consistent with observation.
Based on these compelling similarities in the fluorescence of ooids and the previously characterized Orgueil crystal, it is very likely that a similar iron-glycine chemical compound is in each sample. Observation of the sharp 465nm absorption again in ooids suggests that there could be a "caged" iron atom at junctions of the ooid hemoglycin lattice, as previously proposed for the Orgueil crystal [13].
Table 2: Peak x ray induced visible fluorescence wavelengths, half widths at 1/e intensity, and relative integrated strengths, for an Orgueil crystal [13] and an ooid from stromatolite. Errors were generated in least squares fitting of averages of 3 data traces taken at 100% beam intensity. Peak 4 is the absorption dip, represented by a negative intensity.

| | Peak 1 | Peak 2 | Peak 3 | Peak 4 | Peak 5 |
|---|---|---|---|---|---|
| ORGUEIL \(\lambda\) (nm) | 408.5 \(\pm\) 0.4 | 489.0 \(\pm\) 0.3 | 551.2 \(\pm\) 9.7 | 465.1 \(\pm\) 0.2 | 488.5 \(\pm\) 0.5 |
| 1/e half width (nm) | 27 | 72 | 100 | 12.0 | 15.3 |
| Peak intensity | 408 \(\pm\) 16 | 2,826 \(\pm\) 135 | 714 \(\pm\) 98 | -759 \(\pm\) 15 | 339 \(\pm\) 10 |
| Integrated strength, normalized | 5.4 | 100 | 35 | -4.5 | 2.5 |
| OOID \(\lambda\) (nm) | 407.3 \(\pm\) 0.9 | 504.9 \(\pm\) 7.4 | 565 \(\pm\) 15 | 464.8 \(\pm\) 0.2 | 477.4 \(\pm\) 0.8 |
| 1/e half width (nm) | 39 | 79 | 166 | 12.7 | 37 |
| Peak intensity | 389 \(\pm\) 33 | 357 \(\pm\) 33 | 59 \(\pm\) 5 | -202 \(\pm\) 5 | 353 \(\pm\) 42 |
| Integrated strength, normalized | 54 | 100 | 36 | -9 | 46 |
### X ray derivation of the three-dimensional hemoglycin lattice in ooids
Intact ooids from the Shark Bay stromatolite sample were studied for lattice structure at APS using x ray wavelengths of 0.979 Angstroms and 2.066 Angstroms, which straddled the 1.74 Angstrom K absorption edge of iron. At each wavelength there were diffraction rings principally between 1.61 Angstroms and 3.85 Angstroms, in a superposition of calcite and aragonite powder pattern rings (data in S1, Table S1.1 and Figure S1.2, part B). However, at 2.066 Angstroms, where Fe absorption is low, and not at 0.979 Angstroms, there was an intense and striking new set of rings (Figure 6 and Figures S1.1, S1.2) at nominal first order spacings of between 4.808 and 11.540 Angstroms, summarized in Table 3.
The left hand column of Table 3 contains the set of 18 rings that represented larger "d" spacing than 4 Angstroms. These rings did not match either the calcite or aragonite values, but were reminiscent of the ladders of high order diffraction previously seen in hemoglycin lattices [3, 4]. In [3] the ladder contained orders 2 through 5, with a fitted first order lattice parameter of 48.38 \(\pm\) 0.2 Angstroms. In [4] the ladder contained orders 2 through 12, with a fitted first order parameter of 49.03 \(\pm\) 0.18 Angstroms. Quick inspection of the present 18 ring set yielded a fit to 49.0 Angstroms in 5\({}^{\mathrm{th}}\), 6\({}^{\mathrm{th}}\), 7\({}^{\mathrm{th}}\) and 9\({}^{\mathrm{th}}\) orders as follows (listed also in Table 3):
5 x 9.823 = 49.11; 6 x 8.127 = 48.76 ; 7 x 7.022 = 49.15; 9 x 5.445 = 49.00
Consequently, a complete scan was made to find clusters of orders that matched target "D" spacing values from 30 Angstroms to 140 Angstroms, rising in increments of 0.05 Angstroms. Results were accumulated whenever a multiple of one of the 18 first order values matched one of these trial "D" values to within less than 0.5%, i.e. D = \(nd_{\mathrm{K}}\), with \(n\) = 1, 2, 3..., where the measured first order spacing is \(d_{\mathrm{K}}=\lambda/(2\sin\Theta_{\mathrm{K}})\).
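A compact sketch of such a scan (this is an illustration, not the original analysis code; it assumes the 18 first-order values of Table 3 and the 0.5% tolerance stated above) is:

```python
# For each trial spacing D, count how many measured first-order rings d_K have an
# integer order n such that n * d_K matches D to within the stated tolerance.
import numpy as np

d_first_order = [4.808, 5.185, 5.243, 5.445, 5.633, 5.735, 5.943, 6.288, 6.578,
                 6.845, 7.022, 7.128, 7.485, 8.127, 9.053, 9.823, 10.217, 11.540]

def count_matches(D, d_values, tol=0.005, n_max=30):
    hits = 0
    for d in d_values:
        n = round(D / d)                      # nearest integer order
        if 1 <= n <= n_max and abs(n * d - D) / D < tol:
            hits += 1
    return hits

trial_D = np.arange(30.0, 140.0 + 1e-9, 0.05)
hits = np.array([count_matches(D, d_first_order) for D in trial_D])
best = trial_D[hits.argmax()]
print(f"strongest cluster near {best:.2f} Angstroms with {hits.max()} matching rings")
```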
Table 3: Ooid diffraction rings in first order (left hand column, Angstroms). Higher order fits to the trial spacings in the top row (Angstroms) are listed as the diffraction order in bold with the percentage mis-match.

| d (first order) | 49.0 | 81.65 | 92.05 | 112.75 | 119.9 | 126.25 |
|---|---|---|---|---|---|---|
| 4.80(8) | | **17**, 0.1% | | | **25**, 0.2% | |
| 5.18(5) | | | | | | |
| 5.24(3) | | | **17**, 0.1% | | | **24**, 0.3% |
| 5.44(5) | **9**, 0% | **15**, 0.2% | | | **22**, 0.1% | |
| 5.63(3) | | | | **20**, 0.1% | | |
| 5.73(5) | | | | | **21**, 0.4% | **22**, 0.1% |
| 5.94(3) | | | **15**, 0.1% | **19**, 0.1% | | |
| 6.28(8) | | **13**, 0.4% | | **18**, 0.4% | **19**, 0.3% | **20**, 0.4% |
| 6.57(8) | | | | | | |
| 6.84(5) | | | **13**, 0.1% | | | |
| 7.02(2) | **7**, 0.3% | | | **16**, 0.3% | **17**, 0.4% | **18**, 0.1% |
| 7.12(8) | | | | | | |
| 7.48(5) | | | | **15**, 0.4% | **16**, 0.1% | |
| 8.12(7) | **6**, 0.5% | **10**, 0.1% | **11**, 0.3% | | | |
| 9.05(3) | | **9**, 0.2% | | | | **14**, 0.3% |
| 9.82(3) | **5**, 0.2% | | | | | |
| 10.21(7) | | **8**, 0.1% | | **11**, 0.3% | | |
| 11.54(0) | | | | | | |
Figure 6: X-ray diffraction at 2.066 Angstrom from ooid in present era stromatolite showing dark lattice rings between 11.54 and 4.81 Angstroms from the center outward, plus calcium carbonate rings in a faint outer pattern.

The results of this whole scan are plotted in Figure S1.3, with the most prominent clusters listed here in Table 3. Long sequences of higher order matches prompted the next stage of analysis, in which the higher order fits were compared to "D" spacing expectations for the putative diamond 2H lattice [3] of hemoglycin, the detailed calculation of these spacings being laid out in S1. Initially, with an undistorted, perfectly tetrahedral lattice there was moderately good agreement; however, excellent agreement was obtained with a 4.5 degree increase (to \(23.97\pm 0.5\) deg) in the angle \(\alpha\) between the quasi-hexagonal "sides" and the plane perpendicular to the trigonal symmetry axis [3]. It is concluded that the sample ooid from a recent stromatolite contains an axially distorted diamond 2H open 3D lattice with an inter-vertex spacing of 49.0 \(\pm\) 0.2 Angstroms, and that iron atoms at the vertices provide the strong x ray scattering necessary to observe the lattice. The lattice is filled with calcium carbonate in the crystal forms aragonite and calcite. Because we see rings rather than spots, the ooid comprises a polymer lattice with multiple small crystals in many orientations. The hemoglycin lattice itself is in many orientations, possibly aligned in regions associated with micro-crystals.
### Additional first order diffraction data from crystals and ooids
In Section S3 additional diffraction data is given on a) crystals in fossil stromatolite extract and b) ooids from both present day and fossil stromatolite. Out of the fossil samples, only fossil No.2 from Wyoming (provenance in S3.2) yielded any ooids.
In regard to a): In Table S3.1 there was a degree of commonality between different crystal samples of Wyoming Fossil No. 1, and a match between fossil stromatolite and the prior report of diffraction from a fiber crystal of meteorite Acfer 086. The agreement related to the proposed separation of iron atoms at the junction of hemoglycin rods in a rectangular lattice [3].
In regard to b): Table S3.2 compares ring patterns in ooids (provenance in Section 3.2). The Shark Bay ooids are 1. as found, and 2. acid treated as in Methods part 2. The fossil ooids came from Wyoming 2.1Gya stromatolite sample No. 2 (there were no ooids found in Wyoming sample No. 1). The San Salvador ooids were the most recent. Many of the rings were identified as aragonite. Interestingly the 2.1Gya fossil also contained aragonite, implying a "mild" thermal history in view of the tendency for aragonite to transition into more stable calcite at high temperatures [17].
## Discussion
We find that three very different lines of evidence all point to the presence in stromatolites of the hemoglycin polymer previously only known from meteorites. In the first 2.1Gya fossil stromatolite sample the fraction of heavy isotopes in the hemoglycin mass spectrometry peak complex is comparable to that in the Orgueil meteorite, which seems to indicate that there was preservation of in-fall hemoglycin in the predominantly calcium carbonate fossil. Following studies of the Barberton Greenstone Belt of South Africa, Lowe et al. [18] have discussed revision of the temporal profile of meteoritic in-fall toward an ongoing, more gradual decline [19] than in the Late Heavy Bombardment theory in which in-fall declined abruptly to present day levels at about 3.9Gya. The in-fall rate at 2.5Gya, the end of the Archean, may have been 10 times greater than at present. Most of the in-falling material would likely resemble the Orgueil meteorite, which is known to be characteristic of solar system material [20]. Our present work with Orgueil has shown that relatively complex chemicals such as hemoglycin can survive in-fall, although not, presumably, in the heat and shock conditions of a major impact.
Calcium carbonate ooids are the primary mineral constituent of present-day stromatolites. However, through geological time there can be partial replacement of
calcium carbonate in ooids by silicates, as seen by for example [21] in 2.72Gya fossil ooids from Western Australia.
The survival of hemoglycin is attributed to its being an extremely tough molecule that, once formed in a protoplanetary disc, often becomes internally mineralized, remaining within the mineral as an extensive low density lattice. This state has now been observed for the first time in x ray analysis at 2 Angstroms. In the protoplanetary disc, molecules circulate from hot, high-ultraviolet regions to cold regions, and should an open hemoglycin lattice get near the new sun it would degrade and never feature as an in-fall polymer. Hemoglycin is found in many meteorites, indicating that some hemoglycin from the colder regions of the disc does persist to seed planets via in-fall. Planets forming from a protoplanetary disc via gravity experience, as they enlarge, heating due to Al isotope decay, meaning that hemoglycin landing on such an early body will not remain intact. A planet like Earth, conducive to developing complex chemistry and going on to life forms, will rely on asteroid in-fall once it is sufficiently cool for the molecules to survive.
Beginning approximately 2.4Gya and reaching partial completion 2Gya, there was a build-up of oxygen in Earth's atmosphere from almost zero to a fraction of the present day value, known as the great oxygenation event (GOE) [22]. It has been proposed that gradually cyanobacteria dominated over anoxygenic photosynthetic bacteria, in a process dependent on geochemical changes together with locally increasing sources of oxygen [22]. However ultraviolet radiation reaching the Earth's surface prior to this event was as much as 400 times greater than in the present day [23, 24]. This was in the UV band approximately between 200nm and 300nm which is the most destructive to nucleotides, equivalent to an E. Coli mutation doubling dose every quarter second [23], much too high for any organism to survive. It requires several metres of water to attenuate the pre-GOE radiation levels to equate with present day surface exposure. The question is whether the earliest organisms were able to constitute themselves and thrive sufficiently to perform global oxidation in spite of an extremely hostile surface environment. Out of the present work comes an alternate hypothesis related to the possibility of reliable abiotic water splitting by hemoglycin in the presence of sunlight.
We have done initial quantum chemical modeling on a water-splitting reaction that hemoglycin can engage in, via direct absorption of UV light. We believe that it is a two-step reaction cycle that goes via
1. hemoglycin + H\({}_{2}\)O + \(h\nu\) \(\rightarrow\) hemoglycin(OH) + H\({}_{2}\)
2. hemoglycin(OH) + H\({}_{2}\)O + \(h\nu\) \(\rightarrow\) hemoglycin + H\({}_{2}\)O\({}_{2}\)
followed by the release of O\({}_{2}\) from H\({}_{2}\)O\({}_{2}\).
There are no other participants in this process, no catalysts and "room temperature" operation. The finding of hemoglycin in a fossil stromatolite therefore opens up the possibility that its oxygen producing ability could have "kick-started" the GOE, producing an increasing degree of ultraviolet protection for complex biology. Furthermore, it could potentially provide chemical energy to its surroundings.
The R-chirality of hemoglycin that lets it absorb visible light [4] sets it apart from S-based life involving amino acid protein. This may be its most important property allowing separation of systems with hemoglycin that are essentially abiotic to be very distinct from biochemical systems. On early Earth this system divide could have been maintained with fossil stromatolites forming their mineral parts abiotically [7, 25], in possible contrast to present day stromatolites [6]. However, there is evidence of organic material comparable to that in present day stromatolites having been present in the neo-Archean [21]. It would be of great interest to know whether hemoglycin present in modern day ooids still provides an energy source.
A 2023 report emphasizes that UV-driven chemistry in protoplanetary disks is a "signpost" for planet formation [26]. The hemoglycin polymer, with its response at both 480nm and 6\(\upmu\)m [4], is not referenced in [26], but at least the role of light acting on molecules is raised. In 2022 we suggested that hemoglycin was a factor in accretion [4]. Acknowledging the need for an abiotic factor, in this case light, acting on a molecule that allows energy transfer is a step forward in understanding paths to the evolution of accreted matter destined for planet formation and for biochemical evolution.
## Conclusions
Modern day stromatolite ooids and fossil stromatolite (2.1Gya) from the Medicine Bow Mountains of Wyoming contain hemoglycin, the space polymer. At 2.1Gya there was ongoing substantial asteroidal delivery, including hemoglycin, to Earth, where water was present as tidal pools or early oceans, conditions that support stromatolite formation. Light is the important agent for hemoglycin in that it allows the molecule to potentially pass on energy to other chemistry, and in particular to open up a path to atmospheric oxidation. Once the early Earth had hemoglycin it had solar-driven chemical energy, and that may have led, by paths unknown and still to be investigated, to the first life forms, the stromatolites and their bacterial mats. The first events together (hemoglycin in-fall and formation of the first stromatolites) could have been abiotic and may have preceded simple organisms like cyanobacteria.
## Acknowledgments
We wish to thank the late Guido Guidotti of Harvard who gave encouragement and advice for this extra-terrestrial polymer research. We thank Charles H. Langmuir and Zhongxing Chen of the Department of Earth and Planetary Science, Harvard, for use of their Hoffman clean room facilities, and Sunia Trauger, the senior director of the Harvard center for mass spectrometry for assisting in the MALDI analysis. This research used two synchrotron resources: 1) The Advanced Photon Source, a U.S. Department of Energy (DOE) Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357. Use of the Lilly Research Laboratories Collaborative Access Team (LRLCAT) beamline at Sector 31 of the Advanced Photon Source was provided by Eli Lilly
and Company, which operates the facility. 2) The Diamond Light Source, beamline I24, Harwell Science and Innovation Campus, Didcot, OX11 0DE, UK. The Orgueil meteorite samples n234, dispatch number ar813, Colhelper request number 170600, were provided by the Museum National D'Histoire Naturelle (MNHN) Paris by Beatrice Doisneau. We particularly thank BD for sending Orgueil samples that had high IOM. We thank Cfa, Harvard and Smithsonian for supporting the mass spectrometry analysis. Andrew Knoll of OEB, Harvard and Museum of Comparative Zoology, 26 Oxford Street, Cambridge, MA 02138 provided the present-day stromatolite samples from San Salvador and from Hamlin Pool, Shark Bay Australia, collected by Elso Barghoorn in 1971.
## Data Availability
The data that support the findings of this study are available from the corresponding author upon reasonable request and at the Harvard Dataverse repository via a public URL once accepted.
## Methods
### 1. Mass Spectrometry
All experimental procedures were performed under clean laboratory conditions with operators wearing lab coats, hair cover and gloved hands, as previously reported [1, 3] (Fig. 7). All chemicals were only used for these analyses and kept in separate laboratory areas.
Micron particles of the fossil stromatolite were etched as previously described [1] and then Folch extracted [1, 2, 3] for up to 5 months at room temperature. The Orgueil meteorite sample, a total of 2 x 100mg samples from MNHN with the very loose topology typical of this meteorite, was not etched to micron particles; instead, soft small pieces of a few mg each were Folch extracted for 4 months.
Crystals of hemoglycin were picked up from the liquid interphase layer of the Folch extraction and pipetted into a watch glass on the stage of a zoom microscope under x25 magnification. Clean empty Hampton crystallography loops were used to pick up the 100-200\(\upmu\)m crystals. The glycine rods of hemoglycin make the crystals slightly sticky allowing light adhesion to the loop. The Hampton loops were carefully placed into the inside of 500\(\upmu\)l Eppendorf tubes containing 20\(\upmu\)l of methanol. The loops were turned to make sure the crystals were delivered to methanol in the tubes and checked for this under the microscope. Several crystals were added to each Eppendorf tube. This technique of crystal transfer was used for both the fossil stromatolite and the Orgueil crystals. The fossil stromatolite crystals always had some adhering calcium carbonate particles. All stromatolite transfers were performed separate from those for Orgueil transfers to avoid any cross contamination. Each sample of crystals contained at least 5 separate crystals.
A separate Orgueil Eppendorf tube was set up from a tube in which hemoglycin crystals derived from unprocessed insoluble organic matter (IOM) had dried out and the crystals had adhered extensively to the inside wall of the tube (sample O3_1). Attempts to solubilize these wall crystals using Folch solvents failed. Because those crystals were potentially good, in that they resembled the crystals that absorbed light [4], it was decided to simply leave the crystals of O3_1 as is, and to rely on their solubilization via trifluoroacetic acid at the point they were added to the SA and CHCA MALDI matrices, defined below.
The fossil stromatolite sample was collected from the Medicine Bow formations of the Wyoming craton [27]. After etching, two separate fossil stromatolite Folch extractions of 24 hours were also set up, as S3_1 for CHCA matrix and S3_2 for SA matrix. From these, 100\(\upmu\)l aliquots from the interface layer were pipetted into Eppendorf tubes at 24 hours of extraction and allowed to reduce in volume to 20\(\upmu\)l by loose cap room temperature evaporation. 2\(\upmu\)l aliquots of each were used for the mass spectrometry analysis with paired CHCA and SA matrices.
In total there were 11 MALDI analyses performed, as detailed in Table 4. Throughout, the sample prefix is "S" for stromatolite and "O" for Orgueil.
Figure 7: **Etch of the fossil stromatolite to produce micron particles for Folch extract in a clean room. Left, before etch; Middle - the etch using a stepper motor (no brushes to avoid metal contamination) with a vacuum brazed diamond drill bit (to avoid animal origin glue on drill bit); Right - 3 drill holes are visible. The micron particles are decanted by inversion of the sample over a glass container.**
Table 4: Detail of the 11 samples analyzed by MALDI mass spectrometry.

| Sample source | Tube label | Matrix | Crystal origin |
|---|---|---|---|
| Fossil Stromatolite | S1_1 | CHCA | Mauve hexagonal crystals |
| Fossil Stromatolite | S1_2 | SA | Mauve hexagonal crystals |
| Fossil Stromatolite | S2_1 | CHCA | Pale crystals |
| Fossil Stromatolite | S2_2 | SA | Pale crystals |
| Orgueil meteorite | O1_1 | CHCA | Mauve hexagonal crystals |
| Orgueil meteorite | O1_2 | SA | Mauve hexagonal crystals |
| Orgueil meteorite | O2_1 | CHCA | Pale crystals |
| Orgueil meteorite | O2_2 | SA | Pale crystals |
| Orgueil meteorite | O3_1 | CHCA | IOM Folch crystals |
| Fossil stromatolite | S3_1 | CHCA | Folch interphase |
| Fossil stromatolite | S3_2 | SA | Folch interphase |
Mass spectrometry was performed on a Bruker Ultraflextreme MALDI-TOF/TOF instrument. We used \(\alpha\)-cyano-4-hydroxycinnamic acid (CHCA) and sinapinic acid (SA) matrices, both at 10 mg mL\({}^{-1}\) in 50% acetonitrile in water with 0.1% trifluoroacetic acid. Our resolution was of the order of 10,000 and we looked in the range m/z = 0-5,000, finding most peaks from 750-2,000. A sample volume of 2\(\upmu\)L was mixed with a matrix volume of 2\(\upmu\)L, vortexed and left for one hour at room temperature. This one hour wait before pipetting 1\(\upmu\)l quantities onto the MALDI plate is essential as it takes that long to partly solubilize hemoglycin in the matrix solvents.
### 2. X-ray induced visible fluorescence of ooids
The Shark Bay recent stromatolite (Figure 1) was provided by Dr. Andrew Knoll. Drilling as above (into fossil stromatolite) did not produce micron particles because the material was friable, breaking up into 200-500 micron scale fragments, accompanied by intact ooids (Figure 1).
All experimental procedures were performed under clean laboratory conditions with operators wearing lab coats, hair cover and gloved hands as previously reported [1,2,3]. All chemicals were only used for these analyses and kept in separate laboratory areas.
Ooids were placed in a watch glass and manipulated under x25 magnification. Ethyl cyanoacrylate glue (1/5\({}^{\mathrm{th}}\) the volume of the ooid) was applied to a Hampton crystallography loop and the ooid was attached to the loop by gently touching the glue to the crystal. After the glue solvents had evaporated (4 hours at 18C), the ooid post and loop assembly was capped, attached to a pad to prevent vibration, and sent by FEDEX to Diamond Light Source. The internal structure of an ooid was obtained by treating ooids in a watch glass with 5% acetic acid. Within 5-10 minutes the internal vesicular structure was revealed (Fig. 8).
Figure 8: 5% acetic acid treated ooid from present-day stromatolite, Shark Bay, Australia. The ooid (sample LS2) is on a crystallography loop for x-ray diffraction analysis. The individual vesicles revealed via the acid treatment are 20\(\upmu\)m diameter.
X-ray induced visible fluorescence data were collected at a wavelength of 1.000 Angstroms (12.40 keV) and with a beam size of 50\(\upmu\)m \(\times\) 50\(\upmu\)m. The flux of the unattenuated beam was \(8\times 10^{12}\) ph/s, and fluorescence data were collected at beam attenuations of 2%, 5%, 10%, 20%, 40%, 60%, 80% and 100%. Diffraction data were recorded using a Pilatus3 6M detector at a detector distance of 300mm, using 0.1deg oscillation per frame, exposure times of 10ms and the beam attenuated by 50%. UV-Visible data were collected _in situ_ using off axis reflective objectives and were recorded over the wavelength range 250-800nm using an Andor Shamrock 303i spectrograph and CCD detector.
## References
* [1] McGeoch, J. E. M. and McGeoch, M. W. "Polymer amide in the Allende and Murchison meteorites". _Meteorit and Planet Sci._**50**, 1971-1983 (2015).
* [2] McGeoch, M. W., Dikler, S. and McGeoch, J. E. M. "Meteoritic proteins with glycine, iron and lithium", arXiv:2102.10700 (2021).
* [3] McGeoch, J. E. M. and McGeoch, M. W., "Structural organization of space polymers", _Phys. Fluids_**33**, 067118 (2021).
* [4] McGeoch, J. E. M. and McGeoch, M. W., "Chiral 480nm absorption in the hemoglycin space polymer: a possible link to replication", _Scientific Reports_, **12**:16198 (2022), [https://doi.org/10.1038/s41598-022-21043-4](https://doi.org/10.1038/s41598-022-21043-4)
* [5] Lageson, D. private communication.
* [6] Dupraz, C., Reid, R. P., Braissant, O. et al., "Processes of carbonate precipitation in modern microbial mats", _Earth-Science Reviews_**96**, 141-162 (2009).
Trower, E. J., Cantine, M. D., Gomez, M. L., Grotzinger, J. P., Knoll, A. H., Lamb, M. P., Lingappa, U., O'Reilly, S. S., Present, T. M., Stein, N., Strauss, J. V. and Fischer, W. W. "Active ooid growth driven by sediment transport in a high-energy shoal, Little Ambergris Cay, Turks and Caicos islands", _J. Sed. Res._**88**, 1132-1151 (2018).
* [8] Pokroy, B., Quintana, J. P., Caspi, E. N., et al., "Anisotropic lattice distortions in biogenic aragonite", _Nature Materials_**3**, 900-902 (2004).
* [9] Berman, A., Hanson, J., Leiserowitz, L., et al. "Biological control of crystal texture: A widespread strategy for adapting crystal properties to function" _Science_**259**, 776-779 (1993).
* [10] Lin, Y., Power, I. M., and Chen, W. "Holocene Lacustrine Abiotic Aragonitic Ooids from the Western Qaidam Basin, Qinghai-Tibetan Plateau", _Minerals_**12**, 1400 (2022).
* [11] Paterson, D. M., Aspden, R. J., Visscher, P. T., et al., "Light-Dependent Biostabilization of Sediments by Stromatolite Assemblages", _PLoS ONE_, **3**, e3176 (2008).
* [12] Dravis, J. J. and Yurewicz, D. A. "Enhanced Carbonate Petrography Using Fluorescence Microscopy", _J. Sed. Petrology_, **55**, 795-804 (1985).
* [13] McGeoch, M. W., Owen, R. L., Jiao, S. and McGeoch, J. E. M. "Hemoglycin visible fluorescence induced by x rays", _J. Chem. Phys._**158**, 114901 (2023).
* [14] Lu, I-C., Chu, K. Y., Lin, C. Y., et al. "Ion-to-Neutral Ratios and Thermal Proton Transfer in Matrix-Assisted Laser Desorption/Ionization". _J. Am. Soc. Mass Spectrom._, **26**:1242-1251 (2015). DOI: 10.1007/s13361-015-1112-3.
* [15] McGeoch, M. W., Samoril T., Zapotok D. and McGeoch J. E. M. (2023) Polymer amide as a carrier of 15N in Allende and Acfer 086 meteorites. Under review at _International Journal of Astrobiology_.
* [16][https://www.iaea.org/publications/4755/iaea-yearbook-1995](https://www.iaea.org/publications/4755/iaea-yearbook-1995).
* [17] Parker, J. E., Thompson, S. P., Lennie, A. R. et al., "A study of the aragonite-calcite transformation using Raman spectroscopy, synchrotron powder diffraction and scanning electron microscopy", _CrystEngComm_**12**, 1590-1599 (2010).
* [18] Lowe, D. R. and Byerly, G. R. "The terrestrial record of Late Heavy Bombardment", _New Astronomy Reviews_**81**, 39-61 (2018).
* [19] Neukum, G., Ivanov, B. A. and Hartmann, W., "Cratering records in the inner Solar System in relation to the lunar reference system", _Space Science Reviews_**96**, 55-86 (2001).
* [20] Gounelle, M. and Zolensky, M. "The Orgueil meteorite: 150 years of history". _Meteoritics & Planetary Science_**49**, Nr 10, 1769-1794 (2014). doi: 10.1111/maps.12351.
* [21] Flannery, D. T., Allwood, A. C., Hodyss, R., et al., "Microbially influenced formation of Neoarchean ooids", _Geobiology_, **1-10** (2018).
* [22] Olejarz, J., Iwasa, Y., Knoll, A. H. and Nowak, M. A. "The Great Oxygen Event as a consequence of ecological dynamics modulated by planetary change", _Nature Communications_**12**, 3985 (2021).
* [23] implications for biological evolution", _Planetary and Space Science_ **48**, 203-214 (2000).
* [24] Karam, P. A. "Inconstant Sun: How Solar evolution has affected cosmic and ultraviolet radiation exposure over the history of life on Earth." _Health Physics_**84**, 322-333 (2003).
* [25] Grotzinger, J. P. and Rothman, D. H. "An abiotic model for stromatolite morphogenesis", _Nature_**383** 424-435 (1996). (Cowles Lake Formation, Wopmay orogen, northwest Canada age 1.9Gya).
* [26] Calahan, J. K., Bergin,E. A., Bosman, A. D. et al. "UV-driven chemistry as a signpost of late-stage planet formation". _Nature Astron._**7** 49-56 (2023). [https://doi.org/10.1038/s41550-022-01831-8](https://doi.org/10.1038/s41550-022-01831-8)
* [27] 364 (2003).
**Supplementary information to Fossil and present-day stromatolite ooids contain a meteoritic polymer of glycine and iron.**
\({}^{1}\)*Julie E M McGeoch, \({}^{2}\)Anton J Frommelt, \({}^{3}\)Robin L Owen, \({}^{4}\)David Lageson and \({}^{5}\)Malcolm W McGeoch
\({}^{1}\)Department of Molecular and Cellular Biology, Harvard University, 52 Oxford St, Cambridge MA 02138, USA & High Energy Physics Div, Smithsonian Astrophysical Observatory Center for Astrophysics Harvard & Smithsonian, 60 Garden St, Cambridge MA 02138, USA.
\({}^{2}\)LRL-CAT, Eli Lilly and Company, Advanced Photon Source, Argonne National Laboratory, 9700 S. Cass Avenue, Lemont, IL, 60439
\({}^{3}\)Diamond Light Source, Harwell Science and Innovation Campus, Didcot, OX11 0DE, UK.
\({}^{4}\)Department of Earth Sciences, 226 Traphagen Hall, P.O. Box 173480 Montana State University, Bozeman, MT 59717.
\({}^{5}\)PLEX Corporation, 275 Martine St., Suite 100, Fall River, MA 02723, USA.
*Corresponding author. E-mail: [email protected]
**SECTION S1: Lattice analysis via higher order diffraction**
In X-ray analysis of meteorite polymers of amino acids we observe high order diffraction "ladders" in thin samples that are sheet-like [1], or fibers apparently composed of rolled up sheets [2]. In addition there is evidence [ref.2 +unpublished lattice diffraction] for a truly three-dimensional lattice of the diamond 2H structure that represents the maximum volume that a 3D lattice of identical "rods" can enclose [2]. The rods in this case are polymers of glycine [2,3] comprising anti-parallel glycine chains of 11-residue length closed at each end by an iron atom [3] with hydroxylation of glycine residues adjacent to Fe, termed hemoglycin. X-ray diffraction from such lattices is from two main features: a) Fe atoms in groups at vertices, the groups spaced from each other by the 49A length of a rod [1, 2] and b) nano-crystals of any substance filling the lattice spaces.
Stromatolites are dominantly formed of ooids, which are small ovoid, predominantly calcium carbonate, grains of typical length a few hundred microns. Following the mass spectrometry finding that the 1494Da core unit of hemoglycin was present in both the Orgueil meteorite and fossil stromatolite, an X-ray study was made of modern day ooids found within a recent stromatolite sample from Shark Bay, Australia. Two X-ray wavelengths, 0.979 Angstroms and 2.066 Angstroms, were used on APS beam line 31-1D-D. These lay respectively at higher energy and lower energy than the Fe K edge absorption at 7.1keV (1.74 Angstroms). Differences in X-ray
scattering between these two wavelengths would potentially give information on the disposition of Fe atoms within the ooid. The diffraction patterns were indeed markedly different. At the shorter wavelength, a set of rings between 3.86 and 1.61 Angstroms, listed in Table S1.1, indicated that both the calcite and aragonite forms of calcium carbonate were present. No significant rings were observed at "d" spacing larger than 3.85 Angstroms.
In contrast, at 2.066 Angstroms (Figure S1.1), an intense new set of large "d" spacing rings (central) was superimposed upon a weak version of the calcite and aragonite rings (outside), the latter having "d" spacing less than 3.86 Angstroms. The new set of 18 large "d" spacing rings ranged from 4.808 to 11.54 Angstroms, when interpreted as first order (Figure S1.1 and data Table S1.2). We show here that these come from higher order diffraction off Fe groupings at the 5nm-spaced junctions of the diamond 2H open lattice structure already identified in meteoritic work [2, 3]. The present analysis shows that there is a slight distortion to the lattice involving a relative increase of the trigonal axis.
**Figure S1.1** X-ray diffraction at 2.066 Angstrom from ooid in present era stromatolite (Shark Bay) showing dark lattice rings between 11.54 and 4.81 Angstroms from the center outward, plus calcium carbonate rings in a faint outer pattern.
**Data reduction to lattice "d" spacings from the set of high order rings**
The inner rings in Figure S1.1 had the "d" parameters listed in Table S1.2. These were calculated automatically in XQuartz [4] on the assumption that they were in first order diffraction according to \(2d\sin\Theta=n\lambda\), where \(n\) is the diffraction order, \(\Theta\) is one half of the deflection angle and \(\lambda\) is the wavelength. In this data the wavelength was 2.066 Angstroms.
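For reference, the conversion from a powder ring radius on a flat detector to a first-order "d" value follows directly from this relation. The sketch below uses illustrative ring radii and a 300mm detector distance (the distance quoted in the Methods is for the Diamond measurement), so the printed numbers are only indicative:

```python
# Convert a powder ring radius r on a flat detector into a "d" spacing via
# 2 d sin(theta) = n * lambda, with the full scattering angle 2*theta = atan(r / L).
import numpy as np

def ring_to_d(r_mm, det_dist_mm, wavelength_A, order=1):
    two_theta = np.arctan(r_mm / det_dist_mm)
    return order * wavelength_A / (2.0 * np.sin(two_theta / 2.0))

# e.g. at 2.066 Angstrom and a 300 mm detector distance, for hypothetical ring radii
for r in (55.0, 75.0, 130.0):
    print(f"r = {r:5.1f} mm  ->  d = {ring_to_d(r, 300.0, 2.066):6.2f} Angstroms")
```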
The left hand column of Table S1.2 contains the set of 18 rings that represented larger "d" spacing than 4 Angstroms. Their relative intensities are shown in the scan of Figure S1.2, part A. These rings did not match either the calcite or aragonite values, but were reminiscent of the ladders of high order diffraction previously seen in hemoglycin lattices [1, 2]. In [2] the ladder contained orders 2 through 5, with a fitted first order lattice parameter of 48.38 \(\pm\) 0.2 Angstroms. In [1] the ladder contained orders 2 through 12, with a fitted first order parameter of 49.03 \(\pm\) 0.18 Angstroms. Quick inspection of the present 18 ring set yielded a fit to 49.0 Angstroms in 5\({}^{\mathrm{th}}\) 6\({}^{\mathrm{th}}\) 7\({}^{\mathrm{th}}\) and 9\({}^{\mathrm{th}}\) orders as follows:
5 x 9.823 Angstroms = 49.11;
6 x 8.127 Angstroms = 48.76 ;
7 x 7.022 Angstroms = 49.15
9 x 5.445 Angstroms = 49.00
When several additional sequences also fitted calculated inter-vertex distances in the (\(h\) = 49 Angstrom, diamond 2H) 3D lattice, a thorough programmed search was performed in which each of the 18 data values was assessed as a divisor of trial "_d_" spacings in the range 30 Angstroms to 140 Angstroms, rising in 0.05 Angstrom increments. The number of such divisors found in each 0.05 Angstrom region was plotted in the range 30 to 140 Angstroms in Figure S1.2, after binomial smoothing and raising to the second power to accentuate the longer sequences.
This discovery routine yielded about 10 principal candidates for "_d_" spacings, the main ones being listed in Table S1.2 together with the relevant diffraction orders and the accuracy of a match (always better than 0.5%). Initially not all of these strong candidates matched calculated vertex-to-vertex spacings [2] for the ideal diamond 2H lattice. A mathematical model was then constructed to handle a non-ideal lattice with a constant rod length "h" and variable deviations from the tetrahedral angle, that is, changes to the angle \(\alpha\), where 90 + \(\alpha\) = 109.471221 deg. is the tetrahedral angle. Changes to \(\alpha\) represent axial stretch or compression of the lattice. To anticipate the results, we are able to fit the principal "_d_" spacing candidates with an increase of \(\alpha\) by 4.5 deg., computed results shown by red bars in Figure S1.3.
Figure S1.2: A: Vertical intensity scan of Figure S1.1. Note that the 7.12(8) peak is obscured by a detector grid. B: Scan of expanded region. Additional peaks 2.88-2.48 Angstroms were measured on the diagonal.
Figure S1.3. (thin black line) The results of a computer search for higher order clusters or loci, represented as the square of the number of "hits" within any 0.05 Angstrom band, with binomial smoothing applied. The red bars are calculated “d” spacings for a slightly distorted diamond 2H lattice, discussed below.
**Table S1.2** **Ooid diffraction rings in first order (left hand column). Higher order fits (top row) listed as diffraction order in bold with percentage mis-match.**
**Mathematical construction of the diamond 2H lattice**
**Figure S1.4. Reproduced from [ref S2, Figure 10]. Part A: a plane lattice of hexagons (beige) is distorted to have alternating vertices above or below the plane, with the quasi-hexagon sides making angle \(\alpha\) with the plane. Part B: two layers spaced along the vertical trigonal symmetry axis. High vertices in the lower layer connect to low vertices in the next layer up. Each connecting rod is identical.**
**Figure S1.5. View down the trigonal axis of the diamond 2H structure showing the structure's hexagonal projection with apparent length \(h\)cos\(\alpha\) of the connecting rods. Vertices are coded by X and Y numerals. Layers of the hexagonal projection are superimposed in a stack coming out of the page. From one layer to the next higher layer there are interchanged U and D characters that represent axial displacements of vertices upward and downward (along the z-axis). The layers are connected from a high point U (in a lower layer) to a low point D (in the layer above) by axial rods of length \(h\). Each layer is coded by a Z numeral.**
Rather than seek a unit cell, which is conceptually difficult for this structure, we created (x,y,z) coordinates for each lattice vertex as a function of the numbers in an intuitive labeling system illustrated in Figure S1.5, which is a view down the trigonal symmetry axis. This takes as a starting point the hexagonal projection layers that lie in alignment vertically above and below each other to fill space, then adds or subtracts height from alternating vertices around the hexagons in the manner discussed in [2]. We need to have twice the number of "x" values as "y" and "z" values to uniquely specify the vertices of an approximately cubic lattice volume (Figure S1.5). In Figure S1.5 we are looking down upon fore-shortened sides to the hexagons, of apparent length \(h\)cos\(\alpha\), where \(h\) is the true length of a connecting rod and \(\alpha\) is the angle that each rod takes relative to the horizontal plane (illustrated in Fig. S1.4). When sin\(\alpha\) = 1/3 the exact tetrahedral symmetry exists at every junction in the structure [2]. Here we allow angle \(\alpha\) to be variable, giving access to the parameters of axially distorted structures.
Where _J,K,L_ are the indices (coding numerals) along the _x,y,z_ axes, the position of any vertex is given by
\[x=(J-1)\frac{\sqrt{3}}{2}h\cos\alpha\]

\[y=(K-1)\frac{3h}{2}\cos\alpha+(-1)^{J}\cdot(-1)^{(K-1)}\cdot\frac{h\cos\alpha}{4}\]

\[z=(L-1)(h+h\sin\alpha)+(-1)^{J}\cdot(-1)^{(K-1)}\cdot(-1)^{(L-1)}\cdot\frac{h\sin\alpha}{2}\]
Here for clarity in the superscripts we replaced the indices \(J_{\rm X}\,,J_{\rm Y}\,,J_{\rm Z}\) used in the program listed below by _J,K,L_.
With this apparatus we can consider index ranges suitable for a cube of side N quasi-cells (defined in [2]):
\(1<J<2N\)
\(1<K<N\)
\(1<L<N\)
A central point in the lattice is chosen, say at coordinates (N, N/2, N/2) with N even, then nearby lattice distances can be evaluated, for example in ranges for \(J\), \(K\), \(L\)
\(N-4<J<N+4\)
\(N/2-3<K<N/2+3\)
\(N/2-3<L<N/2+3\)
When run with a maximum spacing cut-off of 140 Angstroms, this produces 69 spacings, many of which are duplicates. Further ordering and pruning to yield unique "\(d\)" values reduces this to a short list. In the comparison of data to theory that follows we chose a cutoff of 140Angstroms. When the exact tetrahedral angle is used, there are 9 distinct lattice "d" values less than 140 Angstroms. If an axial distortion is applied the degeneracy is broken and 12 values now appear under 140 Angstroms. Figure S1.6 plots the changes to these values as the deviation of \(\alpha\) from tetrahedral (\(\alpha=19.471221\) deg.) ranges through -5 to +5 degrees (the 49.0 Angstrom line, not shown, does not vary with \(\alpha\)).
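A short re-implementation sketch of this calculation (in Python rather than the QB64 program listed in Section S2; the index ranges follow those quoted above) is:

```python
# Evaluate the vertex formulas above around a central vertex and list the distinct
# inter-vertex distances below a cut-off, for rod length h and a deviation dalpha
# of the angle alpha from its tetrahedral value of 19.471221 degrees.
import numpy as np
from itertools import product

def vertex(J, K, L, h, alpha):
    x = (J - 1) * np.sqrt(3.0) / 2.0 * h * np.cos(alpha)
    y = (K - 1) * 1.5 * h * np.cos(alpha) + (-1)**J * (-1)**(K - 1) * h * np.cos(alpha) / 4.0
    z = (L - 1) * h * (1.0 + np.sin(alpha)) \
        + (-1)**J * (-1)**(K - 1) * (-1)**(L - 1) * h * np.sin(alpha) / 2.0
    return np.array([x, y, z])

def unique_spacings(h=49.0, dalpha_deg=4.5, N=10, cutoff=140.0, tol=0.05):
    alpha = np.radians(19.471221 + dalpha_deg)
    center = vertex(N, N // 2, N // 2, h, alpha)
    dists = []
    for J, K, L in product(range(N - 3, N + 4), range(N // 2 - 2, N // 2 + 3),
                           range(N // 2 - 2, N // 2 + 3)):
        d = float(np.linalg.norm(vertex(J, K, L, h, alpha) - center))
        if 0.1 < d < cutoff:
            dists.append(d)
    pruned = []
    for d in sorted(dists):            # collapse near-duplicate spacings within tol
        if not pruned or d - pruned[-1] > tol:
            pruned.append(round(d, 2))
    return pruned

print(unique_spacings())
```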
**Fit including lattice distortion to observed set**
It is found that a lattice spacing of 49.0 \(\pm\) 0.2 Angstroms combined with an angular deviation of +4.5 deg. gives the best fit to the reduced data of Figure S1.3 (fit shown as red bars). Only two major peaks in Figure S1.3 were not covered, at 98 and 130 Angstroms. Minor peaks were potentially due to random coincidences. The 98 Angstrom peak happens to be exactly two times the 49.0 Angstrom rod length. It can be produced automatically by doubling the diffraction orders {5, 6, 7 and 9} of column 2 in Table S1.2, therefore its presence is expected. The 130 Angstrom peak is not covered at +4.5 deg. in Figure S1.6, however it is covered precisely by the 0 deg.
deviation non-distorted lattice. It could be that some parts of the lattice are undistorted. The peak at 63 Angstrom is expected as a corollary of the 126.25 Angstrom peak which contains orders 14, 18, 20, 22, 24 that may be divided by 2 to give the valid orders 7, 9, 10, 11, 12, however no new lattice parameter is implied.
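The fit quoted above can be reproduced in outline by a coarse grid search over rod length and angular deviation, scoring each pair by how closely the predicted spacings land on the observed peaks. The sketch below assumes the `unique_spacings` helper from the previous listing; the peak list is an illustrative subset of the values discussed above, not the full reduced data of Figure S1.3.

```python
# Assumes unique_spacings() from the previous sketch is in scope.
observed_peaks = [49.0, 63.0, 98.0, 126.25, 130.0]   # subset of peaks quoted in the text

def misfit(h, dalpha_deg, peaks, window=1.0):
    """Sum over peaks of the distance to the nearest predicted lattice spacing,
    capped at `window` Angstroms so an uncovered peak gives a bounded penalty."""
    pred = unique_spacings(h=h, dalpha_deg=dalpha_deg)
    return sum(min(window, min(abs(p - d) for d in pred)) for p in peaks)

candidates = [(h10 / 10.0, da10 / 10.0)
              for h10 in range(485, 496)        # rod length 48.5 ... 49.5 Angstroms
              for da10 in range(-50, 51, 5)]    # deviation -5.0 ... +5.0 deg in 0.5 deg steps
best_h, best_da = min(candidates, key=lambda c: misfit(c[0], c[1], observed_peaks))
print("best rod length and angular deviation:", best_h, best_da)
```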
### S1 Discussion
The use of wavelengths spanning the Fe K edge absorption allowed the iron-dependent aspects of ooid diffraction to stand out due to decreased absorption at the longer wavelength. It also appears to have been important to have reduced diffraction order numbers by use of a factor of two longer wavelength, contributing to a simpler analysis. It is considered, in conjunction with the ooid fluorescence data (main text), that the identified lattice is that of hemoglycin, which in prior x-ray work [1, 2] has shown a "rod" length of 4.9nm. A clear analysis without competing structures suggests that this is the dominant and only iron-containing lattice within the ooid. A relatively small iron content of 0.15 wt % can produce this lattice, in conjunction with a glycine content of approximately 0.87 wt %. This considers a calcite-filled lattice with density 2.71. The density of aragonite varies from 2.93 to 2.95.
The lattice is filled with many calcium carbonate crystals, of a size that could be as small as the "cells" within the diamond 2H structure, i.e. between 5nm and 8nm across. Nano-crystals of nickel at this scale were observed in X-ray diffraction of a lattice sample from the Acfer 086 meteorite (analysis unpublished). Such fillings will vary depending upon the mix of atoms available as the lattice forms.
It is probable that a small degree of lattice distortion can be induced by any given crystal type within the cells. The cell volume is maximized for exact tetrahedral symmetry [2], but very little energy is required to induce an axial lattice distortion to accommodate a particular filling type of crystal. As the lattice is lengthened slightly along the trigonal axis the \(h\)cos\(\alpha\) projection decreases slightly, possibly providing an energy minimum for the combined system of lattice plus enclosed calcium carbonate crystals.
## References (all S sections)
* [1] McGeoch J. E. M. and McGeoch, M. W. "Chiral 480nm absorption in the hemoglycin space polymer: a possible link to replication", _Scientific Reports_, **12**:16198 (2022), [https://doi.org/10.1038/s41598-022-21043-4](https://doi.org/10.1038/s41598-022-21043-4)
* [2] McGeoch, J. E. M. and McGeoch, M. W. "Structural organization of space polymers", _Phys. Fluids_**33**, 067118 (2021).
* [3] McGeoch, M. W., Dikler, S. and McGeoch, J. E. M. "Meteoritic proteins with glycine, iron and lithium", arXiv:2102.10700 (2021).
* [4] XQuartz 2.8.5, X window system, 2003-2023 X.org Foundation, Inc.
* [5] McGeoch, M. W. and McGeoch, J. E. M. "Hexagonal Cladding of a Hemoglycin Vesicle in X-ray Diffraction", Unpublished.
**SECTION S2. Program to find diamond 2H lattice spacings**
'Programmed in qb64 for mac - qb64 is a generic Basic language
PRINT "Diamond Lattice"
PRINT "calculates x,y,z coordinates in a cube of side N"
PRINT "output is in a new file called dspace in the same folder as the program"
DIM Vx(22), Vy(11), Vz(11), set(15), fit(15), testall(100), testran(100), testwin(15)
INPUT "Variation (deg) from alpha at tetrahedral vertex ", dalpha
INPUT "number of quasi-cells (even and less than or equal to 10) ", N
INPUT "inter-vertex distance h (Angstroms) ", h
DIM dspace AS STRING
OPEN "dspace" FOR OUTPUT AS #1
pi = 3.14159265359
alp = (19.471221 + dalpha) * pi / 180
Salp = SIN(alp)
Calp = COS(alp)
Nx = 2 * N
Ny = N
Nz = N
Fx = SQR(3) * h * Calp / 2 'x step between columns
Fy1 = 3 * h * Calp / 2 'y step between hexagon rows
Fy2 = h * Calp / 4 'alternating y offset
Fz1 = h + h * Salp 'z step between layers
Fz2 = h * Salp / 2 'alternating z (U/D) offset
' set up central point J = N, K = N/2, L = N/2
N2 = N / 2
x0 = (N - 1) * Fx
y0 = (N2 - 1) * Fy1 + (-1) ^ N * (-1) ^ (N2 - 1) * Fy2
z0 = (N2 - 1) * Fz1 + (-1) ^ N * (-1) ^ (N2 - 1) * (-1) ^ (N2 - 1) * Fz2
'generate vertices around central point and distances from center
index = 1
FOR Jx = N - 4 TO N + 4
Vx(Jx) = (Jx - 1) * Fx
FOR Jy = N2 - 3 TO N2 + 3
Vy(Jy) = (Jy - 1) * Fy1 + (-1) ^ Jx * (-1) ^ (Jy - 1) * Fy2
FOR Jz = N2 - 3 TO N2 + 3
Vz(Jz) = (Jz - 1) * Fz1 + (-1) ^ Jx * (-1) ^ (Jy - 1) * (-1) ^ (Jz - 1) * Fz2
test = SQR((x0 - Vx(Jx)) ^ 2 + (y0 - Vy(Jy)) ^ 2 + (z0 - Vz(Jz)) ^ 2)
IF test > 140 THEN GOTO 16
testall(index) = test
index = index + 1
PRINT #1, USING "######.########"; Jx; Jy; Jz; test
16 'continue
NEXT Jz
NEXT Jy
NEXT Jx
PRINT "index = ", index
CLOSE #1
'rank list of "d" spacings in vector testall
rank = 1
FOR k = 1 TO index
trial = testall(k)
FOR m = 1 TO index
IF trial > testall(m) + 0.0001 THEN rank = rank + 1
NEXT m
testran(rank) = trial
rank = 1
NEXT k
'winnow testran to obtain unique set
q = 1
FOR p = 1 TO index
IF testran(p) > 0 THEN testwin(q) = testran(p) ELSE GOTO 20
q = q + 1
20 'continue
NEXT p
FOR g = 1 TO q - 1
PRINT testwin(g)
NEXT g
END
## SECTION S3
### X-ray diffraction on crystals and ooids
#### S 3.1 Basic x-ray diffraction observations on fossil stromatolite and meteoritic crystals
In 2021 and 2022 diffraction was studied in a range of crystals derived as described above from fossil stromatolite No. 1 Wyoming (2.1Gya), provenance in Section S 3.2.
In all except one crystal the dominant features were rings, and not spots, indicating generally multiple crystalline nature (Table S3.1). The stromatolite and Acfer 086 data is from beam line 31-1D-D, APS, at a wavelength of 0.9793 Angstroms. The vesicle hexagon data is from Diamond Light Source, at 1.000 Angstroms.
The first five entries in column 3 under 289349 were processed differently via scanning and subtraction of a relatively large background, so as to find accurate peak positions. The errors for these five points are systematic and of the order of 0.03 Angstroms. All other errors are one standard deviation of multiple ring measurements. Although the crystals in columns 2 and 3 do not appear to be externally similar at all, there is a very good lattice parameter match across all the major rings. Crystal 275049 was thin, purplish and well-shaped whereas crystal 289349 was a white "blob" with much greater depth. A high featureless background in the central part of the 289349 pattern could be due to amorphous material within its lattice. The first five entries under 289349 were, as a set, very much more intense than the subsequent entries.
We searched our meteorite and stromatolite data for matches to the above pair. In the 4\({}^{\rm th}\) column we list the observed rings from a fiber crystal of Acfer 086 [2], the first two of which were identified with the Fe-Fe spacing at the four-way connection of hemoglycin rods. In the fifth column we list the single dominant spacing of Fe atoms at the three-way junction of hexagonal sheets of Sutter's Mill hemoglycin that formed a vesicle [5]. Although the latter may be a fortuitous match, these prior data resemblances to the stromatolite dominant rings could point to the presence of Fe-Fe spacings within fossil stromatolite similar to those of meteorite extract crystals.
**S 3.2 X-ray diffraction observations on present day and fossil ooids.**
At Diamond Light Source a series of x-ray diffraction runs compared the crystal powder diffraction patterns of **ooids** from the following stromatolite sources:
**Sample Sources:** Present-day stromatolites are supplied by Andrew Knoll of the Museum of Comparative Zoology and Organismic and Evolutionary Biology (OEB) Harvard.
Present-day stromatolite details are:
1. Shark Bay, Western Australia - collected by Eiso Barghoorn in 1971 - estimated to be 2000-3000 years old.
2. **K-05 SS-1** from San Salvador Island Bahamas - collected by Andrew Knoll in 2005 -- A modern mineralized microbialite.
Fossil stromatolite details are:
1. 2.1Ga stromatolite No. 1 from Medicine Bow region, Wyoming - collected by David Lageson of Montana State University.
2. 2.1Ga stromatolite No. 2 from Medicine Bow region, Wyoming - collected by David Lageson of Montana State University.
X-ray diffraction results for ooids in the two present day stromatolites and in a second Wyoming sample, No. 2, are compared in Table S3.2. No ooids could be found in fossil stromatolite No. 1.
**Table S3.2 Comparison of present day and fossil ooid diffraction patterns on Diamond Light Source. All mounted samples are multiple crystals, giving rings. The Shark Bay sample was acid treated as described in methods in the main text. The number of x-ray runs that were averaged is given. Almost uniformly the rings identify as aragonite, with one possible calcite ring.**
\begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{5}{|c|}{**Ooids, x-ray 1.000 Angstroms, all data in cryo-stream at 100K, "d" in Angstroms**} \\ \hline Shark Bay & Shark Bay & 2nd Fossil & San Salvador & Assignment \\ No acid & Acid treated & Stromatolite & & \(a\) = aragonite \\ n = 2 & n = 2 & n = 3 & n = 1 & \(c\) = calcite \\ \hline & 16.605 & & & \\ \hline & 11.65 & & & \\ \hline & & 10.458 & & \\ \hline
10.325 & & & 10.32 & \\ \hline & 8.605 & & & \\ \hline & 8.246 & & & \\ \hline & 7.424 & & & \\ \hline & & 7.11 & 7.093 & \\ \hline & 6.905 & & & \\ \hline & 6.376 & & & \\ \hline & 5.505 & & & \\ \hline & & 4.38 & & \\ \hline
4.194 & & 4.194 & & \\ \hline
3.791 & 3.819 & 3.812 & & \\ \hline & 3.493 & & & \\ \hline
3.388 & 3.393 & 3.387 & & a \\ \hline
3.263 & 3.263 & 3.262 & 3.265 & a \\ \hline & 3.213 & & & \\ \hline
2.988 & 2.988 & 2.993 & 3.00 & \\ \hline
2.860 & 2.865 & & & \\ \hline
2.725 & & 2.70 & & a \\ \hline
2.690 & & 2.689 & 2.698 & a \\ \hline
2.480 & 2.47 & 2.472 & 2.475 & a \\ \hline
2.40 & & 2.407 & & \\ \hline
2.363 & 2.358 & 2.367 & 2.37 & a \\ \hline
2.328 & 2.328 & 2.324 & 2.325 & a \\ \hline
2.248 & 2.245 & & & \\ \hline
2.185 & & 2.182 & & \\ \hline
2.065 & & 2.08 & & \\ \hline
1.978 & 1.968 & 1.974 & 1.975 & a \\ \hline
1.877 & 1.875 & 1.876 & 1.878 & c,a \\ \hline
1.808 & 1.808 & 1.807 & 1.81 & \\ \hline & 1.738 & 1.737 & 1.74 & a \\ \hline & & 1.719 & 1.72 & \\ \hline
1.60 & & & & \\ \hline \end{tabular} |
2309.06098 | Adopting Dynamic VAR Compensators to Mitigate PV Impacts on Unbalanced
Distribution Systems | The growing integration of distributed energy resources into distribution
systems poses challenges for voltage regulation. Dynamic VAR Compensators
(DVCs) are a new generation of power electronics-based Volt/VAR compensation
devices designed to address voltage issues in distribution systems with a high
penetration of renewable generation resources. Currently, the IEEE Std.
1547-based Volt/VAR Curve (VV-C) is widely used as the local control scheme for
controlling a DVC. However, the effectiveness of this scheme is not well
documented, and there is limited literature on alternative control and
placement schemes that can maximize the effective use of a DVC. In this paper,
we propose an optimal dispatch and control mechanism to enhance the
conventional VV-C based localized DVC control. First, we establish a
multi-objective optimization framework to identify the optimal dispatch
strategy and suitable placement for the DVC. Next, we introduce two supervisory
control strategies to determine the appropriate instances for adjusting the
VV-C when the operating condition changes. The outlined scheme comprises two
primary stages: time segmentation and VV-C fitting. Within this framework, each
time segment aims to produce optimized Q-V trajectories. The proposed method is
tested on a modified IEEE 123-bus test system using OpenDSS for a wide range of
operating scenarios, including sunny and cloudy days. Simulation results
demonstrate that the proposed scheme effectively reduces voltage variations
compared to the standard VV-C specified in IEEE Std. 1547. | Han Pyo Lee, Keith DSouza, Ke Chen, Ning Lu, Mesut Baran | 2023-09-12T10:04:12Z | http://arxiv.org/abs/2309.06098v1 | # Adopting Dynamic VAR Compensators to Mitigate PV Impacts on Unbalanced Distribution Systems
###### Abstract
The growing integration of distributed energy resources into distribution systems poses challenges for voltage regulation. Dynamic VAR Compensators (DVCs) are a new generation of power electronics-based Volt/VAR compensation devices designed to address voltage issues in distribution systems with a high penetration of renewable generation resources. Currently, the IEEE Std. 1547-based Volt/VAR Curve (VV-C) is widely used as the local control scheme for controlling a DVC. However, the effectiveness of this scheme is not well documented, and there is limited literature on alternative control and placement schemes that can maximize the effective use of a DVC. In this paper, we propose an optimal dispatch and control mechanism to enhance the conventional VV-C based localized DVC control. First, we establish a multi-objective optimization framework to identify the optimal dispatch strategy and suitable placement for the DVC. Next, we introduce two supervisory control strategies to determine the appropriate instances for adjusting the VV-C when the operating condition changes. The outlined scheme comprises two primary stages: time segmentation and VV-C fitting. Within this framework, each time segment aims to produce optimized Q-V trajectories. The proposed method is tested on a modified IEEE 123-bus test system using OpenDSS for a wide range of operating scenarios, including sunny and cloudy days. Simulation results demonstrate that the proposed scheme effectively reduces voltage variations compared to the standard VV-C specified in IEEE Std. 1547.
DER impact mitigation, Distribution system, Dynamic VAR Compensator (DVC), High Penetration PV, Smart inverter, Volt/VAR control.
## 1 Introduction
The integration of distributed energy resources (DERs), particularly photovoltaics (PVs), into distribution systems [1] poses challenges for voltage regulation. The high penetration of PVs introduces power fluctuations caused by factors like cloud movements, leading to rapid voltage fluctuations. Conventional voltage control devices, such as Voltage Regulators (VRs), are forced to switch frequently in response to these deviations [2], resulting in a shortened device lifespan and an increased risk of premature failure. To address this emerging challenge, Dynamic VAR Compensators (DVCs) are being evaluated as a solution. In addition to resolving voltage regulation issues, DVCs offer potential benefits such as enhancing power losses, mitigating voltage flicker, and reducing voltage imbalances [3, 4].
DVCs are power electronics-based reactive power (Q) compensators. While widely used in transmission voltage regulation, their application in distribution system operation is still in its early stages. DVCs offer fast and continuous control of reactive current, making them a suitable complement to capacitor banks and tap changing regulators. In a study conducted by DSouza et al. [5], it was demonstrated that DVCs effectively mitigate problems such as excessive tap changes and frequent voltage violations caused by variable PV generation. Additionally, DVCs enable precise and rapid power control on a per-phase basis, ensuring that the voltage across the feeder remains within the limits specified by ANSI standards [6].
Table 1 presents a comprehensive overview of existing methods for addressing the optimal placement and control schemes for DVCs, along with a comparison of their strengths and weaknesses in relation to the proposed approach introduced in this paper.
Power electronics-based voltage regulation devices in
distribution systems include Dynamic Voltage Restorer (DVR) [7, 8], distribution static synchronous compensator (DSTATCOM) [9, 10], and DVC [4]. Among these devices, DVRs are not suitable for systems experiencing prolonged reactive power deficiencies, while DSTATCOM was primarily designed to address Fault-Induced Delayed Voltage Recovery (FIDVR) issues [11]. In comparison, DVCs are designed to complement existing Volt/VAR Control (VVC) devices by effectively managing feeder voltage within the ANSI-prescribed limits [6]. The primary objective of the DVC is to mitigate voltage violations and fluctuations resulting from intermittent PV outputs, providing necessary voltage boost or reduction.
The challenge in deploying the DVC lies in determining the optimal location(s) for its installation. While various approaches have been proposed for placing devices like distributed generations, most of these approaches rely on analytical methods [12, 13, 14], meta-heuristic techniques [15, 16], or a combination of both [17, 18]. Although these methods can be adapted for DVC placement, modifications are necessary as they primarily address balanced systems. To address this challenge and facilitate DVC placement, it is crucial to employ a comprehensive 3-phase distribution system model that considers unbalanced system conditions and the operation of per-phase DVCs. Another challenge is to select an appropriate control scheme for maximizing the DVC benefit. Existing approaches rely on the standard Volt/VAR Characteristics (VVAR-C) based local control, as specified in IEEE Std. 1547 [19]. However, this type of control does not fully exploit the potential advantages offered by DVCs, such as their fast response, usually measured in cycles, and the capability to independently inject corrected reactive power into each phase without dependence on the other phases. Previous literature has presented solutions related to smart inverters including delayed VVC [20], scaled VVC [21], and adaptive VVC [22], but these approaches have limitations and drawbacks, as highlighted in Table 1. Moreover, a dedicated tool is needed to optimize the placement and control scheme of DVCs so that distribution planning engineers can plan and deploy these devices more effectively on their systems.
This paper focuses on both the control dispatching and placement problems associated with the adoption of a DVC on a distribution feeder. The paper first proposes a novel DVC dispatching scheme designed to mitigate voltage fluctuations on a feeder with high PV penetration. This scheme adopts a simple dispatch objective, allowing the DVC to react to voltage violations while minimizing excessive voltage regulator operation. The dispatching approach is then integrated into a placement method to identify an optimal location for the DVC, ensuring its effectiveness in voltage regulation. Furthermore, the paper proposes a more practical supervisory control scheme to minimize the frequent dispatches considered in the initial dispatching scheme. This supervisory control scheme periodically adjusts the local VV-C to enable the DVC to adapt to changing operating conditions. This approach addresses the constraints imposed by communication infrastructure limitations, where frequent updates for optimal dispatch (e.g., every 1 minute) are not feasible.
The paper offers three key contributions. Firstly, an optimal DVC dispatching scheme is proposed to minimize voltage variations and reduce the number of VR switching operations. Secondly, a novel method is introduced to identify suitable DVC deployment locations, considering the locational impact on voltage profiles to effectively mitigate voltage variations. Lastly, a supervisory dispatch scheme is proposed to adjust DVC control parameters based on the Q-V trajectory derived from the optimal dispatch. Simulation results demonstrate that the proposed methods surpass standard VVC in reducing voltage variations and regulator operations.
\begin{table} \end{table} TABLE 1: Overview of existing methods for optimal DVC placement and control, with their strengths and weaknesses relative to the approach proposed in this paper.
The paper is structured as follows: Section II introduces the optimal dispatch scheme and identifies the suitable location for the DVC. Section III outlines the proposed practical dispatching scheme. Section IV presents simulation results to evaluate the performance of the proposed approach. Finally, Section V concludes the paper.
## II Optimal DVC dispatch and placement
As discussed in the previous section, the main benefit of utilizing a DVC is the mitigation of voltage variations on a distribution feeder. Voltage variation is directly associated with the degree of voltage fluctuation at each node along the feeder. To maintain voltage variations within the desired limits, typically defined by voltage violation thresholds specified in ANSI standards [6], utilities employ Line Voltage Regulators (LVRs) and Capacitor Banks (CAPs). The Category I limits, commonly adopted by utilities, range between 0.95 and 1.05 pu. However, with the implementation of Conservation Voltage Reduction (CVR), utilities aim to further reduce voltages on feeders, necessitating tighter control over voltage variations [23]. The DVC proves valuable in achieving this objective by ensuring that voltages remain within a specific target voltage band. This paper considers a voltage band of 0.98 \(\sim\) 1.03 pu, as depicted in Fig. 1.
### Dynamic VAR Compensator (DVC)
The schematic of the novel power electronics-based DVC [4], which is capable of independently adjusting VAR injection on each phase and exhibits a rapid response time, is depicted in Fig. 2. These characteristics render the DVC highly efficient in mitigating fast voltage variations and reducing excessive voltage regulator operations resulting from PV systems.
### Optimal DVC dispatch
The dispatching of a DVC entails determining the desired VAR injection to be provided by the DVC in order to maintain the voltages on the feeder within the specified voltage band, denoted as \(\Delta V_{\text{ind}}\). This dispatching problem can be formulated as an optimization problem, where the objective function quantifies the deviation of the node voltages from the \(\Delta V_{\text{ind}}\) illustrated in Fig. 1. Thus, the objective function can be expressed as follows:
\[\text{f}_{\text{j,t}}^{\mu}=\sum_{i\in\mathcal{N},i\notin\mathcal{ K}}\biggl{(}max(V_{\text{i,j,t}}-V^{\text{upper}},0)\\ +max(V^{\text{lower}}-V_{\text{i,j,t}},0)\biggr{)},\forall j\in \mathcal{P},\forall t\in\mathcal{T} \tag{1}\]
where \(\mathcal{N}\) represents the set of nodes, \(\mathcal{K}\) denotes the set of voltage regulators, \(\mathcal{P}\) is the set of phases, and \(\mathcal{T}\) indicates the scheduling period. \(V_{\text{i,j,t}}\) is the voltage on phase \(j\) at node \(i\) at time \(t\). The lower and upper limits (i.e., \(V^{\text{lower}}\) and \(V^{\text{upper}}\)) can be set based on voltage variation on the feeder before the DVC is added.
Due to the potential increase in LVR operations caused by PV intermittency and VAR injection from the DVC, an additional objective function can be introduced to mitigate excessive LVR operation. This objective function is defined as the sum of tap movements of the LVRs, as shown below:
\[\text{f}_{\text{j,t}}^{\theta}=\sum_{k\in\mathcal{K}}|\theta_{\text{k,j,t}}- \theta_{\text{k,j,t}-1}|,\forall j\in\mathcal{P},\forall t\in\mathcal{T} \tag{2}\]
where \(\theta_{\text{k,j,t}}\) is the tap position of regulator on phase \(j\) at node \(k\) at time \(t\).
By incorporating these objective functions, the problem of optimal dispatch can be formulated as follows:
\[\min_{\text{Q}_{\text{j,t}}^{\text{inj}}}\left(w_{\mu}f_{\text{j,t }}^{\mu}+w_{\theta}f_{\text{j,t}}^{\theta}\right)\] (3) s.t. \[0\leq|\text{Q}_{\text{j,t}}^{\text{inj}}|\leq 1,\forall j\in \mathcal{P},\forall t\in\mathcal{T} \tag{4}\]
The first objective function, which aims to reduce voltage variation, is assigned higher weights to emphasize its importance. To solve this problem, an iterative search method is used to determine the optimal \(Q^{\text{inj}}\) from the DVC for a given feeder operating condition, considering load and PV levels.
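The paper does not spell the search out; the sketch below is one plausible per-phase realization, assuming the (time-consuming) power-flow run is hidden behind an `evaluate(q)` callback that returns the two objective values of Eqs. (1) and (2) for a trial injection `q`. A plain grid scan stands in for the iterative (binary) search used in the paper.

```python
def dispatch_dvc_phase(evaluate, q_lim=1.0, w_mu=1.0, w_theta=0.1, steps=41):
    """Return the per-phase VAR injection (in pu of the DVC rating) minimizing the
    weighted objective of Eq. (3). evaluate(q) must run a power flow with the DVC
    injecting q on this phase and return the pair (f_mu, f_theta)."""
    best_q, best_cost = 0.0, float("inf")
    for i in range(steps):
        q = -q_lim + 2.0 * q_lim * i / (steps - 1)
        f_mu, f_theta = evaluate(q)
        cost = w_mu * f_mu + w_theta * f_theta
        if cost < best_cost:
            best_q, best_cost = q, cost
    return best_q

# Toy check with a quadratic surrogate whose minimum lies near q = 0.3
print(dispatch_dvc_phase(lambda q: ((q - 0.3) ** 2, 0.0)))
```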
### DVC dispatch performance
To assess how much the DVC reduced the voltage variations and limited the voltage regulator operations, four performance metrics are used: lower voltage violations (\(\text{V}_{\text{out}}^{\text{lower}}\)), within a target voltage band (\(\text{V}_{\text{in}}\)), upper voltage violations (\(\text{V}_{\text{out}}^{\text{upper}}\)), and voltage regulator operations (\(\text{Tap}_{\text{k}}\)).
\[\text{V}_{\text{out}}^{\text{lower}}=\sum_{\tau\in\mathcal{T}_{ 1}}\tau,\mathcal{T}_{1}=\{t\in\mathcal{T}\mid V_{t}<V^{\text{lower}}\} \tag{5}\] \[\text{V}_{\text{in}}=\sum_{\tau\in\mathcal{T}_{2}}\tau,\mathcal{ T}_{2}=\{t\in\mathcal{T}\mid V^{\text{lower}}\leq V_{t}\leq V^{\text{upper}}\}\] (6) \[\text{V}_{\text{out}}^{\text{upper}}=\sum_{\tau\in\mathcal{T}_{3}} \tau,\mathcal{T}_{3}=\{t\in\mathcal{T}\mid V^{\text{upper}}<V_{t}\}\] (7) \[\text{Tap}_{\text{k}}=\sum_{t\in\mathcal{T}}|\theta_{\text{k,t}}- \theta_{\text{k,t}-1}|,\forall k\in\mathcal{K} \tag{8}\]
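Read as counts of monitored points, these metrics are straightforward to compute from a simulation log. The sketch below assumes a simple data layout (a flat list of per-node, per-phase, per-step voltages and a tap-position series per regulator); Eqs. (5)-(7) are interpreted here as the number of points in each category, which matches how the "out of limits" percentages are reported in the case study.

```python
def voltage_metrics(voltages, taps, v_lower=0.98, v_upper=1.03):
    """Count voltage points below/inside/above the target band (cf. Eqs. (5)-(7))
    and total regulator tap movements (Eq. (8))."""
    voltages = list(voltages)
    v_low = sum(1 for v in voltages if v < v_lower)
    v_in = sum(1 for v in voltages if v_lower <= v <= v_upper)
    v_high = sum(1 for v in voltages if v > v_upper)
    tap_ops = {k: sum(abs(seq[t] - seq[t - 1]) for t in range(1, len(seq)))
               for k, seq in taps.items()}
    return v_low, v_in, v_high, tap_ops

# Example: three voltage samples and one regulator that moves three tap steps in total
print(voltage_metrics([0.97, 1.00, 1.04], {"160R": [3, 4, 4, 6]}))
```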
Figure 1: Voltage variation limits considered for DVC.
Figure 2: A Schematic Diagram of the DVC [4].
### DVC Placement
Since the DVC injects reactive power, it primarily influences the voltages in the zone in which it is placed. To illustrate this, examine the sample feeder depicted in Fig. 3. In this system, the LVR (i.e., 160R) on the main feeder divides the feeder into two distinct voltage zones, as indicated in Fig. 3. Zone 1 represents the first voltage zone (highlighted in orange), while Zone 2 corresponds to the second zone (highlighted in green).
Time-series power flow simulations are first conducted on the feeder with no DVC, which serves as the base case. Figure 4 shows the phase-wise voltage distribution, sorted in descending order based on average voltage. The figure effectively demonstrates the contrasting voltage variations observed in the two zones. Zone 1 exhibits significantly larger voltage variations compared to Zone 2, mainly due to large PV farms. Furthermore, Zone 1 experiences greater voltage imbalance between phases compared to Zone 2. Consequently, our objective is to examine the effectiveness of the DVC in mitigating voltage variations specifically within Zone 1. Considering that the DVC influences voltages in the vicinity of its placement node, we identified the node with the highest voltage variations within the targeted zone. For the given sample feeder, candidate nodes were selected by evaluating the voltage variation profiles. The dispatching scheme uses a binary search algorithm [25] to determine the appropriate VAR injection/absorption required by the DVC on a per-phase basis. The following straightforward search procedure for candidate nodes determines which node yields optimal DVC performance:
1. Place the DVC at a candidate node.
2. Perform time series power flow simulation on the feeder over the sample days. Time resolution is 1 minute. The DVC is dispatched at every time step of the simulation by using the optimal DVC dispatch scheme introduced in Section II. B.
3. Repeat the process by moving the DVC to a new candidate bus. A sketch of this search loop in code is given after the list.
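A minimal sketch of that loop, assuming a `run_timeseries(node)` helper that performs step 2 for a given DVC location and returns the violation metrics of Eqs. (5)-(8) (including an `out_of_limits_pct` entry):

```python
def evaluate_placements(candidate_nodes, run_timeseries):
    """Try the DVC at each candidate node and keep the location with the smallest
    share of voltage points outside the target band."""
    results = {node: run_timeseries(node) for node in candidate_nodes}
    best = min(results, key=lambda n: results[n]["out_of_limits_pct"])
    return best, results

# Toy usage with canned percentages (from Table III) instead of real simulations
canned = {7: 18.27, 8: 14.90, 13: 16.87}
print(evaluate_placements([7, 8, 13], lambda n: {"out_of_limits_pct": canned[n]})[0])  # -> 8
```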
## III Supervisory dispatch for DVC
### _Optimal Q-V Trajectories_
Figure 13 shows the optimal Q-V trajectories obtained by using the proposed optimal dispatch scheme on the sample system. The figure clearly illustrates that these optimal Q-V trajectories can be quite different from the VV-C proposed in IEEE Std. 1547-2018 [19] for local control. The standard VV-C, as shown in Fig. 5(a), is a piecewise linear curve with negative slope. As formulated in (9), when the voltage exceeds an upper limit (i.e., \(V^{\rm upper}\)), the DVC absorbs reactive power to prevent further voltage rise. On the other hand, when the voltage drops below a specific threshold (i.e., \(V^{\rm lower}\)), the DVC injects reactive power to increase the voltage.
\[Q_{inj}=\begin{cases}Q^{lim},&V_{dev}(t)\leq V_{1}\\ -m_{1}(V_{2}-V_{dev}(t)),&V_{1}<V_{dev}(t)<V_{2}\\ 0,&V_{2}\leq V_{dev}(t)\leq V_{3}\\ m_{2}(V_{dev}(t)-V_{3}),&V_{3}<V_{dev}(t)<V_{4}\\ -Q^{lim},&V_{4}\leq V_{dev}(t)\end{cases} \tag{9}\]
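For reference, such a piecewise-linear characteristic is easy to implement directly; the breakpoints below are placeholders rather than the settings used in the paper, and positive values denote injection.

```python
def volt_var_curve(v, v1=0.95, v2=0.98, v3=1.03, v4=1.05, q_lim=1.0):
    """Piecewise-linear Volt/VAR characteristic in the spirit of Eq. (9): full
    injection below v1, a deadband between v2 and v3, full absorption above v4,
    and linear interpolation in between."""
    if v <= v1:
        return q_lim
    if v < v2:
        return q_lim * (v2 - v) / (v2 - v1)
    if v <= v3:
        return 0.0
    if v < v4:
        return -q_lim * (v - v3) / (v4 - v3)
    return -q_lim

print(volt_var_curve(1.04))   # half of full absorption with these placeholder breakpoints
```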
### _Supervisory dispatch for DVC_
In Section II.B, we considered the DVC as a dispatchable VAR source and employed an optimization-based dispatching scheme to continuously optimize its performance in terms of minimizing voltage variations. However, this approach faces a significant challenge due to the frequent dispatch signals required, which may not be practical in distribution systems with limited communication infrastructure [26, 27, 28]. To overcome this challenge, a local control scheme, initially proposed for smart inverters and utilizing the VV-C specified in IEEE Std. 1547 (shown in Fig. 5(a)), is currently utilized for the DVC. Nevertheless, to ensure the effectiveness of the DVC using this local control strategy, proper adjustment and setting of the VV-Cs are necessary. The optimal Q-V trajectories presented in the case study clearly illustrate the need for periodic adjustments. To address this issue, we investigated the problem and developed two supervisory control schemes that determine the optimal frequency of VV-C adjustments for the DVC to provide effective voltage support under varying operating conditions. These supervisory schemes monitor the performance of the DVC and make necessary adjustments to the VV-C, periodically sending the revised characteristics to the DVC. The proposed scheme involves two main steps: time segmentation and VV-C curve fitting based on the optimal Q-V profiles obtained for the respective time segment. The steps are outlined below.
#### III-B1 Time Segmentation
The objective of time segmentation is to identify shorter time segments that allow for a good fit between the Q-V trajectories observed during these segments and the VV-C characteristics. Based on the results obtained from the optimal dispatch, it was observed that the voltage variations on the feeder are considerable during periods of highly variable
Figure 3: IEEE 123 node feeder for test [24].
PV output. Consequently, the Q dispatch of the DVC is adjusted accordingly to mitigate these variations. Conversely, when the PV output is low, the change in Q dispatch is not substantial. Therefore, the time segmentation is determined based on the PV output. In Fig. 6, Segment 1 represents a period of low PV output when the PV generation is less than 25% of the load, while Segment 2 corresponds to a period of high PV output (highlighted in yellow) when the PV generation exceeds 25% of the load. By dividing the time into these distinct segments, we can better align the VV-C characteristics with the observed Q-V trajectories during different PV output conditions.
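The segmentation rule itself reduces to a one-line comparison per time step; the sketch below labels each step by whether the PV output exceeds 25% of the load, with purely illustrative inputs.

```python
def segment_day(pv_kw, load_kw, threshold=0.25):
    """Label each time step: 2 where PV generation exceeds `threshold` of the load
    (Segment 2, high PV), otherwise 1 (Segment 1, low PV)."""
    return [2 if p > threshold * l else 1 for p, l in zip(pv_kw, load_kw)]

print(segment_day([0.0, 200.0, 600.0, 50.0], [1000.0, 900.0, 800.0, 700.0]))  # -> [1, 1, 2, 1]
```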
#### III-B2 Volt/VAR Curve (VV-C) Fitting
We propose two schemes for updating the VV-C for the DVC. The first scheme, called _curve shifting_, involves shifting the midpoint of the standard VV-C (i.e., \(\mathrm{V}_{ref}\)) to align with the average Q-V point (i.e., \(\mathrm{\hat{V}}_{ref}\)) obtained from the optimal Q-V trajectory. Only the \(\mathrm{V}_{ref}\) value is adjusted while maintaining the slope of the existing curve. In the second approach, called _fitted VV-C_, we use linear regression [29] to determine the slope (i.e., \(\Delta Q_{S2}/\Delta V_{S2}\)) that best fits the VV-C to closely match the optimal Q-V trajectory. The curve settings are provided in Table 2.
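Both updating rules reduce to elementary statistics on the optimal Q-V points of a segment: the shifted curve only needs the mean voltage, and the fitted curve a least-squares slope. The sketch below assumes Python 3.10+ for `statistics.linear_regression`; the function names and sample points are illustrative.

```python
import statistics

def shifted_vref(v_opt):
    """'Curve shifting': the new midpoint V_ref is the mean voltage of the optimal Q-V points."""
    return statistics.fmean(v_opt)

def fitted_slope(v_opt, q_opt):
    """'Fitted VV-C': least-squares slope dQ/dV through the optimal Q-V points."""
    return statistics.linear_regression(v_opt, q_opt).slope

v = [1.035, 1.030, 1.025, 1.020]   # illustrative optimal Q-V points for one segment
q = [-0.8, -0.5, -0.2, 0.1]
print(shifted_vref(v), fitted_slope(v, q))
```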
The next step is to determine the frequency at which the VV-C should be updated to ensure effective voltage support under varying operating conditions. As illustrated in Fig. 6, Segment 1 experiences low PV output, and thus the IEEE Std. 1547 VV-C is adopted. In Segment 2, with significant PV output, the VV-C is updated using the optimal dispatch results obtained for this segment. It is worth noting that the ideal approach would involve utilizing the optimal Q dispatch and voltage for the subsequent interval. Established methods, such as statistical or neural network-based approaches [30, 31], can be employed for short-term load and solar PV forecasting to facilitate this process. However, forecasting is not the focus of this paper; the simplest approach, adopted here, is to assume that the load and PV values for the next interval are already known.
Figure 4: Node voltage distribution by phase in descending order (a) in Zone 1, (b) in Zone 2.
Figure 5: Volt/VAR Curves (VV-C) for (a) Standard [19], (b) Shifted, and (c) Fitted.
Figure 6: Time segmentation based on PV outputs, (a) Winter, (b) Spring, (c) Summer, and (d) Fall.
## IV Case Study
The IEEE 123 node test system shown in Fig. 3 is used to test and demonstrate the effectiveness of the proposed DVC optimal dispatching scheme in unbalanced scenario. This feeder is rated at 4.16kV and the substation transformer is equipped with a load tap changer (LTC). Additionally, there are 6 single-phase load voltage regulators (LVRs) for voltage regulation. To simulate high PV penetration on the feeder, five 1 MW PV farms are placed at nodes 18, 47, 54, 76, and 101, and a 1 MVAR 3-phase DVC is also considered. OpenDSS is used to do the time series power flow simulations and the DVC is modelled as three single-phase impedance banks with independent control on each phase. The load and PV profiles utilized in this study are obtained from two different data sources. The 1-minute smart meter data sets are sourced from the Pecan Street data repository [32], while the 1-minute PV data sets are collected from Duke Energy in North Carolina. The ZIP load model in OpenDSS is implemented with model 8 by setting ZIPV = [0.24, 0.36, 0.40, 0.24, 0.36, 0.40, 0.80]. Figure 7 presents the normalized load and PV profiles for four selected sample days.
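A rough outline of such a time-series co-simulation is sketched below, assuming the OpenDSSDirect.py bindings and a hypothetical `ieee123_master.dss` model file; the element definitions shown are illustrative and are not the exact files or parameters used in the study.

```python
import opendssdirect as dss

dss.run_command("Redirect ieee123_master.dss")   # hypothetical master file for the feeder
# Illustrative additions mirroring the setup described above
dss.run_command("New PVSystem.PV18 phases=3 bus1=18 kV=4.16 kVA=1100 Pmpp=1000")
dss.run_command("BatchEdit Load..* model=8 ZIPV=[0.24 0.36 0.40 0.24 0.36 0.40 0.80]")

dss.run_command("Set mode=daily stepsize=1m number=1")
voltages = []
for _ in range(1440):                            # one day at 1-minute resolution
    dss.run_command("Solve")
    voltages.extend(dss.Circuit.AllBusMagPu())   # per-node, per-phase voltages in pu
# `voltages` can then be scored with the band metrics of Section II.C
```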
### DVC Placement
The proposed approach is applied to determine the optimal location for the DVC in the system. Firstly, the node voltage variations in Zone 1, where the DVC is intended to be placed, are obtained without the DVC (i.e., base case). The node voltage profiles obtained are depicted in Fig. 4. Based on these profiles, three candidate nodes (i.e., nodes 7, 8, and 13) are selected as they have the largest voltage variations. Subsequently, the DVC is positioned at these candidate locations, and the optimal dispatch is used to evaluate the effectiveness of the DVC in mitigating voltage variations on the feeder. To determine the DVC placement, only the voltage variation (i.e., \(f^{\mu}\) with \(w_{\mu}=1\) and \(f^{\theta}\) with \(w_{\theta}=0\)) is considered as the main objective for the DVC dispatch in (3). Table 3 presents the performance metrics obtained for these three scenarios. At each time step, the optimal Q dispatch of the DVC is determined to minimize voltage variations while monitoring the voltage levels of all nodes in the test system. The total number of voltage points (T) monitored during the scheduling period is 1,549,440.
Figure 8 shows the node voltage histograms for the three scenarios, revealing the impact of placing the DVC at these locations on reducing voltage variations among the feeder nodes. The results demonstrate a notable improvement, as a significant portion of node voltages now fall within the desired voltage band. Specifically, the percentage of node voltages outside the band decreases from 28.98% in the base case to 14.90% when the DVC is placed at node 8. Moreover, the voltage variation statistics show slight variations across the different phases of the circuit. In all scenarios, the average (\(\mu\)) voltage decreases compared to the base case, while the standard deviation (\(\sigma\)) shows varying changes. Ultimately, after considering the performance metrics, node 8 is chosen as the optimal location since it yields the most favorable statistics for both the lower and upper voltage bands.
Figure 9 provides an evaluation of the performance of the DVC by examining voltage variations at three selected nodes (29, 66, and 8) in Zone 1, both with and without the presence of the DVC at node 8. Node 8 represents the location where the DVC is placed, while nodes 29 and 66 are the farthest nodes connected to the main feeder.
\begin{table}
\begin{tabular}{c c c c c} Node & \(V_{out}^{lower}\) & \(V_{in}\) & \(V_{out}^{upper}\) & Out of limits (\%) \\ No. & (1) & (2) & (3) & ((1)+(3))/T \\ \hline Base & 3,744 & 1,100,393 & 445,303 & 28.98 \\
7 & 1,392 & 1,266,338 & 281,710 & 18.27 \\
8 & 2,496 & 1,318,507 & 228,437 & 14.90 \\
13 & 4,950 & 1,288,103 & 256,387 & 16.87 \\ \end{tabular}
\end{table} TABLE III: Voltage Violations of 3 Candidate Nodes.
Figure 8: Voltage distribution by DVC placement, (a) no DVC, (b) node 7, (c) node 8, and (d) node 13.
Figure 7: Real power of (a) Feeder load, (b) PV outputs.
Figure 9: Distributions of voltage variations without and with DVC at node 8 for (a) node 29, (b) node 66, and (c) node 8.
As depicted in Fig. 9, the DVC demonstrates a noticeable reduction in the occurrences of low voltages (< 0.98) and high voltages (> 1.03). However, note that the impact of the DVC on nodes 29 and 66 is minimal, with only slight changes observed. Conversely, the DVC significantly diminishes voltage variations at the node to which it is connected. This observation suggests that the DVC is particularly effective in reducing voltage variations at the bus to which it is connected and at neighboring buses.
### _Optimal dispatch_
The placement of the DVC at node 8 (i.e., Case 1) introduces an undesirable effect, leading to an increase in LTC and LVR operations compared to the base case (i.e., Case 0), as shown in Table 5 and Figure 11. The results highlight a significant increase in tap operations. This issue emphasizes the need for an optimal DVC dispatching approach that considers two objectives: \(f^{\mu}\), the primary objective aimed at minimizing voltage variations, and \(f^{\theta}\), the secondary objective aimed at limiting LVR tap changes. Since the number of tap operations is numerically large compared to \(f^{\mu}\), we tried two different weights for \(f^{\theta}\): 1 and 0.1. To determine the most suitable option among these alternatives, we simulated the following four cases:
* Case 0 (Base Case): This is the base case which corresponds to the system without the DVC.
* Case 1: This case only considers the voltage variation (\(f^{\mu}\)) as the main objective for the DVC dispatch. The dispatching scheme is employed to determine the appropriate VAR injection/absorption required for the DVC to minimize voltage variations.
* Case 2: In this case, the objective for the DVC dispatch combines both the voltage variation metric \(f^{\mu}\) with \(w_{\mu}=1\) and tap changes metric \(f^{\theta}\) with \(w_{\theta}=1\).
* Case 3: This case is the same as Case 2 but the weight for the LVR tap metric \(f^{\theta}\) is reduced to \(w_{\theta}=0.1\).
Simulation results for these four cases are summarized in Tables 4 and 5. The key observations are summarized below:
* Compared to Case 0 (base case), Cases 1, 2, and 3 all reduce node voltage variations, as indicated by the performance statistics presented in Table 4. Figure 10 shows the voltage distribution for the four cases, highlighting how the voltages are shifted closer to the desired voltage band.
* Figure 11 compares the number of LTC and LVR operations across different cases. The results demonstrate that focusing only on voltage variation in the dispatch (Case 1) leads to an increase in LVR operations. However, Case 3, which incorporates the revised objective, provides a good compromise by reducing LVR operations compared to Case 1, without degrading the voltage variation performance of the DVC.
* Figure 12 shows the optimal Q dispatch results and _combined objectives_
Based on the aforementioned findings, it can be inferred that the voltage variation outcomes are influenced by the weight assigned to tap change metrics. Therefore, a sensitivity analysis is performed to assess the effects of varying tap change weights on the results. The simulations are repeated using different weights of \(w_{\theta}\)={0.01, 0.05, 0.1, 0.5}.
The results presented in Tables 6 and 7 demonstrate the importance of adjusting the weight parameter to achieve an optimal compromise solution. It is evident that finding the right balance between reducing voltage variation and limiting the increase in LVR tap operations is crucial. In the case of this system, a weight value of \(w_{\theta}\) (= 0.05) provides a favorable trade-off, effectively minimizing voltage variation while limiting the increase in LVR tap operations.
These results demonstrate a substantial reduction in voltage variations compared to the standard VV-C when using the revised curves. Comparing these new statistics with those obtained from optimal dispatch in Table 6, we observe that the improvement in reducing voltage variation is not as significant as with optimal dispatch. However, it is still notably more effective than applying the standard VV-C.
### _Sunny vs. Cloudy Days_
The impact of PV output variability on voltage variation is more pronounced on cloudy days compared to sunny days. Figure 15 presents the normalized load and PV profiles for both sunny and cloudy days. We examined the effectiveness of the DVC in mitigating high voltage variations caused by cloud cover. For this analysis, we employed a 120-minute update frequency, which demonstrated the best performance according to Tables 8 and 9. The total number of voltage points (T) monitored is 387,360 for each of the sunny and cloudy days. The main observations from the simulation analysis can be summarized as follows:
* The DVC shows greater effectiveness in reducing voltage variations on cloudy days compared to sunny days due to its rapid response to PV variability. Table 10 demonstrates the performance of the DVC with the fitted VV-C, showing a 1.7% reduction in voltage variations on the sunny day and a 3.9% reduction on the cloudy day when compared to the base case without the DVC.
* The proposed local dispatch schemes, namely the shifted and fitted VV-Cs, outperform the standard VV-C (i.e., IEEE Std. 1547). On the sunny day, the shifted VV-C
\begin{table}
\begin{tabular}{c c c c c c} \multirow{2}{*}{Day} & \multirow{2}{*}{VVC} & \multirow{2}{*}{\(V_{out}^{lower}\)} & \multirow{2}{*}{\(V_{in}\)} & \multirow{2}{*}{\(V_{out}^{upper}\)} & \multirow{2}{*}{Out of limits (\%)} \\ & & (1) & (2) & (3) & ((1)+(3))/T \\ \hline \multirow{4}{*}{Sunny} & Base & 1,558 & 271,602 & 114,200 & 29.88 \\ & IEEE 1547 & 1,441 & 271,743 & 114,176 & 29.85 \\ & Shifted & 1,441 & 271,943 & 113,976 & 29.80 \\ & Fitted & 1,441 & 273,542 & 112,377 & 29.38 \\ \hline \multirow{4}{*}{Cloudy} & Base & 1,411 & 270,992 & 114,957 & 30.04 \\ & IEEE 1547 & 1,321 & 272,111 & 113,928 & 29.75 \\ & Shifted & 1,418 & 273,497 & 112,445 & 29.39 \\ & Fitted & 1,441 & 275,491 & 110,428 & 28.88 \\ \end{tabular}
\end{table} TABLE 10: Voltage Violations on Sunny and Cloudy Days.
Figure 14: Optimal G dispatch of the DVC at Phase C in winter and local control schemes for (a) 08:00-10:00, (b) 10:00-12:00, (c) 12:00-14:00, and (d) 14:00-16:00.
Figure 15: Real power profile of load and PV for (a) Sunny day, (b) Cloudy day.
reduced voltage variations by 0.2%, while the fitted VV-C mitigated them by 1.6%. Similarly, on the cloudy day, the shifted VV-C reduced voltage variations by 1.2%, while the fitted VV-C achieved a greater reduction of 2.9%.
* The proposed scheme also effectively limits the increase in LVR operations. According to Table 11, the DVC with the fitted VV-C reduces voltage regulator operations from 97 to 92 (a 5.2% reduction) on the sunny day and from 152 to 148 (a 2.6% reduction) on the cloudy day.
## V Conclusion
This paper proposes a practical dispatching scheme designed to mitigate the rapid voltage variations caused by PV intermittency on a feeder. The proposed supervisory dispatch scheme adjusts the VV-C utilized by the local DVC controller, overcoming the limitations of existing methods. Through simulations conducted on a sample distribution feeder, the effectiveness of the proposed scheme is demonstrated. The simulations clearly indicate that using standard Volt-VAR curves for local DVC control may not effectively reduce voltage variations.
The paper highlights the significance of the proposed approach, which employs a supervisory dispatching scheme to modify these curves, ensuring that the DVC provides efficient voltage variation reduction while minimizing LVR tap operations. Additionally, the paper emphasizes the necessity of an optimal dispatching scheme to properly modify the VV-C. The case study demonstrates the need for adjusting the VV-C about every two hours, particularly during periods of high and variable PV output. Furthermore, the optimal dispatching scheme can be used to determine the optimal DVC placement on a distribution feeder with high PV generation. The case study results illustrate that the proposed heuristics-based scheme is highly effective in determining suitable candidate locations, while maintaining computational efficiency.
|